CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures.


Hits 1 to 20 of 219

_id ddss9214
authors Friedman, A.
year 1993
title A decision-making process for choice of a flexible internal partition option in multi-unit housing using decision theory techniques
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary Recent demographic changes have increased the heterogeneity of user groups in the North American housing market. Smaller households (e.g. elderly, single parent) have non-traditional spatial requirements that cannot be accommodated within the conventional house layout. This has created renewed interest in Demountable/Flexible internal partition systems. However, the process by which designers decide which project or user groups are most suited for the use of these systems is quite often complex, non-linear, uncertain and dynamic, since the decisions involve natural processes and human values that are apparently random. The anonymity of users when mass housing projects are conceptualized, and the uncertainty as to the alternative to be selected by the user, given his/her constantly changing needs, are some contributing factors to this effect. Decision Theory techniques, not commonly used by architects, can facilitate the decision-making process through a systematic evaluation of alternatives by means of quantitative methods in order to reduce uncertainty in probabilistic events or in cases when data is insufficient. The author used Decision Theory in the selection of flexible partition systems. The study involved a multi-unit, privately initiated housing project in Montreal, Canada, where real site conditions and costs were used. In this paper, the author outlines the fundamentals of Decision Theory and demonstrates the use of Expected Monetary Value and Weighted Objective Analysis methods and their outcomes in the design of a Montreal housing project. The study showed that Decision Theory can be used as an effective tool in housing design once the designer knows how to collect basic data.
series DDSS
last changed 2003/08/07 16:36
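
The two Decision Theory methods named in the abstract above, Expected Monetary Value and Weighted Objective Analysis, can be illustrated with a small worked sketch. All option names, probabilities, payoffs, criteria, and weights below are invented for illustration; they are not figures from Friedman's Montreal study.

```python
# Hedged sketch: Expected Monetary Value (EMV) and Weighted Objective
# Analysis, as named in the Friedman abstract. All numbers are invented.

# EMV: probability-weighted payoff of each partition option.
options = {
    # option: [(probability of demand scenario, net payoff $), ...]
    "fixed partitions":       [(0.6, 12000), (0.4, -8000)],
    "demountable partitions": [(0.6, 9000), (0.4, 4000)],
}

def emv(outcomes):
    """Expected Monetary Value = sum of p_i * payoff_i."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in options.items():
    print(f"{name}: EMV = {emv(outcomes):.0f}")

# Weighted Objective Analysis: weighted sum of criterion scores (0-10).
weights = {"cost": 0.5, "flexibility": 0.3, "acoustics": 0.2}
scores = {
    "fixed partitions":       {"cost": 8, "flexibility": 2, "acoustics": 9},
    "demountable partitions": {"cost": 5, "flexibility": 9, "acoustics": 6},
}

def weighted_objective(score):
    return sum(weights[c] * score[c] for c in weights)

best = max(scores, key=lambda o: weighted_objective(scores[o]))
print("preferred option:", best)
```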

_id 2312
authors Carrara, G., Kalay Y.E. and Novembri, G.
year 1992
title Multi-modal Representation of Design Knowledge
doi https://doi.org/10.52842/conf.ecaade.1992.055
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 55-66
summary Explicit representation of design knowledge is needed if scientific methods are to be applied in design research, and if computers are to be used in the aid of design education and practice. The representation of knowledge in general, and design knowledge in particular, has been the subject matter of computer science, design methods, and computer-aided design research for quite some time. Several models of design knowledge representation have been developed over the last 30 years, addressing specific aspects of the problem. This paper describes a different approach to design knowledge representation that recognizes the multimodal nature of design knowledge. It uses a variety of computational tools to encode different kinds of design knowledge, including the descriptive (objects), the prescriptive (goals) and the operational (methods) kinds. The representation is intended to form a parsimonious, communicable and presentable knowledge-base that can be used as a tool for design research and education as well as for CAAD.
keywords Design Methods, Design Process, Goals, Knowledge Representation, Semantic Networks
series eCAADe
last changed 2022/06/07 07:55
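
The abstract above distinguishes three kinds of design knowledge: descriptive (objects), prescriptive (goals), and operational (methods). The sketch below is a minimal illustration of that three-way split; the class names, the example room, and the goal test are invented assumptions, not the authors' actual representation.

```python
# Hedged sketch of a multi-modal design knowledge base: descriptive
# knowledge (objects), prescriptive knowledge (goals), and operational
# knowledge (methods). Illustrative only; not the authors' system.
from dataclasses import dataclass, field

@dataclass
class DesignObject:            # descriptive: what exists
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Goal:                    # prescriptive: what should hold
    description: str
    test: callable             # predicate over a design object

@dataclass
class Method:                  # operational: how to change the design
    description: str
    apply: callable            # transforms a design object

room = DesignObject("office", {"area_m2": 9.0})
min_area = Goal("area >= 12 m2", lambda o: o.attributes["area_m2"] >= 12)

def enlarge(obj, delta=3.0):
    obj.attributes["area_m2"] += delta
    return obj

grow = Method("enlarge room", enlarge)

# Operational knowledge is invoked until the prescriptive goal is met.
while not min_area.test(room):
    room = grow.apply(room)
print(room)   # DesignObject(name='office', attributes={'area_m2': 12.0})
```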

_id 4bd2
authors Carrara, G., Kalay, Y.E. and Novembri, G.
year 1992
title A Computational Framework for Supporting Creative Architectural Design
source New York: John Wiley & Sons, 1992. pp. 17-34 : ill. includes Bibliography
summary Design can be considered a process leading to the definition of a physical form that achieves a certain predefined set of performance criteria. The process comprises three distinct operations: (1) Definition of the desired set of performance criteria (design goals); (2) generation of alternative design solutions; (3) evaluation of the expected performances of alternative design solutions, and comparing them to the predefined criteria. Difficulties arise in performing each one of the three operations, and in combining them into a purposeful unified process. Computational techniques were developed to assist each of the three operations. A comprehensive and successful computational design assistant will have to recognize the limitations of current computational techniques, and incorporate a symbiosis between the machine and the human designer. This symbiosis comprises allocating design tasks between the designer and the computer in a manner that is most appropriate for the task at hand. The task allocation must, therefore, be done dynamically, responding to the changing circumstances of the design process. This report proposes a framework for such a symbiotic partnership, which comprises four major components: (1) User interface and design process control; (2) design goals; (3) evaluators; (4) database
keywords architecture, knowledge base, systems, design process, control
series CADline
last changed 2003/06/02 14:41
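
The three operations listed in this abstract (define performance criteria, generate alternatives, evaluate them against the criteria) form a generate-and-test cycle. A minimal sketch of that cycle follows; the goals, ranges, and random generator are invented stand-ins, not the authors' framework.

```python
# Hedged sketch of the three-operation design cycle described above:
# (1) define performance criteria, (2) generate alternative solutions,
# (3) evaluate alternatives against the criteria. Invented numbers.
import random

random.seed(1)

# (1) design goals: acceptable range for each performance measure
goals = {"cost_usd_per_m2": (0, 1500), "daylight_factor": (2.0, 10.0)}

# (2) generator of alternative design solutions (random stand-in)
def generate():
    return {"cost_usd_per_m2": random.uniform(800, 2500),
            "daylight_factor": random.uniform(0.5, 6.0)}

# (3) evaluator: does every expected performance fall in its range?
def acceptable(design):
    return all(lo <= design[k] <= hi for k, (lo, hi) in goals.items())

candidates = (generate() for _ in range(1000))
solution = next(d for d in candidates if acceptable(d))
print(solution)
```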

_id c804
authors Richens, P.
year 1994
title Does Knowledge really Help?
source G. Carrara and Y.E. Kalay (Eds.), Knowledge-Based Computer-Aided Architectural Design, Elsevier
summary The Martin Centre CADLAB has recently been established to investigate software techniques that could be of practical importance to architects within the next five years. In common with most CAD researchers, we are interested in the earlier, conceptual, stages of design, where commercial CAD systems have had little impact. Our approach is not Knowledge-Based, but rather focuses on using the computer as a medium for design and communication. This leads to a concentration on apparently superficial aspects such as visual appearance, the dynamics of interaction, immediate feedback, plasticity. We try to avoid building-in theoretical attitudes, and to reduce the semantic content of our systems to a low level on the basis that flexibility and intelligence are inversely related; and that flexibility is more important. The CADLAB became operational in January 1992. First year work in three areas – building models, experiencing architecture, and making drawings – is discussed.
series other
more http://www.arct.cam.ac.uk/research/pubs/
last changed 2003/03/05 13:19

_id 2db4
authors Schmitt, Gerhard
year 1992
title Design for Performance
source New York: John Wiley & Sons, 1992. pp. 83-100 : ill. includes bibliography
summary Design for performance describes a generative approach toward fulfilling qualitative and quantitative design requirements based on specification and existing cases. The term design applies to the architectural domain; the term performance includes the aesthetic, quantitative, and qualitative behavior of an artifact. In achieving architectural quality while adhering to measurable criteria, design for performance has representational, computational, and practical advantages over traditional methods, in particular over post-facto single- and multicriteria analysis and evaluation. In this paper a proposal for a working model and a partial implementation of this model are described.
keywords architecture, evaluation, performance, synthesis, design, representation, prediction, integration
series CADline
last changed 1999/02/12 15:09

authors Schneekloth, Lynda H., Jain, Rajendra K. and Day, Gary E.
year 1989
title Wind Study of Pedestrian Environments
source February 1989. 30, [2] p. : ill. includes bibliography and index
summary This report summarizes Part 1 of the research on wind conditions affecting pedestrian environments for the State University of New York at Buffalo. Part 1 reports on existing conditions in the main part of the North Campus in Amherst. Procedures and methods are outlined, the profile of the current situation is reported, and a special study on the proposed Natural Science and Math Building is included.
keywords architecture, research, evaluation, analysis, simulation, hardware
series CADline

_id 68c8
authors Flemming, U., Coyne, R. and Fenves, S. (et al.)
year 1994
title SEED: A Software Environment to Support the Early Phases in Building Design
source Proceedings of IKM '94, Weimar, Germany, pp. 5-10
summary The SEED project intends to develop a software environment that supports the early phases in building design (Flemming et al., 1993). The goal is to provide support, in principle, for the preliminary design of buildings in all aspects that can gain from computer support. This includes using the computer not only for analysis and evaluation, but also more actively for the generation of designs, or more accurately, for the rapid generation of design representations. A major motivation for the development of SEED is to bring the results of two multi-generational research efforts focusing on `generative' design systems closer to practice: 1. LOOS/ABLOOS, a generative system for the synthesis of layouts of rectangles (Flemming et al., 1988; Flemming, 1989; Coyne and Flemming, 1990; Coyne, 1991); 2. GENESIS, a rule-based system that supports the generation of assemblies of 3-dimensional solids (Heisserman, 1991; Heisserman and Woodbury, 1993). The rapid generation of design representations can take advantage of special opportunities when it deals with a recurring building type, that is, a building type dealt with frequently by the users of the system. Design firms - from housing manufacturers to government agencies - accumulate considerable experience with recurring building types. But current CAD systems capture this experience and support its reuse only marginally. SEED intends to provide systematic support for the storing and retrieval of past solutions and their adaptation to similar problem situations. This motivation aligns aspects of SEED closely with current work in Artificial Intelligence that focuses on case-based design (see, for example, Kolodner, 1991; Domeshek and Kolodner, 1992; Hua et al., 1992).
series other
last changed 2003/04/23 15:14
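
SEED's stated motivation, storing past solutions for a recurring building type and retrieving similar ones for adaptation, is the core move of case-based design. The sketch below shows nearest-neighbor retrieval over a tiny case library; the feature encoding (site area, floors, units) and the scaling are invented assumptions, not SEED's actual representation.

```python
# Hedged sketch of case-based retrieval, as motivated in the SEED
# abstract: store past solutions, retrieve the most similar one for
# adaptation. The feature vector and scaling are invented stand-ins.
import math

case_library = [
    {"name": "rowhouse-A", "features": (1200.0, 3, 18)},
    {"name": "rowhouse-B", "features": (2400.0, 4, 36)},
    {"name": "infill-C",   "features": (600.0, 2, 6)},
]

def distance(a, b):
    # Euclidean distance over crudely normalized features.
    scale = (1000.0, 5.0, 40.0)
    return math.dist([x / s for x, s in zip(a, scale)],
                     [x / s for x, s in zip(b, scale)])

def retrieve(problem_features):
    return min(case_library,
               key=lambda c: distance(c["features"], problem_features))

new_problem = (1400.0, 3, 20)
print(retrieve(new_problem)["name"])   # rowhouse-A: nearest past case
```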

_id 2c22
authors O'Neill, Michael J.
year 1992
title Neural Network Simulation as a Computer-Aided Design Tool for Predicting Wayfinding Performance
source New York: John Wiley & Sons, 1992. pp. 347-366 : ill. includes bibliography
summary Complex public facilities such as libraries, hospitals, and governmental buildings often present problems to users who must find their way through them. Research shows that difficulty in wayfinding has costs in terms of time, money, public safety, and stress that results from being lost. While a wide range of architectural research supports the notion that ease of wayfinding should be a criterion for good design, architects have no method for evaluating how well their building designs will support the wayfinding task. People store and retrieve information about the layout of the built environment in a knowledge representation known as the cognitive map. People depend on the information stored in the cognitive map to find their way through buildings. Although there are numerous simulations of the cognitive map, the mechanisms of these models are not constrained by what is known about the neurophysiology of the brain. Rather, these models incorporate search mechanisms that act on semantically encoded information about the environment. In this paper the author describes the evaluation and application of an artificial neural network simulation of the cognitive map as a means of predicting wayfinding behavior in buildings. This simulation is called NAPS-PC (Network Activity Processing Simulator--PC version). This physiologically plausible model represents knowledge about the layout of the environment through a network of inter-connected processing elements. The performance of NAPS-PC was evaluated against actual human wayfinding performance. The study found that the simulation generated behavior that matched the performance of human participants. After the validation, NAPS-PC was modified so that it could read environmental information directly from AutoCAD (a popular micro-computer-based CAD software package) drawing files, and perform 'wayfinding' tasks based on that environmental information. This prototype tool, called AutoNet, is conceptualized as a means of allowing designers to predict the wayfinding performance of users in a building before it is actually built
keywords simulation, cognition, neural networks, evaluation, floor plans, applications, wayfinding, layout, building
series CADline
last changed 2003/06/02 13:58
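
NAPS-PC is described above as a network of interconnected processing elements encoding building layout. The sketch below illustrates the general mechanism behind such connectionist cognitive-map models, activation spreading over a room-adjacency graph until the goal is reached; the floor plan is invented, and this is not a reconstruction of NAPS-PC itself.

```python
# Hedged sketch: activation spreading over a room-adjacency network,
# the general idea behind connectionist cognitive-map wayfinding models.
# The floor plan is invented for illustration.
adjacency = {
    "entrance": ["lobby"],
    "lobby": ["entrance", "corridor", "cafe"],
    "corridor": ["lobby", "library"],
    "cafe": ["lobby"],
    "library": ["corridor"],
}

def wayfind(start, goal):
    # Breadth-first spread of activation; predecessors give the route.
    frontier, came_from = [start], {start: None}
    while frontier:
        nxt = []
        for node in frontier:
            for neigh in adjacency[node]:
                if neigh not in came_from:
                    came_from[neigh] = node
                    nxt.append(neigh)
        frontier = nxt
    path, node = [], goal
    while node is not None:          # walk predecessors back to start
        path.append(node)
        node = came_from[node]
    return list(reversed(path))

print(wayfind("entrance", "library"))
# ['entrance', 'lobby', 'corridor', 'library']
```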

_id 3ff5
authors Abbo, I.A., La Scalea, L., Otero, E. and Castaneda, L.
year 1992
title Full-Scale Simulations as Tool for Developing Spatial Design Ability
source Proceedings of the 4th European Full-Scale Modelling Conference / Lausanne (Switzerland) 9-12 September 1992, Part C, pp. 7-10
summary Spatial Design Ability has been defined as the capability to anticipate effects (psychological impressions on potential observers or users) produced by mental manipulation of elements of architectural or urban spaces. This ability, of great importance in choosing the appropriate option during the design process, is not specifically developed in schools of architecture and is partially obtained as a by-product of drawing, designing or architectural criticism. We use our Laboratory as a tool to present spaces to people so that they can evaluate them. By means of a series of exercises, students confront their anticipations with the psychological impressions produced in other people. For this occasion, we present an experience in which students had to propose a space for an exhibition hall in which architectural projects (student theses) were to be shown. Following the Spatial Design Ability Development Model which we have been using for several years, students first get acquainted with the use of evaluation instruments for psychological impressions as well as with research methodology. In this case, due to the short period available, we reduced the research to investigating the effects produced by the manipulation of only two independent variables: students first manipulated the form of the roof, walls and interior elements, and secondly the color and texture of those elements. They evaluated the spatial quality, character and other psychological impressions that the manipulations produced in people. They used three-dimensional scale models at 1/10 and 1/1.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
more http://info.tuwien.ac.at/efa
last changed 2003/08/25 10:12

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge. While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of the cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking.
The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine. Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"--which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically-mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse. These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing, learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added).
On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit." Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictature were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring. Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest.
What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make it so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge. This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id 578d
authors Helpenstein, H. (Ed.)
year 1993
title CAD geometry data exchange using STEP
source Berlin: Springer-Verlag
summary With increasing demand for data exchange in computer integrated manufacturing, a neutral connection between dissimilar systems is needed. After a few national and European attempts, a worldwide standardization of product data has been developed. Standard ISO 10303 (STEP - STandard for Exchange of Product data) produced in its first version those parts that are relevant for CAD geometrical data. A European consortium of 14 CAD vendors and users was supported by the ESPRIT programme to influence the emerging standard and implement early applications for it. Over the years 1989-1992, project CADEX (CAD geometry data EXchange) worked out application protocols as a contribution to STEP; developed a software toolkit that reads, writes, and manipulates STEP data; and, based on this toolkit, implemented data exchange processors for ten different CAD and FEA systems. This book reports the work done in project CADEX and describes all its results in detail.
series other
last changed 2003/04/23 15:14
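
STEP physical files (ISO 10303-21) encode each product-data entity as a "#id = TYPE(args);" record inside a DATA section. The sketch below is a minimal, hedged reader for that surface syntax, enough to list entities; a real STEP processor, such as the CADEX toolkit described above, also resolves entity references and validates against the schema. The sample file content is invented.

```python
# Hedged sketch: extract entity instances from the DATA section of a
# STEP Part 21 file. Lists "#id = TYPE(...);" records only; it does not
# resolve references or check the schema. Sample data is invented.
import re

sample = """ISO-10303-21;
DATA;
#1 = CARTESIAN_POINT('origin', (0.0, 0.0, 0.0));
#2 = DIRECTION('z-axis', (0.0, 0.0, 1.0));
#3 = AXIS2_PLACEMENT_3D('frame', #1, #2, $);
ENDSEC;
END-ISO-10303-21;
"""

ENTITY = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\)\s*;")

def entities(text):
    # Keep only what sits between DATA; and ENDSEC;
    data = text.split("DATA;", 1)[1].split("ENDSEC;", 1)[0]
    for match in ENTITY.finditer(data):
        eid, etype, args = match.groups()
        yield int(eid), etype, args

for eid, etype, args in entities(sample):
    print(f"#{eid}: {etype}({args})")
```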

_id ddss9208
authors Lucardie, G.L.
year 1993
title A functional approach to realizing decision support systems in technical regulation management for design and construction
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary Technical building standards defining the quality of buildings, building products, building materials and building processes aim to provide acceptable levels of safety, health, usefulness and energy consumption. However, the logical consistency between these goals and the set of regulations produced to achieve them is often hard to identify. Not only the large quantities of highly complex and frequently changing building regulations to be met, but also the variety of user demands and the steadily increasing technical information on (new) materials, products and buildings have produced a very complex set of knowledge and data that should be taken into account when handling technical building regulations. Integrating knowledge technology and database technology is an important step towards managing the complexity of technical regulations. Generally, two strategies can be followed to integrate knowledge and database technology. The main emphasis of the first strategy is on transferring data structures and processing techniques from one field of research to another. The second approach is concerned exclusively with the semantic structure of what is contained in the data-based or knowledge-based system. The aim of this paper is to show that the second or knowledge-level approach, in particular the theory of functional classifications, is more fundamental and more fruitful. It permits a goal-directed rationalized strategy towards analysis, use and application of regulations. Therefore, it enables the reconstruction of (deep) models of regulations, objects and of users accounting for the flexibility and dynamics that are responsible for the complexity of technical regulations. Finally, at the systems level, the theory supports an effective development of a new class of rational Decision Support Systems (DSS), which should reduce the complexity of technical regulations and restore the logical consistency between the goals of technical regulations and the technical regulations themselves.
series DDSS
last changed 2003/08/07 16:36
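
The knowledge-level approach argued for above treats regulations as explicit, queryable structures rather than rule texts. A hedged sketch of that flavor follows: regulations as condition/requirement pairs checked against a design. The rules, predicates, and thresholds are invented illustrations, not Lucardie's functional-classification theory itself.

```python
# Hedged sketch: technical regulations represented as explicit
# condition -> requirement pairs, so a design can be checked
# systematically instead of by paging through rule texts. Rules invented.
rules = [
    # (description, applies-to predicate, requirement predicate)
    ("escape door width >= 0.85 m for assembly rooms",
     lambda s: s["use"] == "assembly",
     lambda s: s["door_width_m"] >= 0.85),
    ("habitable rooms need a window",
     lambda s: s["use"] == "habitable",
     lambda s: s["has_window"]),
]

def check(space):
    """Return descriptions of all applicable rules the space fails."""
    return [desc for desc, applies, ok in rules
            if applies(space) and not ok(space)]

hall = {"use": "assembly", "door_width_m": 0.80, "has_window": True}
print(check(hall))   # ['escape door width >= 0.85 m for assembly rooms']
```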

_id avocaad_2001_17
authors Ying-Hsiu Huang, Yu-Tung Liu, Cheng-Yuan Lin, Yi-Ting Cheng, Yu-Chen Chiu
year 2001
title The comparison of animation, virtual reality, and scenario scripting in design process
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary Design media is a fundamental tool, which can incubate concrete ideas from ambiguous concepts. Evolving from freehand sketches and physical models to computerized drafting, modeling (Dave, 2000), animations (Woo, et al., 1999), and virtual reality (Chiu, 1999; Klercker, 1999; Emdanat, 1999), different media are used to communicate with designers or users at different conceptual levels during the design process. Extensively employed in the design process, physical models help designers in managing forms and spaces more precisely and more freely (Millon, 1994; Liu, 1996). Computerized drafting, models, animations, and VR have gradually replaced conventional media, freehand sketches and physical models. Diversely used in the design process, computerized media allow designers to handle more divergent levels of space than conventional media do. The rapid emergence of computers in the design process has ushered in efforts to examine the visual impact of this media (Rahman, 1992). Rahman also emphasized the use of computerized media: modeling and animations. Moreover, based on Rahman's study, Bai and Liu (1998) applied a new design medium, virtual reality, to the design process. In doing so, they proposed an evaluation process to examine the visual impact of this new medium in the design process. That same investigation pointed towards the facilitative role of the computerized media in enhancing topical comprehension, concept realization, and development of ideas. Computer technology fosters the growth of emerging media. A new computerized medium, scenario scripting (Sasada, 2000; Jozen, 2000), markedly enhances computer animations and, in doing so, positively impacts design processes. For the three latest media, i.e., computerized animation, virtual reality, and scenario scripting, the following question arises: What role does visual impact play in the different design phases of these media? Moreover, what is the origin of such an impact? Furthermore, what are the similarities and variances of computing techniques, principles of interaction, and practical applications among these computerized media? This study investigates the similarities and variances among computing techniques, interacting principles, and their applications in the above three media. Different computerized media in the design process are also adopted to explore related phenomena by using these three media in two projects. First, a renewal planning project for the old district of Hsinchu City is inspected, in which animations and scenario scripting are used. Second, the renewal project is compared with a progressive design project for the Hsinchu Digital Museum, as designed by Peter Eisenman. Finally, similarity and variance among these computerized media are discussed. This study also examines the visual impact of these three computerized media in the design process. In computerized animation, although other designers can realize the spatial concept in a design, users cannot fully comprehend the concept. On the other hand, media such as virtual reality and scenario scripting enable users to comprehend more directly what the designer is presenting. Future studies should more closely examine how these three media impact the design process. This study not only provides further insight into the fundamental characteristics of the three computerized media discussed herein, but also enables designers to adopt different media at different design stages. Both designers and users can thus more fully understand design-related concepts.
series AVOCAAD
last changed 2005/09/09 10:48

_id 6ef4
authors Carrara, Gianfranco and Kalay, Yehuda E.
year 1992
title Multi-Modal Representation of Design Knowledge
doi https://doi.org/10.52842/conf.acadia.1992.077
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 77-88
summary Explicit representation of design knowledge is needed if scientific methods are to be applied in design research, and if computers are to be used in the aid of design education and practice. The representation of knowledge in general, and design knowledge in particular, has been the subject matter of computer science, design methods, and computer-aided design research for quite some time. Several models of design knowledge representation have been developed over the last 30 years, addressing specific aspects of the problem. This paper describes a different approach to design knowledge representation that recognizes the multi-modal nature of design knowledge. It uses a variety of computational tools to encode different kinds of design knowledge, including the descriptive (objects), the prescriptive (goals) and the operational (methods) kinds. The representation is intended to form a parsimonious, communicable and presentable knowledge-base that can be used as a tool for design research and education as well as for CAAD.
keywords Design Methods, Design Process, Goals, Knowledge Representation, Semantic Networks
series ACADIA
last changed 2022/06/07 07:55

_id ddss9211
authors Gilleard, J. and Olatidoye, O.
year 1993
title Graphical interfacing to a conceptual model for estimating the cost of residential construction
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary This paper presents a method for determining elemental square foot costs and cost significance for residential construction. Using AutoCAD's icon menu and dialogue box facilities, a non-expert may graphically select (i) residential configuration; (ii) construction quality level; (iii) geographical location; (iv) square foot area; and finally, (v) add-ons, e.g. porches and decks, basement, heating and cooling equipment, garages and carports, etc., in order to determine on-site builder's costs. Subsequent AutoLisp routines facilitate data transfer to a Lotus 1-2-3 spreadsheet where an elemental cost breakdown for the project may be determined. Finally, using Lotus 1-2-3 macros, computed data is transferred back to AutoCAD, where all cost-significant items are graphically highlighted.
series DDSS
last changed 2003/08/07 16:36
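
The estimating workflow described above (select configuration, quality level, location, area, and add-ons, then compute elemental costs) reduces to a simple parametric calculation. A hedged sketch follows; all rates, location factors, and add-on prices are invented, and the real system drove this through AutoCAD menus and Lotus 1-2-3 rather than code like this.

```python
# Hedged sketch of the elemental square-foot estimate assembled by the
# AutoCAD / Lotus 1-2-3 workflow above. All figures are invented.
BASE_RATE = {"economy": 85.0, "standard": 110.0, "custom": 150.0}  # $/sq ft
LOCATION_FACTOR = {"atlanta": 0.95, "new_york": 1.35, "dallas": 0.90}
ADD_ONS = {"porch": 6500.0, "basement": 18000.0, "garage": 12000.0}

def builders_cost(quality, location, area_sqft, add_ons=()):
    """On-site builder's cost = base rate * location factor * area + add-ons."""
    base = BASE_RATE[quality] * LOCATION_FACTOR[location] * area_sqft
    return base + sum(ADD_ONS[a] for a in add_ons)

cost = builders_cost("standard", "dallas", 1800, add_ons=("garage",))
print(f"on-site builder's cost: ${cost:,.0f}")
# 110 * 0.90 * 1800 + 12000 = $190,200
```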

_id acaa
authors Kalay, Yehuda E.
year 1992
title Evaluating and Predicting Design Performance
source New York: John Wiley & Sons, 1992. pp. 399-404
summary This article is the conclusion chapter of the book by the same title. Evaluation can be defined as measuring the fit between achieved or expected performances to stated criteria. Prediction is the process whereby expected performance characteristics are simulated, or otherwise made tangible, when evaluation is applied to hypothetical design solutions. The multifaceted nature of design solutions precludes optimization of any one performance characteristic. Rather, a good design solution will strike a balance in the degree to which any performance criterion is achieved, such that overall performance will be maximized. This paper discusses the nature of evaluation and prediction, their multilevel and multifaceted dimensions, and some of the approaches that have been proposed to perform quantitative and qualitative evaluations
keywords evaluation, performance, prediction, multicriteria, architecture, design process
series CADline
last changed 2003/06/02 13:58
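
The chapter's definition, evaluation as measuring the fit between expected performances and stated criteria, balanced across criteria rather than optimized on any one, can be sketched numerically. The criteria, targets, tolerances, and weights below are invented for illustration.

```python
# Hedged sketch: evaluation as "fit between expected performances and
# stated criteria", aggregated so overall performance balances all
# criteria instead of optimizing one. Criteria and weights invented.
criteria = {
    # criterion: (target value, tolerance, weight)
    "energy_kwh_m2": (80.0, 40.0, 0.5),
    "cost_usd_m2":   (1200.0, 400.0, 0.3),
    "daylight_pct":  (60.0, 20.0, 0.2),
}

def fit(predicted):
    """Weighted fit in [0, 1]: 1 = on target, 0 = at/past tolerance."""
    total = 0.0
    for key, (target, tol, weight) in criteria.items():
        miss = min(abs(predicted[key] - target) / tol, 1.0)
        total += weight * (1.0 - miss)
    return total

design = {"energy_kwh_m2": 95.0, "cost_usd_m2": 1350.0, "daylight_pct": 55.0}
print(f"overall fit: {fit(design):.2f}")
# 0.5*(1-0.375) + 0.3*(1-0.375) + 0.2*(1-0.25) = 0.65
```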

_id caadria2004_k-1
authors Kalay, Yehuda E.
year 2004
title CONTEXTUALIZATION AND EMBODIMENT IN CYBERSPACE
doi https://doi.org/10.52842/conf.caadria.2004.005
source CAADRIA 2004 [Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] Seoul Korea 28-30 April 2004, pp. 5-14
summary The introduction of VRML (Virtual Reality Modeling Language) in 1994, and other similar web-enabled dynamic modeling software (such as SGI's Open Inventor and WebSpace), have created a rush to develop on-line 3D virtual environments, with purposes ranging from art, to entertainment, to shopping, to culture and education. Some developers took their cues from the science fiction literature of Gibson (1984), Stephenson (1992), and others. Many were web-extensions to single-player video games. But most were created as a direct extension to our new-found ability to digitally model 3D spaces and to endow them with interactive control and pseudo-inhabitation. Surprisingly, this technologically-driven stampede paid little attention to the core principles of place-making and presence, derived from architecture and cognitive science, respectively: two principles that could and should inform the essence of the virtual place experience and help steer its development. Why are the principles of place-making and presence important for the development of virtual environments? Why not simply be content with our ability to create realistic-looking 3D worlds that we can visit remotely? What could we possibly learn about making these worlds better, had we understood the essence of place and presence? To answer these questions we cannot look at place-making (both physical and virtual) from a 3D space-making point of view alone, because places are not an end unto themselves. Rather, places must be considered a locus of contextualization and embodiment that grounds human activities and gives them meaning. In doing so, places acquire a meaning of their own, which facilitates, improves, and enriches many aspects of our lives. They provide us with a means to interpret the activities of others and to direct our own actions. Such meaning is comprised of the social and cultural conceptions and behaviors imprinted on the environment by the presence and activities of its inhabitants, who, in turn, 'read' them through their own corporeal embodiment of the same environment. This transactional relationship between the physical aspects of an environment, its social/cultural context, and our own embodiment of it, combines to create what is known as a sense of place: the psychological, physical, social, and cultural framework that helps us interpret the world around us, and directs our own behavior in it. In turn, it is our own (as well as others') presence in that environment that gives it meaning, and shapes its social/cultural character. By understanding the essence of place-ness in general, and in cyberspace in particular, we can create virtual places that can better support Internet-based activities, and make them equal to, in some cases even better than, their physical counterparts. One of the activities that stands to benefit most from understanding the concept of cyber-places is learning—an interpersonal activity that requires the co-presence of others (a teacher and/or fellow learners), who can point out the difference between what matters and what does not, and produce an emotional involvement that helps students learn. Thus, while many administrators and educators rush to develop web-based remote learning sites, to leverage the economic advantages of one-to-many learning modalities, these sites deprive learners of the contextualization and embodiment inherent in brick-and-mortar learning institutions, and which are needed to support the activity of learning.
Can these qualities be achieved in virtual learning environments? If so, how? These are some of the questions this talk will try to answer by presenting a virtual place-making methodology and its experimental implementation, intended to create a sense of place through contextualization and embodiment in virtual learning environments.
series CAADRIA
type normal paper
last changed 2022/06/07 07:52

_id ddss9217
authors Kim, Y.S. and Brawne, M.
year 1993
title An approach to evaluating exhibition spaces in art galleries
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary There are certain building types in which the movement of people is the most significant evaluation factor. Among these are art galleries and museums. Unlike other building types, which are often explicated by investigating the relationship between people and people, and between people and the built environment, art galleries and museums are a building type in which the social relationship between people hardly exists and people's movement through space, that is, the functional relationship between people and space, is one of the most significant factors for their description. The typical museum experience is through direct, sequential, and visual contact with static objects on display as the visitor moves. Therefore, the movement pattern of the visitors must exert a significant influence on achieving the specific goal of a museum. There is a critical need for predicting the consequences of particular spatial configurations with respect to visitors' movement. In this sense, it is the intention of this paper to find out the relationship between the spatial configuration of exhibition space and the visitors' movement pattern.
series DDSS
last changed 2003/08/07 16:36

_id ddss9215
authors Mortola, E. and Giangrande, A.
year 1993
title A trichotomic segmentation procedure to evaluate projects in architecture
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary This paper illustrates a model used to construct the evaluation module for An Interface for Designing (AID), a system to aid architectural design. The model can be used at the end of every cycle of analysis-synthesis-evaluation in the intermediate phases of design development. With the aid of the model it is possible to evaluate the quality of a project in overall terms to establish whether the project is acceptable, whether it should be elaborated ex-novo, or whether it is necessary to begin a new cycle to improve it. In this last case, it is also possible to evaluate the effectiveness of the possible actions and strategies for improvement. The model is based on a procedure of trichotomic segmentation, developed with MCDA (Multi-Criteria Decision Aid), which uses the outranking relation to compare the project with some evaluation profiles taken as projects of reference. An application of the model in the teaching field will also be described.
series DDSS
last changed 2003/08/07 16:36
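
Trichotomic segmentation, as described above, sorts a project into accept / improve / redesign by outranking comparisons against reference profiles. The sketch below uses a simple concordance-style outranking test; the profiles, weights, and threshold are invented, and this is not the AID evaluation module itself.

```python
# Hedged sketch of trichotomic segmentation: compare a project with two
# reference profiles ("good" and "poor") via a concordance-style
# outranking test, then assign accept / improve / redesign. Data invented.
WEIGHTS = {"function": 0.4, "form": 0.3, "cost": 0.3}
GOOD = {"function": 7, "form": 7, "cost": 6}   # upper reference profile
POOR = {"function": 4, "form": 4, "cost": 4}   # lower reference profile
THRESHOLD = 0.6                                # concordance cutoff

def outranks(a, b):
    """a outranks b if the criteria where a >= b carry enough weight."""
    concordance = sum(w for c, w in WEIGHTS.items() if a[c] >= b[c])
    return concordance >= THRESHOLD

def segment(project):
    if outranks(project, GOOD):
        return "acceptable"
    if outranks(project, POOR):
        return "improve: begin a new design cycle"
    return "redesign ex novo"

print(segment({"function": 8, "form": 6, "cost": 7}))  # acceptable
print(segment({"function": 5, "form": 5, "cost": 3}))  # improve
```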

_id 3105
authors Novak, T.P., Hoffman, D.L., and Yung, Y.-F.
year 1996
title Modeling the structure of the flow experience
source INFORMS Marketing Science and the Internet Mini-Conference, MIT
summary The flow construct (Csikszentmihalyi 1977) has recently been proposed by Hoffman and Novak (1996) as essential to understanding consumer navigation behavior in online environments such as the World Wide Web. Previous researchers (e.g. Csikszentmihalyi 1990; Ghani, Supnick and Rooney 1991; Trevino and Webster 1992; Webster, Trevino and Ryan 1993) have noted that flow is a useful construct for describing more general human-computer interactions. Hoffman and Novak define flow as "the state occurring during network navigation which is: 1) characterized by a seamless sequence of responses facilitated by machine interactivity, 2) intrinsically enjoyable, 3) accompanied by a loss of self-consciousness, and 4) self-reinforcing." To experience flow while engaged in an activity, consumers must perceive a balance between their skills and the challenges of the activity, and both their skills and challenges must be above a critical threshold. Hoffman and Novak (1996) propose that flow has a number of positive consequences from a marketing perspective, including increased consumer learning, exploratory behavior, and positive affect.
series other
last changed 2003/04/23 15:50
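
The operational condition in the abstract above, flow requires skill and challenge in balance with both above a critical threshold, is simple enough to state directly. A hedged sketch follows; the scale, threshold, and balance tolerance are invented, and the four-state labels follow the common flow/anxiety/boredom/apathy reading of the model.

```python
# Hedged sketch of the flow condition stated above: flow occurs when
# perceived skill and challenge are balanced AND both exceed a critical
# threshold. The threshold and balance tolerance are invented.
THRESHOLD = 5.0   # critical level on a 0-10 self-report scale
BALANCE = 1.5     # max skill/challenge gap still counted as balanced

def state(skill, challenge):
    if skill < THRESHOLD and challenge < THRESHOLD:
        return "apathy"
    if abs(skill - challenge) <= BALANCE:
        return "flow"
    return "anxiety" if challenge > skill else "boredom"

for skill, challenge in [(8, 7.5), (3, 8), (8, 3), (2, 2)]:
    print(skill, challenge, "->", state(skill, challenge))
# 8 7.5 -> flow; 3 8 -> anxiety; 8 3 -> boredom; 2 2 -> apathy
```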

_id ddss9210
authors Poortman, E.R.
year 1993
title Ratios for cost control
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary The design of buildings takes place in phases representing a development from rough to precision planning. Estimates are made in order to test whether the result is still within the budget set by the client or developer. In this way, the decisions taken during the design phase can be quantified and expressed in monetary terms. To prevent blaming the wrong person when an overrun is discovered, the cost control process has to be improved. For that purpose, two new procedures have been developed: (i) a new translation activity; and (ii) ratios by which quantities can be characterized. 'Translation' is the opposite of estimation. A monetary budget is converted - 'translated' - into quantities, reflecting the desired quality of the building materials. The financial constraints of the client are thus converted into quantities - the building components used by the designers. Characteristic quantity figures play an important role in this activity. In working out an estimate, the form factor (i.e., the ratio between two characteristic values of a building component) has to be determined. The unit cost is then tested against that ratio. The introduction of the 'translation' activity and the use of characteristic quantity figures and form factors enhance existing estimation methods. By implementing these procedures, cost control becomes considerably more reliable.
series DDSS
last changed 2003/08/07 16:36
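
'Translation', as defined above, runs the estimate backwards: a monetary budget is converted into the quantities it can buy at a chosen quality level, and a form factor (the ratio between two characteristic values of a building component) is used to sanity-check unit costs. A hedged numeric sketch, with invented rates:

```python
# Hedged sketch of 'translation' (budget -> affordable quantity) and a
# form factor check (ratio between two characteristic values of a
# building component). All rates and areas are invented.
def translate(budget, unit_cost):
    """Opposite of estimating: how much quantity does the budget buy?"""
    return budget / unit_cost

facade_budget = 300_000.0        # money allocated to the facade element
unit_cost = 400.0                # cost per m2 at the desired quality
affordable_m2 = translate(facade_budget, unit_cost)
print(f"affordable facade area: {affordable_m2:.0f} m2")   # 750 m2

# Form factor: here, facade area per m2 of gross floor area. A unit
# cost can then be tested against this ratio during cost control.
gross_floor_area = 2000.0
form_factor = affordable_m2 / gross_floor_area
print(f"facade/floor form factor: {form_factor:.2f}")      # 0.38
```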
