CumInCAD is a cumulative index of publications in Computer Aided Architectural Design, supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures.

Hits 1 to 20 of 246

_id b4c4
authors Carrara, G., Fioravanti, A. and Novembri, G.
year 2000
title A framework for an Architectural Collaborative Design
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 57-60
doi https://doi.org/10.52842/conf.ecaade.2000.057
summary The building industry involves a larger number of disciplines, operators and professionals than other industrial processes. Its peculiarity is that the products (building objects) have a number of parts (building elements) that does not differ much from the number of classes into which building objects can be conceptually subdivided. Another important characteristic is that the building industry produces unique products (de Vries and van Zutphen, 1992). This is not an isolated situation but one that is spreading to other industrial fields as well. For example, production niches have proved successful in the automotive and computer industries (Carrara, Fioravanti, & Novembri, 1989). Building design is a complex multi-disciplinary process, which demands a high degree of co-ordination and co-operation among separate teams, each having its own specific knowledge and its own set of specific design tools. Establishing an environment for design tool integration is a prerequisite for network-based distributed work. Attempts have been made to solve the problem of efficient, user-friendly, and fast information exchange among operators by treating it simply as an exchange of data. But the failure of IGES, CGM, PHIGS confirms that data have different meanings and importance in different contexts. The STandard for Exchange of Product data, ISO 10303 Part 106 BCCM, relating to the AEC field (Wix, 1997), seems to be too complex to be applied in professional studios. Moreover, its structure is too deep and the conceptual classifications based on it do not allow multi-inheritance (Ekholm, 1996). From now on we shall adopt the BCCM semantics, which define the actor as "a functional participant in building construction"; and we shall define the designer as "every member of the class formed by designers" (architects, engineers, town-planners, construction managers, etc.).
keywords Architectural Design Process, Collaborative Design, Knowledge Engineering, Dynamic Object Oriented Programming
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:55

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge.
While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of the cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking. The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine.
Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"--which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically-mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse.
These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing, learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added). On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit."
Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictature were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring.
Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest. What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. 
Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments, despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make it so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge.
This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id 6e99
authors Hoffer, Erin Rae
year 1992
title Creating the Electronic Design Studio: Development of a Heterogeneous Networked Environment at Harvard's Graduate School of Design
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 225-240
doi https://doi.org/10.52842/conf.ecaade.1992.225
summary The migration of design education to reliance on computer-based techniques requires new ways of thinking about environments which can effectively support a diverse set of activities. Both from a spatial standpoint and a computing resource standpoint, design studios must inevitably be reconfigured to support new tools and reflect new ways of communicating. At Harvard's GSD, a commitment to incorporating computer literacy as a fundamental component of design education enables us to confront these issues through the implementation of a heterogeneous network embedded in an electronic design environment. This evolving prototype of a new design studio, its development and its potential, will be the subject of this paper. A new-style design environment is built upon an understanding of traditional techniques, and layered with an awareness of new tools and methods. Initially we borrow from existing metaphors which govern our interpretation of the way designers work. Next we seek to extend our thinking to include allied or related metaphors such as the library metaphor which informs collections of software and data, or the laboratory metaphor which informs workspace groupings, or the transportation metaphor which informs computer-based communications such as electronic mail or bulletin boards, or the utility services metaphor which informs the provision of network services and equipment. Our evaluation of this environment is based on direct feedback from its users, both faculty and students, and on subjective observation of the qualitative changes in communication which occur between and among these groups and individuals. Ultimately, the network must be judged as a framework for learning and evaluation, and its success depends both on its ability to absorb our existing metaphors for the process of design, and to prefigure the emerging metaphors to be envisioned in the future.
series eCAADe
last changed 2022/06/07 07:50

_id ddss9208
id ddss9208
authors Lucardie, G.L.
year 1993
title A functional approach to realizing decision support systems in technical regulation management for design and construction
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary Technical building standards defining the quality of buildings, building products, building materials and building processes aim to provide acceptable levels of safety, health, usefulness and energy consumption. However, the logical consistency between these goals and the set of regulations produced to achieve them is often hard to identify. Not only the large quantities of highly complex and frequently changing building regulations to be met, but also the variety of user demands and the steadily increasing technical information on (new) materials, products and buildings have produced a very complex set of knowledge and data that should be taken into account when handling technical building regulations. Integrating knowledge technology and database technology is an important step towards managing the complexity of technical regulations. Generally, two strategies can be followed to integrate knowledge and database technology. The main emphasis of the first strategy is on transferring data structures and processing techniques from one field of research to another. The second approach is concerned exclusively with the semantic structure of what is contained in the data-based or knowledge-based system. The aim of this paper is to show that the second or knowledge-level approach, in particular the theory of functional classifications, is more fundamental and more fruitful. It permits a goal-directed rationalized strategy towards analysis, use and application of regulations. Therefore, it enables the reconstruction of (deep) models of regulations, objects and of users accounting for the flexibility and dynamics that are responsible for the complexity of technical regulations. 
Finally, at the systems level, the theory supports an effective development of a new class of rational Decision Support Systems (DSS), which should reduce the complexity of technical regulations and restore the logical consistency between the goals of technical regulations and the technical regulations themselves.
series DDSS
last changed 2003/08/07 16:36

_id 2c22
authors O'Neill, Michael J.
year 1992
title Neural Network Simulation as a Computer-Aided Design Tool for Predicting Wayfinding Performance
source New York: John Wiley & Sons, 1992. pp. 347-366 : ill. includes bibliography
summary Complex public facilities such as libraries, hospitals, and governmental buildings often present problems to users who must find their way through them. Research shows that difficulty in wayfinding has costs in terms of time, money, public safety, and stress that results from being lost. While a wide range of architectural research supports the notion that ease of wayfinding should be a criterion for good design, architects have no method for evaluating how well their building designs will support the wayfinding task. People store and retrieve information about the layout of the built environment in a knowledge representation known as the cognitive map. People depend on the information stored in the cognitive map to find their way through buildings. Although there are numerous simulations of the cognitive map, the mechanisms of these models are not constrained by what is known about the neurophysiology of the brain. Rather, these models incorporate search mechanisms that act on semantically encoded information about the environment. In this paper the author describes the evaluation and application of an artificial neural network simulation of the cognitive map as a means of predicting wayfinding behavior in buildings. This simulation is called NAPS-PC (Network Activity Processing Simulator--PC version). This physiologically plausible model represents knowledge about the layout of the environment through a network of inter-connected processing elements. The performance of NAPS-PC was evaluated against actual human wayfinding performance. The study found that the simulation generated behavior that matched the performance of human participants. After the validation, NAPS-PC was modified so that it could read environmental information directly from AutoCAD (a popular micro-computer-based CAD software package) drawing files, and perform 'wayfinding' tasks based on that environmental information. 
This prototype tool, called AutoNet, is conceptualized as a means of allowing designers to predict the wayfinding performance of users in a building before it is actually built.
keywords simulation, cognition, neural networks, evaluation, floor plans, applications, wayfinding, layout, building
series CADline
last changed 2003/06/02 13:58

_id fd02
authors Tsou, Jin-Yeu
year 1992
title Using conceptual modelling and an object-oriented environment to support building cost control during early design
source College of Architecture and Urban Planning, University of Michigan
summary This research investigated formal information modelling techniques and the object-oriented knowledge representation on the domain of building cost control during early design stages. The findings contribute to an understanding of the advantages and disadvantages of applying formal modelling techniques to the analysis of architectural problems and the representation of domain knowledge in an object-oriented environment. In this study, information modelling techniques were reviewed, formal information analysis was performed, a conceptual model based on the cost control problem domain was created, a computational model based on the object-oriented approach was developed, a mechanism to support information broadcasting for representing interrelationships was implemented, and an object-oriented cost analysis system for early design (OBCIS) was demonstrated. The conceptual model, based on the elemental proposition analysis of NIAM, supports a formal approach for analyzing the problem domain; the analysis results are represented by high-level graphical notations, based on the AEC Building System Model, to visually display the information framework of the domain. The conceptual model provides an intermediate step between the system designer's view of the domain and the internal representation of the implementation platform. The object-oriented representation provides extensive data modelling abilities to help system designers intuitively represent the semantics of the problem domain. The object-oriented representation also supports more structured and integrated modules than conventional programming approaches. Although there are many advantages to applying this technique to represent the semantics of cost control knowledge, there are several issues which need to be considered: no single satisfactory classification method can be directly applied; object-oriented systems are difficult to learn; and designing reusable classes is difficult. 
The dependency graph and information broadcasting implemented in this research are an attempt to represent the interrelationships between domain objects. The mechanism allows users to explicitly define the interrelationships, based on semantic requirements, among domain objects. In the conventional approach, these relationships are directly interpreted by system designers and intertwined into the programming code. There are several issues which need to be studied further: indirect dependency relationships, conflict resolution, and request-update looping based on a least-commitment approach.
series thesis:PhD
email
last changed 2003/02/12 22:37

_id 1963
authors Tweed, Chris and Woolley, Tom
year 1992
title USER PARTICIPATION IN DESIGN: TECHNIQUES FOR DIALOGUE
source Proceedings of the 4th European Full-Scale Modelling Conference / Lausanne (Switzerland) 9-12 September 1992, Part B, pp. 17-24
summary Many projects in which users participate in the design process are merely examples of professionals communicating their ideas to their clients. Conventional computer systems can be powerful tools for helping designers to present design information to lay audiences, but when combined with computer modelling and simulation, they create opportunities for users to construct their own sequences of images and thus explore designs from their own viewpoint. Building on extensive experience of traditional methods of user participation, this paper explores the use of narratives to create dialogues between users, designers and computers. The concept of "design stories" as a route to fully shared creativity is explained. The paper also argues that this approach is needed to bring into focus design issues that cannot be described or resolved by computer modelling alone.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 15:41

_id avocaad_2001_17
id avocaad_2001_17
authors Ying-Hsiu Huang, Yu-Tung Liu, Cheng-Yuan Lin, Yi-Ting Cheng, Yu-Chen Chiu
year 2001
title The comparison of animation, virtual reality, and scenario scripting in design process
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary Design media is a fundamental tool, which can incubate concrete ideas from ambiguous concepts. Evolving from freehand sketches and physical models to computerized drafting, modeling (Dave, 2000), animations (Woo, et al., 1999), and virtual reality (Chiu, 1999; Klercker, 1999; Emdanat, 1999), different media are used to communicate with designers or users at different conceptual levels during the design process. Extensively employed in the design process, physical models help designers in managing forms and spaces more precisely and more freely (Millon, 1994; Liu, 1996). Computerized drafting, models, animations, and VR have gradually replaced conventional media, freehand sketches and physical models. Diversely used in the design process, computerized media allow designers to handle more divergent levels of space than conventional media do. The rapid emergence of computers in the design process has ushered in efforts to study the visual impact of this medium in particular (Rahman, 1992). He also emphasized the use of computerized media: modeling and animations. Moreover, based on Rahman's study, Bai and Liu (1998) applied a new design medium, virtual reality, to the design process. In doing so, they proposed an evaluation process to examine the visual impact of this new medium in the design process. That same investigation pointed towards the facilitative role of the computerized media in enhancing topical comprehension, concept realization, and development of ideas. Computer technology fosters the growth of emerging media. A new computerized medium, scenario scripting (Sasada, 2000; Jozen, 2000), markedly enhances computer animations and, in doing so, positively impacts design processes. For the three latest media, i.e., computerized animation, virtual reality, and scenario scripting, the following questions arise: What role does visual impact play in the different design phases of these media? Moreover, what is the origin of such an impact?
Furthermore, what are the similarities and variances of computing techniques, principles of interaction, and practical applications among these computerized media? This study investigates the similarities and variances among computing techniques, interacting principles, and their applications in the above three media. Different computerized media in the design process are also adopted to explore related phenomena by using these three media in two projects. First, a renewal planning project of the old district of Hsinchu City is inspected, in which animations and scenario scripting are used. Second, the renewal project is compared with a progressive design project for the Hsinchu Digital Museum, as designed by Peter Eisenman. Finally, similarity and variance among these computerized media are discussed. This study also examines the visual impact of these three computerized media in the design process. In computerized animation, although other designers can realize the spatial concept in design, users cannot fully comprehend the concept. On the other hand, other media such as virtual reality and scenario scripting enable users to comprehend more directly what the designer presents. Future studies should more closely examine how these three media impact the design process. This study not only provides further insight into the fundamental characteristics of the three computerized media discussed herein, but also enables designers to adopt different media in the design stages. Both designers and users can more fully understand design-related concepts.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id caadria2004_k-1
id caadria2004_k-1
authors Kalay, Yehuda E.
year 2004
title CONTEXTUALIZATION AND EMBODIMENT IN CYBERSPACE
source CAADRIA 2004 [Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] Seoul Korea 28-30 April 2004, pp. 5-14
doi https://doi.org/10.52842/conf.caadria.2004.005
summary The introduction of VRML (Virtual Reality Modeling Language) in 1994, along with other similar web-enabled dynamic modeling software (such as SGI’s Open Inventor and WebSpace), created a rush to develop on-line 3D virtual environments, with purposes ranging from art, to entertainment, to shopping, to culture and education. Some developers took their cues from the science fiction literature of Gibson (1984), Stephenson (1992), and others. Many were web-extensions to single-player video games. But most were created as a direct extension of our new-found ability to digitally model 3D spaces and to endow them with interactive control and pseudo-inhabitation. Surprisingly, this technologically-driven stampede paid little attention to the core principles of place-making and presence, derived from architecture and cognitive science, respectively: two principles that could and should inform the essence of the virtual place experience and help steer its development. Why are the principles of place-making and presence important for the development of virtual environments? Why not simply be content with our ability to create realistic-looking 3D worlds that we can visit remotely? What could we possibly learn about making these worlds better, had we understood the essence of place and presence? To answer these questions we cannot look at place-making (both physical and virtual) from a 3D space-making point of view alone, because places are not an end unto themselves. Rather, places must be considered a locus of contextualization and embodiment that grounds human activities and gives them meaning. In doing so, places acquire a meaning of their own, which facilitates, improves, and enriches many aspects of our lives. They provide us with a means to interpret the activities of others and to direct our own actions. 
Such meaning comprises the social and cultural conceptions and behaviors imprinted on the environment by the presence and activities of its inhabitants, who, in turn, ‘read’ them through their own corporeal embodiment of the same environment. This transactional relationship between the physical aspects of an environment, its social/cultural context, and our own embodiment of it combines to create what is known as a sense of place: the psychological, physical, social, and cultural framework that helps us interpret the world around us and directs our own behavior in it. In turn, it is our own (as well as others’) presence in that environment that gives it meaning and shapes its social/cultural character. By understanding the essence of place-ness in general, and in cyberspace in particular, we can create virtual places that better support Internet-based activities and make them equal to, and in some cases even better than, their physical counterparts. One of the activities that stands to benefit most from understanding the concept of cyber-places is learning—an interpersonal activity that requires the co-presence of others (a teacher and/or fellow learners), who can point out the difference between what matters and what does not, and produce an emotional involvement that helps students learn. Thus, while many administrators and educators rush to develop web-based remote learning sites to leverage the economic advantages of one-to-many learning modalities, these sites deprive learners of the contextualization and embodiment inherent in brick-and-mortar learning institutions, which are needed to support the activity of learning. Can these qualities be achieved in virtual learning environments? If so, how? 
These are some of the questions this talk will try to answer by presenting a virtual place-making methodology and its experimental implementation, intended to create a sense of place through contextualization and embodiment in virtual learning environments.
series CAADRIA
type normal paper
last changed 2022/06/07 07:52

_id ecaadesigradi2019_449
id ecaadesigradi2019_449
authors Becerra Santacruz, Axel
year 2019
title The Architecture of ScarCity Game - The craft and the digital as an alternative design process
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 3, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 45-52
doi https://doi.org/10.52842/conf.ecaade.2019.3.045
summary The Architecture of ScarCity Game is a board game used as a pedagogical tool that challenges architecture students by involving them in a series of experimental design sessions to understand the design process under scarcity and the actual relation between the craft and the digital. This means "pragmatic delivery processes and material constraints, where the exchange between the artisan of handmade, representing local skills and technology of the digitally conceived is explored" (Huang 2013). The game focuses on understanding the different variables of the crafted design process of traditional communities under conditions of scarcity (Michel and Bevan 1992). This requires first analyzing the spatial environmental model of interaction, the available human and natural resources, and the dynamic relationship of these variables in a digital era. In the first stage (Pre-Agency), the game sets the concept of the craft by limiting students' design exploration to a minimal perspective, developing locally available resources and techniques. The key elements of the design process of traditional knowledge communities have to be identified (Preez 1984). In other words, this stage is driven by limited resources + chance + contingency. In the second stage (Post-Agency), students, taking the architect's role within these communities, have to speculate and explore the interface between the craft (local knowledge and low-technological tools) and the digital, represented by computational data, newly available technologies and construction. This means the introduction of strategy + opportunity + chance as part of the design process. In this sense, the game has a life beyond its mechanics. This other life challenges the participants to exploit the possibilities of breaking the actual boundaries of design. The result is a tool that challenges conventional methods of teaching and learning that control a prescribed design process. 
It confronts the rules that professionals in this field take for granted. The game simulates a 'fake' reality by exploring surveyed information in different ways. As a result, participants do not have anything 'real' to lose. Instead, they have all the freedom to innovate and be creative.
keywords Global south, scarcity, low tech, digital-craft, design process and innovation by challenge.
series eCAADeSIGraDi
email
last changed 2022/06/07 07:54

_id b2f9
id b2f9
authors Bhzad Sidawi and Neveen Hamza
year 2012
title INTELLIGENT KNOWLEDGE-BASED REPOSITORY TO SUPPORT INFORMED DESIGN DECISION MAKING
source ITCON journal
summary Research highlights that architectural design is a social phenomenon that is underpinned by critical analysis of design precedents and by social interaction between designers, including negotiation, collaboration and communication. CAAD systems are continuously developing as essential design tools for formulating and developing ideas. Researchers such as Rosenman, Gero and Oxman (1992) have suggested that knowledge-based systems can be integrated with CAAD systems to provide design knowledge that would enable recalling design precedents that may be linked to the design constraints. Currently, CAAD systems are user-centric, focused on architects rather than on the end product. The systems provide limited assistance in the production of innovative design. Furthermore, the attention of the designers of knowledge-based systems is on providing a repository rather than a system capable of initiating innovation. Most CAAD systems have web communication tools that enable designers to communicate their design ideas with colleagues and business partners. However, none of these systems has the capability to capture useful knowledge from design negotiations. Students of the third to fifth year at the College of Architecture, University of Dammam were surveyed and interviewed to find out how far design tools, communications and resources would impact the production of innovative design projects. The survey results show that knowledge extracted from design negotiations would impact the innovative design outcome. They also highlight that present design precedents are not very helpful and that design negotiations between students, tutors and other students are not documented and thus not fully incorporated into the design scheme. The paper argues that future CAAD systems should be capable of recognizing innovative design precedents and of incorporating knowledge that results from design negotiations. 
This would help students to gain a critical mass of knowledge that would underpin informed design decisions.
series journal paper
type normal paper
email
more http://www.itcon.org/cgi-bin/works/Show?2012_20
last changed 2012/09/19 13:41

_id 4129
authors Fargas, Josep and Papazian, Pegor
year 1992
title Metaphors in Design: An Experiment with a Frame, Two Lines and Two Rectangles
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 13-22
doi https://doi.org/10.52842/conf.acadia.1992.013
summary The research we will discuss below originated from an attempt to examine the capacity of designers to evaluate an artifact, and to study the feasibility of replicating a designer's moves intended to make an artifact more expressive of a given quality. We will present the results of an interactive computer experiment, first developed at the MIT Design Research Seminar, which is meant to capture the subject’s actions in a simple design task as a series of successive "moves". We will propose that designers use metaphors in their interaction with design artifacts and we will argue that the concept of metaphors can lead to a powerful theory of design activity. Finally, we will show how such a theory can drive the project of building a design system.

When trying to understand how designers work, it is tempting to examine design products in order to come up with the principles or norms behind them. The problem with such an approach is that it may lead to a purely syntactical analysis of design artifacts, failing to capture the knowledge of the designer in an explicit way, and ignoring the interaction between the designer and the evolving design. We will present a theory about design activity based on the observation that knowledge is brought into play during a design task by a process of interpretation of the design document. By treating an evolving design in terms of the meanings and rules proper to a given way of seeing, a designer can reduce the complexity of a task by focusing on certain of its aspects, and can manipulate abstract elements in a meaningful way.

series ACADIA
email
last changed 2022/06/07 07:55

_id 2b7a
authors Ferguson, H., Rockwood, A. and Cox, J.
year 1992
title Topological Design of Sculptured Surfaces
source Computer Graphics, no. 26, pp.149-156
summary Topology is primal geometry. Our design philosophy embodies this principle. We report on a new surface design perspective based on a "marked" polygon for each object. The marked polygon captures the topology of the object surface. We construct multiply periodic mappings from polygon to sculptured surface. The mappings arise naturally from the topology and other design considerations. Hence we give a single-domain global parameterization for surfaces with handles. Examples demonstrate the design of sculptured objects and their manufacture.
series journal paper
last changed 2003/04/23 15:50

_id 68c8
authors Flemming, U., Coyne, R. and Fenves, S. (et al.)
year 1994
title SEED: A Software Environment to Support the Early Phases in Building Design
source Proceeding of IKM '94, Weimar, Germany, pp. 5-10
summary The SEED project intends to develop a software environment that supports the early phases in building design (Flemming et al., 1993). The goal is to provide support, in principle, for the preliminary design of buildings in all aspects that can gain from computer support. This includes using the computer not only for analysis and evaluation, but also more actively for the generation of designs, or more accurately, for the rapid generation of design representations. A major motivation for the development of SEED is to bring the results of two multi-generational research efforts focusing on `generative' design systems closer to practice: 1. LOOS/ABLOOS, a generative system for the synthesis of layouts of rectangles (Flemming et al., 1988; Flemming, 1989; Coyne and Flemming, 1990; Coyne, 1991); 2. GENESIS, a rule-based system that supports the generation of assemblies of 3-dimensional solids (Heisserman, 1991; Heisserman and Woodbury, 1993). The rapid generation of design representations can take advantage of special opportunities when it deals with a recurring building type, that is, a building type dealt with frequently by the users of the system. Design firms - from housing manufacturers to government agencies - accumulate considerable experience with recurring building types. But current CAD systems capture this experience and support its reuse only marginally. SEED intends to provide systematic support for the storing and retrieval of past solutions and their adaptation to similar problem situations. This motivation aligns aspects of SEED closely with current work in Artificial Intelligence that focuses on case-based design (see, for example, Kolodner, 1991; Domeshek and Kolodner, 1992; Hua et al., 1992).
series other
email
last changed 2003/04/23 15:14

_id ddss9214
id ddss9214
authors Friedman, A.
year 1993
title A decision-making process for choice of a flexible internal partition option in multi-unit housing using decision theory techniques
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary Recent demographic changes have increased the heterogeneity of user groups in the North American housing market. Smaller households (e.g. elderly, single parent) have non-traditional spatial requirements that cannot be accommodated within the conventional house layout. This has created renewed interest in Demountable/Flexible internal partition systems. However, the process by which designers decide which project or user groups are most suited for the use of these systems is quite often complex, non-linear, uncertain and dynamic, since the decisions involve natural processes and human values that are apparently random. The anonymity of users when mass housing projects are conceptualized, and the uncertainty as to the alternative to be selected by the user, given his/her constantly changing needs, are some contributing factors to this effect. Decision Theory techniques, not commonly used by architects, can facilitate the decision-making process through a systematic evaluation of alternatives by means of quantitative methods in order to reduce uncertainty in probabilistic events or in cases when data is insufficient. The author used Decision Theory in the selection of flexible partition systems. The study involved a multi-unit, privately initiated housing project in Montreal, Canada, where real site conditions and costs were used. In this paper, the author outlines the fundamentals of Decision Theory and demonstrates the use of Expected Monetary Value and Weighted Objective Analysis methods and their outcomes in the design of a Montreal housing project. The study showed that Decision Theory can be used as an effective tool in housing design once the designer knows how to collect basic data.
series DDSS
last changed 2003/08/07 16:36

_id ea96
authors Hacfoort, Eek J. and Veldhuisen, Jan K.
year 1992
title A Building Design and Evaluation System
source New York: John Wiley & Sons, 1992. pp. 195-211 : ill. table. includes bibliography
summary Within the field of architectural design there is a growing awareness of imbalance among the professionalism, the experience, and the creativity of the designers' response to the up-to-date requirements of all parties interested in the design process. The building design and evaluation system COSMOS makes it possible for various participants to work within their own domain, so that separate but coordinated work can be done. This system is meant to organize the initial stage of the design process, where user-defined functions, geometry, type of construction, and building materials are decided. It offers a tool for designing a building, calculating a number of effects, and managing the information necessary to evaluate the design decisions. The system is provided with data and sets of parameters describing the conditions, along with their properties, of the main building functions of a selection of well-known building types. The architectural design is conceptualized as a hierarchy of spatial units, ranging from building blocks down to specific rooms or spaces. The concept of zoning is used as a means of calculating and directly evaluating the structure of the design without working out the details. A distinction is made between internal and external calculations and evaluations during the initial design process. During design on screen, an estimate can be recorded of building costs, energy costs, acoustics, lighting, construction, and utility. Furthermore, the design can be exported to a design application program, in this case AutoCAD, to make and show drawings in more detail. Through the medium of a database, external calculation and evaluation of building costs, life-cycle costs, energy costs, interior climate, acoustics, lighting, construction, and utility are possible in much more advanced application programs.
keywords evaluation, applications, integration, architecture, design, construction, building, energy, cost, lighting, acoustics, performance
series CADline
last changed 2003/06/02 13:58

_id 32eb
authors Henry, Daniel
year 1992
title Spatial Perception in Virtual Environments : Evaluating an Architectural Application
source University of Washington
summary Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L.) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far into the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing: high-definition television, breathtaking cinematographic special effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract-looking objects with little color distinction, seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not yet been met with virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the "Walkthrough" at the University of North Carolina. 
The set-up at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 square feet) and does not require expensive and space-consuming hardware devices (such as the treadmill device for simulating walking). Now that the potential for using virtual environments in this architectural application is clear, it becomes important to verify that this tool succeeds in accurately representing space as intended. The purpose of this study is to verify that the perception of spaces is the same in both simulated and real environments. It is hoped that the findings of this study will guide and accelerate the process by which the technology makes its way into the field of architecture.
keywords Space Perception; Space (Architecture); Computer Simulation
series thesis:MSc
last changed 2003/02/12 22:37

_id cf2009_poster_09
id cf2009_poster_09
authors Hsu, Yin-Cheng
year 2009
title Lego Free-Form? Towards a Modularized Free-Form Construction
source T. Tidafi and T. Dorta (eds) Joining Languages Cultures and Visions: CAADFutures 2009 CD-Rom
summary Design media are the tools designers use for concept realization (Schon and Wiggins, 1992; Liu, 1996). The design thinking of designers is deeply affected by the media they tend to use (Zevi, 1981; Liu, 1996; Lim, 2003). Historically, architecture has been influenced by the design media available in each era (Liu, 1996; Porter and Neale, 2000; Smith, 2004). From the 2D plans first used in ancient Egypt, to the 3D physical models that came about during the Renaissance period, architecture reflects the media used for design. When breakthroughs in CAD/CAM technologies were brought to the world in the twentieth century, new possibilities opened up for architects.
keywords CAD/CAM free-form construction, modularization
series CAAD Futures
type poster
last changed 2009/07/08 22:12

_id 56e9
authors Huang, Tao-Kuang
year 1992
title A Graphical Feedback Model for Computerized Energy Analysis during the Conceptual Design Stage
source Texas A&M University
summary During the last two decades, considerable effort has been placed on the development of building design analysis tools. Architects and designers have begun to take advantage of computers to generate and examine design alternatives. However, because it has been difficult to adapt computer technologies to the visual orientation of the building designer, the majority of computer applications have been limited to numerical analysis and office automation tasks. Only recently, because of advances in hardware and software techniques, have computers entered a new phase in the development of architectural design. Computers are now able to interactively display graphical solutions to architecture-related problems, which is fundamental to the design process. The majority of research programs in energy-efficient design have sharpened people's understanding of energy principles and their application of those principles. Energy conservation concepts, however, have not been widely used. A major problem in the implementation of these principles is that energy principles and their applications are abstract, hard to visualize and separated from the architectural design process. Furthermore, one aspect of energy analysis may contain thousands of pieces of numerical information, which often leads to confusion on the part of designers. If these difficulties can be overcome, it would bring a great benefit to the advancement of energy conservation concepts. This research explores the concept of an integrated computer graphics program to support energy-efficient design. It focuses on (1) the integration of energy efficiency and architectural design, and (2) the visualization of building energy use through graphical interfaces during the conceptual design stage. 
It involves (1) a discussion of frameworks for computer-aided architectural design and computer-aided energy-efficient building design, and (2) the development of an integrated computer prototype program with a graphical interface that helps the designer create building layouts, analyze building energy interactively and receive visual feedback dynamically. The goal is to apply computer graphics as an aid to visualizing the effects of energy-related decisions and therefore permit the designer to visualize and understand energy conservation concepts in the conceptual phase of architectural design.
series thesis:PhD
last changed 2003/02/12 22:37

_id 88ca
authors Kane, Andy and Szalapaj, Peter
year 1992
title Teaching Design By Analysis of Precedents
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 477-496
doi https://doi.org/10.52842/conf.ecaade.1992.477
summary Designers, using their intuitive understanding of the decomposition of particular design objects, whether in terms of structural, functional, or some other analytical framework, should be able to interact with computational environments such that the understanding they achieve in turn invokes changes or transformations to the spatial properties of design proposals. Decompositions and transformations of design precedents can be a very useful method of enabling design students to develop analytical strategies. The benefit of an analytical approach is that it can lead to a structured understanding of design precedents. This in turn allows students to develop their own insights and ideas which are central to the activity of designing. The creation of a 3-D library of user-defined models of precedents in a computational environment permits an under-exploited method of undertaking analysis, since by modelling design precedents through the construction of 3-D Computer-Aided Architectural Design (CAAD) models, and then analytically decomposing them in terms of relevant features, significant insights into the nature of designs can be achieved. Using CAAD systems in this way, therefore, runs counter to the more common approach of detailed modelling, rendering and animation; which produces realistic pictures that do not reflect the design thinking that went into their production. The significance of the analytical approach to design teaching is that it encourages students to represent design ideas, but not necessarily the final form of design objects. The analytical approach therefore, allows students to depict features and execute tasks that are meaningful with respect to design students' own knowledge of particular domains. Such computational interaction can also be useful in helping students explore the consequences of proposed actions in actual design contexts.
series eCAADe
last changed 2022/06/07 07:52
