CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 247

_id avocaad_2001_20
id avocaad_2001_20
authors Shen-Kai Tang
year 2001
title Toward a procedure of computer simulation in the restoration of historical architecture
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In the field of architectural design, “visualization” generally refers to media that communicate and represent the ideas of designers, such as ordinary drafts, maps, perspectives, photos and physical models (Rahman, 1992; Susan, 2000). The main reason we adopt visualization is that it enables us to understand clearly and to control complicated procedures (Gombrich, 1990). Secondly, design knowledge comes more from published visualized images than from personal experience (Evans, 1989). The importance of visual representation is thus manifest. Owing to the development of computer technology in recent years, various computer-aided design systems have been invented and are widely used, such as image processing, computer graphics, computer modeling/rendering, animation, multimedia, virtual reality and collaboration (Lawson, 1995; Liu, 1996). Conventional media have largely been replaced by computer media, and visualization has been brought into the computerized stage. The procedure of visual impact analysis and assessment (VIAA), addressed by Rahman (1992), has been renewed and amended for the intervention of the computer (Liu, 2000). Based on the procedures above, a great number of applied studies have been carried out. It is therefore evident that computer visualization is helpful to discussion and evaluation during the design process (Hall, 1988, 1990, 1992, 1995, 1996, 1997, 1998; Liu, 1997; Sasada, 1986, 1988, 1990, 1993, 1997, 1998). In addition to the architectural design process, computer visualization is also applied to construction, which is repeatedly amended and corrected by the images of computer simulation (Liu, 2000). Potier (2000) probes into the contextual research and restoration of historical architecture through computer simulation before the practical restoration is carried out. In this way he established a communicative mode among archaeologists and architects via computer media. In research on the restoration and preservation of historical architecture in Taiwan, many scholars have devoted themselves to studies of historical contextual criticism (Shi, 1988, 1990, 1991, 1992, 1995; Fu, 1995, 1997; Chiu, 2000). Clues that accompany the historical contextual criticism (such as oral information, writings, photographs and pictures) help to explore the construction and the procedure of restoration (Hung, 1995), and serve as an aid to studies of the usage and durability of materials in the restoration of historical architecture (Dasser, 1990; Wang, 1998). Many clues are lost, because historical architecture is often age-old (Hung, 1995). Under these circumstances, restoration of historical architecture can only proceed from restricted pictures, written data and oral information (Shi, 1989). Therefore, scholars employ computer simulation to simulate the condition of historical architecture after restoration from this restricted information (Potier, 2000). Yet this is only the early stage of computer-aided restoration.
The paper explores whether computer visual simulation can help to investigate the practice of restoration and the estimation and evaluation after restoration. By examining the restoration of historical architecture (taking the Gigi Train Station destroyed by the earthquake last September as the operating example), this study aims to establish a complete framework for computer visualization, including the concept of restoration, the practice of restoration, and the estimation and evaluation of restoration. The research simulates the process of restoration through computer simulation based on visualized media (restricted pictures, restricted written data and restricted oral information) and the specialized experience of historical architects (Potier, 2000). During this process, the team communicates repeatedly with craftsmen about simulated alternatives, and uses the results as the basis for evaluating and adjusting the simulation process and outcome. In this way we arrive at a suitable and complete process of computer visualization for historical architecture. The significance of the paper is that every detail can be controlled more exactly, preventing possible problems during the restoration of historical architecture.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id avocaad_2001_17
id avocaad_2001_17
authors Ying-Hsiu Huang, Yu-Tung Liu, Cheng-Yuan Lin, Yi-Ting Cheng, Yu-Chen Chiu
year 2001
title The comparison of animation, virtual reality, and scenario scripting in design process
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary Design media are fundamental tools that can incubate concrete ideas from ambiguous concepts. Evolved from freehand sketches and physical models to computerized drafting, modeling (Dave, 2000), animations (Woo, et al., 1999), and virtual reality (Chiu, 1999; Klercker, 1999; Emdanat, 1999), different media are used to communicate with designers or users at different conceptual levels during the design process. Extensively employed in the design process, physical models help designers manage forms and spaces more precisely and more freely (Millon, 1994; Liu, 1996). Computerized drafting, models, animations, and VR have gradually replaced conventional media, freehand sketches and physical models. Diversely used in the design process, computerized media allow designers to handle more divergent levels of space than conventional media do. The rapid emergence of computers in the design process has ushered in efforts to examine the visual impact of these media (Rahman, 1992), with particular emphasis on computerized modeling and animation. Building on Rahman's study, Bai and Liu (1998) applied a new design medium, virtual reality, to the design process. In doing so, they proposed an evaluation process to examine the visual impact of this new medium in the design process. That same investigation pointed towards the facilitative role of computerized media in enhancing topical comprehension, concept realization, and development of ideas. Computer technology fosters the growth of emerging media. A new computerized medium, scenario scripting (Sasada, 2000; Jozen, 2000), markedly enhances computer animations and, in doing so, positively impacts design processes. For the three latest media, i.e., computerized animation, virtual reality, and scenario scripting, the following questions arise: What role does visual impact play in the different design phases of these media, and what is the origin of such an impact? Furthermore, what are the similarities and variances of computing techniques, principles of interaction, and practical applications among these computerized media? This study investigates the similarities and variances among computing techniques, interacting principles, and their applications in the above three media. Different computerized media in the design process are also adopted to explore related phenomena by using these three media in two projects. First, a renewal planning project for the old district of Hsinchu City is inspected, in which animations and scenario scripting are used. Second, the renewal project is compared with a progressive design project for the Hsinchu Digital Museum, as designed by Peter Eisenman. Finally, similarities and variances among these computerized media are discussed. This study also examines the visual impact of these three computerized media in the design process. In computerized animation, although other designers can realize the spatial concept in a design, users cannot fully comprehend the concept. On the other hand, media such as virtual reality and scenario scripting enable users to comprehend the designer's presentation more directly. Future studies should more closely examine how these three media impact the design process. This study not only provides further insight into the fundamental characteristics of the three computerized media discussed herein, but also enables designers to adopt different media in the design stages. Both designers and users can more fully understand design-related concepts.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 7291
authors Arvesen, Liv
year 1992
title Measures and the Unmeasurable
source Proceedings of the 4th European Full-Scale Modelling Conference / Lausanne (Switzerland) 9-12 September 1992, Part C, pp. 11-16
summary Nowhere do we find an environment quite like the one associated with the tea ceremony. We may learn from the tea masters as we learn from our masters of architecture. Directly and indirectly we are influenced by our surroundings, as research has proved and as we ourselves experience in daily life. Full-scale experiments have been made on this subject. In relation to the nervous mind, the experiments concentrated on forms expressing safety and peace.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
more http://info.tuwien.ac.at/efa
last changed 2003/08/25 10:12

_id b2f9
id b2f9
authors Bhzad Sidawi and Neveen Hamza
year 2012
title INTELLIGENT KNOWLEDGE-BASED REPOSITORY TO SUPPORT INFORMED DESIGN DECISION MAKING
source ITCON journal
summary Research highlights that architectural design is a social phenomenon underpinned by critical analysis of design precedents and by social interaction between designers, including negotiation, collaboration and communication. CAAD systems are continuously developing as essential design tools for formulating and developing ideas. Researchers such as Rosenman, Gero and Oxman (1992) have suggested that knowledge-based systems can be integrated with CAAD systems to provide design knowledge that would enable the recall of design precedents linked to the design constraints. Currently CAAD systems are user-centric, being focused on architects rather than the end product, and they provide limited assistance in the production of innovative design. Furthermore, the attention of the designers of knowledge-based systems is on providing a repository rather than a system capable of initiating innovation. Most CAAD systems have web communication tools that enable designers to communicate their design ideas with colleagues and business partners. However, none of these systems has the capability to capture useful knowledge from design negotiations. Students of the third to fifth year at the College of Architecture, University of Dammam, were surveyed and interviewed to find out how far design tools, communications and resources impact the production of innovative design projects. The survey results show that knowledge extracted from design negotiations would impact the innovative design outcome. They also highlight that present design precedents are not very helpful, and that design negotiations between students, tutors and other students are not documented and thus not fully incorporated into the design scheme. The paper argues that future CAAD systems should be capable of recognizing innovative design precedents and of incorporating knowledge that results from design negotiations. This would help students to gain a critical mass of knowledge to underpin informed design decisions.
series journal paper
type normal paper
email
more http://www.itcon.org/cgi-bin/works/Show?2012_20
last changed 2012/09/19 13:41

_id a93f
authors Eisenman, P.
year 1992
title Visions unfolding: architecture in the age of electronic media
source Domus, 1/92
summary During the fifty years since the Second World War, a paradigm shift has taken place that should have profoundly affected architecture: the shift from the mechanical paradigm to the electronic one. This change can be simply understood by comparing the impact of the role of the human subject on such primary modes of reproduction as the photograph and the fax; the photograph within the mechanical paradigm, the fax within the electronic one. In photographic reproduction the subject still maintains a controlled interaction with the object. A photograph can be developed with more or less contrast, texture or clarity.
series journal paper
last changed 2003/04/23 15:50

_id esaulov02_paper_eaea2007
id esaulov02_paper_eaea2007
authors Esaulov, G.V.
year 2008
title Videomodeling in Architecture. Introduction into Concerned Problems
source Proceedings of the 8th European Architectural Endoscopy Association Conference
summary Since its very first year, the Russian Academy of Architecture and Building Sciences, established in 1992 by presidential decree as the highest scientific and creative organization in the country, has paid much attention to supporting and developing fundamental investigations in architecture, town planning, building sciences, professional education and creative practice. The study of how the architectural idea is born, and the search for tools assisting the architect's creative activity and for opportunities to adequately transfer the architectural image to the potential consumer, are among the problems that constantly occupy the architectural community. Before turning to the conference, let us set out certain conditions that have a significant impact on the development of architectural and construction activity in modern Russia.
series EAEA
more http://info.tuwien.ac.at/eaea
last changed 2008/04/29 20:46

_id 4857
authors Escola Tecnica Superior D'arquitectura de Barcelona (Ed.)
year 1992
title CAAD Instruction: The New Teaching of an Architect?
source eCAADe Conference Proceedings / Barcelona (Spain) 12-14 November 1992, 551 p.
doi https://doi.org/10.52842/conf.ecaade.1992
summary The involvement of computer graphic systems in the transmission of knowledge in the areas of urban planning and architectural design will bring a significant change to the didactic programs and methods of those schools which have decided to adopt these new instruments. Workshops of urban planning and architectural design will have to modify their structures, and teaching teams will have to revise their current programs. Some european schools and faculties of architecture have taken steps in this direction. Others are willing to join them.

This process is only delayed by the scarcity of material resources, and by the slowness with which a sufficient number of teachers are adopting these methods.

ECAADE has set out to analyze the state of this issue during its next conference, and it will be discussed from various points of view. From this confrontation of ideas will come, surely, the guidelines for progress in the years to come.

The different sessions will be grouped together following these four themes:

(A.) Multimedia and Course Work / State of the art of the synthesis of graphical and textual information favored by new available multimedia computer programs. Their repercussions on academic programs. (B.) The New Design Studio / Physical characteristics, data concentration and accessibility of a computerized studio can be better approached in a computerized workshop. (C.) How to manage the new education system / Problems and possibilities raised, from the practical and organizational points of view, of architectural education by the introduction of computers in the classrooms. (D.) CAAI. Formal versus informal structure / How will the traditional teaching structure be affected by the incidence of these new systems in which the access to knowledge and information can be obtained in a random way and guided by personal and subjective criteria.

series eCAADe
email
last changed 2022/06/07 07:49

_id e412
authors Fargas, Josep and Papazian, Pegor
year 1992
title Modeling Regulations and Intentions for Urban Development: The Role of Computer Simulation in the Urban Design Studio
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 201-212
doi https://doi.org/10.52842/conf.ecaade.1992.201
summary In this paper we present a strategy for modeling urban development in order to study the role of urban regulations and policies in the transformation of cities. We also suggest a methodology for using computer models as experimental tools in the urban design studio in order to make explicit the factors involved in shaping cities, and for the automatic visualization of projected development. The structure of the proposed model is based on different modules which represent, on the one hand, the rules regulating the physical growth of a city and, on the other hand, heuristics corresponding to different interests such as Real Estate Developers, City Hall Planners, Advocacy and Community Groups, and so on. Here we present a case study dealing with the Boston Redevelopment Authority zoning code for the Midtown Cultural District of Boston. We introduce a computer program which develops the district, adopting a particular point of view regarding urban regulation. We then generalize the notion of this type of computer modeling and simulation, and draw some conclusions about its possible uses in the teaching and practice of design.
series eCAADe
email
last changed 2022/06/07 07:55
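
The modular structure described in the abstract above (regulation rules on one side, actor heuristics such as developers or planners on the other) can be illustrated with a minimal sketch. The Python below is a hypothetical toy, not code from the paper: a zoning rule caps building height while a developer heuristic keeps proposing taller buildings.

```python
# Hypothetical sketch of a rule-plus-heuristic urban growth model, loosely
# inspired by the modular structure described in the abstract above.

from dataclasses import dataclass

@dataclass
class Parcel:
    name: str
    area: float          # buildable footprint in square metres
    height: float = 0.0  # current built height in metres

def zoning_rule(parcel: Parcel, proposed_height: float, max_height: float = 30.0) -> float:
    """Regulation module: clamp a proposed height to the zoning cap."""
    return min(proposed_height, max_height)

def developer_heuristic(parcel: Parcel) -> float:
    """Actor module: a developer always proposes one more 10 m tier."""
    return parcel.height + 10.0

def simulate(parcels, steps=5):
    """Alternate proposals and regulation checks over several rounds."""
    for _ in range(steps):
        for p in parcels:
            p.height = zoning_rule(p, developer_heuristic(p))
    return parcels

if __name__ == "__main__":
    district = [Parcel("A", 800.0), Parcel("B", 1200.0)]
    for p in simulate(district):
        # Gross floor area as a crude proxy for development intensity (3 m per storey).
        print(p.name, f"height={p.height:.0f} m", f"GFA~{p.area * p.height / 3.0:.0f} m2")
```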

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barret. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and desseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge. While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of cognitive sciences are often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking. 
The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where largescale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine. Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried foward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"-- which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically- mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse. These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing, learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminancy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another, finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added). 
On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Steven discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit." Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictature were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic insitutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring. Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest. 
What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discoursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor is rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make it so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge. This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id 6cfd
authors Harfmann, Anton C. and Majkowski, Bruce R.
year 1992
title Component-Based Spatial Reasoning
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 103-111
doi https://doi.org/10.52842/conf.acadia.1992.103
summary The design process and ordering of individual components through which architecture is realized relies on the use of abstract "models" to represent a proposed design. The emergence and use of these abstract "models" for building representation has a long history and tradition in the field of architecture. Models have been made and continue to be made for the patron, occasionally the public, and as a guide for the builders. Models have also been described as a means to reflect on the design and to allow the design to be in dialogue with the creator.

The term "model" in the above paragraph has been used in various ways and in this context is defined as any representation through which design intent is expressed. This includes accurate/ rational or abstract drawings (2- dimensional and 3-dimensional), physical models (realistic and abstract) and computer models (solid, void and virtual reality). The various models that fall within the categories above have been derived from the need to "view" the proposed design in various ways in order to support intuitive reasoning about the proposal and for evaluation purposes. For example, a 2-dimensional drawing of a floor plan is well suited to support reasoning about spatial relationships and circulation patterns while scaled 3-dimensional models facilitate reasoning about overall form, volume, light, massing etc. However, the common denominator of all architectural design projects (if the intent is to construct them in actual scale, physical form) are the discrete building elements from which the design will be constructed. It is proposed that a single computational model representing individual components supports all of the above "models" and facilitates "viewing"' the design according to the frame of reference of the viewer.

Furthermore, it is the position of the authors that all reasoning stems from this rudimentary level of modeling individual components.

The concept of component representation has been derived from the fact that a "real" building (made from individual components such as nuts, bolts and bar joists) can be "viewed" differently according to the frame of reference of the viewer. Each individual has the ability to infer and abstract from the assemblies of components a variety of different "models" ranging from a visceral, experiential understanding to a very technical, physical understanding. The component concept has already proven to be a valuable tool for reasoning about assemblies, interferences between components, tracing of load path and numerous other component related applications. In order to validate the component-based modeling concept this effort will focus on the development of spatial understanding from the component-based model. The discussions will, therefore, center about the representation of individual components and the development of spatial models and spatial reasoning from the component model. In order to frame the argument that spatial modeling and reasoning can be derived from the component representation, a review of the component-based modeling concept will precede the discussions of spatial issues.

series ACADIA
email
last changed 2022/06/07 07:49
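
As a rough illustration of the single component-level model argued for above, the following hypothetical Python sketch (not from the paper) stores discrete components once and derives two different "views" from the same data: a 2D plan footprint for spatial reasoning and a volume total for quantity reasoning.

```python
# Hypothetical sketch: one component-level model, several derived "views".
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    x: float      # position of the component's minimum corner
    y: float
    z: float
    dx: float     # bounding-box dimensions
    dy: float
    dz: float

def plan_view(components, level_z: float):
    """Derive a 2D footprint (a spatial view) of all components crossing a level."""
    return [
        (c.name, (c.x, c.y, c.x + c.dx, c.y + c.dy))
        for c in components
        if c.z <= level_z <= c.z + c.dz
    ]

def total_volume(components):
    """Derive a quantity-style view from the same component data."""
    return sum(c.dx * c.dy * c.dz for c in components)

if __name__ == "__main__":
    model = [
        Component("column-1", 0, 0, 0.0, 0.4, 0.4, 3.0),
        Component("beam-1",   0, 0, 3.0, 6.0, 0.3, 0.5),
    ]
    print(plan_view(model, 1.5))   # only the column appears at mid-height
    print(total_volume(model))     # cubic metres across all components
```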

_id ed78
authors Jog, Bharati
year 1993
title Integration of Computer Applications in the Practice of Architecture
source Education and Practice: The Critical Interface [ACADIA Conference Proceedings / ISBN 1-880250-02-0] Texas (Texas / USA) 1993, pp. 89-97
doi https://doi.org/10.52842/conf.acadia.1993.089
summary Computer Applications in Architecture is emerging as an important aspect of our profession. The field, which is often referred to as Computer-Aided Architectural Design (CAAD) has had a notable impact on the profession and academia in recent years. A few professionals have predicted that as slide rules were replaced by calculators, in the coming years drafting boards and parallel bars will be replaced by computers. On the other hand, many architects do not anticipate such a drastic change in the coming decade as present CAD systems are supporting only a few integral aspects of architectural design. However, all agree that architecture curricula should be modified to integrate CAAD education.

In 1992-93, in the Department of Architecture of the 'School of Architecture and interior Design' at the University of Cincinnati, a curriculum committee was formed to review and modify the entire architecture curriculum. Since our profession and academia relate directly to each other, the author felt that while revising the curriculum, the committee should have factual information about CAD usage in the industry. Three ways to obtain such information were thought of, namely (1) conducting person to person or telephone interviews with the practitioners (2) requesting firms to give open- ended feed back and (3) surveying firms by sending a questionnaire. Of these three, the most effective, efficient and suitable method to obtain such information was an organized survey through a questionnaire. In mid December 1992, a survey was organized which was sponsored by the School of Architecture and Interior Design, the Center for the Study of the Practice of Architecture (CSPA) and the University Division of Professional Practice, all from the University of Cincinnati.

This chapter focuses on the results of this survey. A brief description of the survey design is also given. In the next section a few surveys organized in recent years are listed. In the third section the design of this survey is presented. The survey questions and their responses are given in the fourth section. The last section presents the conclusions and brief recommendations regarding computer curriculum in architecture.

series ACADIA
last changed 2022/06/07 07:52

_id caadria2004_k-1
id caadria2004_k-1
authors Kalay, Yehuda E.
year 2004
title CONTEXTUALIZATION AND EMBODIMENT IN CYBERSPACE
source CAADRIA 2004 [Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] Seoul Korea 28-30 April 2004, pp. 5-14
doi https://doi.org/10.52842/conf.caadria.2004.005
summary The introduction of VRML (Virtual Reality Markup Language) in 1994, and other similar web-enabled dynamic modeling software (such as SGI’s Open Inventor and WebSpace), have created a rush to develop on-line 3D virtual environments, with purposes ranging from art, to entertainment, to shopping, to culture and education. Some developers took their cues from the science fiction literature of Gibson (1984), Stephenson (1992), and others. Many were web-extensions to single-player video games. But most were created as a direct extension to our new-found ability to digitally model 3D spaces and to endow them with interactive control and pseudo-inhabitation. Surprisingly, this technologically-driven stampede paid little attention to the core principles of place-making and presence, derived from architecture and cognitive science, respectively: two principles that could and should inform the essence of the virtual place experience and help steer its development. Why are the principles of place-making and presence important for the development of virtual environments? Why not simply be content with our ability to create realistically-looking 3D worlds that we can visit remotely? What could we possibly learn about making these worlds better, had we understood the essence of place and presence? To answer these questions we cannot look at place-making (both physical and virtual) from a 3D space-making point of view alone, because places are not an end unto themselves. Rather, places must be considered a locus of contextualization and embodiment that ground human activities and give them meaning. In doing so, places acquire a meaning of their own, which facilitates, improves, and enriches many aspects of our lives. They provide us with a means to interpret the activities of others and to direct our own actions. Such meaning is comprised of the social and cultural conceptions and behaviors imprinted on the environment by the presence and activities of its inhabitants, who in turn, ‘read’ by them through their own corporeal embodiment of the same environment. This transactional relationship between the physical aspects of an environment, its social/cultural context, and our own embodiment of it, combine to create what is known as a sense of place: the psychological, physical, social, and cultural framework that helps us interpret the world around us, and directs our own behavior in it. In turn, it is our own (as well as others’) presence in that environment that gives it meaning, and shapes its social/cultural character. By understanding the essence of place-ness in general, and in cyberspace in particular, we can create virtual places that can better support Internet-based activities, and make them equal to, in some cases even better than their physical counterparts. One of the activities that stands to benefit most from understanding the concept of cyber-places is learning—an interpersonal activity that requires the co-presence of others (a teacher and/or fellow learners), who can point out the difference between what matters and what does not, and produce an emotional involvement that helps students learn. Thus, while many administrators and educators rush to develop webbased remote learning sites, to leverage the economic advantages of one-tomany learning modalities, these sites deprive learners of the contextualization and embodiment inherent in brick-and-mortar learning institutions, and which are needed to support the activity of learning. 
Can these qualities be achieved in virtual learning environments? If so, how? These are some of the questions this talk will try to answer by presenting a virtual place-making methodology and its experimental implementation, intended to create a sense of place through contextualization and embodiment in virtual learning environments.
series CAADRIA
type normal paper
last changed 2022/06/07 07:52

_id caadria2014_071
id caadria2014_071
authors Li, Lezhi; Renyuan Hu, Meng Yao, Guangwei Huang and Ziyu Tong
year 2014
title Sculpting the Space: A Circulation Based Approach to Generative Design in a Multi-Agent System
source Rethinking Comprehensive Design: Speculative Counterculture, Proceedings of the 19th International Conference on Computer-Aided Architectural Design Research in Asia (CAADRIA 2014) / Kyoto 14-16 May 2014, pp. 565–574
doi https://doi.org/10.52842/conf.caadria.2014.565
summary This paper discusses an MAS (multi-agent system) based approach to generating architectural spaces that afford better modes of human movement. To achieve this, a pedestrian simulation is carried out to record data on human spatial experience during the walking process. Unlike common practices of performance-oriented generation, where final results are achieved through cycles of simulation and comparison, what we propose here is to let human movement exert direct influence on space. We make this possible by asking "humans" to project simulation data onto their architectural surroundings, and thus cause the layout to change in order to afford what we designate as good spatial experiences. A generation experiment on an exhibition space is implemented to explore this approach, in which tentative rules of such spatial manipulation are proposed and tested through space syntax analysis. As the results suggest, by looking at spatial layouts through a lens of human behaviour, this projection-and-generation method provides some insight into space qualities that other methods could not have offered.
keywords Performance oriented generative design; projection; multi-agent system; pedestrian simulation; space syntax
series CAADRIA
email
last changed 2022/06/07 07:59
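
A minimal, hypothetical sketch of the projection idea described above: simulated pedestrians deposit visit counts on a grid, and heavily traversed cells are cleared of obstacles so that movement directly reshapes the layout. This is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch: pedestrian agents deposit "experience" on a grid, and
# cells that are heavily traversed are cleared of obstacles (space is "sculpted").
import random

W, H = 20, 10
occupancy = [[0] * W for _ in range(H)]        # visit counts per cell
obstacles = {(random.randrange(H), random.randrange(W)) for _ in range(25)}

def random_walk(steps=200):
    """One agent drifts from the left edge towards the right, leaving a trace."""
    r, c = H // 2, 0
    for _ in range(steps):
        occupancy[r][c] += 1
        r = max(0, min(H - 1, r + random.choice((-1, 0, 1))))
        c = max(0, min(W - 1, c + random.choice((0, 1))))

for _ in range(50):                             # simulate 50 agents
    random_walk()

# Projection step: remove obstacles from the most-used cells.
threshold = 5
obstacles = {(r, c) for (r, c) in obstacles if occupancy[r][c] < threshold}
print(f"{len(obstacles)} obstacles remain after projection")
```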

_id 8cf3
authors Müller, Volker
year 1992
title Reint-Ops: A Tool Supporting Conceptual Design
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 221-232
doi https://doi.org/10.52842/conf.acadia.1992.221
summary Reasoning is influenced by our perception of the environment. New aspects of our environment help to provoke new thoughts. Thus, changes of what is perceived can be assumed to stimulate the generation of new ideas, as well. In CAD, computerized three-dimensional models of physical entities are produced. Their representation on the monitor is determined by our viewing position and by the rendering method used. Especially the wire-frame representations of views lend themselves to a variety of readings, due to coincident and intersecting lines. Methods by which wire-frame views can be processed to extract the shapes that they contain have been investigated and developed. The extracted shapes can be used as a base for the generation of derived entities through various operations that are called Reinterpretation Operations. They have been implemented as a prototypical extension (named Reint-Ops) to an existing modeling shell. ReintOps is a highly interactive exploratory CAD tool, which allows the user to customize criteria and factors which are used in the reinterpretation process. This tool can be regarded as having a potential to support conceptual design investigations.
keywords CAD, Three-dimensional Model, Wireframe Representation, Shape Extraction, Generation of Derived Entities, Reinterpretation, Conceptual Design
series ACADIA
email
last changed 2022/06/07 07:59
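
The reinterpretation of wire-frame views starts from coincident and intersecting lines. The hypothetical sketch below (not the Reint-Ops code) merely finds the intersection points among projected 2D edges, the kind of seed geometry a reinterpretation operation could build new shapes from.

```python
# Hypothetical sketch: find intersection points among projected wire-frame edges.

def seg_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:                       # parallel or collinear: no single crossing
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

edges = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((0, 2), (4, 2))]
points = []
for i in range(len(edges)):
    for j in range(i + 1, len(edges)):
        p = seg_intersection(*edges[i], *edges[j])
        if p:
            points.append(p)
print(points)   # crossings where new shapes could be read into the view
```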

_id aa6d
authors Nichols, Foster Jr., Canete, Isabel J. and Tuladhar, Sagun
year 1992
title Designing for Pedestrians : A CAD-Network Analysis Approach
source New York: John Wiley & Sons, 1992. pp. 379-398 : ill. includes a short bibliography
summary Microcomputer techniques have been developed that combine CAD drawings with transportation network analysis software using spreadsheets and stand-alone programs run from the DOS operating system. The CAD feature simplifies and improves the methods used to design pedestrian circulation facilities and to evaluate the impact of new development on existing pedestrian flows. Through the use of customized software, the need for manual data entry is reduced, and the graphical display of analysis results at most intermediate steps in the process is automated. Three hypothetical case studies are presented, concentrating on proposed pedestrian circulation improvements at Penn Station, New York.
keywords evaluation, networks, management, CAD, analysis, applications, planning, transportation, prediction, simulation
series CADline
last changed 2003/06/02 13:58
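
The abstract does not give the analysis procedure, but the general idea of network-based pedestrian analysis can be sketched as follows: assign trip demand to shortest paths on a small walkway network and report link volumes, which a CAD overlay could then display. The network, demand figures and function names below are illustrative assumptions.

```python
# Hypothetical sketch: assign pedestrian demand to shortest paths on a small
# walkway network and report link volumes for graphical display.
import heapq
from collections import defaultdict

# Walkway links: (from, to): length in metres (treated as bidirectional).
links = {("entrance", "concourse"): 40, ("concourse", "platform"): 60,
         ("entrance", "corridor"): 30, ("corridor", "platform"): 90}
graph = defaultdict(list)
for (a, b), d in links.items():
    graph[a].append((b, d))
    graph[b].append((a, d))

def shortest_path(origin, dest):
    """Plain Dijkstra returning the node sequence of the shortest path."""
    heap, seen = [(0, origin, [origin])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dest:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d in graph[node]:
            if nxt not in seen:
                heapq.heappush(heap, (dist + d, nxt, path + [nxt]))
    return []

demand = {("entrance", "platform"): 1200}        # pedestrians per peak hour
volume = defaultdict(int)
for (o, d), trips in demand.items():
    path = shortest_path(o, d)
    for a, b in zip(path, path[1:]):
        volume[tuple(sorted((a, b)))] += trips   # accumulate flow on each link

for link, v in volume.items():
    print(link, f"{v} ped/h")
```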

_id 054b
authors Peitgen, H.-O., Jürgens, H. and Saupe, D.
year 1992
title Fractals for the Classroom. Part 1: Introduction to Fractals and Chaos
source Springer Verlag, New York
summary Fractals for the Classroom breaks new ground as it brings an exciting branch of mathematics into the classroom. The book is a collection of independent chapters on the major concepts related to the science and mathematics of fractals. Written at the mathematical level of an advanced secondary student, Fractals for the Classroom includes many fascinating insights for the classroom teacher and integrates illustrations from a wide variety of applications with an enjoyable text to help bring the concepts alive and make them understandable to the average reader. This book will have a tremendous impact upon teachers, students, and the mathematics education of the general public. With the forthcoming companion materials, including four books on strategic classroom activities and lessons with interactive computer software, this package will be unparalleled.
series other
last changed 2003/04/23 15:14
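
For flavour, a standard escape-time rendering of the Mandelbrot set, one of the classic classroom fractal constructions; this sketch is generic and not taken from the book.

```python
# A standard escape-time rendering of the Mandelbrot set, printed as text.

def mandelbrot(width=60, height=24, max_iter=30):
    for row in range(height):
        line = ""
        for col in range(width):
            # Map the character grid onto the complex plane.
            c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:      # escaped: the point is outside the set
                    line += " "
                    break
            else:
                line += "*"           # never escaped within max_iter: inside
        print(line)

if __name__ == "__main__":
    mandelbrot()
```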

_id c804
authors Richens, P.
year 1994
title Does Knowledge really Help?
source G. Carrara and Y.E. Kalay (Eds.), Knowledge-Based Computer-Aided Architectural Design, Elsevier
summary The Martin Centre CADLAB has recently been established to investigate software techniques that could be of practical importance to architects within the next five years. In common with most CAD researchers, we are interested in the earlier, conceptual, stages of design, where commercial CAD systems have had little impact. Our approach is not Knowledge-Based, but rather focuses on using the computer as a medium for design and communication. This leads to a concentration on apparently superficial aspects such as visual appearance, the dynamics of interaction, immediate feedback, plasticity. We try to avoid building-in theoretical attitudes, and to reduce the semantic content of our systems to a low level on the basis that flexibility and intelligence are inversely related; and that flexibility is more important. The CADLAB became operational in January 1992. First year work in three areas – building models, experiencing architecture, and making drawings – is discussed.
series other
more http://www.arct.cam.ac.uk/research/pubs/
last changed 2003/03/05 13:19

_id daff
authors Richens, P.
year 1994
title CAD Research at the Martin Centre
source Automation in Construction, No. 3
summary The Martin Centre CADLAB has recently been established to investigate software techniques that could be of practical importance to architects within the next five years. In common with most CAD researchers, we are interested in the earlier, conceptual, stages of design, where commercial CAD systems have had little impact. Our approach is not Knowledge-Based, but rather focuses on using the computer as a medium for design and communication. This leads to a concentration on apparently superficial aspects such as visual appearance, the dynamics of interaction, immediate feedback, plasticity. We try to avoid building-in theoretical attitudes, and to reduce the semantic content of our systems to a low level on the basis that flexibility and intelligence are inversely related; and that flexibility is more important. The CADLAB became operational in January 1992. First year work in three areas – building models, experiencing architecture, and making drawings – is discussed.
series journal
email
more http://www.arct.cam.ac.uk/research/pubs/pdfs/rich94a.pdf
last changed 2000/03/05 19:05

_id e87d
authors Schierle, G. Goetz
year 1992
title Computer Aided Design for Wind and Seismic Forces
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 187-194
doi https://doi.org/10.52842/conf.acadia.1992.187
summary A computer program, Lateral Design Graphs (LDG), for considering lateral wind and seismic forces in the early design stages, is presented. LDG provides numeric data and graphs to visualize the effect of building height, shape, and framing system on lateral forces. Many critical decisions affecting lateral forces and the elements that resist them are made at early design stages; costly changes or reduced safety may result if they are not considered. For example, building height, shape and configuration impact lateral forces and building safety, as does the placement of shear walls in line with space needs. But the complex and time-consuming nature of lateral force design by hand often makes early consideration impractical. The objectives of LDG are therefore to: 1) visualize the cause and effect of lateral forces; 2) make the design process more transparent; 3) develop informed intuition; 4) facilitate trade-off studies at an early stage; 5) help to teach design for lateral forces.
series ACADIA
email
last changed 2022/06/07 07:57
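
The abstract does not state LDG's formulas. As a generic illustration of the kind of early-stage check it addresses, the sketch below uses a textbook-style equivalent-lateral-force calculation: base shear as a seismic coefficient times building weight, distributed over storeys in proportion to weight times height. The coefficient and storey data are made-up numbers, and this is not LDG's own method.

```python
# Generic equivalent-lateral-force style calculation, shown only to illustrate
# the kind of early-design check discussed above; all numbers are illustrative.

def storey_forces(storey_weights, storey_heights, seismic_coefficient=0.1):
    """Base shear V = Cs * W, distributed in proportion to weight * height."""
    W = sum(storey_weights)
    V = seismic_coefficient * W
    wh = [w * h for w, h in zip(storey_weights, storey_heights)]
    return V, [V * x / sum(wh) for x in wh]

if __name__ == "__main__":
    weights = [2000, 2000, 1500]        # kN per storey
    heights = [4.0, 8.0, 12.0]          # metres above grade
    V, forces = storey_forces(weights, heights)
    print(f"base shear V = {V:.0f} kN")
    for level, f in enumerate(forces, 1):
        print(f"level {level}: {f:.0f} kN")
```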

_id 831d
authors Seebohm, Thomas
year 1992
title Discoursing on Urban History Through Structured Typologies
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 157-175
doi https://doi.org/10.52842/conf.acadia.1992.157
summary How can urban history be studied with the aid of three-dimensional computer modeling? One way is to model known cities at various times in history, using historical records as sources of data. While such studies greatly enhance the understanding of the form and structure of specific cities at specific points in time, it is questionable whether such studies actually provide a true understanding of history. It can be argued that they do not because such studies only show a record of one of many possible courses of action at various moments in time. To gain a true understanding of urban history one has to place oneself back in historical time to consider all of the possible courses of action which were open in the light of the then current situation of the city, to act upon a possible course of action and to view the consequences in the physical form of the city. Only such an understanding of urban history can transcend the memory of the actual and hence the behavior of the possible. Moreover, only such an understanding can overcome the limitations of historical relativism, which contends that historical fact is of value only in historical context, with the realization, due to Benedetto Croce and echoed by Rudolf Bultmann, that the horizon of "'deeper understanding" lies in "'the actuality of decision"' (Seebohm and van Pelt 1990).

One cannot conduct such studies on real cities except, perhaps, as a point of departure at some specific point in time to provide an initial layout for a city knowing that future forms derived by the studies will diverge from that recorded in history. An entirely imaginary city is therefore chosen. Although the components of this city at the level of individual buildings are taken from known cities in history, this choice does not preclude alternative forms of the city. To some degree, building types are invariants and, as argued in the Appendix, so are the urban typologies into which they may be grouped. In this imaginary city students of urban history play the role of citizens or groups of citizens. As they defend their interests and make concessions, while interacting with each other in their respective roles, they determine the nature of the city as it evolves through the major periods of Western urban history in the form of threedimensional computer models.

My colleague R.J. van Pelt and I presented this approach to the study of urban history previously at ACADIA (Seebohm and van Pelt 1990). Yet we did not pay sufficient attention to the manner in which such urban models should be structured and how the efforts of the participants should be coordinated. In the following sections I therefore review what the requirements are for three-dimensional modeling to support studies in urban history as outlined both from the viewpoint of file structure of the models and other viewpoints which have bearing on this structure. Three alternative software schemes of progressively increasing complexity are then discussed with regard to their ability to satisfy these requirements. This comparative study of software alternatives and their corresponding file structures justifies the present choice of structure in relation to the simpler and better known generic alternatives which do not have the necessary flexibility for structuring the urban model. Such flexibility means, of course, that in the first instance the modeling software is more timeconsuming to learn than a simple point and click package in accord with the now established axiom that ease of learning software tools is inversely related to the functional power of the tools. (Smith 1987).

series ACADIA
email
last changed 2022/06/07 07:56
