CumInCAD is a cumulative index of publications in Computer-Aided Architectural Design, supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 224

_id 60e7
authors Bailey, Rohan
year 2000
title The Intelligent Sketch: Developing a Conceptual Model for a Digital Design Assistant
doi https://doi.org/10.52842/conf.acadia.2000.137
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 137-145
summary The computer is a relatively new tool in the practice of Architecture. Since its introduction, there has been a desire amongst designers to use this new tool quite early in the design process. However, contrary to this desire, most Architects today use pen and paper in the very early stages of design to sketch. Architects solve problems by thinking visually. One of the most important tools that the Architect has at his disposal in the design process is the hand sketch. This iterative way of testing ideas and informing the design process with images fundamentally directs and aids the architect’s decision making. It has been said (Schön and Wiggins 1992) that sketching is about the reflective conversation designers have with images and ideas conveyed by the act of drawing. It is highly dependent on feedback. This “conversation” is an area worthy of investigation. Understanding this “conversation” is significant to understanding how we might apply the computer to enhance the designer’s ability to capture, manipulate and reflect on ideas during conceptual design. This paper discusses sketching and its relation to design thinking. It explores the conversations that designers engage in with the media they use. This is done through the explanation of a protocol analysis method. Protocol analysis, used in the field of psychology, has been used extensively by Eastman et al. (starting in the early 70s) as a method to elicit information about design thinking. In the pilot experiment described in this paper, two persons are used. One plays the role of the “hand” while the other is the “mind” - the two elements that are involved in the design “conversation”. This variation on classical protocol analysis sets out to discover how “intelligent” the hand should be to enhance design by reflection. The paper describes the procedures entailed in the pilot experiment and the resulting data.
The paper then concludes by discussing future intentions for research and the far-reaching possibilities for use of the computer in architectural studio teaching (as a teaching aid) as well as a digital design assistant in conceptual design.
keywords CAAD, Sketching, Protocol Analysis, Design Thinking, Design Education
series ACADIA
last changed 2022/06/07 07:54

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge.
While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of the cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking. The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine.
Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself" -- which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse.
These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added). On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit."
Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictatorship were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring.
Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest. What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. 
Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments, despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make them so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge.
This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id 10b7
authors Aukstakalnis, Steve and Blatner, David
year 1992
title Silicon Mirage: The Art and Science of Virtual Reality
source Peachpit Press
summary An introduction to virtual reality covers every aspect of the revolutionary new technology and its many possible applications, from computer games to air traffic control.
series other
last changed 2003/04/23 15:14

_id a6d8
authors Baletic, Bojan
year 1992
title Information Codes of Mutant Forms
doi https://doi.org/10.52842/conf.ecaade.1992.173
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 173-186
summary If we assume that the statements from this quote are true, then we have to ask ourselves the question: "Should we teach architecture as we do?" This paper describes our experience in developing a knowledge base using a neural network system to serve as an "intelligent assistant" to students and practicing architects in the conceptual phase of their work on housing design. Our approach concentrated on raising the awareness of the designer about the problem, not by building rules to guide him to a solution, but by questioning the categories and typologies by which he classifies and understands a problem. This we achieve through examples containing mutant forms, imperfect rules, gray zones between black and white, that carry the seeds of new solutions.
series eCAADe
email
last changed 2022/06/07 07:54

_id ecaadesigradi2019_449
id ecaadesigradi2019_449
authors Becerra Santacruz, Axel
year 2019
title The Architecture of ScarCity Game - The craft and the digital as an alternative design process
doi https://doi.org/10.52842/conf.ecaade.2019.3.045
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 3, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 45-52
summary The Architecture of ScarCity Game is a board game used as a pedagogical tool that challenges architecture students by involving them in a series of experimental design sessions to understand the design process under scarcity and the actual relation between the craft and the digital. This means "pragmatic delivery processes and material constraints, where the exchange between the artisan of handmade, representing local skills and technology of the digitally conceived is explored" (Huang 2013). The game focuses on understanding the different variables of the crafted design process of traditional communities under conditions of scarcity (Michel and Bevan 1992). This requires first analyzing the spatial environmental model of interaction, available human and natural resources, and the dynamic relationship of these variables in a digital era. In the first stage (Pre-Agency), the game sets the concept of the craft by limiting students' design exploration to a minimal perspective, developing locally available resources and techniques. The key elements of the design process of traditional knowledge communities have to be identified (Preez 1984). In other words, this stage is driven by limited resources + chance + contingency. In the second stage (Post-Agency), students, taking the architect's role within these communities, have to speculate and explore the interface between the craft (local knowledge and low technological tools) and the digital, represented by computational data, newly available technologies and construction. This means the introduction of strategy + opportunity + chance as part of the design process. In this sense, the game has a life beyond its mechanics. This other life challenges the participants to exploit the possibilities of breaking the actual boundaries of design. The result is a tool to challenge conventional methods of teaching and learning that prescribe and control the design process.
It confronts the rules that professionals in this field take for granted. The game simulates a 'fake' reality by exploring surveyed information in different ways. As a result, participants do not have anything 'real' to lose. Instead, they have all the freedom to innovate and be creative.
keywords Global south, scarcity, low tech, digital-craft, design process and innovation by challenge.
series eCAADeSIGraDi
email
last changed 2022/06/07 07:54

_id eabb
authors Boeykens, St., Geebelen, B. and Neuckermans, H.
year 2002
title Design phase transitions in object-oriented modeling of architecture
doi https://doi.org/10.52842/conf.ecaade.2002.310
source Connecting the Real and the Virtual - design e-ducation [20th eCAADe Conference Proceedings / ISBN 0-9541183-0-8] Warsaw (Poland) 18-20 September 2002, pp. 310-313
summary The project IDEA+ aims to develop an “Integrated Design Environment for Architecture”. Its goal is to provide a tool for the designer-architect that can be of assistance in the early design phases. It should provide the possibility to perform tests (like heat or cost calculations) and simple simulations in the different (early) design phases, without the need for a fully detailed design or remodeling in a different application. The test for daylighting is already in development (Geebelen, to be published). The conceptual foundation for this design environment has been laid out in a scheme in which different design phases and scales are defined, together with appropriate tests at the different levels (Neuckermans, 1992). It is a translation of the “designerly” way of thinking of the architect (Cross, 1982). This conceptual model has been translated into a “Core Object Model” (Hendricx, 2000), which defines a structured object model to describe the necessary building model. These developments form the theoretical basis for the implementation of IDEA+ (both the data structure & prototype software), which is currently in progress. The research project addresses some issues which are at the forefront of the architect’s interest while designing with CAAD. These are treated from the point of view of a practicing architect.
series eCAADe
email
last changed 2022/06/07 07:52

_id cef3
authors Bridges, Alan H.
year 1992
title Computing and Problem Based Learning at Delft University of Technology Faculty of Architecture
doi https://doi.org/10.52842/conf.ecaade.1992.289
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 289-294
summary Delft University of Technology, founded in 1842, is the oldest and largest technical university in the Netherlands. It provides education for more than 13,000 students in fifteen main subject areas. The Faculty of Architecture, Housing, Urban Design and Planning is one of the largest faculties of the DUT with some 2000 students and over 500 staff members. The course of study takes four academic years: a first year (Propaedeuse) and a further three years (Doctoraal) leading to the "ingenieur" qualification. The basic course material is delivered in the first two years and is taken by all students. The third and fourth years consist of a smaller number of compulsory subjects in each of the department's specialist areas together with a wide range of option choices. The five main subject areas the students may choose from for their specialisation are Architecture, Building and Project Management, Building Technology, Urban Design and Planning, and Housing.

The curriculum of the Faculty has been radically revised over the last two years and is now based on the concept of "Problem-Based Learning". The subject matter taught is divided thematically into specific issues that are taught in six week blocks. The vehicles for these blocks are specially selected and adapted case studies prepared by teams of staff members. These provide a focus for integrating specialist subjects around a studio based design theme. In the case of second year this studio is largely computer-based: many drawings are produced by computer and several specially written computer applications are used in association with the specialist inputs.

This paper describes the "block structure" used in second year, giving examples of the special computer programs used, but also raises a number of broader educational issues. Introduction of the block system arose as a method of curriculum integration in response to difficulties emerging from the independent functioning of strong discipline areas in the traditional work groups. The need for a greater level of self-directed learning was recognised as opposed to the "passive information model" of student learning, in which the students are seen as empty vessels to be filled with knowledge - which they are then usually unable to apply in design-related contexts in the studio. Furthermore, the value of electives had been questioned: whilst enabling some diversity of choice, they may also be seen as diverting attention and resources from the real problems of teaching architecture.

series eCAADe
email
last changed 2022/06/07 07:54

_id b4c4
authors Carrara, G., Fioravanti, A. and Novembri, G.
year 2000
title A framework for an Architectural Collaborative Design
doi https://doi.org/10.52842/conf.ecaade.2000.057
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 57-60
summary The building industry involves a larger number of disciplines, operators and professionals than other industrial processes. Its peculiarity is that the products (building objects) have a number of parts (building elements) that does not differ much from the number of classes into which building objects can be conceptually subdivided. Another important characteristic is that the building industry produces unique products (de Vries and van Zutphen, 1992). This is not an isolated situation but indeed one that is spreading also in other industrial fields. For example, production niches have proved successful in the automotive and computer industries (Carrara, Fioravanti, & Novembri, 1989). Building design is a complex multi-disciplinary process, which demands a high degree of co-ordination and co-operation among separate teams, each having its own specific knowledge and its own set of specific design tools. Establishing an environment for design tool integration is a prerequisite for network-based distributed work. Attempts were made to solve the problem of efficient, user-friendly and fast information exchange among operators by treating it simply as an exchange of data, but the failure of IGES, CGM and PHIGS confirms that data have different meanings and importance in different contexts. The STandard for Exchange of Product data, ISO 10303 Part 106 BCCM, relating to the AEC field (Wix, 1997), seems to be too complex to be applied in professional studios. Moreover, its structure is too deep and the conceptual classifications based on it do not allow multi-inheritance (Ekholm, 1996). From now on we shall adopt the BCCM semantics, which defines the actor as "a functional participant in building construction"; and we shall define the designer as "every member of the class formed by designers" (architects, engineers, town-planners, construction managers, etc.).
keywords Architectural Design Process, Collaborative Design, Knowledge Engineering, Dynamic Object Oriented Programming
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:55

_id 2325
authors Chilton, John C.
year 1992
title Computer Aided Structural Design in Architectural Instruction
doi https://doi.org/10.52842/conf.ecaade.1992.443
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 443-450
summary In schools of architecture there is a tendency to associate the use of computers solely with the production of graphic images as part of the architectural design process. However, if the architecture is to work as a building, it is also essential that technical aspects of the design are adequately investigated. One of the problem areas for most architectural students is structural design, and they are often reluctant to use hand calculations to determine sizes of structural elements within their projects. In recent years, much of the drudgery of hand calculation has been removed from the engineer by the use of computers, and this has, hopefully, allowed a more thorough investigation of conceptual ideas and alternatives. The same benefit is now becoming available to architectural students, in the form of structural analysis and design programs that can be used, even by those having a limited knowledge of structural engineering, to assess the stability of designs and obtain approximate sizes for individual structural elements. The paper discusses how the use of such programs is taught within the School of Architecture at Nottingham. Examples will be given of how they can assist students in the architectural design process. In particular, the application of GLULAM, a program for estimating sizes of laminated timber elements, and SAND, a structural analysis and design package, will be described.
series eCAADe
last changed 2022/06/07 07:55

_id 4857
authors Escola Tecnica Superior D'arquitectura de Barcelona (Ed.)
year 1992
title CAAD Instruction: The New Teaching of an Architect?
doi https://doi.org/10.52842/conf.ecaade.1992
source eCAADe Conference Proceedings / Barcelona (Spain) 12-14 November 1992, 551 p.
summary The involvement of computer graphic systems in the transmission of knowledge in the areas of urban planning and architectural design will bring a significant change to the didactic programs and methods of those schools which have decided to adopt these new instruments. Workshops of urban planning and architectural design will have to modify their structures, and teaching teams will have to revise their current programs. Some European schools and faculties of architecture have taken steps in this direction. Others are willing to join them.

This process is only delayed by the scarcity of material resources, and by the slowness with which a sufficient number of teachers are adopting these methods.

eCAADe has set out to analyze the state of this issue during its next conference, where it will be discussed from various points of view. From this confrontation of ideas will surely come the guidelines for progress in the years to come.

The different sessions will be grouped together following these four themes:

(A.) Multimedia and Course Work / State of the art of the synthesis of graphical and textual information favored by newly available multimedia computer programs. Their repercussions on academic programs.
(B.) The New Design Studio / Physical characteristics, data concentration and accessibility of a computerized studio can be better approached in a computerized workshop.
(C.) How to manage the new education system / Problems and possibilities raised, from the practical and organizational points of view, of architectural education by the introduction of computers in the classrooms.
(D.) CAAI. Formal versus informal structure / How will the traditional teaching structure be affected by the incidence of these new systems, in which the access to knowledge and information can be obtained in a random way and guided by personal and subjective criteria?

series eCAADe
email
last changed 2022/06/07 07:49

_id ddss9211
id ddss9211
authors Gilleard, J. and Olatidoye, O.
year 1993
title Graphical interfacing to a conceptual model for estimating the cost of residential construction
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary This paper presents a method for determining elemental square foot costs and cost significance for residential construction. Using AutoCAD's icon menu and dialogue box facilities, a non-expert may graphically select (i) residential configuration; (ii) construction quality level; (iii) geographical location; (iv) square foot area; and finally, (v) add-ons, e.g. porches and decks, basement, heating and cooling equipment, garages and carports, etc., in order to determine on-site builder's costs. Subsequent AutoLisp routines facilitate data transfer to a Lotus 1-2-3 spreadsheet, where an elemental cost breakdown for the project may be determined. Finally, using Lotus 1-2-3 macros, computed data is transferred back to AutoCAD, where all cost-significant items are graphically highlighted.
series DDSS
last changed 2003/08/07 16:36

_id 32eb
authors Henry, Daniel
year 1992
title Spatial Perception in Virtual Environments : Evaluating an Architectural Application
source University of Washington
summary Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L.) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far in the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing: high definition television, breathtaking cinematographic special effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract-looking objects of little color distinction, as seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not yet been met by virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the "Walkthrough" at the University of North Carolina.
The set-up at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 square feet) and does not require expensive and space-taking hardware devices (such as the treadmill device for simulating walking). Now that the potential for using virtual environments in this architectural application is clear, it becomes important to verify that this tool succeeds in accurately representing space as intended. The purpose of this study is to verify that the perception of spaces is the same in both simulated and real environments. It is hoped that the findings of this study will guide and accelerate the process by which the technology makes its way into the field of architecture.
keywords Space Perception; Space (Architecture); Computer Simulation
series thesis:MSc
last changed 2003/02/12 22:37

_id ascaad2006_paper18
id ascaad2006_paper18
authors Huang, Chie-Chieh
year 2006
title An Approach to 3D Conceptual Modelling
source Computing in Architecture / Re-Thinking the Discourse: The Second International Conference of the Arab Society for Computer Aided Architectural Design (ASCAAD 2006), 25-27 April 2006, Sharjah, United Arab Emirates
summary This article presents a 3D user interface required by the development of conceptual modeling. This 3D user interface provides a new structure for solving the problems of difficult interface operations and complicated commands that arise from applying a 2D CAD interface to control a 3D environment. The 3D user interface integrates the controlling actions of “seeing – moving – seeing” while designers are operating CAD (Schön and Wiggins, 1992). Simple gestures are used to control the operations instead. The interface also provides a spatial positioning method which helps designers eliminate the commands for converting a coordinate axis. The study aims to discuss the provision of more intuitively interactive control through CAD so as to fulfil the needs of designers. In our practices and experiments, a pair of LED gloves, captured by two CCD cameras, is used to sense the motions and positions of the hands in 3D. In addition, circuit design is applied to map hand motions, including selecting, browsing, zooming in / zooming out and rotating, to LED switches in different colours so as to identify the images.
series ASCAAD
email
last changed 2007/04/08 19:47

_id 56e9
authors Huang, Tao-Kuang
year 1992
title A Graphical Feedback Model for Computerized Energy Analysis during the Conceptual Design Stage
source Texas A&M University
summary During the last two decades, considerable effort has been placed on the development of building design analysis tools. Architects and designers have begun to take advantage of computers to generate and examine design alternatives. However, because it has been difficult to adapt computer technologies to the visual orientation of the building designer, the majority of computer applications have been limited to numerical analysis and office automation tasks. Only recently, because of advances in hardware and software techniques, have computers entered into a new phase in the development of architectural design. Computers are now able to interactively display graphical solutions to architecture-related problems, which is fundamental to the design process. The majority of research programs in energy efficient design have sharpened people's understanding of energy principles and their application of those principles. Energy conservation concepts, however, have not been widely used. A major problem in the implementation of these principles is that energy principles and their applications are abstract, hard to visualize and separated from the architectural design process. Furthermore, one aspect of energy analysis may contain thousands of pieces of numerical information, which often leads to confusion on the part of designers. If these difficulties can be overcome, it would bring a great benefit to the advancement of energy conservation concepts. This research explores the concept of an integrated computer graphics program to support energy efficient design. It focuses on (1) the integration of energy efficiency and architectural design, and (2) the visualization of building energy use through graphical interfaces during the conceptual design stage.
It involves (1) the discussion of frameworks for computer-aided architectural design and computer-aided energy efficient building design, and (2) the development of an integrated computer prototype program with a graphical interface that helps the designer create building layouts, analyze building energy interactively and receive visual feedback dynamically. The goal is to apply computer graphics as an aid to visualize the effects of energy-related decisions and therefore permit the designer to visualize and understand energy conservation concepts in the conceptual phase of architectural design.
series thesis:PhD
last changed 2003/02/12 22:37

_id caadria2004_k-1
id caadria2004_k-1
authors Kalay, Yehuda E.
year 2004
title CONTEXTUALIZATION AND EMBODIMENT IN CYBERSPACE
doi https://doi.org/10.52842/conf.caadria.2004.005
source CAADRIA 2004 [Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] Seoul Korea 28-30 April 2004, pp. 5-14
summary The introduction of VRML (Virtual Reality Modeling Language) in 1994, and other similar web-enabled dynamic modeling software (such as SGI’s Open Inventor and WebSpace), created a rush to develop on-line 3D virtual environments, with purposes ranging from art, to entertainment, to shopping, to culture and education. Some developers took their cues from the science fiction literature of Gibson (1984), Stephenson (1992), and others. Many were web-extensions to single-player video games. But most were created as a direct extension to our new-found ability to digitally model 3D spaces and to endow them with interactive control and pseudo-inhabitation. Surprisingly, this technologically-driven stampede paid little attention to the core principles of place-making and presence, derived from architecture and cognitive science, respectively: two principles that could and should inform the essence of the virtual place experience and help steer its development. Why are the principles of place-making and presence important for the development of virtual environments? Why not simply be content with our ability to create realistic-looking 3D worlds that we can visit remotely? What could we possibly learn about making these worlds better, had we understood the essence of place and presence? To answer these questions we cannot look at place-making (both physical and virtual) from a 3D space-making point of view alone, because places are not an end unto themselves. Rather, places must be considered a locus of contextualization and embodiment that grounds human activities and gives them meaning. In doing so, places acquire a meaning of their own, which facilitates, improves, and enriches many aspects of our lives. They provide us with a means to interpret the activities of others and to direct our own actions.
Such meaning comprises the social and cultural conceptions and behaviors imprinted on the environment by the presence and activities of its inhabitants, who, in turn, ‘read’ them through their own corporeal embodiment of the same environment. This transactional relationship between the physical aspects of an environment, its social/cultural context, and our own embodiment of it combines to create what is known as a sense of place: the psychological, physical, social, and cultural framework that helps us interpret the world around us and directs our own behavior in it. In turn, it is our own (as well as others’) presence in that environment that gives it meaning and shapes its social/cultural character. By understanding the essence of place-ness in general, and in cyberspace in particular, we can create virtual places that better support Internet-based activities, and make them equal to, in some cases even better than, their physical counterparts. One of the activities that stands to benefit most from understanding the concept of cyber-places is learning—an interpersonal activity that requires the co-presence of others (a teacher and/or fellow learners), who can point out the difference between what matters and what does not, and produce an emotional involvement that helps students learn. Thus, while many administrators and educators rush to develop web-based remote learning sites, to leverage the economic advantages of one-to-many learning modalities, these sites deprive learners of the contextualization and embodiment inherent in brick-and-mortar learning institutions, which are needed to support the activity of learning. Can these qualities be achieved in virtual learning environments? If so, how?
These are some of the questions this talk will try to answer by presenting a virtual place-making methodology and its experimental implementation, intended to create a sense of place through contextualization and embodiment in virtual learning environments.
series CAADRIA
type normal paper
last changed 2022/06/07 07:52

_id e7c8
authors Kalisperis, Loukas N., Steinman, Mitch and Summers, Luis H.
year 1992
title Design Knowledge, Environmental Complexity in Nonorthogonal Space
source New York: John Wiley & Sons, 1992. pp. 273-291 : ill. includes bibliography
summary Mechanization and industrialization of society have resulted in most people spending the greater part of their lives in enclosed environments. Optimal design of indoor artificial climates is therefore of increasing importance. Wherever artificial climates are created for human occupation, the aim is that the environment be designed so that individuals are in thermal comfort. Current design methodologies for radiant panel heating systems do not adequately account for the complexities of human thermal comfort, because they monitor air temperature alone and do not account for thermal neutrality in complex enclosures. Thermal comfort for a person is defined as that condition of mind which expresses satisfaction with the thermal environment. Thermal comfort is dependent on Mean Radiant Temperature and Operative Temperature, among other factors. In designing artificial climates for human occupancy, the interaction of the human with the heated surfaces, as well as the surface-to-surface heat exchange, must be accounted for. Early work in the area provided an elaborate and difficult method for calculating radiant heat exchange for simplistic and orthogonal enclosures. A new, improved method developed by the authors for designing radiant panel heating systems based on human thermal comfort and mean radiant temperature is presented. Through automation and elaboration this method overcomes the limitations of the early work. The design procedure accounts for human thermal comfort in nonorthogonal as well as orthogonal spaces based on mean radiant temperature prediction. The limitation of simplistic orthogonal geometries has been overcome with the introduction of the MRT-Correction method and an inclined surface-to-person shape factor methodology.
The new design method increases the accuracy of calculation and prediction of human thermal comfort and will allow designers to simulate complex enclosures utilizing the latest design knowledge of radiant heat exchange to increase human thermal comfort.
keywords applications, architecture, building, energy, systems, design, knowledge
series CADline
last changed 2003/06/02 10:24

_id e8f0
authors Mackey, David L.
year 1992
title Mission Possible: Computer Aided Design for Everyone
doi https://doi.org/10.52842/conf.acadia.1992.065
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 65-73
summary A pragmatic model for the building of an electronic architectural design curriculum which will offer students and faculty the opportunity to fully integrate information age technologies into the educational experience is becoming increasingly desirable.

The majority of architectural programs teach technology topics through content specific courses which appear as an educational sequence within the curriculum. These technology topics have traditionally included structural design, environmental systems, and construction materials and methods. Likewise, that course model has been broadly applied to the teaching of computer aided design, which is identified as a technology topic. Computer technology has resulted in a proliferation of courses which similarly introduce the student to computer graphic and design systems through a traditional course structure.

Inevitably, competition for priority arises within the curriculum, introducing the potential risk that otherwise valuable courses and/or course content will be replaced by the "newer" technology, and providing fertile ground for faculty and administrative resistance to computerization as traditional courses are pushed aside or seem threatened.

An alternative view is that computer technology is not a "topic", but rather the medium for creating a design (and studio) environment for informed decision making.... deciding what it is we should build. Such a viewpoint urges the development of a curricular structure, through which the impact of computer technology may be understood as that medium for design decision making, as the initial step in addressing the current and future needs of architectural education.

One example of such a program currently in place at the College of Architecture and Planning, Ball State University takes an approach which overlays, like a transparent tissue, the computer aided design content (or a computer emphasis) onto the primary curriculum.

With the exception of a general introductory course at the freshman level, computer instruction and content issues may be addressed effectively within existing studio courses. The level of operational and conceptual proficiency achieved by the student within an electronic design studio makes the electronic design environment self-sustaining and maintainable across the entire curriculum. The ability to broadly apply computer aided design to the educational experience can be independent of the availability of many specialized computer aided design faculty.

series ACADIA
last changed 2022/06/07 07:59

_id a582
authors Marshall, Tony B.
year 1992
title The Computer as a Graphic Medium in Conceptual Design
doi https://doi.org/10.52842/conf.acadia.1992.039
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 39-47
summary The success CAD has experienced in the architectural profession demonstrates that architects have been willing to replace traditional drafting media with computers and electronic plotters for the production of working drawings. Its expanded use in the design development phase for 3D modeling and rendering further justifies CAD's usefulness as a presentation medium. The schematic design phase however, has hardly been influenced by the evolution of CAD. Most architects simply have not come to view the computer as a viable design medium. One reason for this might be the strong correspondence between architectural CAD and plan view graphics, as used in working drawings, compared to the weak correspondence between architectural CAD and plan view graphics, as used in schematic design. The role of the actual graphic medium during schematic design should not be overlooked in the development of CAD applications.

In order to produce practical CAD applications for schematic design we must explore the computer’s potential as a form of expression and its role as a graphic medium. An examination of the use of traditional graphic media during schematic design will provide some clues regarding what capabilities CAD must provide and how a system should operate in order to be useful during conceptual design.

series ACADIA
last changed 2022/06/07 07:59

_id 8cf3
authors Müller, Volker
year 1992
title Reint-Ops: A Tool Supporting Conceptual Design
doi https://doi.org/10.52842/conf.acadia.1992.221
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 221-232
summary Reasoning is influenced by our perception of the environment. New aspects of our environment help to provoke new thoughts. Thus, changes in what is perceived can be assumed to stimulate the generation of new ideas as well. In CAD, computerized three-dimensional models of physical entities are produced. Their representation on the monitor is determined by our viewing position and by the rendering method used. Wire-frame representations of views especially lend themselves to a variety of readings, due to coincident and intersecting lines. Methods by which wire-frame views can be processed to extract the shapes that they contain have been investigated and developed. The extracted shapes can be used as a basis for the generation of derived entities through various operations that are called Reinterpretation Operations. They have been implemented as a prototypical extension (named Reint-Ops) to an existing modeling shell. Reint-Ops is a highly interactive, exploratory CAD tool which allows the user to customize the criteria and factors used in the reinterpretation process. This tool can be regarded as having the potential to support conceptual design investigations.
keywords CAD, Three-dimensional Model, Wireframe Representation, Shape Extraction, Generation of Derived Entities, Reinterpretation, Conceptual Design
series ACADIA
email
last changed 2022/06/07 07:59

_id cb5a
authors Oxman, Rivka E.
year 1992
title Multiple Operative and Interactive Modes in Knowledge-Based Design Systems
source New York: John Wiley & Sons, 1992. pp. 125-143 : ill. includes bibliography
summary A conceptual basis for the development of an expert system which is capable of integrating various modes of generation and evaluation in design is presented. This approach is based upon two sets of reasoning processes in the design system. The first enables a mapping between design requirements and solution descriptions in a generative mode of design; and the second enables a mapping between solution descriptions and performance evaluation in an evaluative and predictive mode. This concept supports a formal framework necessary for a knowledge-based design system to operate in a design partnership relation with the designer. Another fundamental concept in expert systems for design, dual direction interpretation between graphic and textual modes, is presented and elaborated. This encoding of knowledge behind the geometrical representation can be achieved in knowledge- based design systems by the development of a 'semantic interpreter' which supports a dual direction mapping process employing a geometrical knowledge, typological knowledge and evaluative knowledge. An implemented expert system for design, PREDIKT, demonstrates these concepts in the domain of kitchen design. It provides the user with a choice of alternative modes of interaction, such as: a 'design critic' for the evaluation of a design, a 'design generator' for the generation of a design, or a 'design critic-generator' for the completion of partial solutions
keywords architecture, knowledge base, design, systems, expert systems
series CADline
email
last changed 2003/06/02 10:24
