CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures

Hits 1 to 20 of 246

_id e039
authors Bertin, Vito
year 1992
title Structural Transformations (Basic Architectural Unit 6)
doi https://doi.org/10.52842/conf.ecaade.1992.413
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 413-426
summary While the teaching of the phenomenon of form as well as space is normally seen within an environment of free experimentation and personal expression, other directions prove to be worthy of pursuit. The proposed paper represents such an exploration. The generation of controlled complexity and structural transformations has been the title of the project which forms the basis of this paper. In it, the potential for creative development of the student was explored in such a way that, as in the sciences, a process can be reproduced or an exploration utilized in further experimentation. The cube, as a well-proven B.A.U. or basic architectural unit, has again been used in our work. Even a simple object like a cube has many properties. As properties are never pure, but always related to other properties, looking at a single property as a specific value of a variable makes it possible to link a whole field of objects. These links provide a network of paths through which exploration and development are possible. The paper represents a first step in a direction which we think will complement the already established basic design program.

series eCAADe
email
last changed 2022/06/07 07:52

_id cf2009_poster_09
id cf2009_poster_09
authors Hsu, Yin-Cheng
year 2009
title Lego Free-Form? Towards a Modularized Free-Form Construction
source T. Tidafi and T. Dorta (eds) Joining Languages Cultures and Visions: CAADFutures 2009 CD-Rom
summary Design Media is the tool designers use for concept realization (Schon and Wiggins, 1992; Liu, 1996). The design thinking of designers is deeply affected by the media they tend to use (Zevi, 1981; Liu, 1996; Lim, 2003). Historically, architecture is influenced by the design media that were available within that era (Liu, 1996; Porter and Neale, 2000; Smith, 2004). From the 2D plans first used in ancient Egypt, to the 3D physical models that came about during the Renaissance period, architecture reflects the media used for design. When breakthroughs in CAD/CAM technologies were brought to the world in the twentieth century, new possibilities opened up for architects.
keywords CAD/CAM free-form construction, modularization
series CAAD Futures
type poster
last changed 2009/07/08 22:12

_id 181b
authors Liou, Shuenn-Ren
year 1992
title A computer-based framework for analyzing and deriving the morphological structure of architectural designs
source University of Michigan
summary An approach to the acquisition and utilization of knowledge about the morphological structure of notable orthogonal building plans and other two-dimensional compositions is formulated and tested. This approach consists of two levels of abstraction within which the analysis and comparison of existing designs and the derivation of new designs can be undertaken systematically and efficiently. Specifically, the morphological structure of orthogonal building plans and other two-dimensional compositions is conceived as a language defined by shape grammar and architectural grammar corresponding to the geometric and spatial structures of the compositions. Lines constitute the shape grammar and walls and columns the architectural grammar. A computer program named ANADER is designed and implemented using the C++ object-oriented language to describe feasible compositions. It is argued that the gap between morphological analysis and synthesis is bridged partially because the proposed framework facilitates systematic comparisons of the morphological structures of two-dimensional orthogonal compositions and provides insight into the form-making process used to derive them. As an analytical system, the framework contributes to the generation of new and the assessment of existing morphological knowledge. Specifically, it is demonstrated that it is feasible to specify an existing architectural design by a set of universal rule schemata and the sequence of their application. As a generative system, the framework allows many of the tasks involved in the derivation of two-dimensional orthogonal compositions to be carried out. As well, it promotes the use of analytical results. In conclusion, it is argued that the proposed computer-based framework will provide the researcher and the educator with increasing opportunities for addressing persistent architectural questions in new ways.
Of particular interest to this author are questions concerning the decision-making activities involved in form- and space-making as well as the description, classification, and derivation of architectural form and space. It is suggested that, at least in reference to the cases examined, but probably also in reference to many other morphological classes, these and other related questions can be addressed systematically, efficiently, and fruitfully by using the proposed framework.
series thesis:PhD
last changed 2003/02/12 22:37

_id 2312
authors Carrara, G., Kalay Y.E. and Novembri, G.
year 1992
title Multi-modal Representation of Design Knowledge
doi https://doi.org/10.52842/conf.ecaade.1992.055
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 55-66
summary Explicit representation of design knowledge is needed if scientific methods are to be applied in design research, and if computers are to be used in aid of design education and practice. The representation of knowledge in general, and design knowledge in particular, has been the subject matter of computer science, design methods, and computer-aided design research for quite some time. Several models of design knowledge representation have been developed over the last 30 years, addressing specific aspects of the problem. This paper describes a different approach to design knowledge representation that recognizes the multimodal nature of design knowledge. It uses a variety of computational tools to encode different kinds of design knowledge, including the descriptive (objects), the prescriptive (goals) and the operational (methods) kinds. The representation is intended to form a parsimonious, communicable and presentable knowledge-base that can be used as a tool for design research and education as well as for CAAD.
keywords Design Methods, Design Process, Goals, Knowledge Representation, Semantic Networks
series eCAADe
email
last changed 2022/06/07 07:55

_id 6ef4
authors Carrara, Gianfranco and Kalay, Yehuda E.
year 1992
title Multi-Modal Representation of Design Knowledge
doi https://doi.org/10.52842/conf.acadia.1992.077
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 77-88
summary Explicit representation of design knowledge is needed if scientific methods are to be applied in design research, and if computers are to be used in aid of design education and practice. The representation of knowledge in general, and design knowledge in particular, has been the subject matter of computer science, design methods, and computer-aided design research for quite some time. Several models of design knowledge representation have been developed over the last 30 years, addressing specific aspects of the problem. This paper describes a different approach to design knowledge representation that recognizes the multi-modal nature of design knowledge. It uses a variety of computational tools to encode different kinds of design knowledge, including the descriptive (objects), the prescriptive (goals) and the operational (methods) kinds. The representation is intended to form a parsimonious, communicable and presentable knowledge-base that can be used as a tool for design research and education as well as for CAAD.
keywords Design Methods, Design Process, Goals, Knowledge Representation, Semantic Networks
series ACADIA
email
last changed 2022/06/07 07:55

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge.
While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking. The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine.
Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"-- which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically-mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse.
These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing, learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added). On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit."
Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictatorship were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring.
Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest. What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. 
Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make it so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge.
This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id 32eb
authors Henry, Daniel
year 1992
title Spatial Perception in Virtual Environments : Evaluating an Architectural Application
source University of Washington
summary Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L.) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far in the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing: high-definition television, breathtaking special cinematographic effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract-looking objects with little color distinction as seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not yet been met with virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the "Walkthrough" at the University of North Carolina.
The set-up at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 square feet) and it does not require expensive and space-consuming hardware devices (such as the treadmill device for simulating walking). Now that the potential for using virtual environments in this architectural application is clear, it becomes important to verify that this tool succeeds in accurately representing space as intended. The purpose of this study is to verify that the perception of spaces is the same in both simulated and real environments. It is hoped that the findings of this study will guide and accelerate the process by which the technology makes its way into the field of architecture.
keywords Space Perception; Space (Architecture); Computer Simulation
series thesis:MSc
last changed 2003/02/12 22:37

_id 68c8
authors Flemming, U., Coyne, R. and Fenves, S. (et al.)
year 1994
title SEED: A Software Environment to Support the Early Phases in Building Design
source Proceeding of IKM '94, Weimar, Germany, pp. 5-10
summary The SEED project intends to develop a software environment that supports the early phases in building design (Flemming et al., 1993). The goal is to provide support, in principle, for the preliminary design of buildings in all aspects that can gain from computer support. This includes using the computer not only for analysis and evaluation, but also more actively for the generation of designs, or more accurately, for the rapid generation of design representations. A major motivation for the development of SEED is to bring the results of two multi-generational research efforts focusing on `generative' design systems closer to practice: 1. LOOS/ABLOOS, a generative system for the synthesis of layouts of rectangles (Flemming et al., 1988; Flemming, 1989; Coyne and Flemming, 1990; Coyne, 1991); 2. GENESIS, a rule-based system that supports the generation of assemblies of 3-dimensional solids (Heisserman, 1991; Heisserman and Woodbury, 1993). The rapid generation of design representations can take advantage of special opportunities when it deals with a recurring building type, that is, a building type dealt with frequently by the users of the system. Design firms - from housing manufacturers to government agencies - accumulate considerable experience with recurring building types. But current CAD systems capture this experience and support its reuse only marginally. SEED intends to provide systematic support for the storing and retrieval of past solutions and their adaptation to similar problem situations. This motivation aligns aspects of SEED closely with current work in Artificial Intelligence that focuses on case-based design (see, for example, Kolodner, 1991; Domeshek and Kolodner, 1992; Hua et al., 1992).
series other
email
last changed 2003/04/23 15:14

_id ea96
authors Hacfoort, Eek J. and Veldhuisen, Jan K.
year 1992
title A Building Design and Evaluation System
source New York: John Wiley & Sons, 1992. pp. 195-211 : ill. table. includes bibliography
summary Within the field of architectural design there is a growing awareness of imbalance among the professionalism, the experience, and the creativity of the designers' response to the up-to-date requirements of all parties interested in the design process. The building design and evaluation system COSMOS makes it possible for various participants to work within their own domain, so that separated but coordinated work can be done. This system is meant to organize the initial stage of the design process, where user-defined functions, geometry, type of construction, and building materials are decided. It offers a tool to design a building, to calculate a number of effects, and to manage the information necessary to evaluate the design decisions. The system is provided with data and sets of parameters for describing the conditions, along with their properties, of the main building functions of a selection of well-known building types. The architectural design is conceptualized as being a hierarchy of spatial units, ranking from building blocks down to specific rooms or spaces. The concept of zoning is used as a means of calculating and directly evaluating the structure of the design without working out the details. A distinction is made between internal and external calculations and evaluations during the initial design process. During design on screen, an estimation can be recorded of building costs, energy costs, acoustics, lighting, construction, and utility. Furthermore, the design can be exported to a design application program, in this case AutoCAD, to make and show drawings in more detail. Through the medium of a database, external calculation and evaluation of building costs, life-cycle costs, energy costs, interior climate, acoustics, lighting, construction, and utility are possible in much more advanced application programs.
keywords evaluation, applications, integration, architecture, design, construction, building, energy, cost, lighting, acoustics, performance
series CADline
last changed 2003/06/02 13:58

_id caadria2014_071
id caadria2014_071
authors Li, Lezhi; Renyuan Hu, Meng Yao, Guangwei Huang and Ziyu Tong
year 2014
title Sculpting the Space: A Circulation Based Approach to Generative Design in a Multi-Agent System
doi https://doi.org/10.52842/conf.caadria.2014.565
source Rethinking Comprehensive Design: Speculative Counterculture, Proceedings of the 19th International Conference on Computer-Aided Architectural Design Research in Asia (CAADRIA 2014) / Kyoto 14-16 May 2014, pp. 565–574
summary This paper discusses an MAS (multi-agent system) based approach to generating architectural spaces that afford better modes of human movement. To achieve this, a pedestrian simulation is carried out to record the data with regard to human spatial experience during the walking process. Unlike common practices of performance-oriented generation where final results are achieved through cycles of simulation and comparison, what we propose here is to let human movement exert direct influence on space. We made this possible by asking "humans" to project simulation data on architectural surroundings, and thus cause the layout to change for the purpose of affording what we designate as good spatial experiences. A generation experiment of an exhibition space is implemented to explore this approach, in which tentative rules of such spatial manipulation are proposed and tested through space syntax analysis. As the results suggest, by looking at spatial layouts through a lens of human behaviour, this projection-and-generation method provides some insight into space qualities that other methods could not have offered.
keywords Performance oriented generative design; projection; multi-agent system; pedestrian simulation; space syntax
series CAADRIA
email
last changed 2022/06/07 07:59

_id ddss9208
id ddss9208
authors Lucardie, G.L.
year 1993
title A functional approach to realizing decision support systems in technical regulation management for design and construction
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary Technical building standards defining the quality of buildings, building products, building materials and building processes aim to provide acceptable levels of safety, health, usefulness and energy consumption. However, the logical consistency between these goals and the set of regulations produced to achieve them is often hard to identify. Not only the large quantities of highly complex and frequently changing building regulations to be met, but also the variety of user demands and the steadily increasing technical information on (new) materials, products and buildings have produced a very complex set of knowledge and data that should be taken into account when handling technical building regulations. Integrating knowledge technology and database technology is an important step towards managing the complexity of technical regulations. Generally, two strategies can be followed to integrate knowledge and database technology. The main emphasis of the first strategy is on transferring data structures and processing techniques from one field of research to another. The second approach is concerned exclusively with the semantic structure of what is contained in the data-based or knowledge-based system. The aim of this paper is to show that the second or knowledge-level approach, in particular the theory of functional classifications, is more fundamental and more fruitful. It permits a goal-directed rationalized strategy towards analysis, use and application of regulations. Therefore, it enables the reconstruction of (deep) models of regulations, objects and of users accounting for the flexibility and dynamics that are responsible for the complexity of technical regulations. 
Finally, at the systems level, the theory supports an effective development of a new class of rational Decision Support Systems (DSS), which should reduce the complexity of technical regulations and restore the logical consistency between the goals of technical regulations and the technical regulations themselves.
series DDSS
last changed 2003/08/07 16:36

_id a582
authors Marshall, Tony B.
year 1992
title The Computer as a Graphic Medium in Conceptual Design
doi https://doi.org/10.52842/conf.acadia.1992.039
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 39-47
summary The success CAD has experienced in the architectural profession demonstrates that architects have been willing to replace traditional drafting media with computers and electronic plotters for the production of working drawings. Its expanded use in the design development phase for 3D modeling and rendering further justifies CAD's usefulness as a presentation medium. The schematic design phase, however, has hardly been influenced by the evolution of CAD. Most architects simply have not come to view the computer as a viable design medium. One reason for this might be the strong correspondence between architectural CAD and plan view graphics as used in working drawings, compared to the weak correspondence between architectural CAD and plan view graphics as used in schematic design. The role of the actual graphic medium during schematic design should not be overlooked in the development of CAD applications.

In order to produce practical CAD applications for schematic design we must explore the computer’s potential as a form of expression and its role as a graphic medium. An examination of the use of traditional graphic media during schematic design will provide some clues regarding what capabilities CAD must provide and how a system should operate in order to be useful during conceptual design.

series ACADIA
last changed 2022/06/07 07:59

_id cb5a
authors Oxman, Rivka E.
year 1992
title Multiple Operative and Interactive Modes in Knowledge-Based Design Systems
source New York: John Wiley & Sons, 1992. pp. 125-143 : ill. includes bibliography
summary A conceptual basis for the development of an expert system which is capable of integrating various modes of generation and evaluation in design is presented. This approach is based upon two sets of reasoning processes in the design system. The first enables a mapping between design requirements and solution descriptions in a generative mode of design; and the second enables a mapping between solution descriptions and performance evaluation in an evaluative and predictive mode. This concept supports a formal framework necessary for a knowledge-based design system to operate in a design partnership relation with the designer. Another fundamental concept in expert systems for design, dual direction interpretation between graphic and textual modes, is presented and elaborated. This encoding of knowledge behind the geometrical representation can be achieved in knowledge-based design systems by the development of a 'semantic interpreter' which supports a dual direction mapping process employing geometrical, typological and evaluative knowledge. An implemented expert system for design, PREDIKT, demonstrates these concepts in the domain of kitchen design. It provides the user with a choice of alternative modes of interaction, such as: a 'design critic' for the evaluation of a design, a 'design generator' for the generation of a design, or a 'design critic-generator' for the completion of partial solutions.
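The dual-mode idea above can be illustrated with a minimal sketch. This is not PREDIKT itself: the rule names, the kitchen attributes and the completion strategy are all invented for illustration; the point is only that one shared rule base drives both an evaluative mode (critic) and a generative mode (generator).

```python
# Hypothetical sketch of a shared rule base used in two modes.
# All rule names and plan attributes are invented, not taken from PREDIKT.

RULES = {
    # requirement name -> predicate on a solution description
    "sink_near_window": lambda plan: plan.get("sink") == "window_wall",
    "work_triangle_ok": lambda plan: plan.get("triangle_m", 99.0) <= 6.6,
}

def critic(plan):
    """Evaluative mode: map a solution description to a performance report."""
    return {name: rule(plan) for name, rule in RULES.items()}

def generator(partial_plan):
    """Generative mode: complete a partial solution so every rule passes."""
    plan = dict(partial_plan)
    plan.setdefault("sink", "window_wall")   # invented default placement
    plan.setdefault("triangle_m", 6.0)       # invented default dimension
    return plan

# "critic-generator": complete a partial design, then evaluate it.
plan = generator({"triangle_m": 5.2})
report = critic(plan)
```

Because both modes consult the same `RULES` table, the critic and the generator cannot drift apart — the evaluative knowledge is the generative knowledge, which is the partnership relation the abstract describes.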
keywords architecture, knowledge base, design, systems, expert systems
series CADline
email
last changed 2003/06/02 10:24

_id 46c7
id 46c7
authors Ozel, Filiz
year 1992
title Data Modeling Needs of Life Safety Code (LSC) Compliance Applications
doi https://doi.org/10.52842/conf.acadia.1992.177
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 177-185
summary One of the most complex code compliance issues originates from the conformance of designs to Life Safety Code (NFPA 101). The development of computer based code compliance checking programs attracted the attention of building researchers and practitioners alike. These studies represent a number of approaches ranging from CAD based procedural approaches to rule based, non graphic ones, but they do not address the interaction of the rule base of such systems with graphic data bases that define the geometry of architectural objects. Automatic extraction of the attributes and the configuration of building systems requires "architectural object - graphic entity" data models that allow access and retrieval of the necessary data for code compliance checking. This study aims to specifically focus on the development of such a data model through the use of AutoLISP feature of AutoCAD (Autodesk Inc.) graphic system. This data model is intended to interact with a Life Safety Code rule base created through Level5-Object (Focus Inc.) expert system.

Assuming the availability of a more general building data model, one must define life and fire safety features of a building before any automatic checking can be performed. Object-oriented data structures are beginning to be applied to design objects, since they allow the type versatility demanded by design applications. As one generates a functional view of the main data model, the software user must provide domain specific information. A functional view is defined as the process of generating domain specific data structures from a more general purpose data model, such as defining egress routes from wall or room object data structures. Typically in the early design phase of a project, these are related to the emergency egress design features of a building. Certain decisions such as where to provide sprinkler protection or the location of protected egress ways must be made early in the process.
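The "functional view" notion — deriving egress-specific structures from general room objects — can be sketched in a few lines. The paper worked in AutoLISP against AutoCAD entities; this Python sketch is purely illustrative, with invented room names and a plain adjacency graph standing in for the building data model.

```python
# Hypothetical sketch of a functional view: general Room objects are
# reinterpreted as an adjacency graph, from which egress routes are derived.
# Names and data are invented; the paper's implementation used AutoLISP.

from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    exits: list = field(default_factory=list)  # names of adjacent rooms

def egress_routes(rooms, start, discharge):
    """Enumerate simple paths from a room to the exit discharge."""
    graph = {r.name: r.exits for r in rooms}
    routes, stack = [], [[start]]
    while stack:
        path = stack.pop()
        if path[-1] == discharge:
            routes.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:          # avoid cycles
                stack.append(path + [nxt])
    return routes

rooms = [Room("office", ["corridor"]), Room("corridor", ["stair"]),
         Room("stair", ["outside"]), Room("outside")]
routes = egress_routes(rooms, "office", "outside")
# one route: office -> corridor -> stair -> outside
```

A rule base could then check each derived route against code requirements (travel distance, number of independent exits) without ever touching the raw graphic entities — which is exactly the separation the data model is meant to provide.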

series ACADIA
email
last changed 2022/06/07 08:00

_id bdbb
authors Pugh, D.
year 1992
title Designing solid objects using interactive sketch interpretation
source Computer Graphics (1992 Symposium on Interactive 3D Graphics), 25(2):117-126, Mar. 1992
summary Before the introduction of Computer Aided Design and solid modeling systems, designers had developed a set of techniques for designing solid objects by sketching their ideas with pencil and paper and refining them into workable designs. Unfortunately, these techniques are different from those for designing objects using a solid modeler. Not only does this waste a vast reserve of talent and experience (people typically start drawing from the moment they can hold a crayon), but it also has a more fundamental problem: designers can use their intuition more effectively when sketching than they can when using a solid modeler. Viking is a solid modeling system whose user-interface is based on interactive sketch interpretation. Interactive sketch interpretation lets the designer create a line-drawing of a desired object while Viking generates a three-dimensional object description. This description is consistent with both the designer's line-drawing, and a set of geometric constraints either derived from the line-drawing or placed by the designer. Viking's object descriptions are fully compatible with the object descriptions used by traditional solid modelers. As a result, interactive sketch interpretation can be used with traditional solid modeling techniques, combining the advantages of both sketching and solid modeling.
series journal paper
last changed 2003/04/23 15:50

_id eaff
authors Shaviv, Edna and Kalay, Yehuda E.
year 1992
title Combined Procedural and Heuristic Method to Energy Conscious Building Design and Evaluation
source New York: John Wiley & Sons, 1992. pp. 305-325 : ill. includes bibliography
summary This paper describes a methodology that combines both procedural and heuristic methods by means of integrating a simulation model with a knowledge based system (KBS) for supporting all phases of energy conscious design and evaluation. The methodology is based on partitioning the design process into discrete phases and identifying the informational characteristics of each phase, as far as energy conscious design is concerned. These informational characteristics are expressed in the form of design variables (parameters) and the relationships between them. The expected energy performance of a design alternative is evaluated by a combination of heuristic and procedural methods, and the context-sensitive application of default values, when necessary. By virtue of combining knowledge based evaluations with procedural ones, this methodology allows for testing the applicability of heuristic rules in non-standard cases, thereby improving the predictive power of the evaluation.
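The evaluation strategy described — apply a heuristic rule where it is applicable, otherwise fall back to a procedural calculation with context-sensitive defaults — can be sketched minimally. Every rule, coefficient and default below is invented for illustration; the sketch only shows the control flow the abstract describes.

```python
# Hypothetical sketch of combined heuristic/procedural evaluation.
# All thresholds, coefficients and defaults are invented, not from the paper.

DEFAULTS = {"window_ratio": 0.25}  # context-sensitive default value

def heuristic_heat_gain(design):
    """Rule of thumb: applies only to the standard case it was derived for."""
    if design.get("climate") == "temperate" and design.get("shading"):
        return "acceptable"
    return None  # heuristic not applicable -> defer to procedural model

def procedural_heat_gain(design):
    """Toy procedural model, filling unspecified parameters from defaults."""
    ratio = design.get("window_ratio", DEFAULTS["window_ratio"])
    gain = ratio * design.get("facade_area", 100.0) * 0.6  # invented model
    return "acceptable" if gain < 20.0 else "excessive"

def evaluate(design):
    # Heuristic first; procedural simulation as the fallback.
    return heuristic_heat_gain(design) or procedural_heat_gain(design)

evaluate({"climate": "temperate", "shading": True})  # heuristic path
evaluate({"facade_area": 200.0})                     # procedural path, default ratio
```

Running the procedural fallback on cases the heuristic declines is also how such a system can test whether a rule of thumb still holds outside its standard conditions.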
keywords design process, evaluation, energy, analysis, synthesis, integration, architecture, knowledge base, heuristics, simulation
series CADline
email
last changed 2003/06/02 10:24

_id 886c
authors Shu, Li and Flowers, W.
year 1992
title Groupware Experiences in Three-Dimensional Computer-Aided Design
source CSCW 92 Proceedings, 92
summary A system that allows people to simultaneously modify a common design in a graphically rich environment was developed to identify and examine groupware interface issues unique to three-dimensional computer-aided design. Experiments confirmed that a simultaneous mode of edit access is preferred over a turn-taking mode for two-person interactions. Also, independent points of view (e.g., isometric versus top view) between designers optimized parallel activity. Further experiments that aimed to transfer software-usage knowledge through the groupware system led to the development of the viewpoint. The viewpoint is a tool that indicates the points of view of different designers as well as provides a method of pointing effective in an environment where arbitrary, contrasting points of view are allowed.
series other
last changed 2003/04/23 15:50

_id ddss9203
id ddss9203
authors Smeets, J.
year 1993
title Housing tenancy, data management and quality control
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary This paper deals with housing tenancy, data management and quality control. The proposed method is focused on quality characteristics of housing estates in view of rentability risks. It entails a cycle of registration, analysis and implementation of measures. The starting point is the behaviour of the housing consumer in a market-oriented context. The model is framed within theories of strategic management and marketing. Systematic registration and evaluation of consumer behaviour, by means of a set of relevant process and product indicators, can yield relevant information in the four phases of the rental process: orientation, intake, dwelling and exit. This information concerns the way in which the dwelling (characterized by product indicators) fits the needs of the consumer. The systematic analysis of the process and product indicators during the phases of the rental process makes a 'strength-weakness analysis' of housing estates possible. The indicators can be presented in aggregated form by way of a 'rentability index'. The 'strength-weakness analysis' steers the intervention in the quality characteristics of housing estates. The possibilities for readjustment, however, are different. The quality control system is not only an early warning system, but also has several other functions: evaluation, planning and communication. The method described here lays a solid foundation for a decision-support system in the area of housing tenancy.
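Aggregating process and product indicators into a single 'rentability index' amounts to a weighted summary score. The paper does not give a formula; the sketch below assumes a simple weighted mean over normalized indicator scores, with indicator names and weights invented for illustration.

```python
# Hypothetical sketch of a rentability index as a weighted mean of
# normalized indicator scores (0..1, higher = stronger position).
# Indicator names and weights are invented, not taken from the paper.

def rentability_index(indicators, weights):
    """Weighted mean of indicator scores; weights need not sum to 1."""
    total = sum(weights.values())
    return sum(indicators[k] * w for k, w in weights.items()) / total

scores = {"vacancy": 0.9, "turnover": 0.7, "waiting_list": 0.4, "complaints": 0.8}
weights = {"vacancy": 3, "turnover": 2, "waiting_list": 1, "complaints": 1}

index = rentability_index(scores, weights)  # roughly 0.76 for this data
```

Computing the index per rental-process phase (orientation, intake, dwelling, exit) and per estate would yield exactly the strength-weakness comparison the abstract describes.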
series DDSS
last changed 2003/08/07 16:36

_id 592a
authors Takemura, H. and Kishino, F.
year 1992
title Cooperative work environment using virtual workspace
source Proceedings of the Conference on Computer-Supported Cooperative Work: 226-232. New York: The Association for Computing Machinery
summary A virtual environment, which is created by computer graphics and an appropriate user interface, can be used in many application fields, such as teleoperation, telecommunication and real time simulation. Furthermore, if this environment could be shared by multiple users, there would be more potential applications. Discussed in this paper is a case study of building a prototype of a cooperative work environment using a virtual environment, where more than two people can solve problems cooperatively, including design strategies and implementation issues. An environment where two operators can directly grasp, move or release stereoscopic computer graphics images by hand is implemented. The system is built by combining head position tracking stereoscopic displays, hand gesture input devices and graphics workstations. Our design goal is to utilize this type of interface for a future teleconferencing system. In order to provide good interactivity for users, we discuss potential bottlenecks and their solutions. The system allows two users to share a virtual environment and to organize 3-D objects cooperatively.
series other
last changed 2003/04/23 15:50

_id fd02
authors Tsou, Jin-Yeu
year 1992
title Using conceptual modelling and an object-oriented environment to support building cost control during early design
source College of Architecture and Urban Planning, University of Michigan
summary This research investigated formal information modelling techniques and the object-oriented knowledge representation on the domain of building cost control during early design stages. The findings contribute to an understanding of the advantages and disadvantages of applying formal modelling techniques to the analysis of architectural problems and the representation of domain knowledge in an object-oriented environment. In this study, information modelling techniques were reviewed, formal information analysis was performed, a conceptual model based on the cost control problem domain was created, a computational model based on the object-oriented approach was developed, a mechanism to support information broadcasting for representing interrelationships was implemented, and an object-oriented cost analysis system for early design (OBCIS) was demonstrated. The conceptual model, based on the elemental proposition analysis of NIAM, supports a formal approach for analyzing the problem domain; the analysis results are represented by high-level graphical notations, based on the AEC Building System Model, to visually display the information framework of the domain. The conceptual model provides an intermediate step between the system designer's view of the domain and the internal representation of the implementation platform. The object-oriented representation provides extensive data modelling abilities to help system designers intuitively represent the semantics of the problem domain. The object-oriented representation also supports more structured and integrated modules than conventional programming approaches. Although there are many advantages to applying this technique to represent the semantics of cost control knowledge, there are several issues which need to be considered: no single satisfactory classification method can be directly applied; object-oriented systems are difficult to learn; and designing reusable classes is difficult. 
The dependency graph and information broadcasting implemented in this research are an attempt to represent the interrelationships between domain objects. The mechanism allows users to explicitly define the interrelationships, based on semantic requirements, among domain objects. In the conventional approach, these relationships are directly interpreted by system designers and intertwined into the programming code. There are several issues which need to be studied further: indirect dependency relationships, conflict resolution, and request-update looping based on the least-commitment approach.
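"Information broadcasting" over an explicit dependency graph resembles what is now called the observer pattern: an object declares what it depends on, and changes propagate automatically instead of being hard-wired into procedural code. The sketch below is an invented illustration of that idea, not OBCIS code; class names and the unit rate are hypothetical.

```python
# Hypothetical sketch of information broadcasting between domain objects:
# dependencies are declared explicitly and changes propagate downstream.
# Names and values are invented; this is not the OBCIS implementation.

class CostObject:
    def __init__(self, name, value=0.0):
        self.name, self.value = name, value
        self._listeners = []  # (dependent object, update function) pairs

    def depends_on(self, other, update):
        """Declare: when `other` changes, recompute this object via `update`."""
        other._listeners.append((self, update))

    def set(self, value):
        self.value = value
        for obj, update in self._listeners:
            obj.set(update(self.value))  # broadcast the change downstream

area = CostObject("floor_area")
cost = CostObject("element_cost")
cost.depends_on(area, lambda a: a * 120.0)  # invented unit rate per m^2

area.set(50.0)   # propagates: cost.value becomes 6000.0
```

Note that a cycle in the declared dependencies would make `set` recurse forever — which is precisely the "request-update looping" problem the abstract flags for further study.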
series thesis:PhD
email
last changed 2003/02/12 22:37
