CumInCAD is a cumulative index of publications in Computer-Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 241

_id 89d9
authors Cajati, Claudio
year 1992
title The New Teaching of an Architect: The Rôle of Expert Systems in Technological Culture
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 435-442
doi https://doi.org/10.52842/conf.ecaade.1992.435
summary We already have the EEC, the European Economic Community. We now have to build the CCE, a Common Cultural Europe. Architects and building engineers of any European country will be allowed to practise freely in any other country of the EEC. Of course, it is not only a matter of frontiers coming down, or of greater labour mobility. Nor will it be enough for the university degree courses of the different countries to adopt and implement the common EEC directives. Rules and guidelines are needed that enter into the merits of practice: rules and guidelines which, rather than a legal and bureaucratic matter, must be the result of common cultural and technical work on clear and delimited questions of shared subjects, in which all the Community countries are deeply involved. Analogously, in the field of research, the project "Human Capital and Mobility" aims at greater European scientific and technological competitiveness through an integration of the human and material resources of different research centres, as in shared-cost research projects and concerted research actions. Such an integration is neither easy nor rapid. The political, social, cultural and technological peculiarities of the countries of the European Community certainly constitute an obstacle to the creation of a supranational cultural and technological pool of common opportunities. These peculiarities, however, are not only a restraint on the European effort of unification and on the construction of shared goals, constraints, rules, methods, techniques and tools. They are also a richness, an unrepeatable resource: they are the result of a millenary historical stratification, which gave rise to urban and architectural contexts and to cultural and technological traditions it would be a serious mistake to waste.
series eCAADe
email
last changed 2022/06/07 07:54

_id 735a
authors Anh, Tran Hoai
year 1992
title FULL-SCALE EXPERIMENT ON KITCHEN FUNCTION IN HANOI
source Proceedings of the 4th European Full-Scale Modelling Conference / Lausanne (Switzerland) 9-12 September 1992, Part A, pp. 19-30
summary This study is part of a licentiate thesis on "Functional kitchen for the Vietnamese cooking way" at the Department of Architecture and Development Studies, Lund University. The issues it deals with are: (1) the inadequacy of kitchen design in the apartment buildings of Hanoi, where the kitchen is often designed as a mere cooking place, while other parts of the food-making process are given no attention; and (2) the lack of standard dimensional and planning criteria for functional kitchens that can serve as a basis for kitchen design. // The thesis aims at finding indicators of the functional spatial requirements for kitchens, which can serve as guidelines for designing functional kitchens for Hanoi. One of its main propositions is that functional kitchens for Hanoi should be organised to permit culinary activities to be carried out according to Vietnamese urban culinary practice. This is based on the concept that culinary activity is an expression of culture: the practice of preparing meals in the present context of urban households in Hanoi follows an established pattern and method which demand a suitable area and arrangement in the kitchen. This pattern and cooking method should make up the functional requirements for kitchens in Hanoi, and be taken into account if functional kitchen design is to be achieved. In the context of the space-limited apartment buildings of Hanoi, special focus is given to finding indicators of the minimum functional spatial requirements of kitchen work.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 15:29

_id a6d8
authors Baletic, Bojan
year 1992
title Information Codes of Mutant Forms
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 173-186
doi https://doi.org/10.52842/conf.ecaade.1992.173
summary If we assume that the statements in this quote are true, then we have to ask ourselves: "Should we teach architecture as we do?" This paper describes our experience in developing a knowledge base using a neural network system to serve as an "intelligent assistant" to students and practicing architects in the conceptual phase of their work on housing design. Our approach concentrated on raising the designer's awareness of the problem, not by building rules to guide him to a solution, but by questioning the categories and typologies by which he classifies and understands a problem. This we achieve through examples containing mutant forms, imperfect rules, and grey zones between black and white, which carry the seeds of new solutions.
series eCAADe
email
last changed 2022/06/07 07:54

_id cef3
authors Bridges, Alan H.
year 1992
title Computing and Problem Based Learning at Delft University of Technology Faculty of Architecture
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 289-294
doi https://doi.org/10.52842/conf.ecaade.1992.289
summary Delft University of Technology, founded in 1842, is the oldest and largest technical university in the Netherlands. It provides education for more than 13,000 students in fifteen main subject areas. The Faculty of Architecture, Housing, Urban Design and Planning is one of the largest faculties of the DUT with some 2000 students and over 500 staff members. The course of study takes four academic years: a first year (Propaedeuse) and a further three years (Doctoraal) leading to the "ingenieur" qualification. The basic course material is delivered in the first two years and is taken by all students. The third and fourth years consist of a smaller number of compulsory subjects in each of the department's specialist areas together with a wide range of option choices. The five main subject areas the students may choose from for their specialisation are Architecture, Building and Project Management, Building Technology, Urban Design and Planning, and Housing.

The curriculum of the Faculty has been radically revised over the last two years and is now based on the concept of "Problem-Based Learning". The subject matter taught is divided thematically into specific issues that are taught in six week blocks. The vehicles for these blocks are specially selected and adapted case studies prepared by teams of staff members. These provide a focus for integrating specialist subjects around a studio based design theme. In the case of second year this studio is largely computer-based: many drawings are produced by computer and several specially written computer applications are used in association with the specialist inputs.

This paper describes the "block structure" used in the second year, giving examples of the special computer programs used, but also raises a number of broader educational issues. Introduction of the block system arose as a method of curriculum integration in response to difficulties emerging from the independent functioning of strong discipline areas in the traditional work groups. The need for a greater level of self-directed learning was recognised, as opposed to the "passive information model" of student learning, in which the students are seen as empty vessels to be filled with knowledge - which they are then usually unable to apply in design-related contexts in the studio. Furthermore, the value of electives had been questioned: whilst enabling some diversity of choice, they may also be seen as diverting attention and resources from the real problems of teaching architecture.

series eCAADe
email
last changed 2022/06/07 07:54

_id ddss9205
id ddss9205
authors De Scheemaker, A.
year 1993
title Towards an integrated facility management system for management and use of government buildings
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary The Government Building Agency in the Netherlands is developing an integrated facility management system for two of its departments. Applications have already been developed to support a number of day-to-day facility management activities at the operational level. Research is now being carried out to develop a management control system to better plan and control housing and material resources.
series DDSS
last changed 2003/08/07 16:36

_id esaulov02_paper_eaea2007
id esaulov02_paper_eaea2007
authors Esaulov, G.V.
year 2008
title Videomodeling in Architecture. Introduction into Concerned Problems
source Proceedings of the 8th European Architectural Endoscopy Association Conference
summary Since its very first year, the Russian Academy of Architecture and Building Sciences, established in 1992 by presidential decree as the highest scientific and creative organization in the country, has paid much attention to supporting and developing fundamental investigations in architecture, town planning, building sciences, professional education and creative practice. The study of the birth process of the architectural idea, and the search for tools assisting the architect's creative activity and for ways of adequately transferring the architectural image to the potential consumer, belong to the problems that constantly occupy the architectural community. Before turning to the conference, let us set out certain conditions that have a significant impact on the development of architectural and construction activity in modern Russia.
series EAEA
more http://info.tuwien.ac.at/eaea
last changed 2008/04/29 20:46

_id 68c8
authors Flemming, U., Coyne, R. and Fenves, S. (et al.)
year 1994
title SEED: A Software Environment to Support the Early Phases in Building Design
source Proceeding of IKM '94, Weimar, Germany, pp. 5-10
summary The SEED project intends to develop a software environment that supports the early phases in building design (Flemming et al., 1993). The goal is to provide support, in principle, for the preliminary design of buildings in all aspects that can gain from computer support. This includes using the computer not only for analysis and evaluation, but also more actively for the generation of designs, or more accurately, for the rapid generation of design representations. A major motivation for the development of SEED is to bring the results of two multi-generational research efforts focusing on `generative' design systems closer to practice: 1. LOOS/ABLOOS, a generative system for the synthesis of layouts of rectangles (Flemming et al., 1988; Flemming, 1989; Coyne and Flemming, 1990; Coyne, 1991); 2. GENESIS, a rule-based system that supports the generation of assemblies of 3-dimensional solids (Heisserman, 1991; Heisserman and Woodbury, 1993). The rapid generation of design representations can take advantage of special opportunities when it deals with a recurring building type, that is, a building type dealt with frequently by the users of the system. Design firms - from housing manufacturers to government agencies - accumulate considerable experience with recurring building types. But current CAD systems capture this experience and support its reuse only marginally. SEED intends to provide systematic support for the storing and retrieval of past solutions and their adaptation to similar problem situations. This motivation aligns aspects of SEED closely with current work in Artificial Intelligence that focuses on case-based design (see, for example, Kolodner, 1991; Domeshek and Kolodner, 1992; Hua et al., 1992).
series other
email
last changed 2003/04/23 15:14

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge.
While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of the cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning the cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking. The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine.
Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"--which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse.
These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added). On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit."
Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictature were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring.
Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest. What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. 
Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make it so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge.
This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id 6cfd
authors Harfmann, Anton C. and Majkowski, Bruce R.
year 1992
title Component-Based Spatial Reasoning
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 103-111
doi https://doi.org/10.52842/conf.acadia.1992.103
summary The design process and ordering of individual components through which architecture is realized relies on the use of abstract "models" to represent a proposed design. The emergence and use of these abstract "models" for building representation has a long history and tradition in the field of architecture. Models have been made and continue to be made for the patron, occasionally the public, and as a guide for the builders. Models have also been described as a means to reflect on the design and to allow the design to be in dialogue with the creator.

The term "model" in the above paragraph has been used in various ways and in this context is defined as any representation through which design intent is expressed. This includes accurate/rational or abstract drawings (2-dimensional and 3-dimensional), physical models (realistic and abstract) and computer models (solid, void and virtual reality). The various models that fall within the categories above have been derived from the need to "view" the proposed design in various ways in order to support intuitive reasoning about the proposal and for evaluation purposes. For example, a 2-dimensional drawing of a floor plan is well suited to support reasoning about spatial relationships and circulation patterns, while scaled 3-dimensional models facilitate reasoning about overall form, volume, light, massing etc. However, the common denominator of all architectural design projects (if the intent is to construct them at actual scale, in physical form) is the discrete building elements from which the design will be constructed. It is proposed that a single computational model representing individual components supports all of the above "models" and facilitates "viewing" the design according to the frame of reference of the viewer.

Furthermore, it is the position of the authors that all reasoning stems from this rudimentary level of modeling individual components.

The concept of component representation has been derived from the fact that a "real" building (made from individual components such as nuts, bolts and bar joists) can be "viewed" differently according to the frame of reference of the viewer. Each individual has the ability to infer and abstract from the assemblies of components a variety of different "models" ranging from a visceral, experiential understanding to a very technical, physical understanding. The component concept has already proven to be a valuable tool for reasoning about assemblies, interferences between components, tracing of load path and numerous other component related applications. In order to validate the component-based modeling concept this effort will focus on the development of spatial understanding from the component-based model. The discussions will, therefore, center about the representation of individual components and the development of spatial models and spatial reasoning from the component model. In order to frame the argument that spatial modeling and reasoning can be derived from the component representation, a review of the component-based modeling concept will precede the discussions of spatial issues.

series ACADIA
email
last changed 2022/06/07 07:49

_id 975e
authors Pearce, M. and Goel, A. (et al.)
year 1992
title Case-Based Design support: A case study in architectural design
source IEEE Expert 7(5): 14-20
summary Archie, a small computer-based library of architectural design cases, is described. Archie helps architects in the high-level task of conceptual design as opposed to low-level tasks such as drawing and drafting, numerical calculations, and constraint propagation. Archie goes beyond supporting architects in design proposal and critiquing. It acts as a shared external memory that supports two kinds of design collaboration. First, by including enough knowledge about the goals, plans, outcomes, and lessons of past cases, it lets the designer access the work of previous architects. Second, by providing access to the perspectives of domain experts via the domain models, Archie helps architects anticipate and accommodate experts' views on evolving designs. The lessons learned about building large case-based systems to support real-world decision making in developing Archie are discussed.
series journal paper
last changed 2003/04/23 15:14

_id ddss9204
id ddss9204
authors Pullen, W.R., Wassenaar, C.L.G., van Heti'ema, I., Dekkers, J.T., Janszen, I., Boender, C.G.E., Tas, A. and Stegeman, H.
year 1993
title A decision support system for housing of (public) organizations
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary In this paper we present a hierarchical decision support system for the allocation of organisations to available buildings, and for the allocation of the employees of an organisation to the work units of a building. For both allocation problems a mathematical model and an optimisation algorithm are developed, taking into account the relevant criteria, such as the extent to which the allocated floorspace is in accordance with the standards, and the extent to which departments are housed in connecting zones of a building. The decision support system is illustrated by two practical applications.
series DDSS
last changed 2003/08/07 16:36

_id daff
authors Richens, P.
year 1994
title CAD Research at the Martin Centre
source Automation in Construction, No. 3
summary The Martin Centre CADLAB has recently been established to investigate software techniques that could be of practical importance to architects within the next five years. In common with most CAD researchers, we are interested in the earlier, conceptual, stages of design, where commercial CAD systems have had little impact. Our approach is not Knowledge-Based, but rather focuses on using the computer as a medium for design and communication. This leads to a concentration on apparently superficial aspects such as visual appearance, the dynamics of interaction, immediate feedback, plasticity. We try to avoid building-in theoretical attitudes, and to reduce the semantic content of our systems to a low level on the basis that flexibility and intelligence are inversely related; and that flexibility is more important. The CADLAB became operational in January 1992. First year work in three areas – building models, experiencing architecture, and making drawings – is discussed.
series journal
email
more http://www.arct.cam.ac.uk/research/pubs/pdfs/rich94a.pdf
last changed 2000/03/05 19:05

_id c804
authors Richens, P.
year 1994
title Does Knowledge really Help?
source G. Carrara and Y.E. Kalay (Eds.), Knowledge-Based Computer-Aided Architectural Design, Elsevier
summary The Martin Centre CADLAB has recently been established to investigate software techniques that could be of practical importance to architects within the next five years. In common with most CAD researchers, we are interested in the earlier, conceptual, stages of design, where commercial CAD systems have had little impact. Our approach is not Knowledge-Based, but rather focuses on using the computer as a medium for design and communication. This leads to a concentration on apparently superficial aspects such as visual appearance, the dynamics of interaction, immediate feedback, plasticity. We try to avoid building-in theoretical attitudes, and to reduce the semantic content of our systems to a low level on the basis that flexibility and intelligence are inversely related; and that flexibility is more important. The CADLAB became operational in January 1992. First year work in three areas – building models, experiencing architecture, and making drawings – is discussed.
series other
more http://www.arct.cam.ac.uk/research/pubs/
last changed 2003/03/05 13:19

_id a3f5
authors Zandi-Nia, Abolfazl
year 1992
title Topgene: An artificial Intelligence Approach to a Design Process
source Delft University of Technology
summary This work deals with two architectural design (AD) problems at the topological level and in the presence of the social norms of community, privacy, circulation cost, and intervening opportunity. The first problem concerns generating a design with respect to a set of the above-mentioned norms, and the second problem requires evaluation of existing designs with respect to the same set of norms. Both problems are based on the structural-behavioural relationship in buildings. This work has addressed the above problems in the following senses: (1) A working system, called TOPGENE (The TOpological Pattern GENErator), has been developed. (2) Both problems may be vague and may lack enough information in their statement. For example, an AD in the presence of the social norms requires the degrees of interaction between the location pairs in the building. This information is not always explicitly available, and must be explicated from the design data. (3) An AD problem at the topological level is intractable, with no fast and efficient algorithm for its solution. To reduce the search effort in the process of design generation, TOPGENE uses a heuristic hill-climbing strategy that takes advantage of domain-specific rules of thumb to choose a path in the search space of a design. (4) TOPGENE uses the Q-analysis method for explication of hidden information, as well as hierarchical clustering of location pairs with respect to their flow-generation potential, as prerequisite information for the heuristic reasoning process. (5) To deal with the design of a building at the topological level, TOPGENE takes advantage of existing graph algorithms such as path-finding and planarity testing during its reasoning process. This work also presents a new efficient algorithm for keeping track of distances in a growing graph. (6) This work also presents a neural net implementation of a special case of the design generation problem. This approach is based on the Hopfield model of neural networks.
The results of this approach have been used to test the TOPGENE approach in generating designs. A comparison of the two approaches shows that the neural network provides mathematically more optimal designs, while TOPGENE produces more realistic designs. The two systems may be integrated to create a hybrid system.
series thesis:PhD
last changed 2003/02/12 22:37

_id 2006_040
id 2006_040
authors Ambach, Barbara
year 2006
title Eve’s Four Faces-Interactive surface configurations
source Communicating Space(s) [24th eCAADe Conference Proceedings / ISBN 0-9541183-5-9] Volos (Greece) 6-9 September 2006, pp. 40-44
doi https://doi.org/10.52842/conf.ecaade.2006.040
summary Eve’s Four Faces consists of a series of digitally animated and interactive surfaces. Their content and structure are derived from a collection of sources outside the conventional boundaries of architectural research, namely psychology and the broader spectrum of arts and culture. The investigation stems from a psychological study documenting the attributes and social relationships of four distinct personality prototypes: the “Individuated”, the “Traditional”, the “Conflicted” and the “Assured” (York and John, 1992). For the purposes of this investigation, all four prototypes are assumed to be inherent, to certain degrees, in each individual; however, the propensity towards one of the prototypes forms the basis for each individual’s “personality structure”. The attributes, social implications and prospects for habitation have been translated into animations and surfaces operating within A House for Eve’s Four Faces. The presentation illustrates the potential for constructed surfaces to be configured and transformed interactively, responding to the needs and qualities associated with each prototype. The intention is to study the effects of each configuration and how it may be therapeutic in supporting, challenging or altering one’s personality as it oscillates and shifts through the four prototypical conditions.
keywords interaction; digital; environments; psychology; prototypes
series eCAADe
type normal paper
last changed 2022/06/07 07:54

_id acadia06_455
id acadia06_455
authors Ambach, Barbara
year 2006
title Eve’s Four Faces interactive surface configurations
source Synthetic Landscapes [Proceedings of the 25th Annual Conference of the Association for Computer-Aided Design in Architecture] pp. 455-460
doi https://doi.org/10.52842/conf.acadia.2006.455
summary Eve’s Four Faces consists of a series of digitally animated and interactive surfaces. Their content and structure are derived from a collection of sources outside the conventional boundaries of architectural research, namely psychology and the broader spectrum of arts and culture.The investigation stems from a psychological study documenting the attributes and social relationships of four distinct personality prototypes: the Individuated, the Traditional, the Conflicted, and the Assured (York and John 1992). For the purposes of this investigation, all four prototypes are assumed to be inherent, to certain degrees, in each individual. However, the propensity towards one of the prototypes forms the basis for each individual’s “personality structure.” The attributes, social implications and prospects for habitation have been translated into animations and surfaces operating within A House for Eve’s Four Faces. The presentation illustrates the potential for constructed surfaces to be configured and transformed interactively, responding to the needs and qualities associated with each prototype. The intention is to study the effects of each configuration and how each configuration may be therapeutic in supporting, challenging or altering one’s personality as it oscillates and shifts through the four prototypical conditions.
series ACADIA
email
last changed 2022/06/07 07:54

_id caadria2004_k-1
id caadria2004_k-1
authors Kalay, Yehuda E.
year 2004
title CONTEXTUALIZATION AND EMBODIMENT IN CYBERSPACE
source CAADRIA 2004 [Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] Seoul Korea 28-30 April 2004, pp. 5-14
doi https://doi.org/10.52842/conf.caadria.2004.005
summary The introduction of VRML (Virtual Reality Markup Language) in 1994, and of other similar web-enabled dynamic modeling software (such as SGI’s Open Inventor and WebSpace), created a rush to develop on-line 3D virtual environments, with purposes ranging from art, to entertainment, to shopping, to culture and education. Some developers took their cues from the science fiction literature of Gibson (1984), Stephenson (1992), and others. Many were web extensions to single-player video games. But most were created as a direct extension of our new-found ability to digitally model 3D spaces and to endow them with interactive control and pseudo-inhabitation. Surprisingly, this technologically-driven stampede paid little attention to the core principles of place-making and presence, derived from architecture and cognitive science, respectively: two principles that could and should inform the essence of the virtual place experience and help steer its development. Why are the principles of place-making and presence important for the development of virtual environments? Why not simply be content with our ability to create realistic-looking 3D worlds that we can visit remotely? What could we possibly learn about making these worlds better, had we understood the essence of place and presence? To answer these questions we cannot look at place-making (both physical and virtual) from a 3D space-making point of view alone, because places are not an end unto themselves. Rather, places must be considered a locus of contextualization and embodiment that grounds human activities and gives them meaning. In doing so, places acquire a meaning of their own, which facilitates, improves, and enriches many aspects of our lives. They provide us with a means to interpret the activities of others and to direct our own actions.
Such meaning comprises the social and cultural conceptions and behaviours imprinted on the environment by the presence and activities of its inhabitants, who, in turn, ‘read’ them through their own corporeal embodiment of the same environment. This transactional relationship between the physical aspects of an environment, its social/cultural context, and our own embodiment of it combines to create what is known as a sense of place: the psychological, physical, social, and cultural framework that helps us interpret the world around us, and directs our own behaviour in it. In turn, it is our own (as well as others’) presence in that environment that gives it meaning, and shapes its social/cultural character. By understanding the essence of place-ness in general, and in cyberspace in particular, we can create virtual places that can better support Internet-based activities, and make them equal to, and in some cases even better than, their physical counterparts. One of the activities that stands to benefit most from understanding the concept of cyber-places is learning—an interpersonal activity that requires the co-presence of others (a teacher and/or fellow learners), who can point out the difference between what matters and what does not, and produce an emotional involvement that helps students learn. Thus, while many administrators and educators rush to develop web-based remote learning sites, to leverage the economic advantages of one-to-many learning modalities, these sites deprive learners of the contextualization and embodiment inherent in brick-and-mortar learning institutions, which are needed to support the activity of learning. Can these qualities be achieved in virtual learning environments? If so, how?
These are some of the questions this talk will try to answer by presenting a virtual place-making methodology and its experimental implementation, intended to create a sense of place through contextualization and embodiment in virtual learning environments.
series CAADRIA
type normal paper
last changed 2022/06/07 07:52

_id 96cf
authors Woolley, B.
year 1992
title Virtual Worlds: A Journey in Hype and Hyperreality
source Oxford: Blackwell
summary In Virtual Worlds, Benjamin Woolley examines the reality of virtual reality. He looks at the dramatic intellectual and cultural upheavals that gave birth to it, the hype that surrounds it, the people who have promoted it, and the dramatic implications of its development. Virtual reality is not simply a technology, it is a way of thinking created and promoted by a group of technologists and thinkers that sees itself as creating our future. Virtual Worlds reveals the politics and culture of these virtual realists, and examines whether they are creating reality, or losing their grasp of it.
series other
last changed 2003/04/23 15:14

_id 4704
authors Amirante, I., Rinaldi, S. and Muzzillo, F.
year 1992
title A Tutorial Experiment Concerning Dampness Diagnosis Supported by an Expert System
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 159-172
doi https://doi.org/10.52842/conf.ecaade.1992.159
summary (A) The teaching of Technology of Building Rehabilitation in Italian Universities - (B) Experimental course of technological rehabilitation with computer tools - (C) Synthesis of technological approach - (D) Dampness diagnostic process using the Expert System - (E) Primary consideration on tutorial experience - (F) Bibliography
series eCAADe
last changed 2022/06/07 07:54

_id ecaadesigradi2019_449
id ecaadesigradi2019_449
authors Becerra Santacruz, Axel
year 2019
title The Architecture of ScarCity Game - The craft and the digital as an alternative design process
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 3, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 45-52
doi https://doi.org/10.52842/conf.ecaade.2019.3.045
summary The Architecture of ScarCity Game is a board game used as a pedagogical tool that challenges architecture students by involving them in a series of experimental design sessions to understand the design process under scarcity and the actual relation between the craft and the digital. This means "pragmatic delivery processes and material constraints, where the exchange between the artisan of handmade, representing local skills and technology of the digitally conceived is explored" (Huang 2013). The game focuses on understanding the different variables of the crafted design process of traditional communities under conditions of scarcity (Michel and Bevan 1992). This requires first analyzing the spatial environmental model of interaction, the available human and natural resources, and the dynamic relationship of these variables in a digital era. In the first stage (Pre-Agency), the game sets up the concept of the craft by limiting students' design exploration to a minimum possible perspective, developing locally available resources and techniques. The key elements of the design process of traditional knowledge communities have to be identified (Preez 1984). In other words, this stage is driven by limited resources + chance + contingency. In the second stage (Post-Agency), students, taking the architect's role within these communities, have to speculate and explore the interface between the craft (local knowledge and low-technology tools) and the digital, represented by computational data, newly available technologies and construction. This means the introduction of strategy + opportunity + chance as part of the design process. In this sense, the game has a life beyond its mechanics. This other life challenges the participants to exploit the possibilities of breaking the actual boundaries of design. The result is a tool to challenge conventional methods of teaching and learning that control a prescribed design process.
It confronts the rules that professionals in this field take for granted. The game simulates a 'fake' reality by exploring surveyed information in different ways. As a result, participants do not have anything 'real' to lose. Instead, they have all the freedom to innovate and be creative.
keywords Global south, scarcity, low tech, digital-craft, design process and innovation by challenge.
series eCAADeSIGraDi
email
last changed 2022/06/07 07:54
