CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 356

_id 48a7
authors Brooks
year 1999
title What's Real About Virtual Reality?
source IEEE Computer Graphics and Applications, Vol. 19, no. 6, Nov/Dec, 27
summary As is usual with infant technologies, the realization of the early dreams for VR and harnessing it to real work has taken longer than the wild hype predicted, but it is now happening. I assess the current state of the art, addressing the perennial questions of technology and applications. By 1994, one could honestly say that VR "almost works." Many workers at many centers could do quite exciting demos. Nevertheless, the enabling technologies had limitations that seriously impeded building VR systems for any real work except entertainment and vehicle simulators. Some of the worst problems were end-to-end system latencies, low-resolution head-mounted displays, limited tracker range and accuracy, and costs. The technologies have made great strides. Today one can get satisfying VR experiences with commercial off-the-shelf equipment. Moreover, technical advances have been accompanied by dropping costs, so it is both technically and economically feasible to do significant applications. VR really works. That is not to say that all the technological problems and limitations have been solved. VR technology today "barely works." Nevertheless, coming over the mountain pass from "almost works" to "barely works" is a major transition for the discipline. I have sought out applications that are now in daily productive use, in order to find out exactly what is real. Separating these from prototype systems and feasibility demos is not always easy. People doing daily production applications have been forthcoming about lessons learned and surprises encountered. As one would expect, the initial production applications are those offering high value over alternate approaches. These applications fall into a few classes. I estimate that there are about a hundred installations in daily productive use worldwide.
series journal paper
email
last changed 2003/04/23 15:14

_id 67fd
authors Brown, Paul
year 1994
title Hype, Hope and Cyberspace -or- Paradigms Lost: Pedagogical Problems at the Digital Frontier
source The Virtual Studio [Proceedings of the 12th European Conference on Education in Computer Aided Architectural Design / ISBN 0-9523687-0-6] Glasgow (Scotland) 7-10 September 1994, pp. 7-12
doi https://doi.org/10.52842/conf.ecaade.1994.007
summary A number of critical issues and problems have evolved over the past 20 years as computers have been introduced into the art and design curriculum. This essay compares the pragmatic demands of tool usage and the metaphorical emulation of traditional media with the need for examination of fundamental issues.
series eCAADe
last changed 2022/06/07 07:54

_id 45f0
authors Coleman, Kim
year 1994
title Synergism and Contingency: Design Collaboration with the Computer
source Reconnecting [ACADIA Conference Proceedings / ISBN 1-880250-03-9] Washington University (Saint Louis / USA) 1994, pp. 209-217
doi https://doi.org/10.52842/conf.acadia.1994.209
summary The outcome of an architectural project is always contingent, dependent upon conditions or events that are not established at the outset. A university design studio does not easily replicate the state of flux which occurs as an architectural commission proceeds. In developing an architectural project, each new situation, whether it be a building code issue, an engineering issue, or a client reaction, must be viewed as an opportunity to further refine and develop the design rather than as a hindrance to the outcome. In the design studio I describe in this paper, students test processes which attempt to take advantage of contingent conditions, opening up the design solutions to new possibilities. As a means to open up the design process to new possibilities, this studio introduces the computer as the primary tool for design exploration. Through the computer interface, the work speculates on the possibilities of synergism, defined as the actions of two or more substances or organisms to achieve an effect of which each is individually incapable. Three synergetic conditions are explored: that between the designer and the computer, that between the designer with computer and designers of previous works of art or architecture, and that between two or more designers working together with the computer. The lack of a predictable result, one that may be obvious or superficial, is a positive byproduct of the synergetic and contingent circumstances under which the designs are developed.
series ACADIA
email
last changed 2022/06/07 07:56

_id ga0024
id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence in 1991, at the age of 8, as the direct consequence of years of attempts by a group of computer science professionals to use digital computer technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of “Littérature potentielle” deployed by Oulipo in France and Oplepo in Italy (see “La Littérature potentielle (Créations Re-créations Récréations)”, published in France by Gallimard in 1973). During the last decade, TEAnO has been involved in the generation of “artistic goods” in aesthetic domains such as literature, music, theatre and painting. In all those artefacts the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets or to generate antonymic music) and sometimes it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered “manufactured” goods. A great part of such creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the massive combinatory power of the digital computer: S+n, edge extraction, phonetic manipulation, re-writing of well-known masterpieces, random generation of plots, etc. Despite these apparently simple underlying generation mechanisms, the systematic use of computer-based tools, as well as the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer-based generation of aesthetic goods: (1) the deep structure of an aesthetic work persists even through the more “destructive” manipulations (such as the antonymic transformation of the melody and lyrics of a music work) and becomes evident as a sort of profound, earliest and distinctive constraint; (2) the intensive flux of computer-generated “raw” material seems to confirm and to bring to our attention the existence of what Walter Benjamin indicated as the different way in which nature talks to a camera and to our eye, and what Franco Vaccari called “technological unconsciousness”. Essential references: R. Campagnoli, Y. Hersant, “Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)”, 1985; R. Campagnoli, “Oupiliana”, 1995; TEAnO, “Quaderno n. 2 Antologia di letteratura potenziale”, 1996; W. Benjamin, “Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit”, 1936; F. Vaccari, “Fotografia e inconscio tecnologico”, 1994
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id a378
authors Friedell, M., Kochhar, S., Marks, J., Sistare, S. and Weitzman, L.
year 1994
title Cooperative design, Human-computer interaction, Interaction techniques, Graphical user interfaces, Design automation, Design methodologies, Automated design of graphical displays, Computer-aided design
source Proceedings of ACM CHI'94 Conference on Human Factors in Computing Systems 1994 v.2 pp.187-188
summary Computer-aided-design (CAD) systems are now used to design all kinds of artifacts, from jet fighters to works of art. A major challenge in the design of a CAD system itself is the user interface (UI). Developing the UI to a CAD system raises myriad questions about input devices and techniques, display devices and techniques, and the details of the dialogue that relates the two. But these questions are ancillary to one central question: what is the fundamental nature of the interaction between human and computer in the design process supported by the CAD system? Is the design activity essentially manual, with the computer playing the role of passive tool, like a pen or paintbrush? Or is the computer augmenting the human designer by actively restricting available design choices, or by playing the role of critic or "improver"? Or maybe the interaction paradigm is one of "interactive evolution," in which the computer is responsible for generating design alternatives, with the human merely choosing among choices suggested by the machine. Or perhaps the computer performs the design process completely automatically, with a final acceptance check being the only human contribution? The panelists will describe these different paradigms for human-computer cooperation in a set of related CAD systems and prototypes and discuss the conditions under which each paradigm might be most useful.
series other
last changed 2002/07/07 16:01

_id ddss9432
id ddss9432
authors Goldschmidt, G.
year 1994
title Visual Reference for Design: Analogy, Transformation and the Act of Sketching
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary All designers know that it is impossible to infer a design solution from the givens of a task alone, no matter how complete and well presented they are. Therefore, designers seek to complement information they receive, and the material they bring into the task environment includes visual images. Images may be gathered from every imaginable source, from domain-specific images (in architecture they are usually classified and pertain to building type, location, period, technology, style or creator) through 'metaphoric' images (art, nature) to eclectic personal favourites. In addition, randomly encountered images may find their way into a database of references: a depository of potentially useful images. With the exception of factual information that fills in the task givens, it is usually far from clear what purpose may be served by images in general, or to what use the specific images aligned for a particular task may be put. We propose that the single most significant 'on line' role of visual references during the process of designing is to suggest potential analogies to the entity that is being designed. The process of discovering and exploiting an analogy in design is complex; we shall explain it in terms of Gentner's structure mapping theory, which we adapt to visual structures. We further propose that the abstraction process that must take place for the successful identification and mapping from source (visual reference) onto target (designed entity) requires transformations of images, and such transformations are best achieved through sketching. Sketching facilitates the two-way process of movement from the pictorial to the diagrammatic and from the schematic to the figural. Such transformations must take place to arrive at the match that allows conceptual transfer, mapping of structural relations and insight through analogy.
series DDSS
email
last changed 2003/08/07 16:36
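The structure-mapping account invoked in this abstract (carrying relations, rather than surface attributes, from a visual source onto the designed target) can be caricatured in a few lines. The sketch below is purely illustrative: the triple representation and the greedy matching are assumptions of this note, not the authors' formulation.

```python
def structure_map(source, target):
    """Greedy sketch of Gentner-style structure mapping.

    source, target: sets of (relation, a, b) triples describing two visual
    structures, e.g. ("above", "dome", "drum"). Only relations are matched,
    and object pairings must stay consistent across triples.
    Returns a dict pairing source objects with target objects.
    """
    mapping = {}
    for rel, a, b in sorted(source):
        for rel2, x, y in sorted(target):
            if rel == rel2 and mapping.get(a, x) == x and mapping.get(b, y) == y:
                mapping[a], mapping[b] = x, y
                break
    return mapping

# Toy analogy: relations in a domed precedent carried over onto a new design.
precedent = {("above", "dome", "drum"), ("encircles", "colonnade", "drum")}
design = {("above", "canopy", "core"), ("encircles", "ramp", "core")}
print(structure_map(precedent, design))
# {'dome': 'canopy', 'drum': 'core', 'colonnade': 'ramp'}
```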

_id f42b
id f42b
authors Hofmeyer, Herm
year 1994
title KONSTRUKTIEF ONTWERPEN MET BEHULP VAN COMPUTERPROGRAMMATUUR (1) VERSLAG AFSTUDEERPROJECT (2) BIJLAGE GEBRUIKSAANWIJZING, CODE EN TOELICHTING BIJ PROGRAMMA [Structural design with the aid of computer software: (1) graduation project report; (2) appendix with user manual, code and notes on the program]
source Technische Universiteit Eindhoven, Department of Architecture, Building and Planning, Structural Design Group
summary This thesis presents the first basics of an expert system to transform a spatial design into a structural design. The system thus relates space-allocation techniques and structural design software for stress engineering. Prolog-2 was used for the implementation. Although written in Dutch, the thesis provided background information for more recently written papers for eCAADe (2005) and CAADRIA (2006). The thesis was published as a paper in Design Studies (2006).
keywords space-allocation; structural design; expert system
series thesis:MSc
type normal paper
email
last changed 2006/04/21 07:58

_id caadria2004_k-1
id caadria2004_k-1
authors Kalay, Yehuda E.
year 2004
title CONTEXTUALIZATION AND EMBODIMENT IN CYBERSPACE
source CAADRIA 2004 [Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] Seoul Korea 28-30 April 2004, pp. 5-14
doi https://doi.org/10.52842/conf.caadria.2004.005
summary The introduction of VRML (Virtual Reality Modeling Language) in 1994, and other similar web-enabled dynamic modeling software (such as SGI’s Open Inventor and WebSpace), has created a rush to develop on-line 3D virtual environments, with purposes ranging from art, to entertainment, to shopping, to culture and education. Some developers took their cues from the science fiction literature of Gibson (1984), Stephenson (1992), and others. Many were web-extensions to single-player video games. But most were created as a direct extension to our new-found ability to digitally model 3D spaces and to endow them with interactive control and pseudo-inhabitation. Surprisingly, this technologically-driven stampede paid little attention to the core principles of place-making and presence, derived from architecture and cognitive science, respectively: two principles that could and should inform the essence of the virtual place experience and help steer its development. Why are the principles of place-making and presence important for the development of virtual environments? Why not simply be content with our ability to create realistically-looking 3D worlds that we can visit remotely? What could we possibly learn about making these worlds better, had we understood the essence of place and presence? To answer these questions we cannot look at place-making (both physical and virtual) from a 3D space-making point of view alone, because places are not an end unto themselves. Rather, places must be considered a locus of contextualization and embodiment that grounds human activities and gives them meaning. In doing so, places acquire a meaning of their own, which facilitates, improves, and enriches many aspects of our lives. They provide us with a means to interpret the activities of others and to direct our own actions. Such meaning is comprised of the social and cultural conceptions and behaviors imprinted on the environment by the presence and activities of its inhabitants, who, in turn, ‘read’ them through their own corporeal embodiment of the same environment. This transactional relationship between the physical aspects of an environment, its social/cultural context, and our own embodiment of it combines to create what is known as a sense of place: the psychological, physical, social, and cultural framework that helps us interpret the world around us, and directs our own behavior in it. In turn, it is our own (as well as others’) presence in that environment that gives it meaning, and shapes its social/cultural character. By understanding the essence of place-ness in general, and in cyberspace in particular, we can create virtual places that can better support Internet-based activities, and make them equal to, and in some cases even better than, their physical counterparts. One of the activities that stands to benefit most from understanding the concept of cyber-places is learning: an interpersonal activity that requires the co-presence of others (a teacher and/or fellow learners), who can point out the difference between what matters and what does not, and produce an emotional involvement that helps students learn. Thus, while many administrators and educators rush to develop web-based remote learning sites, to leverage the economic advantages of one-to-many learning modalities, these sites deprive learners of the contextualization and embodiment inherent in brick-and-mortar learning institutions, which are needed to support the activity of learning.
Can these qualities be achieved in virtual learning environments? If so, how? These are some of the questions this talk will try to answer by presenting a virtual place-making methodology and its experimental implementation, intended to create a sense of place through contextualization and embodiment in virtual learning environments.
series CAADRIA
type normal paper
last changed 2022/06/07 07:52

_id 2ccd
authors Kalisperis, Loukas N.
year 1994
title 3D Visualization in Design Education
source Reconnecting [ACADIA Conference Proceedings / ISBN 1-880250-03-9] Washington University (Saint Louis / USA) 1994, pp. 177-184
doi https://doi.org/10.52842/conf.acadia.1994.177
summary It has been said that "The beginning of architecture is empty space." (Mitchell 1990) This statement typifies a design education philosophy in which the concepts of space and form are separated and defined respectively as the negative and positive of the physical world, a world where solid objects exist and void, the mere absence of substance, is a surrounding atmospheric emptiness. Since the beginning of the nineteenth century, however, there has been an alternative concept of space as a continuum: that there is a continuously modified surface between the pressures of form and space in which the shape of the space in our lungs is directly connected to the shape of the space within which we exist (Porter 1979). The nature of the task of representing architecture alters to reflect the state of architectural understanding at each period of time. The construction of architectural space and form represents a fundamental achievement of humans in their environment and has always involved effort and materials requiring careful planning, preparation, and forethought. In architecture there is a necessary conversion to that which is habitable, experiential, and functional from an abstraction in an entirely different medium. It is often an imperfect procedure that centers on the translation rather than the actual design. Design of the built environment is an art of distinctions within the continuum of space, for example: between solid and void, interior and exterior, light and dark, or warm and cold. It is concerned with the physical organization and articulation of space. The amount and shape of the void contained and generated by the building create the fabric and substance of the built environment. Architecture as a design discipline, therefore, can be considered as a creative expression of the coexistence of form and space on a human scale. As Frank Ching writes in Architecture: Form, Space, and Order, "These elements of form and space are the critical means of architecture. While the utilitarian concerns of function and use can be relatively short lived, and symbolic interpretations can vary from age to age, these primary elements of form and space comprise timeless and fundamental vocabulary of the architectural designer." (1979)
series ACADIA
email
last changed 2022/06/07 07:52

_id ga0009
id ga0009
authors Lewis, Matthew
year 2000
title Aesthetic Evolutionary Design with Data Flow Networks
source International Conference on Generative Art
summary For a little over a decade, software has been created which allows for the design of visual content by aesthetic evolutionary design (AED) [3]. The great majority of these AED systems involve custom software intended for breeding entities within one fairly narrow problem domain, e.g., certain classes of buildings, cars, images, etc. [5]. Only a very few generic AED systems have been attempted, and extending them to a new design problem domain can require a significant amount of custom software development [6][8]. High end computer graphics software packages have in recent years become sufficiently robust to allow for flexible specification and construction of high level procedural models. These packages also provide extensibility, allowing for the creation of new software tools. One component of these systems which enables rapid development of new generative models and tools is the visual data flow network [1][2][7]. One of the first CG packages to employ this paradigm was Houdini. A system constructed within Houdini which allows for very fast generic specification of evolvable parametric prototypes is described [4]. The real-time nature of the software, when combined with the interlocking data networks, allows not only for vertical ancestor/child populations within the design space to be explored, but also allows for fast "horizontal" exploration of the potential population surface. Several example problem domains will be presented and discussed. References: [1] Alias | Wavefront. Maya. 2000, http://www.aliaswavefront.com [2] Avid. SOFTIMAGE. 2000, http://www.softimage.com [3] Bentley, Peter J. Evolutionary Design by Computers. Morgan Kaufmann, 1999. [4] Lewis, Matthew. "Metavolve Home Page". 2000, http://www.cgrg.ohio-state.edu/~mlewis/AED/Metavolve/ [5] Lewis, Matthew. "Visual Aesthetic Evolutionary Design Links". 2000, http://www.cgrg.ohio-state.edu/~mlewis/aed.html [6] Rowley, Timothy. "A Toolkit for Visual Genetic Programming". Technical Report GCG-74, The Geometry Center, University of Minnesota, 1994. [7] Side Effects Software. Houdini. 2000, http://www.sidefx.com [8] Todd, Stephen and William Latham. "The Mutation and Growth of Art by Computers" in Evolutionary Design by Computers, Peter Bentley ed., pp. 221-250, Chapter 9, Morgan Kaufmann, 1999.    
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
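As a rough sketch of the generic AED loop this abstract describes, the fragment below treats a design as a normalized parameter vector, breeds a child population from the current parent, and lets the user's aesthetic choice drive selection. The function names, population size and Gaussian mutation are assumptions of this note, not the Metavolve/Houdini implementation.

```python
import random

def mutate(parent, sigma=0.15):
    """Return a child parameter vector: each gene jittered, clamped to [0, 1]."""
    return [min(1.0, max(0.0, g + random.gauss(0.0, sigma))) for g in parent]

def breed_generation(parent, population_size=9):
    """Vertical step: derive a population of candidate children from one parent."""
    return [mutate(parent) for _ in range(population_size)]

def evolve(render, choose, n_params=8, generations=10):
    """Interactive aesthetic evolution: the program proposes, the user disposes.

    render(params): display or build the parametric model for one candidate.
    choose(candidates): return the index of the candidate the user prefers.
    """
    parent = [random.random() for _ in range(n_params)]
    for _ in range(generations):
        candidates = breed_generation(parent)
        for candidate in candidates:
            render(candidate)
        parent = candidates[choose(candidates)]
    return parent
```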

_id ddss9466
id ddss9466
authors Moore, Kathryn
year 1994
title Abstract Into Reality
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary Skills associated with the art of design, imagination, intuition, visual, spatial and perceptual thinking, have generally been ignored by the educational system. These imaginal skills have been considered insignificant within a predominantly positivist culture, disregarded as a valid measure of intelligence. Culturally, therefore, they remain relatively underdeveloped. A narrowly defined type of logic, reason and rationality has been regarded as the preferred form of knowledge, and as a consequence, significant and complementary ways of understanding and thinking have been neglected. This affects how we regard design, design processes and design theory. It is suggested that it also explains the divergence between design theory and design practice. This paper explores the relationship between the imaginal skills and design. Whereas the imaginal skills are often regarded as subjective and elusive, it is argued that the imaginal skills are cognitive abilities that can be taught, and that in doing so confidence is developed in different ways of thinking. This encourages qualitative or sensory understanding of space and place, a more comprehensive understanding of the vocabulary of design, and the ability to make connections between design expression and conceptual thinking. It considers the pedagogical programme of the undergraduate course in landscape architecture at UCE, which aims to develop understanding of different ways of thinking as an integral, complementary part of the design process.
series DDSS
last changed 2003/08/07 16:36

_id ddss9467
id ddss9467
authors Murison, Alison
year 1994
title A CAD Interface to Objective Assessment of Design to Support Decision Making in Urban Planning
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary The Department of Architecture at Edinburgh College of Art, Heriot-Watt University, has an ongoing project to create useful implementations of the method of spatial analysis called Space Syntax, developed by Prof Bill Hillier at the Bartlett School of Architecture, London. Space Syntax can predict the potential usage of each route through an urban space or large building; some routes will be avoided by most traffic (pedestrian or vehicular), while other routes will become busy thoroughfares. It has been used by architects and urban designers to support proposed developments, whether to show that potential commercial activity ought to be concentrated in an area of high traffic, or to change routes through troubled housing estates, bringing the protection of added traffic to areas previously avoided for fear of mugging. The paper describes how a specially written, customized version of AutoCAD enables postgraduate students of urban design and undergraduate architecture students to test their designs against the Space Syntax measures. Simple interactive graphics enable plans to be entered and compared, so that plans may be evaluated during the design process, and decisions supported by objective tests. This improves both design decisions and the learning process, and should be useful to many professionals in urban planning.
series DDSS
email
last changed 2003/08/07 16:36
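For orientation: Space Syntax integration is conventionally related to the mean topological depth of each route (axial line) from all the others in the connectivity graph, and routes with low mean depth are the ones predicted to carry most movement. The sketch below is a minimal stand-in for that measure, not the customized AutoCAD implementation described in the paper.

```python
from collections import deque

def mean_depth(graph, start):
    """Mean number of topological steps from `start` to every other axial line.

    graph: dict mapping each axial line to the lines that intersect it.
    """
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in depth:
                depth[neighbour] = depth[node] + 1
                queue.append(neighbour)
    others = len(depth) - 1
    return sum(depth.values()) / others if others else 0.0

# Routes with low mean depth (high integration) are the ones Space Syntax
# predicts will carry the most pedestrian or vehicular movement.
axial_map = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B"]}
ranking = sorted(axial_map, key=lambda line: mean_depth(axial_map, line))
print(ranking)  # most integrated route first: ['B', 'A', 'C', 'D']
```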

_id 9b9e
authors Schofield, Simon
year 1994
title Non-photorealistic rendering: A critical examination and proposed system
source Middlesex University
summary In the first part of the program the emergent field of Non-Photorealistic Rendering is explored from a cultural perspective. This is to establish a clear understanding of what Non-Photorealistic Rendering (NPR) ought to be in its mature form in order to provide goals and an overall infrastructure for future development. This thesis claims that unless we understand and clarify NPR's relationship with other media (photography, photorealistic computer graphics and traditional media) we will continue to manufacture "new solutions" to computer-based imaging which are confused and naive in their goals. Such solutions will be rejected by the art and design community, generally condemned as novelties of little cultural worth (i.e. they will not sell). This is achieved by critically reviewing published systems that are naively described as Non-photorealistic or "painterly" systems. Current practices and techniques are criticised in terms of their low ability to articulate meaning in images; solutions to this problem are given. A further argument claims that NPR, while being similar to traditional "natural media" techniques in certain aspects, is fundamentally different in other ways. This similarity has led NPR to be sometimes proposed as "painting simulation" - something it can never be. Methods for avoiding this position are proposed. The similarities and differences to painting and drawing are presented and NPR's relationship to its other counterpart, Photorealistic Rendering (PR), is then delineated. It is shown that NPR is paradigmatically different to other forms of representation - i.e. it is not an "effect", but rather something basically different. The benefits of NPR in its mature form are discussed in the context of Architectural Representation and Design in general. This is done in conjunction with consultations with designers and architects. From this consultation a "wish-list" of capabilities is compiled by way of a requirements capture for a proposed system. A series of computer-based experiments resulting in the systems "Expressive Marks" and "Magic Painter" is carried out; these practical experiments add further understanding to the problems of NPR. The exploration concludes with a prototype system, "Piranesi", which is submitted as a good overall solution to the problem of NPR. In support of this written thesis are: the Expressive Marks system; the Magic Painter system; the Piranesi system (which includes the EPixel and Sketcher systems); and a large portfolio of images generated throughout the exploration.
keywords Computer Graphics; Visual Representation; Non-photorealistic Rendering; Natural Media Simulations Rendering; Post-processing
series thesis:PhD
last changed 2003/02/12 22:37

_id ga0231
id ga0231
authors Sparacino, Flavia
year 2002
title Narrative Spaces: bridging architecture and entertainment via interactive technology
source International Conference on Generative Art
summary Our society’s modalities of communication are rapidly changing. Large panel displays and screens are being installed in many public spaces, ranging from open plazas, to shopping malls, to private houses, to theater stages, classrooms, and museums. In parallel, wearable computers are transforming our technological landscape by reshaping the heavy, bulky desktop computer into a lightweight, portable device that is accessible to people at any time. Computation and sensing are moving from computers and devices into the environment itself. The space around us is instrumented with sensors and displays, and it tends to reflect a diffused need to combine the information space with our physical space. This combination of large public and miniature personal digital displays together with distributed computing and sensing intelligence offers unprecedented opportunities to merge the virtual and the real, the information landscape of the Internet with the urban landscape of the city, and to transform digital animated media into storytellers, in public installations and through personal wearable technology. This paper describes technological platforms built at the MIT Media Lab from 1994 to 2002 that contribute to defining new trends in architecture that merge virtual and real spaces, and are reshaping the way we live and experience the museum, the house, the theater, and the modern city.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id ascaad2006_paper11
id ascaad2006_paper11
authors Stanton, Michael
year 2006
title Redemptive Technologies II: the sequel (A Decade Later)
source Computing in Architecture / Re-Thinking the Discourse: The Second International Conference of the Arab Society for Computer Aided Architectural Design (ASCAAD 2006), 25-27 April 2006, Sharjah, United Arab Emirates
summary Nearly ten years ago I published an article in the Dutch journal ARCHIS called "Redemptive Technologies." It derived from comments I made during a conference held in New Orleans in 1994. At that point the machine aesthetic associated with the "new technologies" generated by the computer had not established a precise formal vocabulary but was generating great excitement among the architectural avant-garde. It addressed the limits of the imagery and data produced by this machine and the simple but very political problem of cost and obsolescence. Now the millennium is well past and the somewhat apostolic fervor that accompanied the interaction of a very expensive consumer device with architecture has cooled. Discussion has generally moved from the titillating possibilities opened up by the device, many of which have so far not come to pass, to the sorts of hard and software available. An architectural language closely associated with the imagistic potential of new programs, biomorphism, has now come and gone on the runways of architectural taste. And yet, in recent articles rejecting the direct political effect of architectural work, the potential of new programs and virtual environments are proposed as alternative directions that our perpetually troubled profession may pursue. This paper will assess the last decade regarding the critical climate that surrounds cyber/technology. In the economic context of architectural education in which computers are still a central issue, the political issues that evolve will form a backdrop to any discussion. Furthermore, the problem of the "new" language of biomorphism will be reiterated as an architectural grammar with a 100-year history, from Catalan Modernismo and Art Nouveau, through Hermann Finsterlin and Erich Mendelsohn's projects of the 1920s, to Giovanni Michelucci and Italian work of the post-war, to Frederick Kiesler's Endless House of the late '50s, continuing through moments of Deconstructivism and Architectural Association salients, etc. These forms continue to be semantically simplistic and hard to make. Really the difference is the neo-avant-garde imagery and rhetoric involved in their continuing resurrection. Computer images, but also the ubiquitous machine itself, are omnipresent and often their value is assumed without question or proposed as a remedy for issues they cannot possibly address. This paper will underline the problem of the computer, of screens and the insistent imagistic formulas encouraged by their use, and the ennui that is beginning to pervade the discipline after initial uncritical enthusiasm for this very powerful and expensive medium. But it will also propose other very valuable directions, those relating to reassessing the processes rather than the images that architecture engages, that this now aging "new" technology can much more resolutely and successfully address.
series ASCAAD
email
last changed 2007/04/08 19:47

_id dc0f
authors Stefik, M.
year 1994
title Knowledge Systems
source Morgan Kaufmann Publishers Inc., San Francisco. p. 295
summary Digital systems cannot act reliably and intelligently in ignorance. They need to know how to act intelligently. Computer systems that use knowledge are called knowledge-based systems, or simply, knowledge systems. Knowledge systems first came to the public's attention in the 1980s as a successful application of artificial intelligence. Since then their use has spread widely throughout industry, finance and science. But what are the principles behind knowledge systems? What are they useful for? How are they built? What are their limitations? How can they connect with human activities for creating and using knowledge? Addressing these questions is the purpose of this book. The art of building knowledge systems is inherently multidisciplinary, incorporating computer science theory, programming practice and psychology. The content of this book incorporates these varied fields covering topics ranging from the design of search algorithms and representations to techniques for acquiring the task specific knowledge required for developing successful systems. It discusses common representations for time, space, uncertainty, and vagueness. It also explains the knowledge-level organizations for the three most widespread knowledge-intensive tasks: classification, configuration, and diagnosis. In a university setting, this book is intended for use at the advanced undergraduate levels and beginning graduate levels. For students outside of computer science, this book provides an introduction that prepares them for using and creating knowledge systems in their own areas of specialization. For computer science students, this book provides a deeper treatment of knowledge systems than is possible in a general introduction to artificial intelligence.
series other
last changed 2003/04/23 15:14

_id ddss9495
id ddss9495
authors Tombre, Karl and Paul, Jean-Claude
year 1994
title Document Analysis: A Way To Integrate Existing Paper Information In Architectural Databases
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary In any domain, the use of information systems leads to the problem of converting the existing archives of paper documents into a format suitable for the computerized system. In this area, most attention has probably been given to structured document analysis, i.e. the automated analysis of business documents such as letters, forms, documentation, manuals, etc., including the well-known area of character recognition. But document analysis is also a powerful tool in technical domains such as architecture, where large quantities of drawings of various kinds are available on paper. In this paper, we briefly present the state of the art in technical drawing analysis and propose some techniques suitable for the specific application of converting from paper to architectural databases.
series DDSS
email
last changed 2003/08/07 16:36

_id 3c22
authors Wegener, M.
year 1994
title Operational Urban Models: State of the Art
source Journal of the American Planning Association 60(1), pp. 17-29
summary Contributed by Susan Pietsch (spietsch@arch.adelaide.edu.au)
keywords 3D City Modeling, Development Control, Design Control
series journal paper
last changed 2003/05/15 21:45

_id ddss9505
id ddss9505
authors Wyatt, Ray
year 1994
title Strategic Decision Support: Using Neural Networks to Enhance and Explore Human Strategizing
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary This paper focuses on a mechanism by which planners and designers are thought to reduce complexity. The mechanism involves choosing a potentially profitable direction of search, or choosing a potentially profitable set of aims to pursue, within which a detailed solution might be found, and rejecting all potentially unprofitable directions of search. The literature of psychology, planning and operations research is drawn upon to argue that designers base such initial choice of direction on their candidate aims' relative scores for eight key parameters: probability, returns for effort, delay, robustness, difficulty, present satisfaction and dependence. The paper then describes a piece of decision support software which, by eliciting any user's scores for their candidate aims on the eight key parameters, is able to order such aims into a strategic plan. Such software also incorporates a simulated neural network which attempts to "learn", from users' recorded responses to the software-suggested strategies, how users actually weight the relative importance of the eight key parameters. That is, it is hoped that the neural network will "converge" to some prototypical pattern(s) of weightings. Having such a tool would certainly constitute an advance in the state of the art of computer-aided strategy development. Alternatively, if the network never converges, the use of neural networks in computer-aided planning is perhaps not advisable. Accordingly, a test was conducted in which a group of planners used the software to address a typical spatial problem. The results, in terms of whether or not the neural network converged, will be reported.
series DDSS
email
last changed 2003/08/07 16:36
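The mechanism outlined above can be approximated by a weighted-sum ranking of candidate aims over the key parameters, with the weights nudged towards whatever ordering the user actually accepts. The sketch below is an assumption-laden stand-in: the aim names and scores are invented, the update is a plain delta rule, and this is not the paper's software or its neural network.

```python
def rank_aims(aims, weights):
    """Order candidate aims by the weighted sum of their parameter scores.

    aims: dict mapping aim name -> list of scores on the key parameters.
    weights: one weight per parameter (same length as each score list).
    """
    def value(scores):
        return sum(w * s for w, s in zip(weights, scores))
    return sorted(aims, key=lambda name: value(aims[name]), reverse=True)

def update_weights(weights, accepted, rejected, rate=0.05):
    """Delta-rule style nudge: move the weights so that the aim the user
    accepted scores higher than the one the software had ranked above it."""
    return [w + rate * (a - r) for w, a, r in zip(weights, accepted, rejected)]

# Toy use: three invented aims scored (0..1) on eight unnamed key parameters.
aims = {
    "expand eastwards":  [0.7, 0.4, 0.2, 0.8, 0.3, 0.5, 0.6, 0.4],
    "densify the core":  [0.5, 0.7, 0.6, 0.4, 0.6, 0.4, 0.3, 0.7],
    "defer and monitor": [0.9, 0.2, 0.9, 0.6, 0.1, 0.8, 0.2, 0.3],
}
weights = [1.0] * 8
plan = rank_aims(aims, weights)          # software-suggested strategic ordering
# If the user promotes the second-ranked aim over the first, learn from that:
weights = update_weights(weights, aims[plan[1]], aims[plan[0]])
```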

_id b110
id b110
authors Abadi Abbo, Isaac and Cavallin Calanche, Humerto
year 1994
title Ecological Validity of Real Scale Models
source Beyond Tools for Architecture [Proceedings of the 5th European Full-scale Modeling Association Conference / ISBN 90-6754-375-6] Wageningen (The Netherlands) 6-9 September 1994, pp. 31-40
summary Space simulation is a technique employed by architects, urban designers, environmental psychologists and other related specialists. It is used for academic and research purposes, as an aid to evaluate the impact that the built environment, or that to be built, would have on potential or real users. The real scale model is regarded as one of the models that represents spatial characteristics most reliably in space simulations. However, it is necessary to know the ecological validity of the simulations carried out, that is, the degree to which laboratory results can be taken as reliable and representative of real situations. In order to discover which variables of the model used are relevant, so that their perception is ecologically valid with respect to reality, a study was designed in which simulations of specific spaces are appraised both in real space and in the real scale model. The results of both evaluations were statistically analysed and show no significant differences in psychological impressions between the evaluation of real spaces and the real scale model. This ecological validation of the real scale model could be of great use in estimating the validity of results obtained for spaces simulated in the laboratory.
keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa
last changed 2006/06/24 09:29
