CumInCAD is a Cumulative Index about publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


_id ga0024
id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence in 1991, at the age of 8, as the direct consequence of years of attempts by a group of computer science professionals to use digital computer technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of "Littérature potentielle" deployed by Oulipo in France and Oplepo in Italy (see "La Littérature potentielle (Créations Re-créations Récréations)", published in France by Gallimard in 1973). During the last decade, TEAnO has been involved in the generation of "artistic goods" in aesthetic domains such as literature, music, theatre and painting. In all those artefacts the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets or to generate antonymic music) and, sometimes, it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered "manufactured" goods. A great part of such creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the use of the digital computer's massive combinatory power: S+n, edge extraction, phonetic manipulation, re-writing of well-known masterpieces, random generation of plots, etc. Regardless of these apparently simple underlying generation mechanisms, the systematic use of computer-based tools, as well as the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer-based generation of aesthetic goods: (1) the deep structure of an aesthetic work persists even through the more "destructive" manipulations (such as the antonymic transformation of the melody and lyrics of a music work) and becomes evident as a sort of profound, earliest and distinctive constraint; (2) the intensive flux of computer-generated "raw" material seems to confirm and to bring to our attention the existence of what Walter Benjamin indicated as the different way in which nature talks to a camera and to our eye, and what Franco Vaccari called "technological unconsciousness". Essential references: R. Campagnoli, Y. Hersant, "Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)", 1985; R. Campagnoli, "Oupiliana", 1995; TEAnO, "Quaderno n. 2 Antologia di letteratura potenziale", 1996; W. Benjamin, "Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit", 1936; F. Vaccari, "Fotografia e inconscio tecnologico", 1994
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 2068
authors Frazer, John
year 1995
title AN EVOLUTIONARY ARCHITECTURE
source London: Architectural Association
summary In "An Evolutionary Architecture", John Frazer presents an overview of his work for the past 30 years, attempting to develop a theoretical basis for architecture using analogies with nature's processes of evolution and morphogenesis. Frazer's vision of the future of architecture is to construct organic buildings: thermodynamically open systems which are more environmentally aware and sustainable physically, sociologically and economically. The range of topics which Frazer discusses is a good illustration of the breadth and depth of the evolutionary design problem. Environmental Modelling. One of the first topics dealt with is the importance of environmental modelling within the design process. Frazer shows how environmental modelling is often misused or misinterpreted by architects, with particular reference to solar modelling. From the discussion given it would seem that simplification of the environmental models is the prime culprit, resulting in misinterpretation and misuse. The simplifications are understandable given the amount of information needed for accurate modelling. By simplifying the model of the environmental conditions the architect is able to make informed judgments within reasonable amounts of time and effort. Unfortunately the simplifications result in errors which compound and cause the resulting structures to fall short of their anticipated performance. Frazer obviously believes that the computer can be a great aid in the harnessing of environmental modelling data, provided that the same simplifying assumptions are not made and that better models and interfaces are possible. Physical Modelling. Physical modelling has played an important role in Frazer's research, leading to the construction of several novel machine-readable interactive models, ranging from Lego-like building blocks to beermat cellular automata and wall partitioning systems. Ultimately this line of research has led to the Universal Constructor and the Universal Interactor.
The Universal Constructor. The Universal Constructor features on the cover of the book. It consists of a base plug-board, called the "landscape", on top of which "smart" blocks, or cells, can be stacked vertically. The cells are individually identified and can communicate with neighbours above and below. Cells communicate with users through a bank of LEDs displaying the current state of the cell. The whole structure is machine readable and so can be interpreted by a computer. The computer can interpret the states of the cells as either colour or geometrical transformations, allowing a wide range of possible interpretations. The user interacts with the computer display through direct manipulation of the cells. The computer can communicate with and even direct the actions of the user through feedback, using the cells to display various states. The direct manipulation of the cells encourages experimentation by the user and demonstrates basic concepts of the system. The Universal Interactor. The Universal Interactor is a whole series of experimental projects investigating novel input and output devices. All of the devices speak a common binary language and so can communicate through a mediating central hub. The result is that input, from say a body-suit, can be used to drive the output of a sound system, or vice versa. The Universal Interactor opens up many possibilities for expression when using a CAD system that may at first seem very strange. However, some of these feedback systems may prove superior in the hands of skilled technicians to more standard devices. Imagine how a musician might be able to devise structures by playing melodies which express their character. Of course the interpretation of input in this form poses a difficult problem which will take a great deal of research to solve. The Universal Interactor has been used to provide environmental feedback to affect the development of evolving genetic codes.
The feedback given by the Universal Interactor has been used to guide selection of individuals from a population. Adaptive Computing. Frazer completes his introduction to the range of tools used in his research by giving a brief tour of adaptive computing techniques, covering topics including cellular automata, genetic algorithms, classifier systems and artificial evolution. Cellular Automata. As previously mentioned, Frazer has done some work using cellular automata in both physical and simulated environments. Frazer discusses how surprisingly complex behaviour can result from the simple local rules executed by cellular automata. Cellular automata are also capable of computation, in fact able to perform any computation possible by a finite state machine. Note that this does not mean that cellular automata are capable of any general computation, as this would require the construction of a Turing machine, which is beyond the capabilities of a finite state machine. Genetic Algorithms. Genetic algorithms were first presented by Holland and have since become an important tool for many researchers in various areas. They were originally developed for problem-solving and optimization problems with clearly stated criteria and goals. Frazer fails to mention one of the most important differences between genetic algorithms and other adaptive problem-solving techniques, e.g. neural networks. Genetic algorithms have the advantage that criteria can be clearly stated and controlled within the fitness function. The learning by example which neural networks rely upon does not afford this level of control over what is to be learned. Classifier Systems. Holland went on to develop genetic algorithms into classifier systems. Classifier systems are more focussed upon the problem of learning appropriate responses to stimuli than upon searching for solutions to problems. Classifier systems receive information from the environment and respond according to rules, or classifiers.
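The "surprisingly complex behaviour from simple local rules" noted above is easy to demonstrate with a one-dimensional cellular automaton. The sketch below uses an elementary automaton with a standard Wolfram-style rule number; it is illustrative only and is not any of Frazer's physical or simulated systems:

```python
# Illustrative sketch: an elementary 1-D cellular automaton in which each
# cell's next state depends only on itself and its two neighbours.
# Rule 30 maps each 3-cell neighbourhood (left, centre, right) to the
# next state of the centre cell via the bits of the rule number.

def step(cells, rule=30):
    """Apply one synchronous update; cells wrap around at the edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=31, generations=8, rule=30):
    """Grow a history of generations from a single live 'seed' cell."""
    cells = [0] * width
    cells[width // 2] = 1  # the seed
    rows = [cells]
    for _ in range(generations):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Despite the rule table fitting in a single byte, the printed history is already irregular and hard to predict, which is the point Frazer makes about local rules and global behaviour.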
Successful classifiers are rewarded, creating a reinforcement learning environment. Obviously, the mapping between classifier systems and the cybernetic view of organisms sensing, processing and responding to environmental stimuli is strong. It would seem that a central process similar to a classifier system would be appropriate at the core of an organic building, learning appropriate responses to environmental conditions over time. Artificial Evolution. Artificial evolution traces its roots back to the Biomorph program which was described by Dawkins in his book "The Blind Watchmaker". Essentially, artificial evolution requires that a user supplements the standard fitness function in genetic algorithms to guide evolution. The user may provide selection pressures which are unquantifiable in a stated problem and thus provide a means for dealing with ill-defined criteria. Frazer notes that solving problems with ill-defined criteria using artificial evolution seriously limits the scope of problems that can be tackled: the reliance upon user interaction reduces the practical size of populations and the duration of evolutionary runs. Coding Schemes. Frazer goes on to discuss the encoding of architectural designs and their subsequent evolution, introducing two major systems, the Reptile system and the Universal State Space Modeller. Blueprint vs. Recipe. Frazer points out the inadequacies of using standard "blueprint" design techniques in developing organic structures. Using a "recipe" to describe the process of constructing a building is presented as an alternative. Recipes for construction are discussed with reference to the analogous process description given by DNA to construct an organism. The Reptile System. The Reptile System is an ingenious construction set capable of producing a wide range of structures using just two simple components.
Frazer saw the advantages of this system for rule-based and evolutionary systems in the compactness of structure descriptions. Compactness was essential for the early computational work, when computer memory and storage space were scarce. However, compact representations such as those described form very rugged fitness landscapes which are not well suited to evolutionary search techniques. Structures are created from an initial "seed" or minimal construction, for example a compact spherical structure. The seed is then manipulated using a series of processes or transformations, for example stretching, shearing or bending. The structure grows according to the transformations applied to it. Obviously, the transformations could be a predetermined sequence of actions which would always yield the same final structure given the same initial seed. Alternatively, the series of transformations applied could be environmentally sensitive, resulting in forms which were also sensitive to their location. The idea of taking a geometrical form as a seed and transforming it using a series of processes to create complex structures is similar in many ways to the early work of Latham creating large morphological charts. Latham went on to develop his ideas into the "Mutator" system, which he used to create organic artworks. Generalising the Reptile System. Frazer has proposed a generalised version of the Reptile System to tackle more realistic building problems, generating the seed or minimal configuration automatically from design requirements. From this starting point (or set of starting points) solutions could be evolved using artificial evolution. Quantifiable and specific aspects of the design brief define the formal criteria which are used as a standard fitness function. Non-quantifiable criteria, including aesthetic judgments, are evaluated by the user. The proposed system would be able to learn successful strategies for satisfying both formal and user criteria.
In doing so the system would become a personalised tool of the designer: a personal assistant able to anticipate aesthetic judgements and other criteria by employing previously successful strategies. Ultimately, this is a similar concept to Negroponte's "Architecture Machine", which he proposed would be a computer system so personalised as to be almost unusable by other people. The Universal State Space Modeller. The Universal State Space Modeller is the basis of Frazer's current work. It is a system which can be used to model any structure, hence the universal claim in its title. The datastructure underlying the modeller is a state space of scaleless logical points, called motes. Motes are arranged in a close-packing sphere arrangement, which makes each one equidistant from its twelve neighbours. Any point can be broken down into a self-similar tetrahedral structure of logical points, giving the state space a fractal nature which allows modelling at many different levels at once. Each mote can be thought of as analogous to a cell in a biological organism. Every mote carries a copy of the architectural genetic code, in the same way that each cell within an organism carries a copy of its DNA. The genetic code of a mote is stored as a sequence of binary "morons" which are grouped together into spatial configurations which are interpreted as the state of the mote. The developmental process begins with a seed. The seed develops through cellular duplication according to the rules of the genetic code. In the beginning the seed develops mainly in response to the internal genetic code, but as the development progresses the environment plays a greater role. Cells communicate by passing messages to their immediate twelve neighbours; however, a cell can also send messages directed at remote cells without knowledge of their spatial relationship. During development, cells take on specialised functions, including environmental sensors or producers of raw materials.
The resulting system is process driven, without presupposing the existence of a construction set to use. The datastructure can be interpreted in many ways to derive various phenotypes. The resulting structure is a by-product of the cellular activity during development and in response to the environment. As such the resulting structures have much in common with living organisms, which are also the emergent result or by-product of local cellular activity. Primordial Architectural Soups. To conclude, Frazer presents some of the most recent work done: evolving fundamental structures using limited raw materials, an initial seed and massive feedback. Frazer proposes to go further and do away with the need for an initial seed, starting instead with a primordial soup of basic architectural concepts. The research is attempting to evolve the starting conditions and evolutionary processes without any preconditions. Is there enough time to evolve a complex system from the basic building blocks which Frazer proposes? The computational complexity of the task being embarked upon is not discussed. There is an implicit assumption that the "superb tactics" of natural selection are enough to cut through the complexity of the task. However, Kauffman has shown how self-organisation plays a major role in the early development of replicating systems which we may call alive. Natural selection requires a solid basis upon which it can act. Is the primordial soup which Frazer proposes of the correct constitution to support self-organisation? Kauffman suggests that one of the most important attributes a primordial soup needs in order to be capable of self-organisation is a complex network of catalysts, together with the controlling mechanisms to stop the reactions from going supracritical. Can such a network be built from primitive architectural concepts? What does it mean to have a catalyst in this domain?
Conclusion. Frazer shows some interesting work in both the areas of evolutionary design and self-organising systems. It is obvious from his work that he sympathises with the opinions put forward by Kauffman that the order found in living organisms comes from both external evolutionary pressure and internal self-organisation. His final remarks underline this by paraphrasing the words of Kauffman, that life is always to be found on the edge of chaos. By the "edge of chaos" Kauffman is referring to the area within the ordered regime of a system close to the "phase transition" to chaotic behaviour. Unfortunately, Frazer does not demonstrate that the systems he has presented have the necessary qualities to derive useful order at the edge of chaos. He does not demonstrate, as Kauffman does repeatedly, that there exists a "phase transition" between the ordered and chaotic regimes of his systems. Nor does he make any study of the relationship of the useful forms generated by his work to the phase transition regions of his systems, should they exist. If we are to find an organic architecture, in more than name alone, it is surely to reside close to the phase transition of the construction system of which it is built. Only there, if we are to believe Kauffman, are we to find useful order together with environmentally sensitive and thermodynamically open systems which can approach the utility of living organisms.
series other
type normal paper
last changed 2004/05/22 14:12

_id 647a
authors Kirschner, Ursula
year 1996
title Teaching Experimental Design with CAAD
doi https://doi.org/10.52842/conf.ecaade.1996.221
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 221-226
summary 2-D CAAD is the standard tool in architectural work and education, whereas 3-dimensional CAAD is still used mainly to present a finished design. This paper demonstrates that experimental design in 3-D allows students to deal with new methods of design. At North East Lower Saxony Polytechnic, 1995 saw the beginning of the development of didactic methods for teaching design with the interactive use of common 3-D CAAD tools. Six exercises were devised, the first two being 2-D exercises in urban and layout design. Subsequent steps introduced three styles of architectural designing with 3-D tools. The students selected one of these styles for their three-day exercise in urban planning. Based on the results, three main approaches were developed: the "digital toolkit", the "additive design approach" and the "lighting simulation".
series eCAADe
last changed 2022/06/07 07:52

_id c7e9
authors Maver, T.W.
year 2002
title Predicting the Past, Remembering the Future
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 2-3
summary Charlas Magistrales 2. There never has been such an exciting moment in the extraordinary 30-year history of our subject area as NOW, when the philosophical, theoretical and practical issues of virtuality are taking centre stage.
The Past. There have, of course, been other defining moments during these exciting 30 years:
• the first algorithms for generating building layouts (circa 1965);
• the first use of computer graphics for building appraisal (circa 1966);
• the first integrated package for building performance appraisal (circa 1972);
• the first computer generated perspective drawings (circa 1973);
• the first robust drafting systems (circa 1975);
• the first dynamic energy models (circa 1982);
• the first photorealistic colour imaging (circa 1986);
• the first animations (circa 1988);
• the first multimedia systems (circa 1995); and
• the first convincing demonstrations of virtual reality (circa 1996).
Whereas the CAAD community has been hugely inventive in the development of ICT applications to building design, it has been woefully remiss in its attempts to evaluate the contribution of those developments to the quality of the built environment or to the efficiency of the design process. In the absence of any real evidence, one can only conjecture regarding the real benefits, which fall, it is suggested, under the following headings:
• Verisimilitude: the extraordinary quality of still and animated images of the formal qualities of the interiors and exteriors of individual buildings and of whole neighborhoods must surely give great comfort to practitioners and their clients that what is intended, formally, is what will be delivered, i.e. WYSIWYG: what you see is what you get.
• Sustainability: the power of «first-principle» models of the dynamic energetic behaviour of buildings in response to changing diurnal and seasonal conditions has the potential to save millions of dollars and dramatically to reduce the damaging environmental pollution created by badly designed and managed buildings.
• Productivity: CAD is now a multi-billion dollar business which offers design decision support systems which operate, effectively, across continents, time-zones, professions and companies.
• Communication: multimedia technology, cheap to deliver but high in value, is changing the way in which we can explain and understand the past and envisage and anticipate the future; virtual past and virtual future!
Macromyopia. The late John Lansdown offered the view, in his wonderfully prophetic way, that "the future will be just like the past, only more so". So what can we expect the extraordinary trajectory of our subject area to be? To have any chance of being accurate we have to have an understanding of the phenomenon of macromyopia: the phenomenon exhibited by society of greatly exaggerating the immediate short-term impact of new technologies (particularly the information technologies) but, more importantly, seriously underestimating their sustained long-term impacts: socially, economically and intellectually.
Examples of flawed predictions regarding the future application of information technologies include:
• The British Government in 1880 declined to support the idea of a national telephonic system, backed by the argument that there were sufficient small boys in the countryside to run with messages.
• Alexander Bell was modest enough to say: «I am not boasting or exaggerating but I believe, one day, there will be a telephone in every American city».
• Tom Watson, in 1943, said: «I think there is a world market for about 5 computers».
• In 1977, Ken Olsen of Digital said: «There is no reason for any individuals to have a computer in their home».
The Future. Just as the ascent of woman/man-kind can be attributed to her/his capacity to discover amplifiers of modest human capability, so we shall discover how best to exploit our most important amplifier: that of the intellect. The more we know the more we can figure; the more we can figure the more we understand; the more we understand the more we can appraise; the more we can appraise the more we can decide; the more we can decide the more we can act; the more we can act the more we can shape; and the more we can shape, the better the chance that we can leave for future generations a truly sustainable built environment which is fit-for-purpose, cost-beneficial, environmentally friendly and culturally significant. Central to this aspiration will be our understanding of the relationship between real and virtual worlds and how to move effortlessly between them. We need to be able to design, from within the virtual world, environments which may be real or may remain virtual or, perhaps, be part real and part virtual. What is certain is that the next 30 years will be every bit as exciting and challenging as the first 30 years.
series SIGRADI
last changed 2016/03/10 09:55

_id 01e5
authors Negroponte, N.
year 1995
title Being Digital
source Alfred A. Knopf, New York
summary As the founder of MIT's Media Lab and a popular columnist for Wired, Nicholas Negroponte has amassed a following of dedicated readers. Negroponte's fans will want to get a copy of Being Digital, which is an edited version of the 18 articles he wrote for Wired about "being digital." Negroponte's text is mostly a history of media technology rather than a set of predictions for future technologies. In the beginning, he describes the evolution of CD-ROMs, multimedia, hypermedia, HDTV (high-definition television), and more. The section on interfaces is informative, offering an up-to-date history on visual interfaces, graphics, virtual reality (VR), holograms, teleconferencing hardware, the mouse and touch-sensitive interfaces, and speech recognition. In the last chapter and the epilogue, Negroponte offers visionary insight on what "being digital" means for our future. Negroponte praises computers for their educational value but recognizes certain dangers of technological advances, such as increased software and data piracy and huge shifts in our job market that will require workers to transfer their skills to the digital medium. Overall, Being Digital provides an informative history of the rise of technology and some interesting predictions for its future.
series other
last changed 2003/04/23 15:14

_id 47c1
authors Selles Cantos, P. and Mas Llorens, V.
year 1995
title Digital Modelling Tools at the Design Studio: Methodology
doi https://doi.org/10.52842/conf.ecaade.1995.061
source Multimedia and Architectural Disciplines [Proceedings of the 13th European Conference on Education in Computer Aided Architectural Design in Europe / ISBN 0-9523687-1-4] Palermo (Italy) 16-18 November 1995, pp. 61-70
summary This research work is being financed by the "Instituto de Ciencias de la Educación" (I.C.E.), Universidad Politécnica de Valencia (U.P.V.), and is currently taking place at the "Taller 2 de Proyectos Arquitectónicos". We have placed CAD systems on drafting tables, so students can learn to apply different media and tools, both digital and traditional, in their design processes. To integrate CAD systems into the design studio we have developed a methodology upon which to relate the "mechanics of the digital tool" with the often elusive, and seldom explicit, "mechanics of designing". We see architectural design as a process that can benefit from the use of computer systems through modelling (representation) and rendering (static or dynamic visualization). Digital modelling tools are powerful instruments to simulate and visualize (perceive) formal and spatial arrangements under certain conditions of superficial appearance and light. Geometry composition, texture, light projection, rendering and animation are the keys to understanding a digital modelling system as an extension of our design process. We introduce and explain each one of these categories as it applies to architectural design and to three-dimensional CAD systems. We present samples of student work following this method.
series eCAADe
more http://dpce.ing.unipa.it/Webshare/Wwwroot/ecaade95/Pag_8.htm
last changed 2022/06/07 07:56

_id ascaad2006_paper2
id ascaad2006_paper2
authors Sharji, Elyna and Ahmed Rafi
year 2006
title The Significant Role of an Electronic Gallery to the Education Experience and Learning Environment
source Computing in Architecture / Re-Thinking the Discourse: The Second International Conference of the Arab Society for Computer Aided Architectural Design (ASCAAD 2006), 25-27 April 2006, Sharjah, United Arab Emirates
summary Multimedia has brought new paradigms to education, where users are able to use the technology to create compelling content that truly represents a new archetype in media experience. According to Burger (1995), the synergy of digital media is becoming a way of life in which new paradigms for interactive audio-visual experiences of all the communicative arts to date are mandatory. It potentially mixes technology with the disciplines of architecture and art. Students can learn at their own pace and can be tested in a non-linear way, while interactivity allows the curious to easily explore related topics and concepts. Fundamental assumptions, theories and practices of the conventional design paradigm are constantly being challenged by digital technology, and this is the current scenario in architecture and art and design schools globally. Thus schools are enhancing the methods and improvising the technology of imparting knowledge to be consistent with recent findings and knowledge. To be able to cater for the use of digital media and information technology in architectural and art design education, four criteria are required: the SPACE and place to accommodate the educational activities; the TOOLS that assist the imparting of knowledge; the CONTENT of syllabus and information; and the acceptance and culture of the receiving end users, HUMAN PERCEPTION. There is a need for research into realising and activating architectural space that has been equipped with multimedia tools and upgraded with recent technology to facilitate and support the community of learners and users. Spaces are now more interactive, multi-functional, flexible and intelligent, to suit the trend of computing in the normal everyday life of the education sector, business and management, art and leisure, and the corporate and technological arena.
While the new concept of computing in education is still in an early phase, the conventional analogue paradigm still dominates the architectural design discourse, which acts as a barrier to the development of digital designs and architectural education. A suitable approach is needed to bridge the gap between the theory that has been explored and the practice of knowledge. A digital support environment with intelligent design and planning tools is envisioned to bridge the gap and to cater for the current scenario.
series ASCAAD
type normal paper
last changed 2021/07/16 10:34

_id 600e
authors Gavin, Lesley
year 1999
title Architecture of the Virtual Place
doi https://doi.org/10.52842/conf.ecaade.1999.418
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 418-423
summary The Bartlett School of Graduate Studies, University College London (UCL), set up the first MSc in Virtual Environments in the UK in 1995. The course aims to synthesise and build on research work undertaken in the arts, architecture, computing and biological sciences in exploring the realms of the creation of digital and virtual immersive spaces. The MSc is concerned primarily with equipping students from design backgrounds with the skills, techniques and theories necessary in the production of virtual environments. The course examines virtual worlds as prototypes for real urban or built form and, over the last few years, has also developed an increasing interest in the practice of architecture in purely virtual contexts. The MSc course is embedded in the UK government sponsored Virtual Reality Centre for the Built Environment, which is hosted by the Bartlett School of Architecture. This centre involves the UCL departments of architecture, computer science and geography and includes industrial partners from a number of areas concerned with the built environment, including architectural practice, surveying and estate management, as well as some software companies and the telecoms industry. The first cohort of students graduated in 1997 and predominantly found work in companies working in the new market area of digital media. This paper aims to outline the nature of the course as it stands, examine the new and ever-increasing market for designers within digital media, and propose possible future directions for the course.
keywords Virtual Reality, Immersive Spaces, Digital Media, Education
series eCAADe
email
more http://www.bartlett.ucl.ac.uk/ve/
last changed 2022/06/07 07:51

_id 30d7
authors Bartnicka, Malgorzata
year 1995
title Childishly Honest Associate of the Trickery
source CAD Space [Proceedings of the III International Conference Computer in Architectural Design] Bialystock 27-29 April 1995, pp. 209-219
summary Perspective is a method of presenting 3-dimensional space on a 2-dimensional surface. It can only approximately express the complexity of the authentic perception of reality. Over the centuries, canons of presentation have varied from epoch to epoch. It is quite possible that conventions of presentation considered today to be exact expressions of reality may seem as untrue to future generations as ancient Egyptian paintings seem to us. Our mind plays the major role in all kinds of presentation. Throughout our lives we learn to perceive the surrounding reality, and we have also formed the ability to "see" perspective. Linear perspective is not so easy to perceive without the factors of colour and light, which play a very important role in the perception of distance. The perception of perspective is not always unmistakable; introducing light and shadow is one way to limit the ambiguity. Objects shown in perspective with appropriately chosen colouring and light-and-shade effects create an impression of distance within the flat picture. Illusions of perspective are most astonishing when one can exploit the deep-rooted expectations and suppositions of the viewer. The computer monitor, like the picture, has only one plane on which our project can be presented. The major feature of architectural programs is both the possibility of creating various architectural spaces and the possibility of examining how (in our opinion) the created space would affect the viewer. By means of computer programs we are able to generate drawings and objects of two kinds: the first being an ideal projection of reality (at least to the same degree as a photograph), and the second being a total negation of the rules of perspective. By means of CAD programs enabling 3-dimensional work we can check how all sorts of perspective tricks and artifices affect our imagination. The program cooperates with us in trying to cheat our imperfect sense of sight.
The trickeries can be of various types, starting from plays of light, through elements that change the perception of perspective, and ending with objects that totally negate the rules of sound construction of solids. The knowledge contained in these programs is an encyclopaedic recapitulation of all sorts of achievements in the field of perspective and the application of colour and light effects. All that remains for users is to exploit this tremendous variety of capabilities.
series plCAD
last changed 2000/01/24 10:08

_id 0459
authors Brown, G.Z., Kline, J. and Sekigitchi, T.
year 1995
title Infrared Professor - Design Phase
source Sixth International Conference on Computer-Aided Architectural Design Futures [ISBN 9971-62-423-0] Singapore, 24-26 September 1995, pp. 103-112
summary This paper describes diagnostic and advising modules that are being added to existing energy analysis software. The diagnostic module helps users understand what's causing their building to have certain energy use characteristics by juxtaposing performance data with climate and building use data. The advisor is a rule-based expert system which tells the user what to do to improve the energy performance of their building design.
keywords Advisor, Architectural Design, Buildings, Energy, Expert System
series CAAD Futures
last changed 1999/08/03 17:16

_id 4202
authors Brown, Michael E. and Gallimore, Jennie J.
year 1995
title Visualization of Three-Dimensional Structure During Computer-Aided Design
source International Journal of Human-Computer Interaction 1995 v.7 n.1 pp. 37-56
summary The visual image presented to an engineer using a computer-aided design (CAD) system influences design activities such as decision making, problem solving, cognizance of complex relationships, and error correction. Because of the three-dimensional (3-D) nature of the object being created, an important attribute of the CAD visual interface concerns the various methods of presenting depth on the display's two-dimensional (2-D) surface. The objective of this research is to examine the effects of stereopsis on subjects' ability to (a) accurately transfer to, and retrieve from, long-term memory spatial information about 3-D objects; and (b) visualize spatial characteristics in a quick and direct manner. Subjects were instructed to memorize the shape of a 3-D object presented on a stereoscopic CRT during a study period. Following the study period, a series of static trial stimuli were shown. Each trial stimulus was rotated (relative to the original) about the vertical axis in one of six 36° increments between 0° and 180°. In each trial, the subject's task was to determine, as quickly and as accurately as possible, whether the trial object was the same shape as the memorized object or its mirrored image. One of the two cases was always true. To assess the relative merits associated with disparity and interposition, the two depth cues were manipulated in a within-subject manner during the study period and during the trials that followed. Subject response time and error rate were evaluated. Improved performance due to hidden surface is the most convincing experimental finding. Interposition is a powerful cue to object structure and should not be limited to late stages of design. The study also found a significant, albeit limited, effect of stereopsis. Under specific study object conditions, adding disparity to monocular trial objects significantly decreased response time. 
Response latency was also decreased by adding disparity information to stimuli in the study session.
series journal paper
last changed 2003/05/15 21:45

_id 67cd
authors Clibbon, K., Candy, L. and Edmonds, E.
year 1995
title A Logic-Based Framework for Representing Architectural Design Knowledge
source Sixth International Conference on Computer-Aided Architectural Design Futures [ISBN 9971-62-423-0] Singapore, 24-26 September 1995, pp. 91-102
summary This paper proposes a logic-based framework for representing and manipulating knowledge during Computer-Aided Architectural Design. The framework incorporates a meta-level architecture to represent declarative design knowledge and strategic knowledge used by the designer. It consists of an object layer, a design requirements layer and strategies for navigating through the design space. An extended first-order logic is described which has been used to represent some examples of architectural knowledge. This computational model is being implemented in KAUS (Knowledge Acquisition and Utilisation System), a general purpose knowledge-based system, founded in Multi-Layered Logic.
keywords Design Knowledge, Strategic Knowledge, Multi-Layered Logic.
series CAAD Futures
email
last changed 2003/05/16 20:58

_id 80df
authors Cook, Alan R.
year 1995
title Stereopsis in the Design and Presentation of Architectural Works
doi https://doi.org/10.52842/conf.acadia.1995.113
source Computing in Design - Enabling, Capturing and Sharing Ideas [ACADIA Conference Proceedings / ISBN 1-880250-04-7] University of Washington (Seattle, Washington / USA) October 19-22, 1995, pp. 113-137
summary This article presumes the primacy of spatial cognition in evaluating architectural designs and begins by describing key concepts involved in the perception of spatial form, focussing on parallax and stereoscopy. The ultimate emphasis is directed at presenting techniques which employ computers with modest hardware specifications and a basic three-dimensional modeling software application to produce sophisticated imaging tools. It is argued that these techniques are comparable to high end computer graphic products in their potentials for carrying information and in some ways are superior in their speed of generation and economies of dissemination. A camera analogy is considered in relation to controlling image variables. The ability to imply a temporal dimension is explored. An abbreviated summary of pertinent binocular techniques for viewing stereograms precedes a rationalization and initiation for using the cross-convergence technique. Ways to generate and view stereograms and other multiscopic views using 3-D computer models are described. Illustrations from sample projects show various levels of stereogram rendering including the theoretically 4-D wireframe stereogram. The translated perspective array autostereogram is presented as an economical and easily reproducible alternative to holography as well as being a substitute for stop action animation.

series ACADIA
email
last changed 2022/06/07 07:56

_id c3d0
authors Cotton, John
year 1995
title Solid Modeling as a Tool for Constructing Solar Envelopes
doi https://doi.org/10.52842/conf.acadia.1995.253
source Computing in Design - Enabling, Capturing and Sharing Ideas [ACADIA Conference Proceedings / ISBN 1-880250-04-7] University of Washington (Seattle, Washington / USA) October 19-22, 1995, pp. 253-260
summary This paper presents a method for constructing solar envelopes in site planning using a 3D solid-modeling program as the tool. The solar envelope for a building site is a mechanism for ensuring that planning regulations on the solar access rights of other sites are observed. In this application, solid modeling offers the practical advantage of being a general-purpose tool having the capability to handle sets of site conditions that are quite complex. The paper reviews the concept of solar envelopes and demonstrates the method of application of solar-envelope construction to a site defined to avoid overly simplifying conditions. Techniques for displaying the constraints on building sections imposed by a solar envelope are presented as well.
series ACADIA
email
last changed 2022/06/07 07:56

_id 27b5
authors Dießenbacher, Claus and Rank, Ernst
year 1995
title A Multimedia Archaeological Museum
doi https://doi.org/10.52842/conf.ecaade.1995.013
source Multimedia and Architectural Disciplines [Proceedings of the 13th European Conference on Education in Computer Aided Architectural Design in Europe / ISBN 0-9523687-1-4] Palermo (Italy) 16-18 November 1995, pp. 13-20
summary This paper will present a project which was first initiated in 1994 as a graduate student seminar and is now being continued as a research project in a cooperation between computer scientists, architects and archaeologists. An ancient Roman city (Colonia Ulpia Traiana, near today's Xanten in Germany) has been reconstructed using various levels of abstraction. On the coarsest level, a 3D model of the whole city was established, distinguishing between different historical periods of the city. The second level picks places of special interest (temples, the forum, the amphitheater, the town baths etc.) and reconstructs these buildings or groups of buildings. On the finest level, important interior parts or functional details like the Hypocaustae in the town baths are modelled. All reconstructions are oriented as closely as possible to results from excavations or other available documents. All levels of the 3D model have been visualized using photorealistic images and sequences of video animations. The 3D model is integrated into a multimedia environment, augmenting the visualization elements with plans of the city and individual buildings and with text documents. It is intended that parts of the outlined system will be available at the site of the ancient city, where today a large public archaeological park is located.
series eCAADe
more http://dpce.ing.unipa.it/Webshare/Wwwroot/ecaade95/Pag_2.htm
last changed 2022/06/07 07:55

_id 0128
authors Engeli, M., Kurmann, D. and Schmitt, G.
year 1995
title A New Design Studio: Intelligent Objects and Personal Agents
doi https://doi.org/10.52842/conf.acadia.1995.155
source Computing in Design - Enabling, Capturing and Sharing Ideas [ACADIA Conference Proceedings / ISBN 1-880250-04-7] University of Washington (Seattle, Washington / USA) October 19-22, 1995, pp. 155-170
summary As design processes and products are constantly increasing in complexity, new tools are being developed for the designer to cope with the growing demands. In this paper we describe our research towards a design environment, within which different aspects of design can be combined, elaborated and controlled. New hardware equipment will be combined with recent developments in graphics and artificial intelligence programming to develop appropriate computer based tools and find possible new design techniques. The core of the new design studio comprises intelligent objects in a virtual reality environment that exhibit different behaviours drawn from Artificial Intelligence (AI) and Artificial Life (AL) principles, a part already realised in a tool called 'Sculptor'. The tasks of the architect will focus on preferencing and initiating good tendencies in the development of the design. A first set of software agents, assistants that support the architect in viewing, experiencing and judging the design has also been conceptualised for this virtual design environment. The goal is to create an optimised environment for the designer, where the complexity of the design task can be reduced thanks to the support made available from the machine.
keywords Architectural Design, Design Process, Virtual Reality, Artificial Intelligence, Personal Agents
series ACADIA
email
last changed 2022/06/07 07:55

_id b72a
authors Ford, S., Aouad, G., Kirkham, J., Brandon, P., Brown, F., Child, T., Cooper, G., Oxman, R. and Young, B.
year 1995
title An information engineering approach to modelling building design
source Automation in Construction 4 (1) (1995) pp. 5-15
summary This paper highlights potential problems in the construction industry concerning the large quantities of information produced and the lack of an adequate information structure within which to coordinate this information. The Information Engineering Method (IEM) and Information Engineering Facility (IEF) CASE tool are described and put forward as a means of establishing an information structure at a strategic level thus providing a framework for the implementation of lower level applications systems. The paper describes how the ICON (Integration/Information for Construction) project at Salford University is establishing and modelling the information requirements for the construction industry at the strategic level. The IEM and IEF are demonstrated using activity, data and interaction models with particular attention being paid to the function of building design within the broader context of design, procurement and the management of construction. Implications for future practice are also discussed.
keywords Information engineering; CASE tools; Modelling; Integration; Design
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/06/02 09:32

_id b04c
authors Goerger, S., Darken, R., Boyd, M., Gagnon, T., Liles, S., Sullivan, J. and Lawson, J.
year 1996
title Spatial Knowledge Acquisition from Maps and Virtual Environments in Complex Architectural Space
source Proc. 16th Applied Behavioral Sciences Symposium, 22-23 April, U.S. Air Force Academy, Colorado Springs, CO., 1996, 6-10
summary It has often been suggested that due to its inherent spatial nature, a virtual environment (VE) might be a powerful tool for spatial knowledge acquisition of a real environment, as opposed to the use of maps or some other two-dimensional, symbolic medium. While interesting from a psychological point of view, a study of the use of a VE in lieu of a map seems nonsensical from a practical point of view. Why would the use of a VE preclude the use of a map? The more interesting investigation would be of the value added by the VE when used with a map. If the VE could be shown to substantially improve navigation performance, then there might be a case for its use as a training tool. If not, then we have to assume that maps continue to be the best spatial knowledge acquisition tool available. An experiment was conducted at the Naval Postgraduate School to determine if the use of an interactive, three-dimensional virtual environment would enhance spatial knowledge acquisition of a complex architectural space when used in conjunction with floor plan diagrams. There has been significant interest in this research area of late. Witmer, Bailey, and Knerr (1995) showed that a VE was useful in acquiring route knowledge of a complex building. Route knowledge is defined as the procedural knowledge required to successfully traverse paths between distant locations (Golledge, 1991). Configurational (or survey) knowledge is the highest level of spatial knowledge and represents a map-like internal encoding of the environment (Thorndyke, 1980). The Witmer study could not confirm whether configurational knowledge was being acquired. Also, no comparison was made to a map-only condition, which we felt is the most obvious alternative. Comparisons were made only to a real-world condition and a symbolic condition where the route is presented verbally.
series other
last changed 2003/04/23 15:50

_id 2115
authors Ingram, R. and Benford, S.
year 1995
title Improving the legibility of virtual environments
source Second Eurographics Workshop on Virtual Environments
summary Years of research into hyper-media systems have shown that finding one's way through large electronic information systems can be a difficult task. Our experiences with virtual reality suggest that users will also suffer from the commonly experienced "lost in hyperspace" problem when trying to navigate virtual environments. The goal of this paper is to propose and demonstrate a technique, currently under development, which aims to overcome this problem. Our approach is based upon the concept of legibility, adapted from the discipline of city planning. The legibility of an urban environment refers to the ease with which its inhabitants can develop a cognitive map over a period of time and so orientate themselves within it and navigate through it [Lynch60]. Research into this topic since the 1960s has argued that, by carefully designing key features of urban environments, planners can significantly influence their legibility. We propose that these legibility features might be adapted and applied to the design of a wide variety of virtual environments and that, when combined with other navigational aids such as the trails, tours and signposts of the hyper-media world, they might greatly enhance people's ability to navigate them. In particular, the primary role of legibility would be to help users to navigate more easily as a result of experiencing a world for some time (hence the idea of building a cognitive map). Thus, we would see our technique being of most benefit when applied to long-term, persistent and slowly evolving virtual environments. Furthermore, we are particularly interested in the automatic application of legibility techniques to information visualisations, as opposed to their relatively straightforward application to simulations of the real world. Thus, a typical future application of our work might be in enhancing visualisations of large information systems such as the World Wide Web.
Section 2 of this paper summarises the concept of legibility as used in the domain of city planning and introduces some of the key features that have been adapted and applied in our work. Section 3 then describes in detail the set of algorithms and techniques which are being developed for the automatic creation or enhancement of these features within virtual data spaces. Next, section 4 presents two example applications based on two different kinds of virtual data space. Finally, section 5 presents some initial reflections on this work and discusses the next steps in its evolution.
series other
last changed 2003/04/23 15:50

_id 09b4
authors Ismail, Ashraf and McCartney, Kevin
year 1993
title A Tool for Conceptual Design Evaluation Based on Compliance with Site-Development Briefs and Related Planning Regulations
doi https://doi.org/10.52842/conf.ecaade.1993.x.c6i
source [eCAADe Conference Proceedings] Eindhoven (The Netherlands) 11-13 November 1993
summary The need has been established for a computer-based decision support tool to use during the conceptual stages of architectural design. The main functions are to check design compliance with the requirements of local planning authorities; characteristics evaluated will include building size, height, plot ratios, circulation and accessibility, and the preservation of natural features on site. This tool is being developed to operate under the AutoCAD environment, the construction industry standard computer aided design software, following standard layering conventions, integrated command lines, and pull-down menus. In addition to the common graphical output, i.e. plans, elevations and three-dimensional models, it will generate textual analysis in report format for use as part of the Environmental Impact Analysis of proposed developments. The tool's functions will be based upon the results of two types of field studies. First, interviews and questionnaires will be carried out with architects and planners of both private and public sectors. These will cover issues related to the performance of Computer Aided Architectural Design applications with regard to the evaluation of design schematics, and decision-making for the production of data for environmental statements. Second, field observation and participation will be carried out to observe decision-makers' behaviour during assessment of building design proposals. A prototype is currently under development and will be tested against the expectations of the tool designer, Ashraf Ismail, and a team of professionals to be involved in the field studies. A critical analysis of the prototype design methodology and the study findings will be documented in the research thesis to be presented in June 1995.

series eCAADe
last changed 2022/06/07 07:50
