CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

_id 7670
authors Sawicki, Bogumil
year 1995
title Ray Tracing – New Chances, Possibilities and Limitations in AutoCAD
source CAD Space [Proceedings of the III International Conference Computer in Architectural Design] Bialystock 27-29 April 1995, pp. 121-136
summary Realistic image synthesis is nowadays widely used in engineering applications. Some of these applications, such as architectural, interior, lighting and industrial design, demand accurate visualization of non-existent scenes as they would look to us when built in reality. This can only be achieved by using physically based models of the interaction of light with surfaces and by simulating the propagation of light through an environment. Ray tracing is one of the most powerful techniques used in computer graphics and can produce such very realistic images. The ray tracing algorithm follows the paths of light rays backwards from the observer into the scene. It is a very time-consuming process and as such could not be implemented until sufficiently powerful computers appeared. In recent years the technological improvements in the computer industry have brought more powerful machines with bigger storage capacities and better graphic devices. Owing to these increased hardware capabilities, successful implementation of ray tracing in different CAD software packages became possible, also on PC machines. Ray tracing in AutoCAD r.12 - the most popular CAD package in the world - is the best example of this. AccuRender and AutoVision are AutoCAD Development System (ADS) applications that use ray tracing to create photorealistic images from 3D AutoCAD models. These "internal" applications let users generate synthetic images of three-dimensional models and scenes entirely within AutoCAD space and show the effects directly on the main AutoCAD screen. The ray tracing algorithm accurately calculates and displays shadows, transparency, diffusion, reflection, and refraction derived from the surface qualities of user-defined materials. The accurate modelling of light makes it possible to produce sophisticated effects and high-quality images, which these ray tracers always generate at 24-bit pixel depth, providing 16.7 million colours. These results can be quite impressive to some architects and almost acceptable to others, but the coloured virtual world presented so convincingly by ray tracing in AutoCAD space is still not exactly the same as the real world. The main limitations of realism are due to the nature of the ray tracing method. The classical ray tracing technique takes into account the effects of light reflection from neighbouring surfaces but leaves out of account the ambient and global illumination arising out of complex interreflections in an environment. So models generated by ray tracing belong to an "ideal" world where real materials and environments cannot find their right place. We complain about that fact and say that ray tracing shows us a "too specular world", but (...) is there anything better on the horizon? It should be concluded that the typical abilities of today's graphics software and hardware are far from exploited. As has been observed in the literature, various works have been carried out with the explicit intention of overcoming these ray tracing limitations. This research seems very promising and lets us hope that its results will be seen in CAD applications soon. As happens with modelling, perhaps the answer will come from a variety of techniques that can be combined with ray tracing depending on the case we are dealing with.
Therefore, from the point of view of architects who try to keep alive some interest in the nature of materials and their interaction with form, ray tracing seems to be the right path of research and development, one that we can still follow a long way. From the point of view of the school, a critical assimilation of ray tracing processes is required, one that might help to determine exactly their distortions and to indicate the correct way of development and the right place of ray tracing in CAAD education. I trust that ray tracing will become standard not only in AutoCAD but in all architectural space-modelling CAD applications and will be established as a powerful and practical tool for experimental research in the architectural design process. Will technological progress in the near future be as significant as anticipated?
series plCAD
last changed 2000/01/24 10:08
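Purely as an editorial illustration of the backward ray tracing principle this abstract describes (rays followed from the observer into the scene, with shadows and diffuse shading derived from surface hits), here is a minimal Python sketch. It is not code from the paper or from AccuRender/AutoVision, and names such as Sphere and trace are hypothetical.

```python
# Minimal backward ray-tracing sketch (illustrative only, not from the paper).
# A primary ray is followed from the observer into a scene of spheres; a hit
# is shaded with a diffuse term plus a hard shadow test, two of the effects
# the abstract names.
import math

class Sphere:
    def __init__(self, center, radius, color):
        self.center, self.radius, self.color = center, radius, color

    def hit(self, origin, direction):
        """Return the nearest positive ray parameter t, or None (direction normalised)."""
        oc = [o - c for o, c in zip(origin, self.center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 1e-6 else None

def normalise(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def trace(origin, direction, spheres, light):
    """Follow one ray backwards from the observer and return an RGB colour."""
    t, sphere = min(((s.hit(origin, direction), s) for s in spheres),
                    key=lambda ts: ts[0] if ts[0] is not None else float("inf"))
    if t is None:
        return (0.0, 0.0, 0.0)                                  # background
    point = [o + t * d for o, d in zip(origin, direction)]
    normal = normalise([p - c for p, c in zip(point, sphere.center)])
    to_light = normalise([l - p for l, p in zip(light, point)])
    # Hard shadow test: does any other object block the path to the light?
    in_shadow = any(s.hit(point, to_light) for s in spheres if s is not sphere)
    diffuse = 0.0 if in_shadow else max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return tuple(channel * (0.1 + 0.9 * diffuse) for channel in sphere.color)

spheres = [Sphere([0, 0, -3], 1.0, (1.0, 0.2, 0.2)),
           Sphere([1.5, 0, -4], 1.0, (0.2, 0.2, 1.0))]
print(trace([0, 0, 0], normalise([0, 0, -1]), spheres, light=[5, 5, 0]))
```

A full renderer would fire one such ray per pixel and add recursive reflection and refraction rays, which is where the computation time discussed in the abstract comes from.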

_id avocaad_2001_16
id avocaad_2001_16
authors Yu-Ying Chang, Yu-Tung Liu, Chien-Hui Wong
year 2001
title Some Phenomena of Spatial Characteristics of Cyberspace
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary "Space," which has long been an important concept in architecture (Bloomer & Moore, 1977; Mitchell, 1995, 1999), has attracted interest of researchers from various academic disciplines in recent years (Agnew, 1993; Benko & Strohmayer, 1996; Chang, 1999; Foucault, 1982; Gould, 1998). Researchers from disciplines such as anthropology, geography, sociology, philosophy, and linguistics regard it as the basis of the discussion of various theories in social sciences and humanities (Chen, 1999). On the other hand, since the invention of Internet, Internet users have been experiencing a new and magic "world." According to the definitions in traditional architecture theories, "space" is generated whenever people define a finite void by some physical elements (Zevi, 1985). However, although Internet is a virtual, immense, invisible and intangible world, navigating in it, we can still sense the very presence of ourselves and others in a wonderland. This sense could be testified by our naming of Internet as Cyberspace -- an exotic kind of space. Therefore, as people nowadays rely more and more on the Internet in their daily life, and as more and more architectural scholars and designers begin to invest their efforts in the design of virtual places online (e.g., Maher, 1999; Li & Maher, 2000), we cannot help but ask whether there are indeed sensible spaces in Internet. And if yes, these spaces exist in terms of what forms and created by what ways?To join the current interdisciplinary discussion on the issue of space, and to obtain new definition as well as insightful understanding of "space", this study explores the spatial phenomena in Internet. We hope that our findings would ultimately be also useful for contemporary architectural designers and scholars in their designs in the real world.As a preliminary exploration, the main objective of this study is to discover the elements involved in the creation/construction of Internet spaces and to examine the relationship between human participants and Internet spaces. In addition, this study also attempts to investigate whether participants from different academic disciplines define or experience Internet spaces in different ways, and to find what spatial elements of Internet they emphasize the most.In order to achieve a more comprehensive understanding of the spatial phenomena in Internet and to overcome the subjectivity of the members of the research team, the research design of this study was divided into two stages. At the first stage, we conducted literature review to study existing theories of space (which are based on observations and investigations of the physical world). At the second stage of this study, we recruited 8 Internet regular users to approach this topic from different point of views, and to see whether people with different academic training would define and experience Internet spaces differently.The results of this study reveal that the relationship between human participants and Internet spaces is different from that between human participants and physical spaces. In the physical world, physical elements of space must be established first; it then begins to be regarded as a place after interaction between/among human participants or interaction between human participants and the physical environment. In contrast, in Internet, a sense of place is first created through human interactions (or activities), Internet participants then begin to sense the existence of a space. 
Therefore, it seems that, among the many spatial elements of Internet we found, "interaction/reciprocity" Ñ either between/among human participants or between human participants and the computer interface Ð seems to be the most crucial element.In addition, another interesting result of this study is that verbal (linguistic) elements could provoke a sense of space in a degree higher than 2D visual representation and no less than 3D visual simulations. Nevertheless, verbal and 3D visual elements seem to work in different ways in terms of cognitive behaviors: Verbal elements provoke visual imagery and other sensory perceptions by "imagining" and then excite personal experiences of space; visual elements, on the other hand, provoke and excite visual experiences of space directly by "mapping".Finally, it was found that participants with different academic training did experience and define space differently. For example, when experiencing and analyzing Internet spaces, architecture designers, the creators of the physical world, emphasize the design of circulation and orientation, while participants with linguistics training focus more on subtle language usage. Visual designers tend to analyze the graphical elements of virtual spaces based on traditional painting theories; industrial designers, on the other hand, tend to treat these spaces as industrial products, emphasizing concept of user-center and the control of the computer interface.The findings of this study seem to add new information to our understanding of virtual space. It would be interesting for future studies to investigate how this information influences architectural designers in their real-world practices in this digital age. In addition, to obtain a fuller picture of Internet space, further research is needed to study the same issue by examining more Internet participants who have no formal linguistics and graphical training.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 2a99
authors Keul, A. and Martens, B.
year 1996
title SIMULATION - HOW DOES IT SHAPE THE MESSAGE?
source The Future of Endoscopy [Proceedings of the 2nd European Architectural Endoscopy Association Conference / ISBN 3-85437-114-4], pp. 47-54
summary Architectural simulation techniques - CAD, video montage, endoscopy, full-scale or smaller models, stereoscopy, holography etc. - are common visualizations in planning. A subjective theory of planners says "experts are able to distinguish between 'pure design' in their heads and visualized design details and contexts like color, texture, material, brightness, eye level or perspective." If this is right, simulation details should be compensated mentally by trained people, but act as distractors to the lay mind.

Environmental psychologists specializing in architectural psychology offer "user needs' assessments" and "post occupancy evaluations" to facilitate communication between users and experts. To compare the efficiency of building descriptions, building walkthroughs, regular plans, simulation, and direct, long-time exposition, evaluation has to be evaluated.

Computer visualizations and virtual realities grow more important, but studies on the effects of simulation techniques upon experts and users are rare. As a contribution to the field of architectural simulation, an expert-user comparison of CAD versus endoscopy/model simulations of a Vienna city project was realized in 1995. The Department for Spatial Simulation at the Vienna University of Technology provided slides of the planned city development at Aspern showing a) CAD and b) endoscopy photos of small-scale polystyrene models. In an experimental design, they were presented uncommented as images of "PROJECT A" versus "PROJECT B" to student groups of architects and non-architects at Vienna and Salzburg (n = 95) and assessed by semantic differentials. Two contradictory hypotheses were tested: 1. The "selective framing hypothesis" (SFH), the subjective theory of planners, postulating different judgement effects (measured by item means of the semantic differential) through selective attention of the planners versus material- and context-bound perception of the untrained users. 2. The "general framing hypothesis" (GFH), postulating typical framing and distraction effects of all simulation techniques, affecting experts as well as non-experts.

The experiment showed that, contrary to expert opinion, framing and distraction were prominent both for experts and lay people (= GFH). A position effect (assessment interaction of CAD and endoscopy) was present with experts and non-experts, too. With empirical evidence for "the medium is the message", a more cautious attitude has to be adopted towards simulation products as powerful framing (i.e. perception- and opinion-shaping) devices.

keywords Architectural Endoscopy, Real Environments
series EAEA
type normal paper
email
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43
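As an editorial sketch of the kind of analysis the abstract outlines (comparing semantic-differential item means between expert and lay raters for a given simulation medium), the Python fragment below uses invented ratings and a Welch t-test per item; it is not the study's actual data or procedure.

```python
# Illustrative comparison of semantic-differential item means between two
# rater groups.  The ratings are invented placeholder data, not results
# from the 1995 experiment.
import numpy as np
from scipy.stats import ttest_ind

# rows = subjects, columns = semantic-differential items (e.g. 7-point scales)
experts = np.array([[5, 4, 6], [4, 4, 5], [6, 5, 6], [5, 3, 5]])
laypeople = np.array([[3, 5, 4], [2, 4, 3], [3, 6, 4], [4, 5, 3]])

for item in range(experts.shape[1]):
    t, p = ttest_ind(experts[:, item], laypeople[:, item], equal_var=False)
    print(f"item {item}: expert mean {experts[:, item].mean():.2f}, "
          f"lay mean {laypeople[:, item].mean():.2f}, Welch t={t:.2f}, p={p:.3f}")
```

Under the SFH the two groups' item means should diverge; under the GFH both groups should shift together when the simulation medium changes.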

_id c7e9
authors Maver, T.W.
year 2002
title Predicting the Past, Remembering the Future
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 2-3
summary There never has been such an exciting moment in the extraordinary 30 year history of our subject area as NOW, when the philosophical, theoretical and practical issues of virtuality are taking centre stage. The Past: There have, of course, been other defining moments during these exciting 30 years: • the first algorithms for generating building layouts (circa 1965), • the first use of computer graphics for building appraisal (circa 1966), • the first integrated package for building performance appraisal (circa 1972), • the first computer generated perspective drawings (circa 1973), • the first robust drafting systems (circa 1975), • the first dynamic energy models (circa 1982), • the first photorealistic colour imaging (circa 1986), • the first animations (circa 1988), • the first multimedia systems (circa 1995), and • the first convincing demonstrations of virtual reality (circa 1996). Whereas the CAAD community has been hugely inventive in the development of ICT applications to building design, it has been woefully remiss in its attempts to evaluate the contribution of those developments to the quality of the built environment or to the efficiency of the design process. In the absence of any real evidence, one can only conjecture regarding the real benefits which fall, it is suggested, under the following headings: • Verisimilitude: The extraordinary quality of still and animated images of the formal qualities of the interiors and exteriors of individual buildings and of whole neighborhoods must surely give great comfort to practitioners and their clients that what is intended, formally, is what will be delivered, i.e. WYSIWYG - what you see is what you get. • Sustainability: The power of «first-principle» models of the dynamic energetic behaviour of buildings in response to changing diurnal and seasonal conditions has the potential to save millions of dollars and dramatically to reduce the damaging environmental pollution created by badly designed and managed buildings. • Productivity: CAD is now a multi-billion dollar business which offers design decision support systems which operate, effectively, across continents, time-zones, professions and companies. • Communication: Multi-media technology - cheap to deliver but high in value - is changing the way in which we can explain and understand the past and envisage and anticipate the future; virtual past and virtual future! Macromyopia: The late John Lansdown offered the view, in his wonderfully prophetic way, that ..."the future will be just like the past, only more so..." So what can we expect the extraordinary trajectory of our subject area to be? To have any chance of being accurate we have to have an understanding of the phenomenon of macromyopia: the phenomenon exhibited by society of greatly exaggerating the immediate short-term impact of new technologies (particularly the information technologies) but, more importantly, seriously underestimating their sustained long-term impacts - socially, economically and intellectually.
Examples of flawed predictions regarding the future application of information technologies include: • The British Government in 1880 declined to support the idea of a national telephonic system, backed by the argument that there were sufficient small boys in the countryside to run with messages. • Alexander Bell was modest enough to say that: «I am not boasting or exaggerating but I believe, one day, there will be a telephone in every American city». • Tom Watson, in 1943, said: «I think there is a world market for about 5 computers». • In 1977, Ken Olsen of Digital said: «There is no reason for any individuals to have a computer in their home». The Future: Just as the ascent of woman/man-kind can be attributed to her/his capacity to discover amplifiers of the modest human capability, so we shall discover how best to exploit our most important amplifier - that of the intellect. The more we know the more we can figure; the more we can figure the more we understand; the more we understand the more we can appraise; the more we can appraise the more we can decide; the more we can decide the more we can act; the more we can act the more we can shape; and the more we can shape, the better the chance that we can leave for future generations a truly sustainable built environment which is fit-for-purpose, cost-beneficial, environmentally friendly and culturally significant. Central to this aspiration will be our understanding of the relationship between real and virtual worlds and how to move effortlessly between them. We need to be able to design, from within the virtual world, environments which may be real or may remain virtual or, perhaps, be part real and part virtual. What is certain is that the next 30 years will be every bit as exciting and challenging as the first 30 years.
series SIGRADI
email
last changed 2016/03/10 09:55

_id 05f7
authors Carrara, G., Confessore, G., Fioravanti, A. and Novembri, G.
year 1995
title Multimedia and Knowledge-Based Computer-Aided Architectural Design
source Multimedia and Architectural Disciplines [Proceedings of the 13th European Conference on Education in Computer Aided Architectural Design in Europe / ISBN 0-9523687-1-4] Palermo (Italy) 16-18 November 1995, pp. 323-330
doi https://doi.org/10.52842/conf.ecaade.1995.323
summary It now appears fairly widely accepted among researchers in the field of Computer Aided Architectural Design that the way to realize support tools for these aims is by means of Knowledge Based Assistants. This kind of computer program, based on knowledge engineering, derives its power and effectiveness from its knowledge base. Nowadays such tools are leaving the research world, and it appears evident that common graphic interfaces and the usual modalities of dialogue between the architect and the computer are inadequate to support the exchange of information that the use of these tools requires. The use of knowledge bases, furthermore, presupposes that the conceptual model of the building realized by others must be made entirely understandable to the architect. The CAAD Laboratory has developed a prototype software system based on Knowledge Engineering in the field of hospital buildings. In order to overcome the limits of software systems based on usual Knowledge Engineering by improving architect-computer interaction, the CAAD Lab is refining the building model by introducing into the knowledge base two complementary methodologies: conceptual clustering and multimedia techniques. This research will make it possible for architects to navigate consciously through the domain of the knowledge base already implemented.

series eCAADe
more http://dpce.ing.unipa.it/Webshare/Wwwroot/ecaade95/Pag_39.htm
last changed 2022/06/07 07:55

_id cf51
authors Cheng, Nancy Yen-wen
year 1995
title By All Means: Multiple Media In Design Education
source Multimedia and Architectural Disciplines [Proceedings of the 13th European Conference on Education in Computer Aided Architectural Design in Europe / ISBN 0-9523687-1-4] Palermo (Italy) 16-18 November 1995, pp. 117-128
doi https://doi.org/10.52842/conf.ecaade.1995.117
summary This paper describes how to combine media to maximize understanding in CAD education. Advantages such as synergistic communication, clarification of common concepts and awareness of media characteristics are illuminated through digital and traditional projects. Learning computer media through communication exercises is discussed with examples from networked collaborations using the World Wide Web. Language teaching shows how to use these exercises to encourage critical thinking in a CAD curriculum.

series eCAADe
email
more http://dpce.ing.unipa.it/Webshare/Wwwroot/ecaade95/Pag_16.htm
last changed 2022/06/07 07:55

_id 8991
authors Danahy, John and Hoinkes, Rodney
year 1995
title Polytrim: Collaborative Setting for Environmental Design
source Sixth International Conference on Computer-Aided Architectural Design Futures [ISBN 9971-62-423-0] Singapore, 24-26 September 1995, pp. 647-658
summary This paper begins with a review of the structuring values and questions the Centre for Landscape Research (CLR) is interested in answering with its testbed software system Polytrim (and its derivatives CLRview, CLRpaint and CLRmosaic, available via anonymous ftp over the internet). The mid section of the paper serves as a guide to Polytrim's structure and implementation issues. Some of the most enduring and significant principles learned from Polytrim's use over the last six years in research, teaching and professional practice are introduced. The paper ends with an overview of characteristics that we believe our next generation of software should achieve. The CLR's digital library on the World-Wide Web provides an extensive set of illustrations and detailed descriptions of the ideas and figures presented in this paper. Endnotes provide specific internet addresses for those who wish to read, see or use the system.
keywords Dialogue, Interaction, Collaboration, Integration, Setting
series CAAD Futures
email
last changed 2003/05/16 20:58

_id 819d
authors Eiteljorg, H.
year 1988
title Computing Assisted Drafting and Design: new technologies for old problems
source Center for the study of architecture, Bryn Mawr, Pennsylvania
summary In past issues of the Newsletter, George Tressel and I have written about virtual reality and renderings. We have each discussed particular problems with the technology, and both of us mentioned how compelling computer visualizations can be. In my article ("Virtual Reality and Rendering," February, 1995, Vol. 7, no. 4), I indicated my concerns about the quality of the scholarship and the level of detail used in making renderings or virtual worlds. Mr. Tressel (in "Visualizing the Ancient World," November, 1996, Vol. IX, no. 3) wrote about the need to distinguish between real and hypothetical parts of a visualization, the need to differentiate materials, and the difficulties involved in creating the visualizations (some of which were included in the Newsletter in black-and-white and on the Web in color). I am returning to this topic now, in part because the quality of the images available to us is improving so fast and in part because it seems now that neither Mr. Tressel nor I treated all the issues raised by the use of high-quality visualizations. The quality may be illustrated by new images of the older propylon that were created by Mr. Tressel (Figs. 1 - 3); these images are significantly more realistic than the earlier ones, but they do not represent the ultimate in quality, since they were created on a personal computer.
series other
last changed 2003/04/23 15:50

_id db00
authors Espina, Jane J.B.
year 2002
title Base de datos de la arquitectura moderna de la ciudad de Maracaibo 1920-1990 [Database of the Modern Architecture of the City of Maracaibo 1920-1990]
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 133-139
summary The purpose of this report is to present the achievements obtained in the use of information and communication technologies in architecture, by means of the construction of a database to register information on the modern architecture of the city of Maracaibo from 1920 until 1990, with reference to the constructions located in the 5 de Julio sector and to the most outstanding planners and their work, through the representation of this material in digital format. The objective of this investigation was to elaborate a database for the registration of information on the modern architecture of Maracaibo in the period 1920-1990, by means of the design of an automated tool to organize the data related to the buildings, parcels and planners of the city. The investigation was carried out in three methodological moments: a) gathering and classification of the information on the buildings and planners of the modern architecture in order to elaborate the databases, b) design of the databases for the organization of the information, and c) design of the queries, information, reports and the start menu. For the processing of the data, files were generated in computer programs such as AutoCAD R14 and 2000, Microsoft Word, Microsoft PowerPoint, Microsoft Access 2000, CorelDRAW V9.0 and Corel PHOTO-PAINT V9.0. The investigation is related to the work developed in the class of Graphic Calculation II, belonging to the Department of Communication of the School of Architecture of the Faculty of Architecture and Design of The University of the Zulia (FADLUZ), carried out from the year 1999, using part of the information obtained from the works of the students generated by means of CAD systems for the representation in three dimensions of constructions with historical relevance in the modern architecture of Maracaibo, which are classified in the work of The Other City, generating different types of isometric views, perspectives, photorealistic representations, plans and facades, among others. Concerning the theme of this investigation, no previous antecedents are known in our environment, and this is the first time that digital graphics have been applied to the work carried out by the architects of "The Other City, the genesis of the oil city of Maracaibo", produced in the year 1994; hence the value of this research for the fields of architecture and computer science. It should be pointed out that databases do exist in the fields of architecture and design, as do web sites with information about architects and architectural works (Montagu, 1999). In The University of the Zulia, specifically in the Faculty of Architecture and Design, two works related to the theme of databases have been carried out, in the years 1995 and 1996; in the first, a system was designed to visualize, classify and analyze from the architectural point of view some historical buildings of Maracaibo, and in the second an automated system of documentary information was generated on the built properties inside the urban area of Maracaibo. In the international context the first database developed in Argentina stands out: the database of Modern and Contemporary Architecture "Datarq 2000" elaborated by Prof. Arturo Montagú of the University of Buenos Aires. The general objective of that work was the use of new technologies for data processing in Architecture and Design (MONTAGU, Ob.cit).
In the database, he intends to incorporate a complementary and alternative methodology for the use of the information that is habitually employed in the teaching of architecture. On concluding this investigation, the following was achieved: 1) analysis of projects of modern architecture, some of which form part of the historical patrimony of Maracaibo; 2) organized registrations of text type (historical, formal, spatial and technical data) and of graphic type (plans, facades, perspectives, pictures, among others) of the Moments of the Architecture of Modernity in the city, general data and the most relevant characteristics of the constructions, and general data of the planners with their most important works, besides information on the parcels where the constructions are located; 3) construction in digital format and development of photorealistic representations of architectural projects already built. It is relevant to highlight the importance of the use of Information and Communication Technologies in this investigation, since it will allow the incorporation into digital media of part of the information on the modern architectural constructions that characterized the city of Maracaibo at the end of the XX century and that in recent decades have suffered changes; some of them have disappeared, destroying part of the modern historical patrimony of the city; therefore the necessity arises to register and systematize in digital format the graphic information on those constructions. It also demonstrates the importance of the use of the computer and of computer science in the representation and comprehension of the buildings of modern architecture, through texts, images, mapping, 3D models and information organized in databases, and the relevance of the work from the pedagogic point of view, since it can be used in the teaching of computer science and history classes at university level, allowing learning through new ways of transmitting knowledge, starting from the visual information used by the students in the elaboration of three-dimensional models or electronic scale models of the modern architecture, and in the future serving as support material for virtual reconstructions of some buildings that at the present time no longer exist or are almost destroyed. In synthesis, the investigation will allow the architecture of Maracaibo of this period, which arises under the parameters of modernity, to be known and registered; through its organization and visualization in digital format it will allow students, professors and other interested parties to access it in a quicker and more efficient way, constituting a contribution to teaching in the areas of history and graphic calculation. It can also be of much utility for the development of future research projects related to this theme and to the restoration of buildings of modernity in Maracaibo.
keywords database, digital format, modern architecture, model, mapping
series SIGRADI
email
last changed 2016/03/10 09:51
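To illustrate the kind of relational organization the abstract describes (buildings linked to parcels and planners, with queries and reports), here is a hedged Python/SQLite sketch. The original work used Microsoft Access 2000, and all table and column names below are hypothetical.

```python
# Minimal sketch of a relational structure for buildings, parcels and planners.
# Illustrative only: the schema is invented, not the one used in the project.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE planner  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE parcel   (id INTEGER PRIMARY KEY, sector TEXT, address TEXT);
CREATE TABLE building (id INTEGER PRIMARY KEY, name TEXT, year INTEGER,
                       parcel_id  INTEGER REFERENCES parcel(id),
                       planner_id INTEGER REFERENCES planner(id),
                       images TEXT);  -- paths to plans, facades, renderings
""")
con.execute("INSERT INTO planner VALUES (1, 'Example planner')")
con.execute("INSERT INTO parcel  VALUES (1, '5 de Julio', 'Example address')")
con.execute("INSERT INTO building VALUES (1, 'Example building', 1955, 1, 1, 'model.dwg')")

# A typical consultation: buildings of the period 1920-1990 with their planner.
for row in con.execute("""SELECT b.name, b.year, p.name FROM building b
                          JOIN planner p ON p.id = b.planner_id
                          WHERE b.year BETWEEN 1920 AND 1990"""):
    print(row)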

_id 2068
authors Frazer, John
year 1995
title AN EVOLUTIONARY ARCHITECTURE
source London: Architectural Association
summary In "An Evolutionary Architecture", John Frazer presents an overview of his work for the past 30 years. Attempting to develop a theoretical basis for architecture using analogies with nature's processes of evolution and morphogenesis. Frazer's vision of the future of architecture is to construct organic buildings. Thermodynamically open systems which are more environmentally aware and sustainable physically, sociologically and economically. The range of topics which Frazer discusses is a good illustration of the breadth and depth of the evolutionary design problem. Environmental Modelling One of the first topics dealt with is the importance of environmental modelling within the design process. Frazer shows how environmental modelling is often misused or misinterpreted by architects with particular reference to solar modelling. From the discussion given it would seem that simplifications of the environmental models is the prime culprit resulting in misinterpretation and misuse. The simplifications are understandable given the amount of information needed for accurate modelling. By simplifying the model of the environmental conditions the architect is able to make informed judgments within reasonable amounts of time and effort. Unfortunately the simplications result in errors which compound and cause the resulting structures to fall short of their anticipated performance. Frazer obviously believes that the computer can be a great aid in the harnessing of environmental modelling data, providing that the same simplifying assumptions are not made and that better models and interfaces are possible. Physical Modelling Physical modelling has played an important role in Frazer's research. Leading to the construction of several novel machine readable interactive models, ranging from lego-like building blocks to beermat cellular automata and wall partitioning systems. Ultimately this line of research has led to the Universal Constructor and the Universal Interactor. The Universal Constructor The Universal Constructor features on the cover of the book. It consists of a base plug-board, called the "landscape", on top of which "smart" blocks, or cells, can be stacked vertically. The cells are individually identified and can communicate with neighbours above and below. Cells communicate with users through a bank of LEDs displaying the current state of the cell. The whole structure is machine readable and so can be interpreted by a computer. The computer can interpret the states of the cells as either colour or geometrical transformations allowing a wide range of possible interpretations. The user interacts with the computer display through direct manipulation of the cells. The computer can communicate and even direct the actions of the user through feedback with the cells to display various states. The direct manipulation of the cells encourages experimentation by the user and demonstrates basic concepts of the system. The Universal Interactor The Universal Interactor is a whole series of experimental projects investigating novel input and output devices. All of the devices speak a common binary language and so can communicate through a mediating central hub. The result is that input, from say a body-suit, can be used to drive the out of a sound system or vice versa. 
The Universal Interactor opens up many possibilities for expression when using a CAD system that may at first seem very strange.However, some of these feedback systems may prove superior in the hands of skilled technicians than more standard devices. Imagine how a musician might be able to devise structures by playing melodies which express the character. Of course the interpretation of input in this form poses a difficult problem which will take a great deal of research to achieve. The Universal Interactor has been used to provide environmental feedback to affect the development of evolving genetic codes. The feedback given by the Universal Interactor has been used to guide selection of individuals from a population. Adaptive Computing Frazer completes his introduction to the range of tools used in his research by giving a brief tour of adaptive computing techniques. Covering topics including cellular automata, genetic algorithms, classifier systems and artificial evolution. Cellular Automata As previously mentioned Frazer has done some work using cellular automata in both physical and simulated environments. Frazer discusses how surprisingly complex behaviour can result from the simple local rules executed by cellular automata. Cellular automata are also capable of computation, in fact able to perform any computation possible by a finite state machine. Note that this does not mean that cellular automata are capable of any general computation as this would require the construction of a Turing machine which is beyond the capabilities of a finite state machine. Genetic Algorithms Genetic algorithms were first presented by Holland and since have become a important tool for many researchers in various areas.Originally developed for problem-solving and optimization problems with clearly stated criteria and goals. Frazer fails to mention one of the most important differences between genetic algorithms and other adaptive problem-solving techniques, ie. neural networks. Genetic algorithms have the advantage that criteria can be clearly stated and controlled within the fitness function. The learning by example which neural networks rely upon does not afford this level of control over what is to be learned. Classifier Systems Holland went on to develop genetic algorithms into classifier systems. Classifier systems are more focussed upon the problem of learning appropriate responses to stimuli, than searching for solutions to problems. Classifier systems receive information from the environment and respond according to rules, or classifiers. Successful classifiers are rewarded, creating a reinforcement learning environment. Obviously, the mapping between classifier systems and the cybernetic view of organisms sensing, processing and responding to environmental stimuli is strong. It would seem that a central process similar to a classifier system would be appropriate at the core of an organic building. Learning appropriate responses to environmental conditions over time. Artificial Evolution Artificial evolution traces it's roots back to the Biomorph program which was described by Dawkins in his book "The Blind Watchmaker". Essentially, artificial evolution requires that a user supplements the standard fitness function in genetic algorithms to guide evolution. The user may provide selection pressures which are unquantifiable in a stated problem and thus provide a means for dealing ill-defined criteria. 
Frazer notes that solving problems with ill-defined criteria using artificial evolution seriously limits the scope of problems that can be tackled. The reliance upon user interaction in artificial evolution reduces the practical size of populations and the duration of evolutionary runs. Coding Schemes Frazer goes on to discuss the encoding of architectural designs and their subsequent evolution. Introducing two major systems, the Reptile system and the Universal State Space Modeller. Blueprint vs. Recipe Frazer points out the inadequacies of using standard "blueprint" design techniques in developing organic structures. Using a "recipe" to describe the process of constructing a building is presented as an alternative. Recipes for construction are discussed with reference to the analogous process description given by DNA to construct an organism. The Reptile System The Reptile System is an ingenious construction set capable of producing a wide range of structures using just two simple components. Frazer saw the advantages of this system for rule-based and evolutionary systems in the compactness of structure descriptions. Compactness was essential for the early computational work when computer memory and storage space was scarce. However, compact representations such as those described form very rugged fitness landscapes which are not well suited to evolutionary search techniques. Structures are created from an initial "seed" or minimal construction, for example a compact spherical structure. The seed is then manipulated using a series of processes or transformations, for example stretching, shearing or bending. The structure would grow according to the transformations applied to it. Obviously, the transformations could be a predetermined sequence of actions which would always yield the same final structure given the same initial seed. Alternatively, the series of transformations applied could be environmentally sensitive resulting in forms which were also sensitive to their location. The idea of taking a geometrical form as a seed and transforming it using a series of processes to create complex structures is similar in many ways to the early work of Latham creating large morphological charts. Latham went on to develop his ideas into the "Mutator" system which he used to create organic artworks. Generalising the Reptile System Frazer has proposed a generalised version of the Reptile System to tackle more realistic building problems. Generating the seed or minimal configuration from design requirements automatically. From this starting point (or set of starting points) solutions could be evolved using artificial evolution. Quantifiable and specific aspects of the design brief define the formal criteria which are used as a standard fitness function. Non-quantifiable criteria, including aesthetic judgments, are evaluated by the user. The proposed system would be able to learn successful strategies for satisfying both formal and user criteria. In doing so the system would become a personalised tool of the designer. A personal assistant which would be able to anticipate aesthetic judgements and other criteria by employing previously successful strategies. Ultimately, this is a similar concept to Negroponte's "Architecture Machine" which he proposed would be computer system so personalised so as to be almost unusable by other people. The Universal State Space Modeller The Universal State Space Modeller is the basis of Frazer's current work. 
It is a system which can be used to model any structure, hence the universal claim in it's title. The datastructure underlying the modeller is a state space of scaleless logical points, called motes. Motes are arranged in a close-packing sphere arrangement, which makes each one equidistant from it's twelve neighbours. Any point can be broken down into a self-similar tetrahedral structure of logical points. Giving the state space a fractal nature which allows modelling at many different levels at once. Each mote can be thought of as analogous to a cell in a biological organism. Every mote carries a copy of the architectural genetic code in the same way that each cell within a organism carries a copy of it's DNA. The genetic code of a mote is stored as a sequence of binary "morons" which are grouped together into spatial configurations which are interpreted as the state of the mote. The developmental process begins with a seed. The seed develops through cellular duplication according to the rules of the genetic code. In the beginning the seed develops mainly in response to the internal genetic code, but as the development progresses the environment plays a greater role. Cells communicate by passing messages to their immediate twelve neighbours. However, it can send messages directed at remote cells, without knowledge of it's spatial relationship. During the development cells take on specialised functions, including environmental sensors or producers of raw materials. The resulting system is process driven, without presupposing the existence of a construction set to use. The datastructure can be interpreted in many ways to derive various phenotypes. The resulting structure is a by-product of the cellular activity during development and in response to the environment. As such the resulting structures have much in common with living organisms which are also the emergent result or by-product of local cellular activity. Primordial Architectural Soups To conclude, Frazer presents some of the most recent work done, evolving fundamental structures using limited raw materials, an initial seed and massive feedback. Frazer proposes to go further and do away with the need for initial seed and start with a primordial soup of basic architectural concepts. The research is attempting to evolve the starting conditions and evolutionary processes without any preconditions. Is there enough time to evolve a complex system from the basic building blocks which Frazer proposes? The computational complexity of the task being embarked upon is not discussed. There is an implicit assumption that the "superb tactics" of natural selection are enough to cut through the complexity of the task. However, Kauffman has shown how self-organisation plays a major role in the early development of replicating systems which we may call alive. Natural selection requires a solid basis upon which it can act. Is the primordial soup which Frazer proposes of the correct constitution to support self-organisation? Kauffman suggests that one of the most important attributes of a primordial soup to be capable of self-organisation is the need for a complex network of catalysts and the controlling mechanisms to stop the reactions from going supracritical. Can such a network be provided of primitive architectural concepts? What does it mean to have a catalyst in this domain? Conclusion Frazer shows some interesting work both in the areas of evolutionary design and self-organising systems. 
It is obvious from his work that he sympathizes with the opinions put forward by Kauffman that the order found in living organisms comes from both external evolutionary pressure and internal self-organisation. His final remarks underly this by paraphrasing the words of Kauffman, that life is always to found on the edge of chaos. By the "edge of chaos" Kauffman is referring to the area within the ordered regime of a system close to the "phase transition" to chaotic behaviour. Unfortunately, Frazer does not demonstrate that the systems he has presented have the necessary qualities to derive useful order at the edge of chaos. He does not demonstrate, as Kauffman does repeatedly, that there exists a "phase transition" between ordered and chaotic regimes of his systems. He also does not make any studies of the relationship of useful forms generated by his work to phase transition regions of his systems should they exist. If we are to find an organic architecture, in more than name alone, it is surely to reside close to the phase transition of the construction system of which is it built. Only there, if we are to believe Kauffman, are we to find useful order together with environmentally sensitive and thermodynamically open systems which can approach the utility of living organisms.
series other
type normal paper
last changed 2004/05/22 14:12
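As an editorial aid to the genetic-algorithm and artificial-evolution ideas reviewed above, the minimal Python sketch below evolves bit-string genomes under a plug-in fitness function; in Frazer's artificial evolution the user would supplement or replace that function with aesthetic judgements. It is a toy illustration, not Frazer's Reptile system or state space modeller, and all names and parameters are invented.

```python
# Compact genetic-algorithm sketch: a population of bit-string "genetic codes"
# evolves by tournament selection, crossover and mutation under a fitness
# function.  The ones-count fitness is a placeholder formal criterion.
import random
random.seed(1)

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION = 32, 40, 60, 0.02

def fitness(genome):
    return sum(genome)                      # placeholder; a user could score instead

def tournament(pop):
    a, b = random.sample(pop, 2)            # pick two, keep the fitter one
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, GENOME_LEN)   # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]
best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

The review's point about population size follows directly from this loop: if a human must judge every genome, the number of evaluations per generation has to stay small.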

_id 2115
authors Ingram, R. and Benford, S.
year 1995
title Improving the legibility of virtual environments
source Second Eurographics Workshop on Virtual Environments
summary Years of research into hyper-media systems have shown that finding one's way through large electronic information systems can be a difficult task. Our experiences with virtual reality suggest that users will also suffer from the commonly experienced "lost in hyperspace" problem when trying to navigate virtual environments. The goal of this paper is to propose and demonstrate a technique which is currently under development with the aim of overcoming this problem. Our approach is based upon the concept of legibility, adapted from the discipline of city planning. The legibility of an urban environment refers to the ease with which its inhabitants can develop a cognitive map over a period of time and so orientate themselves within it and navigate through it [Lynch60]. Research into this topic since the 1960s has argued that, by carefully designing key features of urban environments, planners can significantly influence their legibility. We propose that these legibility features might be adapted and applied to the design of a wide variety of virtual environments and that, when combined with other navigational aids such as the trails, tours and signposts of the hyper-media world, they might greatly enhance people's ability to navigate them. In particular, the primary role of legibility would be to help users to navigate more easily as a result of experiencing a world for some time (hence the idea of building a cognitive map). Thus, we would see our technique being of most benefit when applied to long term, persistent and slowly evolving virtual environments. Furthermore, we are particularly interested in the automatic application of legibility techniques to information visualisations as opposed to their relatively straightforward application to simulations of the real world. Thus, a typical future application of our work might be in enhancing visualisations of large information systems such as the World Wide Web. Section 2 of this paper summarises the concept of legibility as used in the domain of city planning and introduces some of the key features that have been adapted and applied in our work. Section 3 then describes in detail the set of algorithms and techniques which are being developed for the automatic creation or enhancement of these features within virtual data spaces. Next, section 4 presents two example applications based on two different kinds of virtual data space. Finally, section 5 presents some initial reflections on this work and discusses the next steps in its evolution.
series other
last changed 2003/04/23 15:50
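Since the abstract refers to algorithms for automatically creating legibility features within virtual data spaces, the sketch below illustrates one plausible ingredient: clustering object positions into "districts" and marking each district centroid as a candidate "landmark". It is an editorial example under invented data and does not reproduce the paper's own algorithms.

```python
# Toy "district and landmark" construction for a 2-D data space:
# a small hand-written k-means groups object positions, and each group's
# centroid is treated as a landmark candidate.  Data and method are illustrative.
import random
random.seed(0)

def kmeans(points, k, iterations=20):
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                            + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        centroids = [(sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
                     if cl else centroids[i] for i, cl in enumerate(clusters)]
    return centroids, clusters

# positions of information objects scattered in the data space (invented)
objects = [(random.gauss(10, 2), random.gauss(10, 2)) for _ in range(40)] + \
          [(random.gauss(30, 2), random.gauss(25, 2)) for _ in range(40)]
landmarks, districts = kmeans(objects, k=2)
for i, (lm, d) in enumerate(zip(landmarks, districts)):
    print(f"district {i}: {len(d)} objects, landmark at ({lm[0]:.1f}, {lm[1]:.1f})")
```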

_id 3c8c
authors Kadysz, Andrzej
year 1995
title CAAD Space – Incompatible Space
source CAD Space [Proceedings of the III International Conference Computer in Architectural Design] Bialystock 27-29 April 1995, pp. 147-158
summary In this paper the computer is considered as a "hypertool" - a union of the technical and methodological aspects of a tool. CAAD and its space is a microcosmos incompatible with our real world. CAAD performs the role of an electronic modeller that redefines the space and substance of our model through the structure of the CAAD software and reduces the range of possible operations and transformations of a model. It is an environment that is internally wide open - everything is information, easy to exchange - but externally is excluded from direct influence and manual access. I try to discover the typical and unique features of this virtual environment of CAAD, the substance of the virtual model, and the computer as the tool of architectural creation: a medium that redefines architects' imagination.
series plCAD
last changed 2000/01/24 10:08

_id 2e3b
authors Kvan, Thomas and Kvan, Erik
year 1997
title Is Design Really Social
source Creative Collaboration in Virtual Communities 1997, ed. A. Cicognani. VC'97. Sydney: Key Centre of Design Computing, Department of Architectural and Design Science, University of Sydney, 8 p.
summary There are many who will readily agree with Mitchell's assertion that "the most interesting new directions (for computer-aided design) are suggested by the growing convergence of computation and telecommunication. This allows us to treat designing not just as a technical process... but also as a social process." [Mitchell 1995]. The assumption is that design was a social process until users of computer-aided design systems were distracted into treating it as a merely technical process. Most readers will assume that this convergence must and will lead to increased communication between design participants; that better social interaction leads to better design. The unspoken assumption appears to be that putting the participants into an environment with maximal communication channels will result in design collaboration. The tools provided, therefore, must permit the best communication and the best social interaction. We think it essential to examine the foundations and assumptions on which software and environments are designed to support collaborative design communication. Of particular interest to us in this paper is the assumption about the "social" nature of design. Early research in computer-assisted design collaboration has jumped immediately to conclusions about communicative models which lead to high-bandwidth video connections as the preferred channel of collaboration. The unstated assumption is that computer-supported design environments are not adequate until they replicate in full the sensation of being physically present in the same space as the other participants (you are not there until you are really there). It is assumed that the real social process of design must include all the signals used to establish and facilitate face-to-face communication, including gestures, body language and all outputs of drawing (e.g. Tang [1991]). In our specification of systems for virtual design communities, are we about to fall into the same traps as drafting systems did?
keywords CSCW; Virtual Community; Architectural Design; Computer-Aided Design
series other
email
last changed 2002/11/15 18:29

_id 4cb3
authors Kwartler, Michael
year 1995
title Beyond the Adversarial: Conflict Resolution, Simulation and Community Design
source The Future of Endoscopy [Proceedings of the 2nd European Architectural Endoscopy Association Conference / ISBN 3-85437-114-4]
summary Fundamentally, the design of communities in the United States is grounded in the Constitution’s evolving definition of property and the rights and obligations attendant to the ownership and use of real property. The rearticulation of Jefferson’s dictum in the Declaration of Independence, “that individuals have certain inalienable rights, among these are life, liberty, and the pursuit of happiness”, as the Constitution’s “life, liberty and property” represents a pragmatic understanding of the relationship between property and the actualization of the individual in society. In terms of community design, this means extensive public involvement and participation not only in the formulation of rules and regulations but in individual projects as well.

Since the 1960s, as planning and community design decision making has become increasingly contentious, the American legal system’s adversarial approach to conflict resolution has become the dominant model for public decision making. The legal system’s adversarial approach to adjudication is essentially a zero-sum game of winners and losers and, as most land-use lawyers will agree, is not a good model for the design of cities. The adversarial approach not only fails to resolve disputes, it rarely creates a positive and constructive consensus for change. Because physical planning and community design issues are so often value based, community design through consensus building has emerged as a new paradigm for physical planning and design.

The Environmental Simulation Center employs a broad range of complementary simulation and visualization techniques including 3-D vector based computer models, endoscopy, and verifiable digital photomontages to provide objective and verifiable information for projects and regulations under study.

In this context, a number of recent projects will be discussed which have explored the use of various simulation and visualization techniques in community design. Among them are projects involved with changes in the City’s Zoning Regulations, the community design of a major public open space in one of the region’s mid-size cities, and the design of a new village center for a suburban community, with the last project employing the Center’s user-friendly and interactive 3-D computer kit of parts. The kit - a kind of computer “pattern book” comprising site planning, urban and landscape design, and architectural conventions - is part of the Center’s continuing effort to support a consensus-based, rather than adversarial, public planning and design process.

keywords Architectural Simulation, Real Environments
series EAEA
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43

_id 2e5a
authors Matsumoto, N. and Seta, S.
year 1997
title A history and application of visual simulation in which perceptual behaviour movement is measured.
source Architectural and Urban Simulation Techniques in Research and Education [3rd EAEA-Conference Proceedings]
summary For our research on perception and judgment, we have developed a new visual simulation system based on our previous system. Here, we report on the development history of our system and on the current research employing it. In 1975, the first visual simulation system was introduced, which comprised a fiberscope and small-scale models. By manipulating the fiberscope's handles, the subject was able to view the models at eye level. When the pen-size CCD TV camera came out, we immediately embraced it, incorporating it into a computer-controlled visual simulation system in 1988. It comprises four elements: operation input, drive control, model shooting, and presentation. This system was easy to operate, and the subject gained an omnidirectional, eye-level image as though walking through the model. In 1995, we began developing a new visual system. We wanted to relate the scale model image directly to perceptual behavior, to make natural background images, and to record human feelings by a non-verbal method. Restructuring the above four elements to meet our requirements and adding two more (background shooting and emotion spectrum analysis), we finally completed the new simulation system in 1996. We are employing this system in streetscape research. Using the emotion spectrum system, we are able to record brain waves. Quantifying the visual effects through these waves, we are analyzing the relation between visual effects and physical elements. Thus, we are presented with a new aspect to study: the relationship between brain waves and changes in the physical environment. We will be studying the relation of brain waves in our sequential analysis of the streetscape.
keywords Architectural Endoscopy, Endoscopy, Simulation, Visualisation, Visualization, Real Environments
series EAEA
email
more http://www.bk.tudelft.nl/media/eaea/eaea97.html
last changed 2005/09/09 10:43
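The abstract above describes the 1996 system as six cooperating elements: operation input, drive control, model shooting, background shooting, presentation, and emotion spectrum analysis. The sketch below is a minimal, hypothetical rendering of such a pipeline in Python; the class names, the data passed between stages, and the stub components in the usage example are assumptions made for illustration and are not the authors' implementation.

```python
# Hypothetical sketch of a six-element simulation cycle (not the authors' code).
from dataclasses import dataclass

@dataclass
class Command:
    """A single operator input: desired translation and rotation of the camera."""
    dx: float
    dy: float
    dtheta: float

class DriveControl:
    """Converts operator commands into camera positions over the scale model."""
    def __init__(self):
        self.x = self.y = self.theta = 0.0

    def apply(self, cmd: Command):
        self.x += cmd.dx
        self.y += cmd.dy
        self.theta = (self.theta + cmd.dtheta) % 360.0
        return self.x, self.y, self.theta

def run_step(cmd, drive, shoot_model, shoot_background, present, analyse_emotion):
    """One cycle: move the camera, grab model and background frames,
    present the composite to the subject, and log the emotion-spectrum sample."""
    pose = drive.apply(cmd)
    frame = shoot_model(pose)           # eye-level image from the scale model
    backdrop = shoot_background(pose)   # natural background image
    present(frame, backdrop)            # composite shown to the subject
    return analyse_emotion(pose)        # brain-wave sample tagged with the pose

# Example wiring with stub components:
drive = DriveControl()
sample = run_step(Command(0.5, 0.0, 2.0), drive,
                  shoot_model=lambda p: f"model frame at {p}",
                  shoot_background=lambda p: f"background frame at {p}",
                  present=lambda f, b: None,
                  analyse_emotion=lambda p: {"pose": p, "spectrum": []})
```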

_id ebbf
authors Ohno, Ryozo
year 1995
title Street-scape and Way-finding Performance
source The Future of Endoscopy [Proceedings of the 2nd European Architectural Endoscopy Association Conference / ISBN 3-85437-114-4]
summary In this study, it was hypothesized that people’s way-finding performance depends on the characteristics of street-scapes, i.e., the more visual information exists, the more easily people find their way. This relationship was investigated in an experiment using an environmental simulator and an analysis of the subjects’ behavioral data recorded by the simulation system. Three scale models (1/150) of identical maze patterns (300m x 300m) with different street-scapes were created and set in the simulator, in which an endoscope connected to a CCD color TV camera is controlled by a personal computer. The three types of streets are: (1) no distinguishing characteristics, with monotonous surfaces; (2) characteristics at each corner, with different buildings; (3) characteristics along the streets, with trees, columns or fences. The simulator allows a subject to move through the scale models and look around, using a “joy-stick“ to steer the scene as projected on a 100-inch CCTV screen. The control system of the simulator records all signals generated by the “joy-stick“ every 0.01 second, so the exact position within the model space and the viewing direction at any given moment can be stored in computer memory and used to analyze the subject’s behavior. The task of each subject was to find the way previously shown on the screen. Three male and three female subjects were assigned to each of the three street types, for a total of eighteen participants. An analysis of the traces of movement and viewing directions generally supported the hypothesis that streets with visual characteristics made the route easier to memorize, although there were large differences in performance among subjects. It was also noted that subjects followed three different way-finding strategies: one group seemed to rely on well-structured knowledge of the route, i.e., the cognitive map; another group seemed to rely on incoming visual information from the changing scenes; and the last group seemed to find the way using both the cognitive map and visual information, depending on the situation. (An illustrative sketch of this kind of trace logging follows this entry.)
keywords Architectural Endoscopy, Real Environments
series EAEA
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43
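The abstract above states that the control system logs every joy-stick signal at 0.01-second intervals so that position and viewing direction can be reconstructed for behavioral analysis. The following Python sketch shows one plausible way such a trace could be stored and summarized; the field names, units, and the two summary measures (path length and amount of head turning) are assumptions for illustration, not the paper's actual analysis.

```python
# Minimal sketch of trace logging and analysis (not the authors' code).
import math
from dataclasses import dataclass

SAMPLE_INTERVAL = 0.01  # seconds, as stated in the abstract

@dataclass
class Sample:
    t: float        # time since start of trial, s
    x: float        # position in model space (assumed unit: m)
    y: float
    heading: float  # viewing direction, degrees

def path_length(trace):
    """Total distance walked through the model space."""
    return sum(math.hypot(b.x - a.x, b.y - a.y) for a, b in zip(trace, trace[1:]))

def head_turning(trace):
    """Sum of absolute heading changes -- a crude index of 'looking around'."""
    def diff(a, b):
        return abs((b - a + 180.0) % 360.0 - 180.0)
    return sum(diff(a.heading, b.heading) for a, b in zip(trace, trace[1:]))

# Example: a short synthetic trace sampled at 100 Hz.
trace = [Sample(i * SAMPLE_INTERVAL, i * 0.02, 0.0, (i * 0.5) % 360.0)
         for i in range(500)]
print(path_length(trace), head_turning(trace))
```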

_id b731
authors Ramstein, Christophe
year 1995
title An Architecture Model for Multimodal Interfaces with Force Feedback
source Proceedings of the Sixth International Conference on Human-Computer Interaction 1995 v.I. Human and Future Computing pp. 455-460
summary Multimodal interfaces with force feedback pose new problems both in their design and in their hardware and software implementation. The first problem is to design and build force-feedback pointing devices that permit users both to select and manipulate interface objects (windows, menus and icons) and, at the same time, to feel these objects with force and precision through their tactile and kinesthetic senses. The next problem is to model the interface so that it can be rendered to the user via force-feedback devices: the task is to define the fields of force corresponding to interface objects and events, and to design algorithms that synthesize these forces in such a way as to provide optimum real-time operation. The final problem concerns the hardware and software architecture to be used to facilitate the integration of this technology with contemporary graphic interfaces. An architecture model for a multimodal interface is presented: it is based on the notion of a multiagent model and breaks down inputs and outputs according to multiple modalities (visual, auditory and haptic). These modalities are represented by independent software components that communicate with one another via a higher-level control agent. (A hypothetical sketch of this multiagent organisation follows this entry.)
keywords Multimodal Interface; Software Architecture Model; Force Feedback; Haptic Device; Physical Model
series other
last changed 2002/07/07 16:01
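The abstract above describes a multiagent organisation in which independent modality agents (visual, auditory, haptic) exchange events through a higher-level control agent. The sketch below is a hypothetical Python rendering of that idea; the class and method names, and the simple routing rule used by the control agent, are illustrative assumptions rather than the paper's API.

```python
# Hypothetical sketch of modality agents coordinated by a control agent.
class ModalityAgent:
    def __init__(self, name):
        self.name = name
        self.controller = None

    def emit(self, event):
        """Forward a local input event (e.g. pointer entered an icon) upward."""
        self.controller.dispatch(self.name, event)

    def render(self, event):
        """Present an output event in this modality (pixels, sound, force)."""
        print(f"[{self.name}] rendering {event}")

class ControlAgent:
    """Routes events between modalities, e.g. a visual 'enter icon' event might
    be rendered haptically as an attracting force field around the icon."""
    def __init__(self, *agents):
        self.agents = {a.name: a for a in agents}
        for a in agents:
            a.controller = self

    def dispatch(self, source, event):
        for name, agent in self.agents.items():
            if name != source:
                agent.render(event)

visual, auditory, haptic = (ModalityAgent(n) for n in ("visual", "auditory", "haptic"))
controller = ControlAgent(visual, auditory, haptic)
visual.emit({"type": "pointer-entered", "target": "icon-42"})
```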

_id 1bb0
authors Russell, S. and Norvig, P.
year 1995
title Artificial Intelligence: A Modern Approach
source Prentice Hall, Englewood Cliffs, NJ
summary Humankind has given itself the scientific name homo sapiens--man the wise--because our mental capacities are so important to our everyday lives and our sense of self. The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. But unlike philosophy and psychology, which are also concerned with intelligence, AI strives to build intelligent entities as well as understand them. Another reason to study AI is that these constructed intelligent entities are interesting and useful in their own right. AI has produced many significant and impressive products even at this early stage in its development. Although no one can predict the future in detail, it is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization. AI addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain, whether biological or electronic, to perceive, understand, predict, and manipulate a world far larger and more complicated than itself? How do we go about making something with those properties? These are hard questions, but unlike the search for faster-than-light travel or an antigravity device, the researcher in AI has solid evidence that the quest is possible. All the researcher has to do is look in the mirror to see an example of an intelligent system. AI is one of the newest disciplines. It was formally initiated in 1956, when the name was coined, although at that point work had been under way for about five years. Along with modern genetics, it is regularly cited as the "field I would most like to be in" by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study before one can contribute new ideas. AI, on the other hand, still has openings for a full-time Einstein. The study of intelligence is also one of the oldest disciplines. For over 2000 years, philosophers have tried to understand how seeing, learning, remembering, and reasoning could, or should, be done. The advent of usable computers in the early 1950s turned the learned but armchair speculation concerning these mental faculties into a real experimental and theoretical discipline. Many felt that the new "Electronic Super-Brains" had unlimited potential for intelligence. "Faster Than Einstein" was a typical headline. But as well as providing a vehicle for creating artificially intelligent entities, the computer provides a tool for testing theories of intelligence, and many theories failed to withstand the test--a case of "out of the armchair, into the fire." AI has turned out to be more difficult than many at first imagined, and modern ideas are much richer, more subtle, and more interesting as a result. AI currently encompasses a huge variety of subfields, from general-purpose areas such as perception and logical reasoning, to specific tasks such as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases. Often, scientists in other fields move gradually into artificial intelligence, where they find the tools and vocabulary to systematize and automate the intellectual tasks on which they have been working all their lives. Similarly, workers in AI can choose to apply their methods to any area of human intellectual endeavor. In this sense, it is truly a universal field.
series other
last changed 2003/04/23 15:14

_id 276c
authors Breen, Jack
year 1995
title Dynamic Perspective: The Media Research Programme
source The Future of Endoscopy [Proceedings of the 2nd European Architectural Endoscopy Association Conference / ISBN 3-85437-114-4]
summary This paper focuses on the Research Programme of the Media Sector at the Faculty of Architecture, Delft University of Technology. The media research objectives for the coming years have been brought together within an overall project: “Dynamic Perspective”. The “dynamic” quality may be interpreted both as movement (visual displacement and registration) and as change (the effects of different options).

The four projects which together make up this research programme deal with perception (understanding) and conception (designing and imaging) of urban space: “the architecture of the city”. Specific aspects are the effects of primary and secondary spatial boundaries and the systematic structuring of the simulation of visual information. The programme will further concentrate on the development and implementation of relevant techniques (besides “traditional” ones such as the drawing and the architectural model, multimedia techniques such as endoscopy, computer visualization and the development of virtual reality systems), both in education and in design practice.

By means of analysis, the creation of visual models of choice and the setting up of experiments, the programme aims at the furthering of theoretical knowledge and at acquiring better insights into the effects of design decisions at an urban level, both for designers and for other participants in the design process. Further development of existing laboratory facilities towards a comprehensive Design Simulation Laboratory is an important aspect of the programme.

Within the media research process the Aspern location master plan has been considered as a case study, the findings of which will be presented separately in the workshop sessions.

keywords Architectural Endoscopy, Real Environments
series EAEA
email
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43

_id 913a
authors Brutzman, D.P., Macedonia, M.R. and Zyda, M.J.
year 1995
title Internetwork Infrastructure Requirements for Virtual Environments
source NIl 2000 Forum of the Computer Science and Telecommunications Board, National Research Council, Washington, D.C., May 1995
summary Virtual environments (VEs) are a broad multidisciplinary research area that includes all aspects of computer science, virtual reality, virtual worlds, teleoperation and telepresence. A variety of network elements are required to scale virtual environments up to arbitrarily large sizes, simultaneously connecting thousands of interacting players and all kinds of information objects. Four key communications components for virtual environments are found within the Internet Protocol (IP) suite: light-weight messages, network pointers, heavy-weight objects and real-time streams. Software and hardware shortfalls and successes for internetworked virtual environments provide specific research conclusions and recommendations. Since large-scale networked virtual environments are intended to include all possible types of content and interaction, they are expected to enable new classes of interdisciplinary research and sophisticated applications that are particularly suitable for implementation using the Virtual Reality Modeling Language (VRML). (A hedged sketch of the four communication components follows this entry.)
series other
last changed 2003/04/23 15:50
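The abstract above identifies four key communication components within the IP suite for networked virtual environments: light-weight messages, network pointers, heavy-weight objects, and real-time streams. The Python sketch below simply gives each component a concrete, typed shape and a plausible transport; the class layouts and the example transports are assumptions made for illustration, not the paper's specification.

```python
# Illustrative data shapes for the four VE communication components.
from dataclasses import dataclass
from enum import Enum

class Transport(Enum):
    UNRELIABLE_MULTICAST = "e.g. UDP/IP multicast for frequent state updates"
    RELIABLE_UNICAST = "e.g. TCP/IP for content that must arrive intact"
    STREAMING = "e.g. RTP-style delivery tolerant of loss but not delay"

@dataclass
class LightWeightMessage:      # small, frequent entity-state updates
    entity_id: int
    position: tuple
    transport: Transport = Transport.UNRELIABLE_MULTICAST

@dataclass
class NetworkPointer:          # a reference to content fetched on demand
    url: str                   # e.g. a VRML scene retrieved lazily
    transport: Transport = Transport.RELIABLE_UNICAST

@dataclass
class HeavyWeightObject:       # large payloads such as models or textures
    payload: bytes
    transport: Transport = Transport.RELIABLE_UNICAST

@dataclass
class RealTimeStream:          # continuous audio/video between participants
    codec: str
    transport: Transport = Transport.STREAMING

update = LightWeightMessage(entity_id=7, position=(12.0, 0.0, -3.5))
scene = NetworkPointer(url="http://example.org/world.wrl")
```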
