CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures.


Hits 1 to 20 of 396

_id f2c8
authors Mine, M.
year 1995
title ISAAC: A virtual environment tool for the interactive construction of virtual worlds
source UNC Chapel Hill Computer Science Technical Report. TR95-020
summary Abstract available in the folder as an EPS file.
series report
email
last changed 2003/04/23 15:50

_id db00
authors Espina, Jane J.B.
year 2002
title Base de datos de la arquitectura moderna de la ciudad de Maracaibo 1920-1990 [Database of the Modern Architecture of the City of Maracaibo 1920-1990]
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 November 2002, pp. 133-139
summary The purpose of this report is to present the achievements obtained in the use of information and communication technologies in architecture, by means of the construction of a database that registers information on the modern architecture of the city of Maracaibo from 1920 to 1990, with reference to the buildings located in the 5 de Julio sector and to the most outstanding planners and their work, by means of their representation in digital format. The objective of this investigation was to elaborate a database for registering information on the modern architecture of Maracaibo in the period 1920-1990, by designing an automated tool to organise the data related to the buildings, parcels and planners of the city. The investigation was carried out in three methodological stages: a) gathering and classification of the information on the buildings and planners of the modern architecture in order to elaborate the databases; b) design of the databases for the organisation of the information; and c) design of the queries, information, reports and the start menu. For the processing of the data, files were generated in computer programs such as AutoCAD R14 and 2000, Microsoft Word, Microsoft PowerPoint, Microsoft Access 2000, CorelDRAW V9.0 and Corel PHOTOPAINT V9.0. The investigation is related to the work developed in the Graphic Calculation II course, belonging to the Department of Communication of the School of Architecture of the Faculty of Architecture and Design of the University of Zulia (FADLUZ), carried out since 1999, using part of the information obtained from the students' work generated by means of CAD systems for the three-dimensional representation of buildings of historical relevance in the modern architecture of Maracaibo, which are classified in the work "The Other City", generating different types of isometric views, perspectives, photorealistic representations, plans and facades, among others. Concerning the theme of this investigation, no previous antecedents are known in our environment, and this is the first time that digital graphics have been applied to the work carried out by the architects of "The Other City, the genesis of the oil city of Maracaibo" (1994); hence the value of this research for the fields of architecture and computer science. It should be pointed out that databases already exist in the fields of architecture and design, as well as web sites with information about architects and architectural works (Montagu, 1999). In the University of Zulia, specifically in the Faculty of Architecture and Design, two works related to databases have been carried out, in 1995 and 1996: in the first, a system was designed to visualise, classify and analyse, from the architectural point of view, some historical buildings of Maracaibo; in the second, an automated system of documentary information was generated on the built properties inside the urban area of Maracaibo. Internationally, the first database developed in Argentina stands out: the database of Modern and Contemporary Architecture "Datarq 2000", elaborated by Prof. Arturo Montagú of the University of Buenos Aires, whose general objective was the use of new technologies for data processing in Architecture and Design (Montagú, op. cit.).
In the database, the author intends to incorporate a complementary and alternative methodology for using the information that is habitually employed in the teaching of architecture. On concluding this investigation, the following was achieved: 1) analysis of projects of modern architecture, some of which form part of the historical patrimony of Maracaibo; 2) organised registers of textual data (historical, formal, spatial and technical) and graphic data (plans, facades, perspectives, photographs, among others) of the Moments of the Architecture of Modernity in the city, general data and the most relevant characteristics of the buildings, general data of the planners with their most important works, and information on the parcels where the buildings are located; 3) construction in digital format and development of photorealistic representations of architectural projects already built. It is relevant to highlight the importance of the use of information and communication technologies in this investigation, since it will allow part of the information on the modern architectural buildings that characterised the city of Maracaibo at the end of the 20th century to be brought into the digital domain; in recent decades these buildings have suffered changes, and some have disappeared, destroying part of the modern historical patrimony of the city; hence the need arises to register and systematise their graphic information in digital format. The work also demonstrates the importance of the computer and of computer science in the representation and comprehension of the buildings of modern architecture, including texts, images, mapping, 3D models and information organised in databases, and the relevance of the work from the pedagogic point of view, since it can be used in the teaching of computer science and history classes at university level, allowing learning through new ways of transmitting knowledge based on visual information, with students elaborating three-dimensional models or electronic scale models of the modern architecture and, in the future, serving as support material for virtual reconstructions of buildings that no longer exist or are almost destroyed. In synthesis, the investigation will allow the architecture of Maracaibo of these last decades, which arose under the parameters of modernity, to be known and registered and, through its organisation and visualisation in digital format, will allow students, professors and other interested parties to get to know it in a quicker and more efficient way, constituting a contribution to teaching in the areas of history and graphic calculation. It can also be of great utility for the development of future research projects related to this theme and to the restoration of buildings of modernity in Maracaibo.
keywords database, digital format, modern architecture, model, mapping
series SIGRADI
email
last changed 2016/03/10 09:51
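The abstract above describes organising buildings, parcels and planners into related tables with queries and reports (implemented in Microsoft Access 2000). Below is a minimal sketch of that kind of relational schema using SQLite in Python; every table and column name is a hypothetical illustration, not taken from the paper.

```python
# Hypothetical sketch of a relational schema like the one the abstract above
# describes (buildings, parcels and planners of Maracaibo's modern
# architecture). Names and fields are illustrative only.
import sqlite3

conn = sqlite3.connect("maracaibo_modern.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS architect (
    architect_id  INTEGER PRIMARY KEY,
    name          TEXT NOT NULL,
    notable_works TEXT
);
CREATE TABLE IF NOT EXISTS parcel (
    parcel_id INTEGER PRIMARY KEY,
    sector    TEXT,               -- e.g. the 5 de Julio sector
    address   TEXT
);
CREATE TABLE IF NOT EXISTS building (
    building_id  INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    year_built   INTEGER CHECK (year_built BETWEEN 1920 AND 1990),
    architect_id INTEGER REFERENCES architect(architect_id),
    parcel_id    INTEGER REFERENCES parcel(parcel_id),
    image_path   TEXT              -- plans, facades, photorealistic renders
);
""")

# Example query: all registered buildings by a given planner, oldest first,
# corresponding to the kind of consultation the abstract mentions.
rows = conn.execute("""
    SELECT b.name, b.year_built
    FROM building b JOIN architect a ON a.architect_id = b.architect_id
    WHERE a.name = ?
    ORDER BY b.year_built
""", ("Example Architect",)).fetchall()
print(rows)
```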

_id b04c
authors Goerger, S., Darken, R., Boyd, M., Gagnon, T., Liles, S., Sullivan, J. and Lawson, J.
year 1996
title Spatial Knowledge Acquisition from Maps and Virtual Environments in Complex Architectural Space
source Proc. 16th Applied Behavioral Sciences Symposium, 22-23 April, U.S. Air Force Academy, Colorado Springs, CO, 1996, pp. 6-10
summary It has often been suggested that due to its inherent spatial nature, a virtual environment (VE) might be a powerful tool for spatial knowledge acquisition of a real environment, as opposed to the use of maps or some other two-dimensional, symbolic medium. While interesting from a psychological point of view, a study of the use of a VE in lieu of a map seems nonsensical from a practical point of view. Why would the use of a VE preclude the use of a map? The more interesting investigation would be of the value added of the VE when used with a map. If the VE could be shown to substantially improve navigation performance, then there might be a case for its use as a training tool. If not, then we have to assume that maps continue to be the best spatial knowledge acquisition tool available. An experiment was conducted at the Naval Postgraduate School to determine if the use of an interactive, three-dimensional virtual environment would enhance spatial knowledge acquisition of a complex architectural space when used in conjunction with floor plan diagrams. There has been significant interest in this research area of late. Witmer, Bailey, and Knerr (1995) showed that a VE was useful in acquiring route knowledge of a complex building. Route knowledge is defined as the procedural knowledge required to successfully traverse paths between distant locations (Golledge, 1991). Configurational (or survey) knowledge is the highest level of spatial knowledge and represents a map-like internal encoding of the environment (Thorndyke, 1980). The Witmer study could not confirm if configurational knowledge was being acquired. Also, no comparison was made to a map-only condition, which we felt is the most obvious alternative. Comparisons were made only to a real world condition and a symbolic condition where the route is presented verbally.
series other
last changed 2003/04/23 15:50

_id d5b3
authors Knight, Michael and Brown, Andre
year 1999
title Working in Virtual Environments through appropriate Physical Interfaces
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 431-436
doi https://doi.org/10.52842/conf.ecaade.1999.431
summary The work described here is aimed at contributing towards the debate and development relating to the construction of interfaces to explore buildings and their environs through virtual worlds. We describe a particular hardware and software configuration which is derived from the use of low-cost games software to create the Virtual Environment. The Physical Interface responds to the work of other researchers in this area, in particular Shaw (1994) and Vasquez de Velasco & Trigo (1997). Virtual Environments might have the potential to be "a magical window into other worlds, from molecules to minds" (Rheingold, 1992), but what is the nature of that window? Currently it is often a translucent opening which gives a hazy and distorted (disembodied) view. And many versions of such openings are relatively expensive. We consider ways towards clearing the haze without too much expense, adapting techniques proposed by developers of low cost virtual reality systems (Hollands, 1995) for use in an architectural setting.
keywords Virtual Environments, Games Software
series eCAADe
email
last changed 2022/06/07 07:51

_id ae9f
authors Damer, B.
year 1996
title Inhabited Virtual Worlds: A New Frontier for Interaction Design
source Interactions, Vol.3, No.5 ACM
summary In April of 1995 the Internet took a step into the third dimension with the introduction of the Virtual Reality Modeling Language (VRML) as a commercial standard. Another event that month caused fewer headlines but in retrospect was just as significant. A small company from San Francisco, Worlds Incorporated, launched WorldsChat, a three dimensional environment allowing any Internet user to don a digital costume, or avatar, and travel about and converse with other people inhabiting the space. WorldsChat was appropriately modeled on a space station complete with a central hub, hallways, sliding doors, windows, and escalators to outlying pods.
series journal paper
last changed 2003/04/23 15:50

_id a91e
authors Deering, M.
year 1995
title Holosketch: A virtual reality sketching/animation tool
source ACM Transactions on Computer-Human Interaction, 2(3), pp. 220-238
summary This article describes HoloSketch, a virtual reality-based 3D geometry creation and manipulation tool. HoloSketch is aimed at providing nonprogrammers with an easy-to-use 3D “What-You-See-Is-What-You-Get” environment. Using head-tracked stereo shutter glasses and a desktop CRT display configuration, virtual objects can be created with a 3D wand manipulator directly in front of the user, at very high accuracy and much more rapidly than with traditional 3D drawing systems. HoloSketch also supports simple animation and audio control for virtual objects. This article describes the functions of the HoloSketch system, as well as our experience so far with more-general issues of head-tracked stereo 3D user interface design.
series journal paper
last changed 2003/04/23 15:50

_id 5e49
authors Deering, Michael F.
year 1996
title HoloSketch: A Virtual Reality Sketching/Animation Tool (Special Issue on Virtual Reality Software and Technology)
source ACM Transactions on Computer-Human Interaction, 1995, 2(3), pp. 220-238
summary This article describes HoloSketch, a virtual reality-based 3D geometry creation and manipulation tool. HoloSketch is aimed at providing nonprogrammers with an easy-to-use 3D "What-You-See-Is-What-You-Get" environment. Using head-tracked stereo shutter glasses and a desktop CRT display configuration, virtual objects can be created with a 3D wand manipulator directly in front of the user, at very high accuracy and much more rapidly than with traditional 3D drawing systems. HoloSketch also supports simple animation and audio control for virtual objects. This article describes the functions of the HoloSketch system, as well as our experience so far with more-general issues of head-tracked stereo 3D user interface design.
keywords Computer Graphics; Picture/Image Generation; Display Algorithms; Computer Graphics; Three-Dimensional Graphics and Realism; Human Factors; 3D Animation; 3D Graphics; Graphics Drawing Systems; Graphics Painting Systems; Man-Machine Interface; Virtual Reality
series other
last changed 2002/07/07 16:01

_id 0128
authors Engeli, M., Kurmann, D. and Schmitt, G.
year 1995
title A New Design Studio: Intelligent Objects and Personal Agents
source Computing in Design - Enabling, Capturing and Sharing Ideas [ACADIA Conference Proceedings / ISBN 1-880250-04-7] University of Washington (Seattle, Washington / USA) October 19-22, 1995, pp. 155-170
doi https://doi.org/10.52842/conf.acadia.1995.155
summary As design processes and products are constantly increasing in complexity, new tools are being developed for the designer to cope with the growing demands. In this paper we describe our research towards a design environment, within which different aspects of design can be combined, elaborated and controlled. New hardware equipment will be combined with recent developments in graphics and artificial intelligence programming to develop appropriate computer based tools and find possible new design techniques. The core of the new design studio comprises intelligent objects in a virtual reality environment that exhibit different behaviours drawn from Artificial Intelligence (AI) and Artificial Life (AL) principles, a part already realised in a tool called 'Sculptor'. The tasks of the architect will focus on preferencing and initiating good tendencies in the development of the design. A first set of software agents, assistants that support the architect in viewing, experiencing and judging the design has also been conceptualised for this virtual design environment. The goal is to create an optimised environment for the designer, where the complexity of the design task can be reduced thanks to the support made available from the machine.
keywords Architectural Design, Design Process, Virtual Reality, Artificial Intelligence, Personal Agents
series ACADIA
email
last changed 2022/06/07 07:55

_id 2068
authors Frazer, John
year 1995
title AN EVOLUTIONARY ARCHITECTURE
source London: Architectural Association
summary In "An Evolutionary Architecture", John Frazer presents an overview of his work for the past 30 years, attempting to develop a theoretical basis for architecture using analogies with nature's processes of evolution and morphogenesis. Frazer's vision of the future of architecture is to construct organic buildings: thermodynamically open systems which are more environmentally aware and sustainable physically, sociologically and economically. The range of topics which Frazer discusses is a good illustration of the breadth and depth of the evolutionary design problem. Environmental Modelling: One of the first topics dealt with is the importance of environmental modelling within the design process. Frazer shows how environmental modelling is often misused or misinterpreted by architects, with particular reference to solar modelling. From the discussion given it would seem that simplification of the environmental models is the prime culprit resulting in misinterpretation and misuse. The simplifications are understandable given the amount of information needed for accurate modelling. By simplifying the model of the environmental conditions the architect is able to make informed judgments within reasonable amounts of time and effort. Unfortunately the simplifications result in errors which compound and cause the resulting structures to fall short of their anticipated performance. Frazer obviously believes that the computer can be a great aid in the harnessing of environmental modelling data, providing that the same simplifying assumptions are not made and that better models and interfaces are possible. Physical Modelling: Physical modelling has played an important role in Frazer's research, leading to the construction of several novel machine-readable interactive models, ranging from Lego-like building blocks to beermat cellular automata and wall partitioning systems. Ultimately this line of research has led to the Universal Constructor and the Universal Interactor. The Universal Constructor: The Universal Constructor features on the cover of the book. It consists of a base plug-board, called the "landscape", on top of which "smart" blocks, or cells, can be stacked vertically. The cells are individually identified and can communicate with neighbours above and below. Cells communicate with users through a bank of LEDs displaying the current state of the cell. The whole structure is machine readable and so can be interpreted by a computer. The computer can interpret the states of the cells as either colour or geometrical transformations, allowing a wide range of possible interpretations. The user interacts with the computer display through direct manipulation of the cells. The computer can communicate and even direct the actions of the user through feedback with the cells to display various states. The direct manipulation of the cells encourages experimentation by the user and demonstrates basic concepts of the system. The Universal Interactor: The Universal Interactor is a whole series of experimental projects investigating novel input and output devices. All of the devices speak a common binary language and so can communicate through a mediating central hub. The result is that input, from say a body-suit, can be used to drive the output of a sound system or vice versa.
The Universal Interactor opens up many possibilities for expression when using a CAD system that may at first seem very strange. However, some of these feedback systems may prove superior, in the hands of skilled technicians, to more standard devices. Imagine how a musician might be able to devise structures by playing melodies which express their character. Of course the interpretation of input in this form poses a difficult problem which will take a great deal of research to solve. The Universal Interactor has been used to provide environmental feedback to affect the development of evolving genetic codes. The feedback given by the Universal Interactor has been used to guide selection of individuals from a population. Adaptive Computing: Frazer completes his introduction to the range of tools used in his research by giving a brief tour of adaptive computing techniques, covering topics including cellular automata, genetic algorithms, classifier systems and artificial evolution. Cellular Automata: As previously mentioned, Frazer has done some work using cellular automata in both physical and simulated environments. Frazer discusses how surprisingly complex behaviour can result from the simple local rules executed by cellular automata. Cellular automata are also capable of computation, in fact able to perform any computation possible by a finite state machine. Note that this does not mean that cellular automata are capable of any general computation, as this would require the construction of a Turing machine, which is beyond the capabilities of a finite state machine. Genetic Algorithms: Genetic algorithms were first presented by Holland and have since become an important tool for many researchers in various areas. They were originally developed for problem-solving and optimization problems with clearly stated criteria and goals. Frazer fails to mention one of the most important differences between genetic algorithms and other adaptive problem-solving techniques, such as neural networks: genetic algorithms have the advantage that criteria can be clearly stated and controlled within the fitness function. The learning by example which neural networks rely upon does not afford this level of control over what is to be learned. Classifier Systems: Holland went on to develop genetic algorithms into classifier systems. Classifier systems are more focused upon the problem of learning appropriate responses to stimuli than on searching for solutions to problems. Classifier systems receive information from the environment and respond according to rules, or classifiers. Successful classifiers are rewarded, creating a reinforcement learning environment. Obviously, the mapping between classifier systems and the cybernetic view of organisms sensing, processing and responding to environmental stimuli is strong. It would seem that a central process similar to a classifier system would be appropriate at the core of an organic building, learning appropriate responses to environmental conditions over time. Artificial Evolution: Artificial evolution traces its roots back to the Biomorph program which was described by Dawkins in his book "The Blind Watchmaker". Essentially, artificial evolution requires that a user supplements the standard fitness function in genetic algorithms to guide evolution. The user may provide selection pressures which are unquantifiable in a stated problem and thus provide a means for dealing with ill-defined criteria.
Frazer notes that solving problems with ill-defined criteria using artificial evolution seriously limits the scope of problems that can be tackled. The reliance upon user interaction in artificial evolution reduces the practical size of populations and the duration of evolutionary runs. Coding Schemes: Frazer goes on to discuss the encoding of architectural designs and their subsequent evolution, introducing two major systems, the Reptile system and the Universal State Space Modeller. Blueprint vs. Recipe: Frazer points out the inadequacies of using standard "blueprint" design techniques in developing organic structures. Using a "recipe" to describe the process of constructing a building is presented as an alternative. Recipes for construction are discussed with reference to the analogous process description given by DNA to construct an organism. The Reptile System: The Reptile System is an ingenious construction set capable of producing a wide range of structures using just two simple components. Frazer saw the advantages of this system for rule-based and evolutionary systems in the compactness of structure descriptions. Compactness was essential for the early computational work when computer memory and storage space were scarce. However, compact representations such as those described form very rugged fitness landscapes which are not well suited to evolutionary search techniques. Structures are created from an initial "seed" or minimal construction, for example a compact spherical structure. The seed is then manipulated using a series of processes or transformations, for example stretching, shearing or bending. The structure would grow according to the transformations applied to it. Obviously, the transformations could be a predetermined sequence of actions which would always yield the same final structure given the same initial seed. Alternatively, the series of transformations applied could be environmentally sensitive, resulting in forms which were also sensitive to their location. The idea of taking a geometrical form as a seed and transforming it using a series of processes to create complex structures is similar in many ways to the early work of Latham creating large morphological charts. Latham went on to develop his ideas into the "Mutator" system which he used to create organic artworks. Generalising the Reptile System: Frazer has proposed a generalised version of the Reptile System to tackle more realistic building problems, generating the seed or minimal configuration automatically from design requirements. From this starting point (or set of starting points) solutions could be evolved using artificial evolution. Quantifiable and specific aspects of the design brief define the formal criteria which are used as a standard fitness function. Non-quantifiable criteria, including aesthetic judgments, are evaluated by the user. The proposed system would be able to learn successful strategies for satisfying both formal and user criteria. In doing so the system would become a personalised tool of the designer: a personal assistant which would be able to anticipate aesthetic judgements and other criteria by employing previously successful strategies. Ultimately, this is a similar concept to Negroponte's "Architecture Machine", which he proposed would be a computer system so personalised as to be almost unusable by other people. The Universal State Space Modeller: The Universal State Space Modeller is the basis of Frazer's current work.
It is a system which can be used to model any structure, hence the universal claim in its title. The data structure underlying the modeller is a state space of scaleless logical points, called motes. Motes are arranged in a close-packing sphere arrangement, which makes each one equidistant from its twelve neighbours. Any point can be broken down into a self-similar tetrahedral structure of logical points, giving the state space a fractal nature which allows modelling at many different levels at once. Each mote can be thought of as analogous to a cell in a biological organism. Every mote carries a copy of the architectural genetic code in the same way that each cell within an organism carries a copy of its DNA. The genetic code of a mote is stored as a sequence of binary "morons" which are grouped together into spatial configurations which are interpreted as the state of the mote. The developmental process begins with a seed. The seed develops through cellular duplication according to the rules of the genetic code. In the beginning the seed develops mainly in response to the internal genetic code, but as the development progresses the environment plays a greater role. Cells communicate by passing messages to their immediate twelve neighbours. However, a cell can also send messages directed at remote cells, without knowledge of their spatial relationship. During development cells take on specialised functions, including environmental sensors or producers of raw materials. The resulting system is process driven, without presupposing the existence of a construction set to use. The data structure can be interpreted in many ways to derive various phenotypes. The resulting structure is a by-product of the cellular activity during development and in response to the environment. As such the resulting structures have much in common with living organisms, which are also the emergent result or by-product of local cellular activity. Primordial Architectural Soups: To conclude, Frazer presents some of his most recent work, evolving fundamental structures using limited raw materials, an initial seed and massive feedback. Frazer proposes to go further and do away with the need for an initial seed, starting instead with a primordial soup of basic architectural concepts. The research is attempting to evolve the starting conditions and evolutionary processes without any preconditions. Is there enough time to evolve a complex system from the basic building blocks which Frazer proposes? The computational complexity of the task being embarked upon is not discussed. There is an implicit assumption that the "superb tactics" of natural selection are enough to cut through the complexity of the task. However, Kauffman has shown how self-organisation plays a major role in the early development of replicating systems which we may call alive. Natural selection requires a solid basis upon which it can act. Is the primordial soup which Frazer proposes of the correct constitution to support self-organisation? Kauffman suggests that one of the most important attributes of a primordial soup capable of self-organisation is a complex network of catalysts, together with the controlling mechanisms to stop the reactions from going supracritical. Can such a network be provided by primitive architectural concepts? What does it mean to have a catalyst in this domain? Conclusion: Frazer shows some interesting work both in the areas of evolutionary design and self-organising systems.
It is obvious from his work that he sympathizes with the opinions put forward by Kauffman that the order found in living organisms comes from both external evolutionary pressure and internal self-organisation. His final remarks underline this by paraphrasing the words of Kauffman, that life is always to be found on the edge of chaos. By the "edge of chaos" Kauffman is referring to the area within the ordered regime of a system close to the "phase transition" to chaotic behaviour. Unfortunately, Frazer does not demonstrate that the systems he has presented have the necessary qualities to derive useful order at the edge of chaos. He does not demonstrate, as Kauffman does repeatedly, that there exists a "phase transition" between the ordered and chaotic regimes of his systems. Nor does he make any study of the relationship between the useful forms generated by his work and the phase transition regions of his systems, should they exist. If we are to find an organic architecture, in more than name alone, it is surely to reside close to the phase transition of the construction system of which it is built. Only there, if we are to believe Kauffman, are we to find useful order together with environmentally sensitive and thermodynamically open systems which can approach the utility of living organisms.
series other
type normal paper
last changed 2004/05/22 14:12
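The review above stresses that, unlike learning-by-example techniques, a genetic algorithm states its selection criteria explicitly in the fitness function, and that Frazer's "artificial evolution" supplements that function with user judgement. Below is a minimal, generic GA sketch illustrating that structure; the bit-string encoding, parameters and fitness criterion are illustrative assumptions, not Frazer's Reptile or state-space systems.

```python
# Minimal genetic-algorithm sketch: the selection criterion lives in an
# explicit, inspectable fitness function. Problem, encoding and parameters
# are illustrative only.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 16, 30, 50, 0.02

def fitness(genome):
    # Explicit criterion: here simply the number of 1-bits. In "artificial
    # evolution" this score would be supplemented (or replaced) by a user's
    # aesthetic judgement of the decoded form.
    return sum(genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]    # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", max(fitness(g) for g in population))
```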

_id 600e
authors Gavin, Lesley
year 1999
title Architecture of the Virtual Place
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 418-423
doi https://doi.org/10.52842/conf.ecaade.1999.418
summary The Bartlett School of Graduate Studies, University College London (UCL), set up the first MSc in Virtual Environments in the UK in 1995. The course aims to synthesise and build on research work undertaken in the arts, architecture, computing and biological sciences in exploring the realms of the creation of digital and virtual immersive spaces. The MSc is concerned primarily with equipping students from design backgrounds with the skills, techniques and theories necessary in the production of virtual environments. The course examines virtual worlds as prototypes for real urban or built form and, over the last few years, has also developed an increasing interest in the practice of architecture in purely virtual contexts. The MSc course is embedded in the UK government-sponsored Virtual Reality Centre for the Built Environment, which is hosted by the Bartlett School of Architecture. This centre involves the UCL departments of architecture, computer science and geography and includes industrial partners from a number of areas concerned with the built environment, including architectural practice, surveying and estate management, as well as some software companies and the telecoms industry. The first cohort of students graduated in 1997 and predominantly found work in companies working in the new market area of digital media. This paper aims to outline the nature of the course as it stands, to examine the new and ever-increasing market for designers within digital media, and to propose possible future directions for the course.
keywords Virtual Reality, Immersive Spaces, Digital Media, Education
series eCAADe
email
more http://www.bartlett.ucl.ac.uk/ve/
last changed 2022/06/07 07:51

_id 5c5f
authors Jepson, W., Liggett, R. and Friedman, S.
year 1995
title An environment for real-time urban visualization
source Proceedings of the Symposium on Interactive 3D Graphics, Monterey, CA
summary Drawing from technologies developed for military flight simulation and virtual reality, a system for efficiently modeling and simulating urban environments has been implemented at UCLA. This system combines relatively simple 3-dimensional models (from a traditional CAD standpoint) with aerial photographs and street-level video to create a realistic (down to plants, street signs and the graffiti on the walls) model of an urban neighborhood which can then be used for interactive fly- and walk-through demonstrations. The Urban Simulator project is more than just the simulation software. It is a methodology which integrates existing systems such as CAD and GIS with visual simulation to facilitate the modeling, display, and evaluation of alternative proposed environments. It can be used to visualize neighborhoods as they currently exist and how they might appear after built intervention occurs, or the system can be used to simulate entirely new development.
series other
last changed 2003/04/23 15:50

_id c7e9
authors Maver, T.W.
year 2002
title Predicting the Past, Remembering the Future
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 November 2002, pp. 2-3
summary There has never been such an exciting moment in the extraordinary 30-year history of our subject area as NOW, when the philosophical, theoretical and practical issues of virtuality are taking centre stage. The Past: There have, of course, been other defining moments during these exciting 30 years: • the first algorithms for generating building layouts (circa 1965); • the first use of computer graphics for building appraisal (circa 1966); • the first integrated package for building performance appraisal (circa 1972); • the first computer generated perspective drawings (circa 1973); • the first robust drafting systems (circa 1975); • the first dynamic energy models (circa 1982); • the first photorealistic colour imaging (circa 1986); • the first animations (circa 1988); • the first multimedia systems (circa 1995); and • the first convincing demonstrations of virtual reality (circa 1996). Whereas the CAAD community has been hugely inventive in the development of ICT applications to building design, it has been woefully remiss in its attempts to evaluate the contribution of those developments to the quality of the built environment or to the efficiency of the design process. In the absence of any real evidence, one can only conjecture regarding the real benefits, which fall, it is suggested, under the following headings: • Verisimilitude: the extraordinary quality of still and animated images of the formal qualities of the interiors and exteriors of individual buildings and of whole neighborhoods must surely give great comfort to practitioners and their clients that what is intended, formally, is what will be delivered, i.e. WYSIWYG - what you see is what you get. • Sustainability: the power of "first-principle" models of the dynamic energetic behaviour of buildings in response to changing diurnal and seasonal conditions has the potential to save millions of dollars and dramatically to reduce the damaging environmental pollution created by badly designed and managed buildings. • Productivity: CAD is now a multi-billion dollar business which offers design decision support systems which operate, effectively, across continents, time-zones, professions and companies. • Communication: multimedia technology - cheap to deliver but high in value - is changing the way in which we can explain and understand the past and envisage and anticipate the future; virtual past and virtual future! Macromyopia: The late John Lansdown offered the view, in his wonderfully prophetic way, that "...the future will be just like the past, only more so...". So what can we expect the extraordinary trajectory of our subject area to be? To have any chance of being accurate we have to have an understanding of the phenomenon of macromyopia: the phenomenon exhibited by society of greatly exaggerating the immediate short-term impact of new technologies (particularly the information technologies) but, more importantly, seriously underestimating their sustained long-term impacts - socially, economically and intellectually.
Examples of flawed predictions regarding the future application of information technologies include: • The British Government in 1880 declined to support the idea of a national telephonic system, backed by the argument that there were sufficient small boys in the countryside to run with messages. • Alexander Bell was modest enough to say: "I am not boasting or exaggerating but I believe, one day, there will be a telephone in every American city." • Tom Watson, in 1943, said: "I think there is a world market for about 5 computers." • In 1977, Ken Olsen of Digital said: "There is no reason for any individuals to have a computer in their home." The Future: Just as the ascent of woman/man-kind can be attributed to her/his capacity to discover amplifiers of the modest human capability, so we shall discover how best to exploit our most important amplifier - that of the intellect. The more we know the more we can figure; the more we can figure the more we understand; the more we understand the more we can appraise; the more we can appraise the more we can decide; the more we can decide the more we can act; the more we can act the more we can shape; and the more we can shape, the better the chance that we can leave for future generations a truly sustainable built environment which is fit-for-purpose, cost-beneficial, environmentally friendly and culturally significant. Central to this aspiration will be our understanding of the relationship between real and virtual worlds and how to move effortlessly between them. We need to be able to design, from within the virtual world, environments which may be real or may remain virtual or, perhaps, be part real and part virtual. What is certain is that the next 30 years will be every bit as exciting and challenging as the first 30 years.
series SIGRADI
email
last changed 2016/03/10 09:55

_id 7670
authors Sawicki, Bogumil
year 1995
title Ray Tracing – New Chances, Possibilities and Limitations in AutoCAD
source CAD Space [Proceedings of the III International Conference Computer in Architectural Design] Bialystock 27-29 April 1995, pp. 121-136
summary Realistic image synthesis is nowadays widely used in engineering applications. Some of these applications, such as architectural, interior, lighting and industrial design, demand accurate visualization of non-existent scenes as they would look to us when built in reality. This can only be achieved by using physically based models of light interaction with surfaces, and by simulating the propagation of light through an environment. Ray tracing is one of the most powerful techniques used in computer graphics, and it can produce such very realistic images. The ray tracing algorithm follows the paths of light rays backwards from the observer into the scene. It is a very time-consuming process and as such could not be developed until suitable computers appeared. In recent years the technological improvements in the computer industry have brought more powerful machines with bigger storage capacities and better graphic devices. Owing to these increased hardware capabilities, successful implementation of ray tracing in different CAD software became possible also on PC machines. Ray tracing in AutoCAD r.12 - the most popular CAD package in the world - is the best example of that. AccuRender and AutoVision are AutoCAD Development System (ADS) applications that use ray tracing to create photorealistic images from 3D AutoCAD models. These "internal" applications let users generate synthetic images of three-dimensional models and scenes entirely within AutoCAD space and show the effects directly on the main AutoCAD screen. The ray tracing algorithm accurately calculates and displays shadows, transparency, diffusion, reflection, and refraction from the surface qualities of user-defined materials. The accurate modelling of light lets these tools produce sophisticated effects and high-quality images, which they always generate at 24-bit pixel depth, providing 16.7 million colours. The results can be quite impressive for some architects and are almost acceptable for others, but the coloured virtual world, which is presented by ray tracing in AutoCAD space in such a convincing way, is still not exactly the same as the real world. The main limitations of realism are due to the nature of the ray tracing method. The classical ray tracing technique takes into account the effects of light reflection from neighbouring surfaces but leaves out of account the ambient and global illumination arising out of complex interreflections in an environment. So models generated by ray tracing belong to an "ideal" world where real materials and environments cannot find their right place. We complain about that fact and say that ray tracing shows us a "too specular" world, but (...) (...) is there anything better on the horizon? It should be concluded that the typical abilities of today's graphics software and hardware are far from exploited. As has been observed in the literature, various works have been carried out with the explicit intention of overcoming all these ray tracing limitations. This research seems very promising, and we may hope that its results will be seen in CAD applications soon. As happens with modelling, perhaps the answer will come from a variety of techniques that can be combined together with ray tracing depending on the case we are dealing with.
Therefore, from the point of view of architects who try to keep alive some interest in the nature of materials and their interaction with form, "ray tracing" seems to be the right path of research and development, one that we can still follow a long way. From the point of view of the school, a critical assimilation of "ray tracing" processes is required, one that might help to determine exactly their distortions and to indicate the correct way of their development and their right place in CAAD education. I trust that ray tracing will become standard not only in AutoCAD but in all architectural space modelling CAD applications, and will be established as a powerful and real tool for experimental research in the architectural design process. Will the technological progress in the nearest future be as significant as is anticipated?
series plCAD
last changed 2000/01/24 10:08
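The abstract above summarises the backward ray tracing idea: rays are followed from the observer into the scene and shaded where they hit surfaces. Below is a minimal illustrative sketch of that idea for a single diffuse sphere rendered as ASCII shading; it deliberately omits the shadows, reflection and refraction the AutoCAD tools compute, and all scene values are made up.

```python
# Minimal backward ray tracer: one diffuse sphere, one point light,
# Lambertian shading only. Illustrative values throughout.
import math

WIDTH, HEIGHT = 40, 20
SPHERE_C, SPHERE_R = (0.0, 0.0, 3.0), 1.0
LIGHT = (2.0, 2.0, 0.0)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def hit_sphere(origin, direction):
    # Solve |o + t*d - c|^2 = r^2 for the nearest positive t (|d| = 1).
    oc = sub(origin, SPHERE_C)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_R ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Ray from the eye at the origin through pixel (i, j).
        x = (i / WIDTH - 0.5) * 2.0
        y = (0.5 - j / HEIGHT) * 2.0
        d = norm((x, y, 1.0))
        t = hit_sphere((0.0, 0.0, 0.0), d)
        if t is None:
            row += " "
        else:
            p = tuple(t * k for k in d)                    # hit point
            n = norm(sub(p, SPHERE_C))                     # surface normal
            shade = max(0.0, dot(n, norm(sub(LIGHT, p))))  # Lambert term
            row += ".:-=+*#%@"[int(shade * 8.99)]
    print(row)
```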

_id ascaad2022_099
id ascaad2022_099
authors Sencan, Inanc
year 2022
title Progeny: A Grasshopper Plug-in that Augments Cellular Automata Algorithms for 3D Form Explorations
source Hybrid Spaces of the Metaverse - Architecture in the Age of the Metaverse: Opportunities and Potentials [10th ASCAAD Conference Proceedings] Debbieh (Lebanon) [Virtual Conference] 12-13 October 2022, pp. 377-391
summary Cellular automata (CA) is a well-known computation method introduced by John von Neumann and Stanislaw Ulam in the 1940s. Since then, it has been studied in various fields such as computer science, biology, physics, chemistry, and art. The Classic CA algorithm is a calculation of a grid of cells' binary states based on neighboring cells and a set of rules. With the variation of these parameters, the CA algorithm has evolved into alternative versions such as 3D CA, Multiple neighborhood CA, Multiple rules CA, and Stochastic CA (Url-1). As a rule-based generative algorithm, CA has been used as a bottom-up design approach in the architectural design process in the search for form (Frazer,1995; Dinçer et al., 2014), in simulating the displacement of individuals in space, and in revealing complex relations at the urban scale (Güzelci, 2013). There are implementations of CA tools in 3D design software for designers as additional scripts or plug-ins. However, these often have limited ability to create customized CA algorithms by the designer. This study aims to create a customizable framework for 3D CA algorithms to be used in 3D form explorations by designers. Grasshopper3D, which is a visual scripting environment in Rhinoceros 3D, is used to implement the framework. The main difference between this work and the current Grasshopper3D plug-ins for CA simulation is the customizability and the real-time control of the framework. The parameters that allow the CA algorithm to be customized are; the initial state of the 3D grid, neighborhood conditions, cell states and rules. CA algorithms are created for each customizable parameter using the framework. Those algorithms are evaluated based on the ability to generate form. A voxel-based approach is used to generate geometry from the points created by the 3D cellular automata. In future, forms generated using this framework can be used as a form generating tool for digital environments.
series ASCAAD
email
last changed 2024/02/16 13:38
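The abstract above lists the parameters its framework exposes: the initial state of a 3D grid, the neighbourhood condition, the cell states and the rules, with surviving cells later voxelised into geometry. Below is a plain-Python sketch of one such customisable 3D CA step (not the Grasshopper plug-in itself); the grid size, neighbourhood and birth/survival rule are arbitrary examples.

```python
# One generation of a 3D cellular automaton with a customisable
# neighbourhood and birth/survival rule. All rule values are made up.
from itertools import product

SIZE = 8
BIRTH, SURVIVE = {5}, {4, 5}          # hypothetical B5/S45-style rule

def neighbours(x, y, z):
    # Moore neighbourhood: the 26 surrounding cells, wrapping at the edges.
    for dx, dy, dz in product((-1, 0, 1), repeat=3):
        if (dx, dy, dz) != (0, 0, 0):
            yield (x + dx) % SIZE, (y + dy) % SIZE, (z + dz) % SIZE

def step(alive):
    """alive is a set of (x, y, z) cells in state 1; returns the next set."""
    next_alive = set()
    for cell in product(range(SIZE), repeat=3):
        n = sum(1 for nb in neighbours(*cell) if nb in alive)
        if (cell in alive and n in SURVIVE) or (cell not in alive and n in BIRTH):
            next_alive.add(cell)
    return next_alive

# Seed: a small block in the middle of the grid; each surviving cell would
# later be turned into a voxel (a unit box) to generate geometry.
state = set(product(range(3, 5), repeat=3))
for _ in range(5):
    state = step(state)
print(len(state), "voxels after 5 generations")
```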

_id avocaad_2001_16
id avocaad_2001_16
authors Yu-Ying Chang, Yu-Tung Liu, Chien-Hui Wong
year 2001
title Some Phenomena of Spatial Characteristics of Cyberspace
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary "Space," which has long been an important concept in architecture (Bloomer & Moore, 1977; Mitchell, 1995, 1999), has attracted the interest of researchers from various academic disciplines in recent years (Agnew, 1993; Benko & Strohmayer, 1996; Chang, 1999; Foucault, 1982; Gould, 1998). Researchers from disciplines such as anthropology, geography, sociology, philosophy, and linguistics regard it as the basis of the discussion of various theories in the social sciences and humanities (Chen, 1999). On the other hand, since the invention of the Internet, Internet users have been experiencing a new and magic "world." According to the definitions in traditional architectural theories, "space" is generated whenever people define a finite void by some physical elements (Zevi, 1985). However, although the Internet is a virtual, immense, invisible and intangible world, when navigating in it we can still sense the very presence of ourselves and others in a wonderland. This sense is testified to by our naming of the Internet as Cyberspace -- an exotic kind of space. Therefore, as people nowadays rely more and more on the Internet in their daily life, and as more and more architectural scholars and designers begin to invest their efforts in the design of virtual places online (e.g., Maher, 1999; Li & Maher, 2000), we cannot help but ask whether there are indeed sensible spaces in the Internet. And if so, in what forms do these spaces exist and in what ways are they created? To join the current interdisciplinary discussion on the issue of space, and to obtain a new definition as well as an insightful understanding of "space", this study explores the spatial phenomena in the Internet. We hope that our findings will ultimately also be useful for contemporary architectural designers and scholars in their designs in the real world. As a preliminary exploration, the main objective of this study is to discover the elements involved in the creation/construction of Internet spaces and to examine the relationship between human participants and Internet spaces. In addition, this study also attempts to investigate whether participants from different academic disciplines define or experience Internet spaces in different ways, and to find which spatial elements of the Internet they emphasize the most. In order to achieve a more comprehensive understanding of the spatial phenomena in the Internet and to overcome the subjectivity of the members of the research team, the research design of this study was divided into two stages. At the first stage, we conducted a literature review to study existing theories of space (which are based on observations and investigations of the physical world). At the second stage of this study, we recruited 8 regular Internet users to approach this topic from different points of view, and to see whether people with different academic training would define and experience Internet spaces differently. The results of this study reveal that the relationship between human participants and Internet spaces is different from that between human participants and physical spaces. In the physical world, the physical elements of a space must be established first; it then begins to be regarded as a place after interaction between/among human participants or interaction between human participants and the physical environment. In contrast, in the Internet, a sense of place is first created through human interactions (or activities), and Internet participants then begin to sense the existence of a space.
Therefore, it seems that, among the many spatial elements of the Internet we found, "interaction/reciprocity" (either between/among human participants or between human participants and the computer interface) is the most crucial element. In addition, another interesting result of this study is that verbal (linguistic) elements could provoke a sense of space to a degree higher than 2D visual representation and no less than 3D visual simulations. Nevertheless, verbal and 3D visual elements seem to work in different ways in terms of cognitive behaviors: verbal elements provoke visual imagery and other sensory perceptions by "imagining" and then excite personal experiences of space; visual elements, on the other hand, provoke and excite visual experiences of space directly by "mapping". Finally, it was found that participants with different academic training did experience and define space differently. For example, when experiencing and analyzing Internet spaces, architectural designers, the creators of the physical world, emphasize the design of circulation and orientation, while participants with linguistics training focus more on subtle language usage. Visual designers tend to analyze the graphical elements of virtual spaces based on traditional painting theories; industrial designers, on the other hand, tend to treat these spaces as industrial products, emphasizing the concept of user-centredness and the control of the computer interface. The findings of this study seem to add new information to our understanding of virtual space. It would be interesting for future studies to investigate how this information influences architectural designers in their real-world practices in this digital age. In addition, to obtain a fuller picture of Internet space, further research is needed to study the same issue by examining more Internet participants who have no formal linguistics or graphical training.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 00ae
id 00ae
authors Ataman, Osman
year 1995
title Building A Computer Aid for Teaching Architectural Design Concepts
source Computing in Design - Enabling, Capturing and Sharing Ideas [ACADIA Conference Proceedings / ISBN 1-880250-04-7] University of Washington (Seattle, Washington / USA) October 19-22, 1995, pp. 187-208
doi https://doi.org/10.52842/conf.acadia.1995.187
summary Building an aid for teaching architectural design concepts is the process of elaborating topics, defining problems and suggesting to the students strategies for solving those problems. I believe students in Environment and Behavior (E&B) courses at Georgia Tech can benefit greatly from a computer based educational tool designed to provide them with experiences they currently do not possess. In particular, little time in the course (outside lectures) is devoted to applying concepts taught in the course to the studio projects. The tool I am proposing provides students with an opportunity to critique architectural environments (both simple examples and previous projects) using a single concept, "affordances". This paper describes my current progress toward realizing the goal of designing a tool that will help the students to understand particular concepts and to integrate them into their designs. It is my claim that an integrative and interactive approach - creating a learning environment and making both the students and the environment mutually supportive- is fundamentally more powerful than traditional educational methods.
series ACADIA
email
last changed 2022/06/07 07:54

_id e100
authors Bermudez, Julio and King, Kevin
year 1995
title Architecture in Digital Space: Actual and Potential Markets
source Computing in Design - Enabling, Capturing and Sharing Ideas [ACADIA Conference Proceedings / ISBN 1-880250-04-7] University of Washington (Seattle, Washington / USA) October 19-22, 1995, pp. 405-423
doi https://doi.org/10.52842/conf.acadia.1995.405
summary As both the skepticism and 'hype' surrounding electronic environments vanish under the weight of ever increasing power, knowledge, and use of information technologies, the architectural profession must prepare for significant expansion of its professional services. To address the issue, this paper offers a survey of the professional services architects and designers do and may provide in digital space, and who the potential clients are. The survey was conducted by interviews with software developers, gaming companies, programmers, investigators, practicing architects, faculty, etc. It also included reviews of actual software products and literary research of conference proceedings, journals, books and newspapers (i.e. articles, classified ads, etc.). The actual and potential markets include gaming and entertainment developments, art installations, educational applications, and research. These markets provide architects the opportunity to participate in the design of 3D gaming environments, educational software, architecture for public experience and entertainment, data representation, cyberspace and virtual reality studies, and other digital services which will be required for this new world. We will demonstrate that although the rapidly growing digital market may be seen by some to be non-architectural and thus irrelevant to our profession, it actually represents great opportunities for growth and development. Digital environments will not replace the built environment as a major architectural market, but they will significantly complement it, thus strengthening the entire architectural profession.
series ACADIA
email
last changed 2022/06/07 07:52

_id d7eb
authors Bharwani, Seraj
year 1996
title The MIT Design Studio of the Future: Virtual Design Review Video Program
source Proceedings of ACM CSCW'96 Conference on Computer-Supported Cooperative Work 1996 p.10
summary The MIT Design Studio of the Future is an interdisciplinary effort to focus on geographically distributed electronic design and work group collaboration issues. The physical elements of this virtual studio comprise networked computer and videoconferencing connections among electronic design studios at MIT in Civil and Environmental Engineering, Architecture and Planning, Mechanical Engineering, the Lab for Computer Science, and the Rapid Prototyping Lab, with WAN and other electronic connections to industry partners and sponsors to take advantage of non-local expertise and to introduce real design and construction and manufacturing problems into the equation. This prototype collaborative design network is known as StudioNet. The project is looking at aspects of the design process to determine how advanced technologies impact the process. The first experiment within the electronic studio setting was the "virtual design review", wherein jurors for the final design review were located in geographically distributed sites. The video captures the results of that project, as does a paper recently published in the journal Architectural Research Quarterly (Cambridge, UK; Vol. 1, No. 2; Dec. 1995).
series other
last changed 2002/07/07 16:01

_id c05a
authors Bridges, Alan
year 1995
title Design Precedents for Virtual Worlds
source Sixth International Conference on Computer-Aided Architectural Design Futures [ISBN 9971-62-423-0] Singapore, 24-26 September 1995, pp. 293-302
summary The usual precedents cited in relation to Cyberspace are William Gibson's book "Neuromancer" and Ridley Scott's film "Blade Runner". This paper argues that, whilst literature and film are appropriate precedents, there are more suitable sources to refer to when designing virtual worlds. The paper discusses the use of computer modelling in exploring architectonic concepts in three-dimensional space. In doing so it draws on the philosophy of simulation and gives examples from alternative film and literature sources, but concludes that one of the most appropriate metaphors is widely available in the form of the television soap opera.
keywords Design Simulation, Space, Time, Virtual Reality
series CAAD Futures
email
last changed 2003/11/21 15:16

_id 913a
authors Brutzman, D.P., Macedonia, M.R. and Zyda, M.J.
year 1995
title Internetwork Infrastructure Requirements for Virtual Environments
source NII 2000 Forum of the Computer Science and Telecommunications Board, National Research Council, Washington, D.C., May 1995
summary Virtual environments (VEs) are a broad multidisciplinary research area that includes all aspects of computer science, virtual reality, virtual worlds, teleoperation and telepresence. A variety of network elements are required to scale up virtual environments to arbitrarily large sizes, simultaneously connecting thousands of interacting players and all kinds of information objects. Four key communications components for virtual environments are found within the Internet Protocol (IP) suite: light-weight messages, network pointers, heavy-weight objects and real-time streams. Software and hardware shortfalls and successes for internetworked virtual environments provide specific research conclusions and recommendations. Since large-scale networked virtual environments are intended to include all possible types of content and interaction, they are expected to enable new classes of interdisciplinary research and sophisticated applications that are particularly suitable for implementation using the Virtual Reality Modeling Language (VRML).
series other
last changed 2003/04/23 15:50
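Of the four communications components the abstract above lists, the simplest to illustrate is the light-weight message: a small, fixed-format state update carried in a single UDP datagram. The sketch below is only an assumption-laden illustration of that idea (the field layout, port and values are invented, and it is not a protocol defined in the paper).

```python
# Illustrative "light-weight message": a fixed-format entity-state update
# sent and received as one UDP datagram. Field layout and port are made up.
import socket
import struct

STATE_FMT = "!I3f3f"   # entity id, position xyz, velocity xyz (network order)
ADDR = ("127.0.0.1", 9999)

def send_state(sock, entity_id, pos, vel):
    sock.sendto(struct.pack(STATE_FMT, entity_id, *pos, *vel), ADDR)

def recv_state(sock):
    data, _ = sock.recvfrom(struct.calcsize(STATE_FMT))
    entity_id, px, py, pz, vx, vy, vz = struct.unpack(STATE_FMT, data)
    return entity_id, (px, py, pz), (vx, vy, vz)

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(ADDR)
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

send_state(sender, 42, (1.0, 2.0, 0.5), (0.0, 0.0, 0.0))
print(recv_state(receiver))
```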
