CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 747

_id 70c4
authors Gross, M.D., Do, E.Y.-L. and Johnson, B.R.
year 2000
title Beyond the low-hanging fruit: Information technology in architectural design past, present and future
source W. Mitchell and J. Fernandez (eds), ACSA Technology Conference, MIT Press, Cambridge MA
summary Today's commercial CAD software is the product of years of research that began in the 1960s and 1970s. These applications have found widespread use in the architectural marketplace; nevertheless, they represent only the first fruits of research in computer aided design. New developments based on research in human-computer interaction (HCI), computer-supported collaborative work (CSCW), and virtual reality (VR) will result in a next generation of tools for architectural design. Although preliminary applications to design have been demonstrated in each of these areas, excellent opportunities remain to exploit new technologies and insights in service of better design software. In this paper we briefly examine each of these areas, using examples from our own work to discuss the prospects for future research. We envision that future design technologies will develop from current and traditional conventions of practice combined with forward-looking application of emerging technologies. In HCI, pen-based interaction will allow architects to use the pencil again without sacrificing the added power of computer aided design tools, and speech recognition will begin to play a role in capturing and retrieving design critique and discussion. In CSCW, a new generation of applications will address the needs of designers more closely than current general-purpose meeting tools. In VR, applications are possible that use the technology not simply to provide a sense of three-dimensional presence, but to organize design information spatially, integrating it into the representation of artifacts and places.
series other
email
last changed 2003/04/23 15:50

_id 38ff
authors Van den Heuvel, F.A.
year 2000
title Trends in CAD-based photogrammetric measurement
source International Archives of Photogrammetry and Remote Sensing, Vol. 33, Part 5/2, pp. 852-863
summary In the past few decades, Computer Aided Design (CAD) systems have evolved from 2D tools that assist in construction design to the basis of software systems for a variety of applications, such as (re)design, manufacturing, quality control, and facility management. The basic functions of a modern CAD system are storage and retrieval of 3D data, their construction, manipulation, and visualisation. All these functions are needed in a photogrammetric measurement system. Therefore, photogrammetry benefits from integration with CAD, and thereby from developments in this field. There are two main interpretations of the term CAD-based photogrammetry. The first interpretation is on a system level: there is a trend towards integration of photogrammetric tools in existing CAD systems. The second interpretation is on an algorithmic level: developments in the field of CAD regarding object modelling techniques are being implemented in photogrammetric systems. In practice, the two interpretations overlap to a varying extent. The integrated photogrammetric processing of geometry and topology is defined as a minimum requirement for CAD-based photogrammetry. The paper discusses the relation between CAD and photogrammetry with an emphasis on close-range photogrammetry. Several approaches for the integration of CAD and photogrammetry are briefly reviewed, and trends in CAD-based photogrammetry are outlined. First of all, the trend towards CAD-based photogrammetry is observed. The integration of photogrammetry and CAD increases the efficiency of photogrammetric modelling. One of the reasons for this is the improvement of the user-interface, which allows better interaction with the data. A more fundamental improvement is the use of advanced object modelling techniques such as Constructive Solid Geometry, and the incorporation of geometric object constraints. Furthermore, research emphasis is on CAD-based matching techniques for automatic precise measurement of CAD-models. 
An overall conclusion remains: the integration of photogrammetry and CAD has great potential for widening the acceptance of photogrammetry, especially in industry. This is firstly because of the improvement in efficiency, and secondly because of the established and well-known concept of CAD.
series journal paper
last changed 2003/04/23 15:50

_id fa1b
authors Haapasalo, H.
year 2000
title Creative computer aided architectural design: An internal approach to the design process
source University of Oulu (Finland)
summary This study is quite multidisciplinary. Its starting point was the inapplicability of different CAD user interfaces to architectural design. The objective of this research is to improve architectural design from the creative problem-solving viewpoint, where the main goal is to intensify architectural design by using information technology. The research is linked to the theory of methods, where an internal approach to the design process means studying the actions and thinking of architects in the design process. The research approach has been inspired by hermeneutics. The human thinking process is divided into subconscious and conscious thinking. The subconscious plays a crucial role in creative work. The opposite of creative work is systematic work, which attempts to find solutions by means of logical inference. Both creative and systematic problem solving have had periods of predominance in the history of Finnish architecture. The perceptions in the present study indicate that neither method alone can produce optimal results. Logic is one of the tools of creativity, since the analysis and implementation of creative solutions require logical thinking. The creative process cannot be controlled directly, but it can be enhanced by creating favourable working conditions for creativity. Present user interfaces can make draughting and the creation of alternatives quicker and more effective in the final stages of designing, yet only two thirds of architects use computers for working design, even though CAD systems are being acquired by a growing number of offices. Current user interfaces are inflexible for sketching. Draughting and sketching are the basic methods of creative work for architects. When working with the mouse, keyboard and screen, the natural communication channel is impaired, since there is only a weak connection between the hand and the line being drawn on the screen.
There is no direct correspondence between hand movements and the lines that appear on the screen, and important items cannot be emphasized by, for example, pressing the pencil harder than normal. In traditional sketching the pen is a natural extension of the hand, and sketching can sometimes be controlled entirely by the unconscious. Conscious effort in using the computer shifts attention away from the actual design process. However, some architects have reached a sufficiently high level of skill with computer applications to use them effectively in designing without any harmful effect on the creative process. There are several possibilities for developing CAD systems aimed at architectural design, but the practical creative design process has evolved over a long period of time, and changing it quickly would be very difficult. Although CAD has had, and will have, some evolutionary influence on the design process of architects as a whole, the future CAD user interface should adopt its features from the architect's practical and creative design process, and not vice versa.
keywords Creativity, Systematicism, Sketching
series thesis:PhD
email
more http://herkules.oulu.fi/isbn9514257545/
last changed 2003/02/12 22:37

_id 83cb
authors Telea, Alexandru C.
year 2000
title Visualisation and simulation with object-oriented networks
source Eindhoven University of Technology
summary Among existing systems, visual programming environments address these issues best. However, producing interactive simulations and visualisations is still a difficult task. This defines the main research objective of this thesis: the development and implementation of concepts and techniques to combine visualisation, simulation, and application construction in an interactive, easy-to-use, generic environment. The aim is to produce an environment in which the above-mentioned activities can be learnt and carried out easily by a researcher. Working with such an environment should reduce the amount of time usually spent redesigning existing software elements such as graphics interfaces, existing computational modules, and general infrastructure code. Writing new computational components, or importing existing ones, should be simple and automatic enough to make the envisaged system an attractive option for a non-programming expert. Besides this, all proven elements of an interactive simulation and visualisation environment should be provided, such as visual programming, graphical user interfaces, direct manipulation, and so on. Finally, a large palette of existing scientific computation, data processing, and visualisation components should be integrated into the proposed system. On the one hand, this should prove our claims of openness and easy code integration. On the other hand, it should provide the concrete set of tools needed for building a range of scientific applications and visualisations. This thesis is structured as follows. Chapter 2 defines the context of our work. The scientific research environment is presented and partitioned into the three roles of end user, application designer, and component developer. The interactions between these roles and their specific requirements are described and lead to a more precise formulation of our problem statement.
Chapter 3 presents the most widely used architectures for simulation and visualisation systems: the monolithic system, the application library, and the framework. The advantages and disadvantages of these architectural models are then discussed in relation to the requirements of our problem statement. The main conclusion drawn is that no single existing architectural model suffices, and that what is needed is a combination of the features present in all three models. Chapter 4 introduces the new architectural model we propose, based on the combination of object-orientation in the form of the C++ language and dataflow modelling in the new MC++ language. Chapter 5 presents VISSION, an interactive simulation and visualisation environment constructed on the new architectural model, and shows how the usual tasks of application construction, steering, and visualisation are addressed. In chapter 6, the implementation of VISSION's architectural model is described in terms of its component parts. Chapter 7 presents the applications of VISSION to numerical simulation, while chapter 8 focuses on its visualisation and graphics applications. Finally, chapter 9 concludes the thesis and outlines possible directions for future research.
keywords Computer Visualisation
series thesis:PhD
email
last changed 2003/02/12 22:37
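
The combination of object-orientation and dataflow modelling described in the abstract above can be illustrated with a minimal sketch. This is purely illustrative of the general architectural idea; the class and method names are hypothetical and are not VISSION's or MC++'s actual API.

```python
# Minimal sketch of an object-oriented dataflow network: each node pulls
# values from its upstream nodes and applies its own operation.
# Illustrative only; names are hypothetical, not VISSION's interface.

class Node:
    """Base dataflow node: evaluation recurses through upstream nodes."""
    def __init__(self, *upstream):
        self.upstream = upstream

    def evaluate(self):
        # Pull inputs from upstream nodes, then apply this node's operation.
        return self.compute(*(n.evaluate() for n in self.upstream))

    def compute(self, *inputs):
        raise NotImplementedError

class Source(Node):
    """A node that simply emits a constant value."""
    def __init__(self, value):
        super().__init__()
        self.value = value
    def compute(self):
        return self.value

class Add(Node):
    def compute(self, a, b):
        return a + b

class Scale(Node):
    def __init__(self, factor, upstream):
        super().__init__(upstream)
        self.factor = factor
    def compute(self, x):
        return self.factor * x

# Wire a small network computing (2 + 3) * 10.
net = Scale(10, Add(Source(2), Source(3)))
print(net.evaluate())  # 50
```

New computational components are added by subclassing `Node`, which is the kind of component integration the thesis argues a framework/dataflow hybrid makes cheap.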

_id a136
authors Blaise, J.Y., Dudek, I. and Drap, P.
year 1998
title Java collaborative interface for architectural simulations: A case study on wooden ceilings of Krakow
source International Conference On Conservation - Krakow 2000, 23-24 November 1998, Krakow, Poland
summary Concern for architectural and urban preservation problems has increased considerably in recent decades, and with it the necessity to investigate the consequences and opportunities opened for the conservation discipline by the development of computer-based systems. Architectural interventions on historical edifices, or in preserved urban fabric, confront conservationists and architects with specific problems related to the handling and exchange of a variety of historical documents and representations. The recent development of information technologies offers opportunities to favour better access to such data, as well as means to represent architectural hypotheses or designs. Developing applications for the Internet also introduces a greater capacity to exchange experiences or ideas and to invest in low-cost collaborative working platforms. In the field of the architectural heritage, our research addresses two problems: historical data and documentation of the edifice, and methods of representation (knowledge modelling and visualisation) of the edifice. This research is connected with the ARKIW POLONIUM co-operation programme that links the MAP-GAMSAU CNRS laboratory (Marseilles, France) and the Institute HAiKZ of Kraków's Faculty of Architecture. The ARKIW programme deals with questions related to the use of information technologies in the recording, protection and study of the architectural heritage. Case studies are chosen in order to test and validate a technical platform dedicated to the formalisation and exchange of knowledge related to the architectural heritage (architectural data management, representation and simulation tools, survey methods, ...). A special focus is put on the evolution of the urban fabric and on the simulation of reconstruction hypotheses.
Our contribution will introduce current ARKIW Internet applications and experiences: the ARPENTEUR architectural survey experiment on Wieża Ratuszowa (a photogrammetric survey based on an architectural model); a Gothic and Renaissance reconstruction of the Ratusz Krakowski using commercial modelling and animation software (MAYA); the SOL on-line documentation interface for Kraków's Rynek Główny; an Internet analytical approach to the presentation of morphological information about Kraków's Kramy Bogate Rynku Krakowskiego; an object-oriented approach to the modelling of the architectural corpus; and the VALIDEUR and HUBLOT virtual reality modellers for the simulation and representation of reconstruction hypotheses and corpus analysis.
series other
last changed 2003/04/23 15:14

_id 9554
authors Jagbeck, A.
year 2000
title Field test of a product-model-based construction planning tool
source CIDAC, Volume 2 Issue 2 May 2000, pp. 80-91
summary Over the past decade, more than a dozen papers describing proposals for product-model-based planning models have been published, but only a few of these proposals have been implemented in prototypes that have been tested at full scale. PreFacto is research software for production planning based on product model data, developed and tested in close cooperation with a construction company. It is operational but still under development. Assessing the degree of functionality achieved so far is a natural part of a modern cyclical software development process. This paper describes a 6-month full-scale field trial of the PreFacto system undertaken by the site management in cooperation with the author. It was carried out as a parallel planning activity on a real, ongoing project. The trial was documented, and the system's usability for the construction planning process was analysed and evaluated using mainly qualitative methods. The evaluated planning activities include importing product model data and performing a range of planning tasks. The evaluation addressed such usability aspects as system capacity, ease of use of the interface, and conceptual compliance with the use context and the various planning tasks. The test method was useful for checking the conceptual model from the user's point of view. At the same time, the field trial served equally as a case study for developers, with a degree of reality that would not have been possible in a laboratory situation. Apart from the evaluation of the features of the software itself, there are some results of general interest. The main result was that all the advantages of the system derive from the connection between design and planning, i.e. the use of a product model as a basis for defining the result of production tasks. The most relevant functions were allowing production managers to structure tasks freely and to apply resource recipes.
keywords Integration, Information, Construction, Planning, Field Trial, Product Model
series journal paper
last changed 2003/05/15 21:23

_id aa7f
authors Bollinger, Elizabeth and Hill, Pamela
year 1993
title Virtual Reality: Technology of the Future or Playground of the Cyberpunk?
source Education and Practice: The Critical Interface [ACADIA Conference Proceedings / ISBN 1-880250-02-0] Texas (Texas / USA) 1993, pp. 121-129
doi https://doi.org/10.52842/conf.acadia.1993.121
summary Jaron Lanier is a major spokesperson for our society's hottest new technology: VR, or virtual reality. He expressed his faith in the VR movement in a quote that appears in The User's Guide to the New Edge, published by Mondo 2000. In its most technical sense, VR has attracted the attention of politicians in Washington who wonder if yet another technology developed in the United States will find its application across the globe in Asia. In its most human element, an entire "cyberpunk movement" has appealed to young minds everywhere as a seemingly safe form of hallucination. As architecture students, educators, and practitioners around the world become attracted to the possibilities of VR technology as an extension of 3D modeling, visualization, and animation, it is appropriate to consider an overview of virtual reality.

In virtual reality a user encounters a computer-simulated environment through the use of a physical interface. The user can interact with the environment to the point of becoming a part of the experience, and the experience becomes reality. Natural and instinctive body movements are translated by the interface into computer commands. The quest for perfection in this human-computer relationship seems to be the essence of virtual reality technology.

To begin to capture the essence of virtual reality without first-hand experience, it is helpful to understand two important terms: presence and immersion. The sense of presence can be defined as the degree to which the user feels a part of the actual environment. The more reality the experience provides, the more presence it has. Immersion can be defined as the degree of sensory simulation a virtual reality interface provides for the viewer. A highly immersive system might provide more than just visual stimuli; for example, it may additionally provide simulated sound and motion, and simultaneously prevent distractions from being present.

series ACADIA
email
last changed 2022/06/07 07:52

_id d244
authors De Mesa, A., Quilez, J. and Regot, J.
year 2000
title Análisis Geométrico de Formas Arquitectónicas Complejas (Geometrical Analysis of Complex Architectural Forms)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 295-297
summary Current computer graphics systems allow high-level shape problems to be defined with great freedom. In free-form surface modelling, this is a good reason to develop an example that shows the best way to create, modify and control complex free-form shapes in three-dimensional architectural virtual modelling. The parameters of Bézier curves are not simple, but Spline curves permit friendly management of free-form curves with a high level of designer performance. Unfortunately, the standard computer graphics tools for controlling these entities vary widely and often present an unclear and confusing interface to generic users without deep knowledge of mathematics and geometry. With the help of an example, this paper presents the use of computer graphics to model architectural buildings with complex shapes containing free-form surfaces. At the same time, it evaluates how standard CAD software handles this problem.
series SIGRADI
last changed 2016/03/10 09:50
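
The Bézier parameters that the abstract above calls "not simple" can be made concrete with a short sketch (not code from the paper): a Bézier curve is fully determined by its control points, and a point on the curve can be computed by repeated linear interpolation (de Casteljau's algorithm).

```python
# Evaluating a Bezier curve by repeated linear interpolation
# (de Casteljau's algorithm). Illustrative sketch only; not from the paper.

def lerp(p, q, t):
    """Linear interpolation between points p and q at parameter t."""
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def bezier(points, t):
    """Evaluate a Bezier curve of any degree at parameter t in [0, 1]."""
    # Repeatedly interpolate adjacent points until one point remains.
    while len(points) > 1:
        points = [lerp(points[i], points[i + 1], t)
                  for i in range(len(points) - 1)]
    return points[0]

# A cubic curve: four control points in the plane.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(bezier(ctrl, 0.0))  # (0.0, 0.0) -- the curve starts at the first control point
print(bezier(ctrl, 1.0))  # (4.0, 0.0) -- and ends at the last
print(bezier(ctrl, 0.5))  # (2.0, 1.5) -- a point mid-curve
```

Moving one control point reshapes the whole curve, which is part of why direct control-point editing feels unintuitive to users without a geometry background, as the abstract notes.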

_id 349e
authors Durmisevic, Sanja
year 2002
title Perception Aspects in Underground Spaces using Intelligent Knowledge Modeling
source Delft University of Technology
summary Intensification, combination and transformation are the main strategies for the future spatial development of the Netherlands, as stated in the Fifth Bill regarding Spatial Planning. These strategies indicate that in the future space should be utilized in a more compact and more efficient way, requiring at the same time a re-evaluation of the existing built environment and ways to improve it. In this context the concept of multiple space usage is accentuated, focusing on intensive 4-dimensional spatial exploration. Underground space is acknowledged as an important part of multiple space usage. In the document 'Spatial Exploration 2000', underground space is recognized by policy makers as an important new 'frontier' that could provide a significant contribution to future spatial requirements. In a relatively short period, underground space has become an important research area. Although among specialists there is appreciation of what underground space could provide for densely populated urban areas, the public still has reservations, mostly related to the poor quality of these spaces. Many realized underground projects, namely subways, have resulted in poor user satisfaction. Today there is still a significant knowledge gap related to the perception of underground space, as well as a lack of detailed documentation on actual applications of the theories, research results and applied techniques. This is the case in different areas of architectural design, but it is perhaps most evident for underground spaces due to their infancy in general architectural practice. In order to create better designs, diverse aspects, very often of a qualitative nature, should be considered, with the final goal of improving the quality and image of underground space.
In the architectural design process, one has to establish certain relations among design information in advance to give the design a sound rationale. The main difficulty at this point is that such relationships may not be determinable, for various reasons. One example may be the vagueness of architectural design data due to their linguistic qualities; another may be vaguely defined design qualities. In this work, the problem was not only the initial fuzziness of the information but also determining the relevancy among all the pieces of information given. Presently, determining the existence of such relevancy is more or less a matter of subjective architectural judgement rather than systematic, non-subjective decision-making based on an existing design. This implies that tools for dealing with fuzzy information are essential for enhanced design decisions. Efficient methods and tools for dealing with qualitative, soft data are scarce, especially in the architectural domain. Traditionally well-established methods, such as statistical analysis, have been used mainly for data analysis of types similar to the present research. These methods mainly fall into the category of pattern recognition, with statistical regression being the most common approach. One essential drawback of these methods is their inability to deal efficiently with non-linear data: with statistical analysis, linear relationships are established by regression analysis, and non-linearity is mostly evaded. Concerning multi-dimensional data sets, it is evident that assuming linear relationships among all pieces of information would be a gross approximation, which one has no basis to assume. A starting point in this research was that both linearity and non-linearity may be present in the data, and therefore appropriate methods should be used to deal with that non-linearity.
Therefore, some other commensurate methods were adopted for knowledge modeling. In that respect, soft computing techniques proved to match the quality of the multi-dimensional data set subject to analysis, which is deemed to be 'soft'. There is yet another reason why soft computing techniques were applied, related to the automation of knowledge modeling. In this respect, traditional models such as Decision Support Systems and Expert Systems have drawbacks. One important drawback is that the development of these systems is time-consuming; the programming part, in which various deliberations are required to form a consistent if-then-rule knowledge-based system, is also a time-consuming activity. For these reasons, methods and tools from other disciplines that also deal with soft data should be integrated into architectural design. With fuzzy logic, the imprecision of data can be dealt with in a way similar to how humans do it. Artificial neural networks are deemed, to some extent, to model the human brain and simulate its functions in the form of parallel information processing; they are considered important components of Artificial Intelligence (AI). With neural networks it is possible to learn from examples, or more precisely to learn from input-output data samples. The combination of the neural and fuzzy approaches proved powerful for dealing with qualitative data. The problem of automated knowledge modeling is efficiently solved by employing machine learning techniques. Here, the expertise of prof. dr. Ozer Ciftcioglu in the field of soft computing was crucial for tool development. By combining knowledge from two different disciplines, a unique tool could be developed that enables intelligent modeling of the soft data needed to support the building design process. In this respect, this research is a starting point in that direction.
It is multidisciplinary and on the cutting edge between the field of Architecture and the field of Artificial Intelligence. From the architectural viewpoint, the perception of space is considered through the relationship between a human being and the built environment; techniques from the field of Artificial Intelligence are employed to model that relationship. Such an efficient combination of two disciplines makes it possible to extend our knowledge boundaries in the field of architecture and improve design quality. With additional techniques, meta-knowledge, or in other words "knowledge about knowledge", can be created. Such techniques involve sensitivity analysis, which determines how much the output of a model (comfort and public safety) depends on the information fed into the model (input). Another technique is functional relationship modeling between aspects, i.e. the derivation of the dependency of a design parameter as a function of users' perceptions. With this technique, it is possible to determine functional relationships between dependent and independent variables. This thesis is a contribution to a better understanding of users' perception of underground space, through the prism of public safety and comfort, achieved by means of intelligent knowledge modeling. In this respect, the thesis demonstrates an application of ICT (Information and Communication Technology) as a partner in the building design process, employing advanced modeling techniques. The method explained throughout this work is very generic and can be applied not only to different areas of architectural design, but also to other domains that involve qualitative data.
keywords Underground Space; Perception; Soft Computing
series thesis:PhD
email
last changed 2003/02/12 22:37
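
The idea of "learning from input-output data samples" mentioned in the abstract above can be sketched at its simplest: a single linear neuron fitted by gradient descent. This is only an illustration of the principle; the thesis uses far richer neuro-fuzzy models, and all names here are hypothetical.

```python
# A minimal illustration of learning from input-output samples:
# one linear neuron (y = w*x + b) trained by stochastic gradient descent
# on squared error. Sketch of the principle only, not the thesis's models.

def train(samples, rate=0.05, epochs=2000):
    """Fit w and b so that w*x + b approximates the sample targets."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x + b
            err = pred - target
            w -= rate * err * x   # gradient of squared error w.r.t. w
            b -= rate * err      # gradient of squared error w.r.t. b
    return w, b

# Input-output samples drawn from the (unknown to the learner) rule y = 2x + 1.
samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(samples)
print(round(w, 2), round(b, 2))  # approximately 2.0 and 1.0
```

The same pull-parameters-toward-the-data loop, scaled up to many neurons and combined with fuzzy membership functions, is the mechanism behind the neuro-fuzzy modeling the abstract describes.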

_id 6072
authors Orzechowski, M.A., Timmermans, H.J.P. and De Vries, B.
year 2000
title Measuring user satisfaction for design variations through virtual reality
source Timmermans, H.J.P. & Vries, B. de (eds.) Design & Decision Support Systems in Architecture - Proceedings of the 5th International Conference, August 22-25 2000, Nijkerk, pp. 278-288
summary Virtual Reality (VR) and Artificial Intelligence (AI) technology have become increasingly common in all disciplines of modern life. These new technologies range from simple software assistants to sophisticated modeling of human behavior. In this research project, we are creating an AI agent environment that helps architects to identify user preferences through a Virtual Reality interface. At the current stage of development, the research project has resulted in a VR application, MuseV2, that allows users to instantly modify an architectural design. The distinctive feature of this application is that a space is considered as a base for all user modifications and as a connection between all design elements. In this paper we provide some technical information about MuseV2. Presenting a design through VR allows AI agents to observe user-induced modifications and to gather preference information. In addition to allowing for an individualized design, this information, generalized across a sample of users, should provide the basis for developing basic designs for particular market segments and predicting the market potential of those designs. The system that we envision should not become an automated design tool, but an adviser and viewer for users who have limited or no knowledge of CAD systems and architectural design. This tool should help investors to assess preferences for new community housing in order to meet the needs of future inhabitants.
series other
email
last changed 2003/04/23 15:50

_id 08ea
authors Clayton, Mark J. and Vasquez de Velasco, Guillermo P. (Eds.)
year 2000
title ACADIA 2000: Eternity, Infinity and Virtuality in Architecture
source Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8 / Washington D.C. 19-22 October 2000, 284 p.
doi https://doi.org/10.52842/conf.acadia.2000
summary Eternity, time without end, infinity, space without limits and virtuality, perception without constraints; provide the conceptual framework in which ACADIA 2000 is conceived. It is in human nature to fill what is empty and to empty what is full. Today, thanks to the power of computer processing we can also make small what is too big, make big what is too small, make fast what is too slow, make slow what is too fast, make real what does not exist, and make our reality omni-present at global scale. These are capabilities for which we have no precedents. What we make of them is our privilege and responsibility. Information about a building flows past our keyboards and on to other people. Although we, as architects, add to the information, it originated before us and will go beyond our touch in time, space and understanding. A building description acquires a life of its own that may surpass our own lives as it is stored, transferred, transformed, and reused by unknown intellects, both human and artificial, and in unknown processes. Our actions right now have unforeseen effects. Digital media blurs the boundaries of space, time and our perception of reality. ACADIA 2000 explores the theme of time, space and perception in relation to the information and knowledge that describes architecture. Our invitation to those who are finding ways to apply computer processing power in architecture received overwhelming response, generating paper submissions from five continents. A selected group of reviewers recommended the publication of 24 original full papers out of 42 submitted and 13 short papers out of 30 submitted. Forty-two projects were submitted to the Digital Media Exhibit and 12 were accepted for publication. The papers cover subjects in design knowledge, design process, design representation, design communication, and design education. 
Fundamental and applied research has been carefully articulated, resulting in developments that may have an important impact on the way we practice and teach architecture in the future.
series ACADIA
email
more www.acadia.org
last changed 2022/06/07 07:49

_id 9a1e
authors Clayton, Mark J. and Vasquez de Velasco, Guillermo
year 1999
title Stumbling, Backtracking, and Leapfrogging: Two Decades of Introductory Architectural Computing
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 151-158
doi https://doi.org/10.52842/conf.ecaade.1999.151
summary Our collective concept of computing and its relevance to architecture has undergone dramatic shifts in emphasis. A review of popular texts from the past reveals the biases and emphases that were current. In the seventies, architectural computing was generally seen as an elective for data processing specialists. In the early eighties, personal computers and commercial CAD systems were widely adopted. Architectural computing diverged from the "batch" world into the "interactive" world. As personal computing matured, introductory architectural computing courses turned away from a foundation in programming toward instruction in CAD software. By the late eighties, Graphic User Interfaces and windowing operating systems had appeared, leading to a profusion of architecturally relevant applications that needed to be addressed in introductory computing. The introduction of desktop 3D modeling in the early nineties led to increased emphasis upon rendering and animation. The past few years have added new emphases, particularly in the area of network communications, the World Wide Web and Virtual Design Studios. On the horizon are topics of electronic commerce and knowledge markets. This paper reviews these past and current trends and presents an outline for an introductory computing course that is relevant to the year 2000.
keywords Computer-Aided Architectural Design, Computer-Aided Design, Computing Education, Introductory Courses
series eCAADe
email
last changed 2022/06/07 07:56

_id 5477
authors Donath, D., Kruijff, E., Regenbrecht, H., Hirschberg, U., Johnson, B., Kolarevic, B. and Wojtowicz, J.
year 1999
title Virtual Design Studio 1998 - A Place2Wait
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 453-458
doi https://doi.org/10.52842/conf.ecaade.1999.453
summary This article reports on the recent, geographically and temporally distributed, intercollegiate Virtual Design Studio based on the 1998 implementation of the Phase(x) environment. Students participating in this workshop had to create a place to wait in the form of a folly. This design task was divided into five logical parts, called phases. Each phase had to be finished within a specific timeframe (one day), after which the results were stored in a common data repository: an online MSQL database environment which holds, besides the presentations (consisting of text, 3D models and rendered images), basic project information such as the descriptions of the phases and design process visualization tools. This approach to collaborative work is better known as memetic engineering and has been used successfully in several educational programs and past Virtual Design Studios. During the workshop, students made use of a variety of tools, including modeling tools (specifically Sculptor), video-conferencing software and rendering programs. The project distinguishes itself from previous Virtual Design Studios by leaving the design task more open, thereby focusing on the design process itself. From this perspective, this paper represents both a continuation of existing reports on previous Virtual Design Studios and a specific extension through this focus. Specific attention is given to how the different collaborating parties dealt with data flow and modification, the crux of any successful effort to cooperate on a common design task.
keywords Collaborative design, Design Process, New Media Usage, Global Networks
series eCAADe
email
last changed 2022/06/07 07:55

_id 06e8
authors Knight, Michael W. and Brown, Andre G.P.
year 2000
title Interfaces for Virtual Environments: A Review of Recent Developments and Potential Ways Forward
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 215-219
doi https://doi.org/10.52842/conf.ecaade.2000.215
summary The physical and visual nature of the interface devices and media that enable the human agent to interact with a virtual world has evolved over the past few years. In this paper we consider the different lines of development that have taken place in the refinement of these interfaces and summarise what has been learned about the appropriate nature of the interface for such interaction. In terms of the physical aspects, we report on the kinds of devices that have been used to enable the human agent to operate within the computer-generated environment. We point out the successes and failures in the systems that have been tried out in recent years. Likewise, we consider the kinds of software-generated interfaces that have been used to represent virtual worlds. Again, we review the efficacy of the environments that have been devised: the quality of the Cyberplace. Our aim is to be able to comment on the effectiveness of the systems that have been devised from a number of points of view. We consider the physical and software-based aids for navigation; the nature of the representation of architectural worlds; strengthening “groundedness”; the inclusion of “otherness”; and reinforcement of the idea of “presence”.
keywords Virtual Environments, Games Engines, Collaborative Design, Navigation Metaphors
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:51

_id b76e
authors Liggett, Robin S.
year 2000
title Automated facilities layout: past, present and future
source Automation in Construction 9 (2) (2000) pp. 197-215
summary This paper reviews the history of automated facility layout, focusing particularly on a set of techniques which optimize a single objective function. Applications of algorithms to a variety of space allocation problems are presented and evaluated. Guidelines for future implementations of commercial systems are suggested.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 3e01
authors Linnert, C., Encarnacao, M., Storck, A. and Koch, V.
year 2000
title Virtual Building Lifecycle - Giving architects access to the future of buildings by visualizing lifecycle data
source ICCCBE8, Stanford, August 2000
summary Today’s software for architects and civil engineers lacks support for the evaluation and improvement of building lifecycles. Facility Management Systems and 4D-CAD try to integrate lifecycle data and make them more accessible, but miss the investigation of the development of the structure itself. Much money is inappropriately spent when materials with different life expectancies are combined in the wrong way and building parts are repaired or replaced too early or too late. With the methods of scientific visualization and real-time 3D graphics these deficiencies can be eliminated. The project “Virtual Building Lifecycle” (short VBLC, [W-VBLC]) connects 3D geometrical information to research data such as life expectancy and emissions and to standard database information like prices. The automated visualization of critical points of the structure in the past, present and future is a huge advantage and helps engineers to improve the duration of the lifecycle and reduce the costs.
keywords Visualization; lifecycle; virtual building; realtime 3D graphics; architectural database; 4D-CAD; Facility Management
series other
email
last changed 2003/02/26 18:58

_id 2e50
authors Ozersay, Fevzi and Szalapaj, Peter
year 1999
title Theorising a Sustainable Computer Aided Architectural Education Model
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 186-195
doi https://doi.org/10.52842/conf.ecaade.1999.186
summary The dogmatic structure of architectural education has meant that the production and application of new educational theories, leading to educational models that use computer technology as their central medium of education, is still a relatively under-explored area. Partial models cannot deliver the expected bigger steps, but only bits and pieces. Curricula developments, at many schools of architecture, have been carried out within the closed-circuit manner of architectural education, through expanding the traditional curricula and integrating computers into them. There is still no agreed curriculum in schools of architecture that defines, at least conceptually, the use of computers within it. Do we really know what we are doing? In the words of Aart Bijl: 'If I want to know what I am doing, I need a separate description of my doing it, a theory' [Bijl, 1989]. The word 'sustainability' is defined as understanding the past and responding to the present with concern for the future. Applying this definition to architectural education, this paper aims to outline the necessity and the principles for the construction of a theory of a sustainable computer aided architectural education model, which could lead to an architectural education that is lasting.
keywords Architectural Education, Educational Theories, Computers, Sustainable Models
series eCAADe
email
last changed 2022/06/07 08:00

_id ga0001
id ga0001
authors Soddu, Celestino
year 2000
title From Forming to Transforming
source International Conference on Generative Art
summary The ancient codes of harmony stem from the human vision of the complexity of nature. They allow us to think the possible, to design it and to carry out its realization. The first gesture of every designer is to take, in a new application that is born from a need, the opportunity to experiment with a possible harmonic code, and to operate in the evolution of the project so that this code buds and breeds beauty as a mirror of the complexity and wonder of nature. In this design activity, project after project, every architect builds his own code. This is strongly present, in diverse ways, in every architect. The code of harmony, born from the attention of every man to the complexity of nature, manifests itself in an interpretation, logical and therefore feasible, of the laws of formalization of relationships. Every interpretation is different and belongs to the oneness of every architect. Every interpretative code stems from, and reveals, our approach to the world, our cultural references, our history, our present and the memory of our past. Each idea is born as a representation of the interpretative code, which is a cryptic and subjective code even if it refers constantly to the history of man. Generative art is the maximum expression of this human challenge: it traces a code as a reference to the complexity of nature, and it makes it feasible. So man is the craftsman of the possible, according to the laws of natural harmony. What does a code of harmony contain? As with all codes, it contains rules that trace certain behaviors. It is not therefore a sequence or a database of events and forms, but a definition of behaviors: the transformations. To choose forms and to put them together is an activity that can also resemble that of a designer, but essentially it is the activity of the client. The designer does not choose forms but operates transformations, because only by doing so can he put a code of harmony into effect.
Between transforming and choosing forms one can trace the borderline between architects and clients, between those who design and those who choose the designed objects. This difference must be reconsidered especially today, because we are moving toward a hybridization in which the client wants to feel like a designer, even if he only chooses, and the designer, using sophisticated tools, works as a chooser between different solutions, in practice as a client. To design, to create through transformations, is, however, an activity that takes time. Generative design, by building a usable and upgradable code, makes time virtual and therefore allows the architect, even in a speeded-up world like today's, to design and reach levels of complexity that mirror the complexity of nature and its beauty.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id ga0101
id ga0101
authors Tanzini, Luca
year 2000
title Universal City
source International Conference on Generative Art
summary "Universal City" is a multimedia performance that documents the evolution of the city in history. Whereas in the past the city was symbolically the world, today the world has become a city. The city rose up in an area once scattered and disorganized for so long that most of its ancient elements of culture were destroyed. It absorbed and re synthesized the remnants of this culture, cultivating power and efficiency. By means of this concentration of physical and cultural power, the city accelerated the rhythm of human relationships and converted their products into forms that are easily stockpiled and reproduced. Along with monuments, written documents and ordered associative organizations amplified the impact of all human activities, extending backwards and forwards over time. Since the beginning however, law and order stood alongside brute force, and power was always determined by these new institutions. Written law served to produce a canon of justice and equality that claimed a higher principle: the king's will, synonymous with divine command. The Urban Neolithic Revolution is comparable only to the Industrial Revolution, and the Media Technology in our own era. There is of course a substantial difference: ours is an era of immeasurable technological progress as an end in itself, which leads to the explosion of the city, and the consequent dissemination of its structure across the countryside. The old walled city has not only fallen, it's buried its foundations. Our civilization flees from every possibility of control, by means of its own extra resources not controllable by the egregious ambitions of man. The image of modern industrialization that Charlie Chaplin resurrected from the past in "Modern Times" is the exact opposite of contemporary metropolitan reality. He figured the worker as a slave chained to his machine and fed by machinery as he continued to work at maintaining the machine itself. 
Today the workplace is not so brutal, but automation has made it much more oppressive. Energy and dedication once directed towards the production process are today shifted towards consumption. The metropolis, in the final phase of its evolution, is becoming a collective mechanism for maintaining the function of this system, and for giving the illusion of power, wealth, happiness, and total success to those who are, in actuality, its victims. It is a concept foreign to the modern metropolitan mentality that life should be an occasion to Live, and not an excuse for generating newspaper articles, television interviews, or mass spectacles for those who know nothing better. Instead the process continues, until people prefer the simulacrum to the real, where image dominates over object, the copy over the original, representation over reality, appearance over Being. The first phase of the Economy's domination over social life brought about the visible degradation of every human accomplishment from "Being" into "Having". The present phase of social life's total occupation by the accumulated effects of the Economy is leading to a general downslide from "Having" into "Seeming". The performance is based on the instantaneous interaction between video and music: the video component is assembled in real time with RandomCinema, software that I developed, and projected on a screen. The music-noise is the product of radical human improvisation together with automatic computer processes. Everything is based on treating the element of chance as a stimulus for opening up the greatest range of options. The unpredictable helps to reveal things as they happen. The montage, the music, and their interaction are born and die at the same moment: there are no stage directions or scripts.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id b0e7
authors Ahmad Rafi, M.E. and Karboulonis, P.
year 2000
title The Re-Convergence of Art and Science: A Vehicle for Creativity
source CAADRIA 2000 [Proceedings of the Fifth Conference on Computer Aided Architectural Design Research in Asia / ISBN 981-04-2491-4] Singapore 18-19 May 2000, pp. 491-500
doi https://doi.org/10.52842/conf.caadria.2000.491
summary Ever-increasing complexity in product design and the need to deliver a cost-effective solution that benefits from a dynamic approach require the employment and adoption of innovative design methods which ensure that products are of the highest quality and meet or exceed customers' expectations. According to Bronowski (1976) science and art were originally two faces of the same human creativity. However, as civilisation advanced and work became specialised, the dichotomy of science and art gradually became apparent. Hence scientists and artists were born, and began to develop work that was polar opposite. The sense of beauty itself became separated from science and was confined within the field of art. This dichotomy has persisted through mankind's efforts in advancing civilisation to its present state. This paper briefly examines the relationship between art and science through the ages and discusses their relatively recent re-convergence. Based on this hypothesis, this paper studies the current state of the convergence between arts and sciences and examines the current relationship between the two by considering real-world applications and products. The study of such products, and of the success and impact they had in the marketplace owing to their designs and aesthetics rather than to their advanced technology (which had partially failed them), appears to support this argument. This text further argues that a re-convergence between art and science is currently occurring and highlights the need for accelerating this process. It is suggested that re-convergence is a result of new technologies adopted by practitioners, including effective visualisation and communication of ideas and concepts. Such elements are widely found today in multimedia and Virtual Environments (VEs), where such tools offer increased power and new abilities to both scientists and designers as both venture into each other's domains.
This paper highlights the need for the employment of emerging computer-based real-time interactive technologies that are expected to enhance the design process through real-time prototyping and visualisation, better decision-making, higher-quality communication and collaboration, fewer errors and reduced design cycles. Effective employment and adoption of innovative design methods that ensure products are delivered on time and within budget, are of the highest quality and meet customer expectations is becoming ever more important. Such tools and concepts are outlined and their roles in the industries they currently serve are identified. Case studies from differing fields are also examined. It is also suggested that Virtual Reality interfaces should be used and given access to Computer Aided Design (CAD) model information and data so that users may interrogate virtual models for additional information and functionality. Adoption and application of such integrated technologies over the Internet and their relevance to electronic commerce are also discussed. Finally, emerging software and hardware technologies are outlined, and case studies from the architecture, electronic games, and retail industries among others are discussed; the benefits are subsequently put forward to support the argument. The requirements for adopting such technologies, in terms of finance, required skills and process management, are also considered and outlined.
series CAADRIA
email
last changed 2022/06/07 07:54
