CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.

Hits 1 to 20 of 745

_id 9384
authors Burry, M., Datta, S. and Anson, S.
year 2000
title Introductory Computer Programming as a Means for Extending Spatial and Temporal Understanding
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 129-135
doi https://doi.org/10.52842/conf.acadia.2000.129
summary Should computer programming be taught within schools of architecture? Incorporating even low-level computer programming within architectural education curricula is a matter of debate, but we have found it useful to do so for two reasons: as an introduction to, or at least a consolidation of, the realm of descriptive geometry, and as an environment for experimenting with morphological, time-based change. Mathematics and descriptive geometry formed a significant proportion of architectural education until the end of the 19th century. This proportion has declined in contemporary curricula, possibly at some cost, for despite major advances in automated manufacture, Cartesian measurement is still the principal ‘language’ with which to describe a building for construction purposes. When computer programming is used as a platform for instruction in logic and spatial representation, the waning interest in mathematics as a basis for spatial description can be readdressed using a left-field approach. Students gain insights into topology, Cartesian space and morphology through programmatic form finding, as opposed to through direct manipulation. In this context, it matters to the architect-programmer how the program operates more than what it does. This paper describes an assignment where students are given a figurative conceptual space comprising the three Cartesian axes with a cube at its centre. Six Phileban solids mark the Cartesian axial limits to the space. Any point in this space represents a hybrid of one, two or three transformations from the central cube towards the various Phileban solids. Students are asked to predict the topological and morphological outcomes of the operations. Through programming, they become aware of morphogenesis and hybridisation. Here we articulate the hypothesis above and report on the outcome from a student group, whose work reveals wider learning opportunities for architecture students in computer programming than conventionally assumed.
series ACADIA
email
last changed 2022/06/07 07:54
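
The assignment described in the summary above (a cube at the centre of the three Cartesian axes, six Phileban solids at the axis limits, and any point in the space read as a hybrid of transformations from the cube towards those solids) can be illustrated with a small sketch. The code below is not from the paper: the choice of primitives, their radial-support representation and the blending rule are assumptions made only to show the idea of programmatic form finding by interpolation.

import numpy as np

# Radial support functions: distance from the centre to the surface of a unit
# primitive along a unit direction d. These primitives merely stand in for the
# "Phileban solids" named in the abstract.
def r_cube(d):       return 1.0 / np.abs(d).max()
def r_sphere(d):     return 1.0
def r_octahedron(d): return 1.0 / np.abs(d).sum()
def r_cylinder(d):   return 1.0 / max(np.hypot(d[0], d[1]), abs(d[2]))

# One primitive per axis limit (+x, -x, +y, -y, +z, -z); the assignment of
# solids to axes is purely illustrative.
AXIS_SOLIDS = [r_sphere, r_octahedron, r_cylinder, r_sphere, r_octahedron, r_cylinder]

def hybrid_radius(point, d):
    """Read a point in [-1, 1]^3 as blend weights and return the radius of the
    hybrid solid along unit direction d (the central cube sits at the origin)."""
    p = np.clip(np.asarray(point, dtype=float), -1.0, 1.0)
    w = np.concatenate([np.maximum(p, 0.0), np.maximum(-p, 0.0)])  # six axis weights
    w_cube = max(0.0, 1.0 - w.sum())                               # remainder stays cube
    total = w_cube + w.sum()
    r = w_cube * r_cube(d) + sum(wi * f(d) for wi, f in zip(w, AXIS_SOLIDS))
    return r / total

# A point halfway along +x is read as a 50/50 hybrid of the cube and the +x solid.
d = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
print(hybrid_radius([0.5, 0.0, 0.0], d))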

_id 1206
authors Cabezas, M., Mariano, C., Mitolo, S. and Oliva, S.
year 1999
title Transformaciones en el Proceso Enseñanza-Aprendizaje de la Geometría Descriptiva con la Aplicación de los Medios Digitales (Transformations in the Teaching/Learning Process of Descriptive Geometry with the Application of Digital Media)
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 347-348
summary The use of digital technologies in the classroom has become widespread in a significant way. One example is the high percentage of students in the introductory visual communication course who reported general knowledge of software use, as well as the voluntary submission of practical work produced with digital media. The need to respond to these student requirements rests on the conviction that the subject matter is pedagogically compatible with computer-assisted teaching, which would increase the iconicity and the understanding of a topic of considerable complexity such as the geometry of space. An educational program designed for the teaching of the Sistema Monge, whose general characteristics were presented at the II Ibero-American Seminar on Digital Graphics and which will be applied as a pilot experience in the 2000 course, will allow us to answer the following question: what place will the educational program be given in the formation process in relation to the other pedagogic means?
series SIGRADI
email
last changed 2016/03/10 09:47

_id 28f3
authors Alvarado, R.G., Vildósola, G.V., Parra, J.C. and Jara, M.R.
year 2000
title Creacion/Creatividad: Evaluando Diseños Arquitectónicos con Realidad Virtual (Creation/Creativity: Evaluating Architectural Designs by means of Virtual Reality)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 243-246
summary Can the computer improve architectural creativity? This question is explored through a Virtual Reality system developed for the modelling of timber structures, based on parametric elements, constructive programming and immersive real-time visualization. Evaluation experiments were carried out with advanced architecture students who used the system at the beginning of their projects, compared with another group that did not use it. This research addresses the possibility of rationalizing part of the creative process in architecture, broadening the role of the computer and its contribution to design quality, and extending the possibilities for teaching and sharing the creation of a project. It is argued that the major potential in this field lies in the swiftness, formal variety and spatial experience of design, challenging the differences between the objective and the subjective.
series SIGRADI
email
last changed 2016/03/10 09:47

_id ga0007
authors Coates, Paul and Miranda, Pablo
year 2000
title Swarm modelling. The use of Swarm Intelligence to generate architectural form
source International Conference on Generative Art
summary '...neither the human purposes nor the architect's method are fully known in advance. Consequently, if this interpretation of the architectural problem situation is accepted, any problem-solving technique that relies on explicit problem definition, on distinct goal orientation, on data collection, or even on non-adaptive algorithms will distort the design process and the human purposes involved.' Stanford Anderson, "Problem-Solving and Problem-Worrying". The work concentrates on the use of the computer as a perceptive device, a sort of virtual hand or "sense", capable of prompting an environment. From a set of data that constitutes the environment (in this case the geometrical representation of the form of the site) this perceptive device is capable of differentiating and generating distinct patterns in its behavior, patterns that an observer has to interpret as meaningful information. As Nicholas Negroponte explains, referring to the project GROPE in his Architecture Machine: 'In contrast to describing criteria and asking the machine to generate physical form, this exercise focuses on generating criteria from physical form.' 'The onlooking human or architecture machine observes what is "interesting" by observing GROPE's behavior rather than by receiving the testimony that this or that is "interesting".' The swarm as a learning device. In this case the work implements a Swarm as a perceptive device. Swarms constitute a paradigm of parallel systems: a multitude of simple individuals aggregate in colonies or groups, giving rise to collaborative behaviors. The individual sensors can't learn, but the swarm as a system can evolve into more stable states. These states generate distinct patterns, a result of the inner mechanics of the swarm and of the particularities of the environment. The dynamics of the system allow it to learn and adapt to the environment; information is stored in the speed of the sensors (the more collisions, the slower), which acts as a memory. The speed increases in the absence of collisions, providing the system with the ability to forget, indispensable for the differentiation of information and the emergence of patterns. The swarm is both a perceptive and a spatial phenomenon. To be able to interact with an environment, an observer requires some sort of embodiment. In the case of the swarm, its algorithms for moving, collision detection, and swarm mechanics make up its perceptive body. The way this body interacts with its environment in the process of learning and differentiation of spatial patterns also constitutes a spatial phenomenon. The enactive space of the Swarm. Enaction, a concept developed by Maturana and Varela for the description of perception in biological terms, is the understanding of perception as the result of the structural coupling of an environment and an observer. Enaction does not address cognition in the currently conventional sense as an internal manipulation of extrinsic 'information' or 'signals', but as the relation between environment and observer and the blurring of their identities. Thus, the space generated by the swarm is an enactive space, a space without explicit description, an invention of the swarm-environment structural coupling. If we consider a gestalt as 'Some property -such as roundness- common to a set of sense data and appreciated by organisms or artefacts' (Gordon Pask), the swarm is also able to differentiate spatial 'gestalts', or spaces with certain characteristics, such as 'narrowness' or 'fluidness', etc.
Implicit surfaces and the wrapping algorithm. One of the many ways of describing this space is through the use of implicit surfaces. An implicit surface may be imagined as an infinitesimally thin band of some measurable quantity such as color, density, temperature, pressure, etc. Thus, an implicit surface consists of those points in three-space that satisfy some particular requirement. This allows us to wrap the regions of space where a difference of quantity has been produced, enclosing the spaces in which particular events in the history of the Swarm have occurred. The wrapping method allows complex topologies, such as manifoldness in one continuous surface. It is possible to transform the information generated by the swarm into a landscape that is the result of the swarm's particular reading of the site. Working in real time. Because of the complex nature of the machine, the only possible way to evaluate the resulting behavior is in real time. For this purpose specific applications had to be developed, using OpenGL for the Windows programming environment. The package consisted of translators from DXF format to a specific format used by these applications and vice versa, the Swarm "engine" (a simulated parallel environment), and the Wrapping programs to generate the implicit surfaces. Different versions of each have been produced at different stages of development of the work.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
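
The summary above is quite concrete about the swarm's learning mechanism: a collision slows an agent down (speed acts as memory), and speed recovers in the absence of collisions (forgetting), so that slow agents end up marking particular regions of the site. A minimal sketch of that update rule is given below; the parameters and the toy "site" standing in for the DXF geometry are assumptions, not the authors' code.

import random

class Agent:
    def __init__(self, pos):
        self.pos = list(pos)
        self.speed = 1.0                      # speed doubles as the agent's memory

    def step(self, collides, decay=0.5, recover=1.05):
        # random walk scaled by current speed
        self.pos = [p + self.speed * random.uniform(-1, 1) for p in self.pos]
        if collides(self.pos):
            self.speed *= decay               # collision: slow down, i.e. remember
        else:
            self.speed = min(1.0, self.speed * recover)   # no collision: forget

def narrow_corridor(pos):
    """Toy 'site': collide outside the band |y| <= 2 (a stand-in for site geometry)."""
    return abs(pos[1]) > 2

swarm = [Agent([random.uniform(-5, 5), random.uniform(-5, 5), 0]) for _ in range(50)]
for _ in range(200):
    for a in swarm:
        a.step(narrow_corridor)

# Agents that stay slow have been colliding repeatedly: their positions trace
# the constrained parts of the environment, the kind of pattern an observer
# would read as "narrowness".
slow = [a.pos for a in swarm if a.speed < 0.3]
print(len(slow), "agents mark constrained regions")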

_id ga0008
authors Koutamanis, Alexander
year 2000
title Redirecting design generation in architecture
source International Conference on Generative Art
summary Design generation has been the traditional culmination of computational design theory in architecture. Motivated either by programmatic and functional complexity (as in space allocation) or by the elegance and power of representational analyses (shape grammars, rectangular arrangements), research has produced generative systems capable of producing new designs that satisfied certain conditions or of reproducing exhaustively entire classes (such as all possible Palladian villas), comprising known and plausible new designs. Most generative systems aimed at a complete spatial design (detailing being an unpopular subject), with minimal if any intervention by the human user / designer. The reason for doing so was either to give a demonstration of the elegance, power and completeness of a system or simply that the replacement of the designer with the computer was the fundamental purpose of the system. In other words, the problem was deemed either already resolved by the generative system or too complex for the human designer. The ongoing democratization of the computer stimulates reconsideration of the principles underlying existing design generation in architecture. While the domain analysis upon which most systems are based is insightful and interesting, jumping to a generative conclusion was almost always based on a very sketchy understanding of human creativity and of the computer's role in designing and creativity. Our current perception of such matters suggests a different approach, based on the augmentation of intuitive creative capabilities with computational extensions. The paper proposes that architectural generative design systems can be redirected towards design exploration, including the development of alternatives and variations. Human designers are known to follow inconsistent strategies when confronted with conflicts in their designs. These strategies are not made more consistent by the emerging forms of design analysis. The use of analytical means such as simulation, coupled with the necessity of considering a rapidly growing number of aspects, means that the designer is confronted with huge amounts of information that have to be processed and integrated in the design. Generative design exploration that can combine the analysis results in directed and responsive redesigning seems an effective method for the early stages of the design process, as well as for partial (local) problems in later stages. The transformation of generative systems into feedback support and background assistance for the human designer presupposes re-orientation of design generation with respect to the issues of local intelligence and autonomy. Design generation has made extensive use of local intelligence but has always kept it subservient to global schemes that tended to be holistic, rigid or deterministic. The acceptance of local conditions as largely independent structures (local coordinating devices) affords a more flexible attitude that permits not only the emergence of internal conflicts but also the resolution of such conflicts in a transparent manner. The resulting autonomy of local coordinating devices can be expanded to practically all aspects and abstraction levels.
The ability to have intelligent behaviour built into components of the design representation, as well as into the spatial and building elements they signify, means that we can create the new, sharper tools required by the complexity resulting from the interpretation of the built environment as a dynamic configuration of co-operating yet autonomous parts that have to be considered independently and in conjunction with each other. P.S. The content of the paper will be illustrated by a couple of computer programs that demonstrate the principles of local intelligence and autonomy in redesigning. It is possible that these programs could be presented as independent interactive exhibits, but it all depends upon the time we can make free for the development of self-sufficient, self-running demonstrations until December.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 349e
authors Durmisevic, Sanja
year 2002
title Perception Aspects in Underground Spaces using Intelligent Knowledge Modeling
source Delft University of Technology
summary Intensification, combination and transformation are the main strategies for the future spatial development of the Netherlands, as stated in the Fifth Bill regarding Spatial Planning. These strategies indicate that in the future, space should be utilized in a more compact and more efficient way, requiring, at the same time, re-evaluation of the existing built environment and finding ways to improve it. In this context, the concept of multiple space usage is accentuated, which would focus on intensive 4-dimensional spatial exploration. The underground space is acknowledged as an important part of multiple space usage. In the document 'Spatial Exploration 2000', the underground space is recognized by policy makers as an important new 'frontier' that could provide a significant contribution to future spatial requirements. In a relatively short period, the underground space became an important research area. Although among specialists there is appreciation of what underground space could provide for densely populated urban areas, there are still reserved feelings among the public, which mostly relate to the poor quality of these spaces. Many realized underground projects, namely subways, resulted in poor user satisfaction. Today, there is still a significant knowledge gap related to the perception of underground space. There is also a lack of detailed documentation on actual applications of the theories, followed by research results and applied techniques. This is the case in different areas of architectural design, but for underground spaces it is perhaps most evident due to their infancy in general architectural practice. In order to create better designs, diverse aspects, which are very often of a qualitative nature, should be considered in perspective with the final goal of improving the quality and image of underground space. In the architectural design process, one has to establish certain relations among design information in advance, so that the design is backed by a sound rationale. The main difficulty at this point is that such relationships may not be determined due to various reasons. One example may be the vagueness of the architectural design data due to linguistic qualities in them. Another may be vaguely defined design qualities. In this work, the problem was not only the initial fuzziness of the information but also the desired relevancy determination among all pieces of information given. Presently, to determine the existence of such relevancy is more or less a matter of subjective architectural judgement rather than systematic, non-subjective decision-making based on an existing design. This implies that the invocation of certain tools dealing with fuzzy information is essential for enhanced design decisions. Efficient methods and tools to deal with qualitative, soft data are scarce, especially in the architectural domain. Traditionally well-established methods, such as statistical analysis, have been used mainly for data analysis focused on types of data similar to those in the present research. These methods mainly fall into a category of pattern recognition. Statistical regression methods are the most common approaches towards this goal. One essential drawback of this method is the inability to deal efficiently with non-linear data. With statistical analysis, the linear relationships are established by regression analysis, where dealing with non-linearity is mostly evaded.
Concerning the presence of multi-dimensional data sets, it is evident that the assumption of linear relationships among all pieces of information would be a gross approximation for which one has no basis. A starting point in this research was that there may be both linearity and non-linearity present in the data, and therefore appropriate methods should be used to deal with that non-linearity. Therefore, some other commensurate methods were adopted for knowledge modeling. In that respect, soft computing techniques proved to match the quality of the multi-dimensional data-set subject to analysis, which is deemed to be 'soft'. There is yet another reason why soft-computing techniques were applied, which is related to the automation of knowledge modeling. In this respect, traditional models such as Decision Support Systems and Expert Systems have drawbacks. One important drawback is that the development of these systems is a time-consuming process. The programming part, in which various deliberations are required to form a consistent if-then rule knowledge-based system, is also a time-consuming activity. For these reasons, the methods and tools from other disciplines, which also deal with soft data, should be integrated into architectural design. With fuzzy logic, the imprecision of data can be dealt with in a similar way to how humans do it. Artificial neural networks are deemed to some extent to model the human brain, and simulate its functions in the form of parallel information processing. They are considered important components of Artificial Intelligence (AI). With neural networks, it is possible to learn from examples, or more precisely to learn from input-output data samples. The combination of the neural and fuzzy approaches proved powerful for dealing with qualitative data. The problem of automated knowledge modeling is efficiently solved by the employment of machine learning techniques. Here, the expertise of prof. dr. Ozer Ciftcioglu in the field of soft computing was crucial for tool development. By combining knowledge from two different disciplines, a unique tool could be developed that enables intelligent modeling of the soft data needed to support the building design process. In this respect, this research is a starting point in that direction. It is multidisciplinary and on the cutting edge between the field of Architecture and the field of Artificial Intelligence. From the architectural viewpoint, the perception of space is considered through the relationship between a human being and a built environment. Techniques from the field of Artificial Intelligence are employed to model that relationship. Such an efficient combination of two disciplines makes it possible to extend our knowledge boundaries in the field of architecture and improve design quality. With additional techniques, meta-knowledge, or in other words "knowledge about knowledge", can be created. Such techniques involve sensitivity analysis, which determines the amount of dependency of the output of a model (comfort and public safety) on the information fed into the model (input). Another technique is functional relationship modeling between aspects, which is the derivation of the dependency of a design parameter as a function of users' perceptions. With this technique, it is possible to determine functional relationships between dependent and independent variables.
This thesis is a contribution to a better understanding of users' perception of underground space, through the prism of public safety and comfort, which was achieved by means of intelligent knowledge modeling. In this respect, this thesis demonstrated an application of ICT (Information and Communication Technology) as a partner in the building design process by employing advanced modeling techniques. The method explained throughout this work is very generic and can be applied not only to different areas of architectural design but also to other domains that involve qualitative data.
keywords Underground Space; Perception; Soft Computing
series thesis:PhD
email
last changed 2003/02/12 22:37
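
One of the meta-knowledge techniques named in the summary, sensitivity analysis, determines how strongly a model output (for example comfort) depends on each input. A minimal finite-difference sketch of that idea follows; the stand-in model, feature names and data are illustrative assumptions, not the thesis's neuro-fuzzy tool.

import numpy as np

def sensitivities(model, X, eps=1e-2):
    """Average |d output / d input_i| over the rows of X by input perturbation."""
    X = np.asarray(X, dtype=float)
    base = model(X)
    sens = []
    for i in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, i] += eps
        sens.append(np.mean(np.abs(model(Xp) - base)) / eps)
    return np.array(sens)

# Stand-in for a trained perception model: comfort rises with lighting,
# falls with noise, rises slightly with ceiling height (all hypothetical).
def toy_comfort(X):
    lighting, noise, ceiling = X[:, 0], X[:, 1], X[:, 2]
    return np.tanh(0.8 * lighting - 0.5 * noise + 0.2 * ceiling)

X_survey = np.random.rand(200, 3)            # normalised survey/measurement data
for name, s in zip(["lighting", "noise", "ceiling height"],
                   sensitivities(toy_comfort, X_survey)):
    print(f"{name:15s} sensitivity ~ {s:.2f}")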

_id 9a1e
authors Clayton, Mark J. and Vasquez de Velasco, Guillermo
year 1999
title Stumbling, Backtracking, and Leapfrogging: Two Decades of Introductory Architectural Computing
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 151-158
doi https://doi.org/10.52842/conf.ecaade.1999.151
summary Our collective concept of computing and its relevance to architecture has undergone dramatic shifts in emphasis. A review of popular texts from the past reveals the biases and emphases that were current. In the seventies, architectural computing was generally seen as an elective for data processing specialists. In the early eighties, personal computers and commercial CAD systems were widely adopted. Architectural computing diverged from the "batch" world into the "interactive" world. As personal computing matured, introductory architectural computing courses turned away from a foundation in programming toward instruction in CAD software. By the late eighties, Graphic User Interfaces and windowing operating systems had appeared, leading to a profusion of architecturally relevant applications that needed to be addressed in introductory computing. The introduction of desktop 3D modeling in the early nineties led to increased emphasis upon rendering and animation. The past few years have added new emphases, particularly in the area of network communications, the World Wide Web and Virtual Design Studios. On the horizon are topics of electronic commerce and knowledge markets. This paper reviews these past and current trends and presents an outline for an introductory computing course that is relevant to the year 2000.
keywords Computer-Aided Architectural Design, Computer-Aided Design, Computing Education, Introductory Courses
series eCAADe
email
last changed 2022/06/07 07:56

_id 9403
authors De Carvalho, Silvana Sá
year 2000
title A Telemática e o Meio Técnico- Científico-Informacional: Um Olhar sobre o Urbano (Telematics and Technical Scientific-Information Environment: An Urban View)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 160-162
summary The instantaneous nature of globalized information has brought places closer together and homogenized space, eliminating regional differences. Contemporary urban architecture and the technical-scientific-informational quality of the human-made environment innovate the rationality of the dominant actors in society. The field of telecommunications has developed substantially in the last 30 years, and today we are participants in a digital era that has not only shortened distances but also revolutionized the concepts of time and space. Telematics is a fundamental element of cities at the end of the millennium and has become a new instrument of social control. Electronic vigilance systems, as an application of telematics, are now widely used in cities, and a new urban space is being configured based on this dynamic. This paper is an introductory essay on the topic, which is essential to the understanding of urban spatial dynamics, and its objective is to point out fields for future research.
series SIGRADI
email
last changed 2016/03/10 09:50

_id 1f5c
authors Beesley, Philip and Seebohm, Thomas
year 2000
title Digital Tectonic Design
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 287-290
doi https://doi.org/10.52842/conf.ecaade.2000.287
summary Digital tectonic design is a fresh approach to architectural design methodology. Tectonics means a focus on assemblies of construction elements. Digital tectonics is an evolving methodology that integrates use of design software with traditional construction methods. We see digital tectonic design as a systematic use of geometric and spatial ordinances, used in combination with details and components directly related to contemporary construction. The current approach will, we hope, lead to an architectural curriculum based on generative form making where the computer can be used to produce systems of forms algorithmically. Digital design has tended to remain abstract, emphasizing visual and spatial arrangements often at the expense of materials and construction. Our pursuit is translation of these methods into more fully realized physical qualities. This method offers a rigorous approach based on close study of geometry and building construction elements. Giving a context for this approach, historical examples employing systematic tectonic design are explored in this paper. The underlying geometric ordinance systems and the highly tuned relationships between the details in these examples offer design vocabularies for use within the studio curriculum. The paper concludes with a detailed example from a recent studio project demonstrating particular qualities developed within the method. The method involves a wide range of scales, relating large-scale gestural and schematic studies to detailed assembly systems. Designing in this way means developing geometric strategies and, in parallel, producing detailed symbols or objects to be inserted. These details are assembled into a variety of arrays and groups. The approach is analogous to computer-aided design's tradition of shape grammars in which systems of spatial relationships are used to control the insertion of shapes within a space. Using this approach, a three-dimensional representation of a building is iteratively refined until the final result is an integrated, systematically organized complex of symbols representing physical building components. The resulting complex offers substantial material qualities. Strategies of symbol insertions and hierarchical grouping of elements are familiar in digital design practice. However these strategies are usually used for automated production of preconceived designs. In contrast to these normal approaches, this presentation focuses on emergent qualities produced directly by means of the complex arrays of symbol insertions. The rhyth
keywords 3D CAD Systems, Design Practice, 3D Design Strategies
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:54
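
The summary above describes designing as controlling arrays of symbol insertions through geometric ordering systems, analogous to shape-grammar insertion of shapes within a space. The sketch below illustrates that notion with a simple bay grid driving component insertions; the class, the grid and the component name are hypothetical, not the authors' method.

from dataclasses import dataclass

@dataclass
class Insertion:
    symbol: str            # name of a detail/component definition
    x: float
    y: float
    rotation: float = 0.0

def bay_grid(nx, ny, bay=3.6):
    """Ordering geometry: column-grid points for an nx-by-ny structural bay layout."""
    return [(i * bay, j * bay) for i in range(nx + 1) for j in range(ny + 1)]

def populate(points, symbol):
    """Insert one symbol per ordering point; iterative refinement would swap or nest symbols."""
    return [Insertion(symbol, x, y) for x, y in points]

model = populate(bay_grid(4, 2), "timber_column_base")   # hypothetical component name
print(len(model), "insertions;", model[0])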

_id 36ab
authors Chiu, M.-L., Lin, Y.-T., Tseng, K.-W. and Chen, C.-H.
year 2000
title Museum of Interface. Designing the Virtual Environment
source CAADRIA 2000 [Proceedings of the Fifth Conference on Computer Aided Architectural Design Research in Asia / ISBN 981-04-2491-4] Singapore 18-19 May 2000, pp. 471-480
doi https://doi.org/10.52842/conf.caadria.2000.471
summary A virtual environment (VE) has been designed to function as a three-dimensional interface to a repository of images and sounds. This paper attempts to study design interfaces in VEs. The study first examines the characteristics of VEs. The difference between physical and virtual environments is also studied, and the relationship between the two is classified into three types: complement, replacement, or independence. It then establishes the design interface in VEs and presents an experimental project, the virtual architecture museum (VAM). Four elements of VEs are highlighted: wayfinding, linkage, context, and atmosphere. In VAM, the interface is implemented on the web and is integrated with an architectural database. It is found that an appropriate design interface can enhance the users' spatial awareness and consequently facilitate the task of navigation and wayfinding within VEs. The context and atmosphere of VEs can be defined by means of simile or metaphor, through visual or acoustic experience, to give users a sense of place.
series CAADRIA
email
last changed 2022/06/07 07:55

_id 7e01
authors Mark, Earl
year 2000
title A Prospectus on Computers Throughout the Design Curriculum
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 77-83
doi https://doi.org/10.52842/conf.ecaade.2000.077
summary Computer aided architectural design has spread throughout architecture schools in the United States as if sown upon the wind. Yet, the proliferation alone may not be a good measure of the computer’s impact on the curriculum or signify the true emergence of a digital design culture. The aura of a relatively new technology may blind us from understanding its actual place in the continuum of design education. The promise of the technology is to completely revolutionize design; however, the reality of change is perhaps rooted in an underlying connection to core design methods. This paper considers a transitional phase within a School reviewing its entire curriculum. Lessons may be found in the Bauhaus educational program at the beginning of the 20th century and its response to the changing shape of society and industry.
keywords Pedagogy, Computer Based Visualization, Spatial and Data Analysis Methods, Interdisciplinary Computer Based Models
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:55

_id 2759
authors Hotten, Robert D. and Diprose, Peter R.
year 2000
title From Dreamtime to QuickTime: The Resurgence of the 360-Degree Panoramic View as a Form of Computer-Synthesised Architectural Representation.
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 155-162
doi https://doi.org/10.52842/conf.acadia.2000.155
summary The conference theme ‘eternity, infinity and virtuality’ may be considered in terms of time, space and the other. One form of representation that captures all three of these fundamental dimensions, at a glance, is the 360-degree panorama, a medium that is currently making a comeback in the architectural studio. This paper explores the use of the computer-synthesised panorama as a means of representing architectural space and landscape experience, and as a method of informing the design. The panoramic mural is differentiated from two subcategories of QTVR panorama, the subjective and the objective. The use of panoramic views enables landscape architecture students to design using a 2D image format which can be rendered to provide a 3D spatial effect. In summary, the paper contends that the process of design, in architectural practice and in architectural education, is significantly enhanced by the dynamic representations of time and/or space offered by the computer-synthesised panorama.
series ACADIA
last changed 2022/06/07 07:50

_id 97fc
authors Lonsway, Brian
year 2000
title Testing the Space of the Virtual
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 51-61
doi https://doi.org/10.52842/conf.acadia.2000.051
summary Various modes of electronically mediated communication, perception, and immersive bodily engagement, generally categorized as “virtual experiences,” have offered the designer of space a new array of spatial conditions to address. Each of these modes of virtual experience, from text-based discussion forums to immersive virtual reality environments, presents challenges to traditional assumptions about space and its inhabitation. These challenges require design theorization which extends beyond the notions of design within the electronic space (the textual description of the chat forum, the appearance of the computer generated imagery, etc.), and require a reconsideration of the entire electronic and physical apparatus of the mediating devices (the physical spaces which facilitate the interaction, the manner of their connection to the virtual spaces, etc.). In light of the lack of spatial theorization in this area, this paper both presents an experimental framework for understanding this complete space of the virtual and outlines a current research project addressing these theoretical challenges through the spatial implementation of a synthetic environment.
series ACADIA
last changed 2022/06/07 07:52

_id c991
authors Moorhouse, Jon and Brown, Gary
year 1999
title Autonomous Spatial Redistribution for Cities
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 678-684
doi https://doi.org/10.52842/conf.ecaade.1999.678
summary The paper investigates an automated methodology for the appropriate redistribution of usable space in distressed areas of inner cities. This is achieved by categorising activity space and making these spaces morphologically mobile in relation to the topography within a representative artificial space. The educational module has been influenced by theories from the natural environment, which possess patterns that have inherent evolutionary programmes in which the constituents are recyclable; information is strategically related to the environment to produce forms of growth and behaviour. Artificial landscape patterns fail to evolve; the inhabited landscape needs a means of starting from simplicity and building into the most complex of systems that are capable of re-permutation over time. The paper then describes the latest methodological development in terms of a shift from the use of the computer as a tool for data manipulation to embracing the computer as a design partner. The use of GDL in particular is investigated as a facilitator for such generation within a global, vectorial environment.
keywords Animated, Urban, Programme, Education, Visual Database
series eCAADe
email
last changed 2022/06/07 07:58

_id 4003
authors Nakakoji, K., Yamamoto, Y., Takada, S. and Reeves, B.
year 2000
title Two-Dimensional Spatial Positioning as a Means for Reflection in Design
source Proceedings of DIS'00: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2000 pp. 145-154
summary In the realm of computer support for design, developers have focused primarily on power and expressiveness that are important in framing a design solution. They assume that design is a series of calculated steps that lead to a clearly specified goal. The problem with this focus is that the resulting tools hinder the very process that is critical in early phases of a design task; the reflection-in-action process [15]. In the early phases, what is required as the most important ingredient for a design tool is the ability to interact in ways that require as little commitment as possible. This aspect is most evident in domains where two dimensions play a role, such as sketching in architecture. Surprisingly, it is equally true in linear domains such as writing. In this paper, we present our approach of using two-dimensional positioning of objects as a means for reflection in the early phases of a design task. Taking writing as an example, the ART (Amplifying Representational Talkback) system uses two dimensional positioning to support the early stages of the writing task. An eye-tracking user study illustrates important issues in the domain of computer support for design.
keywords Information Systems; User/Machine Systems; Cognitive Models; Reflection-In-Action; Two-Dimensional Positioning; Writing Support
series other
last changed 2002/07/07 16:01

_id 899f
authors Papamichael, K., Pal, V., Bourassa, N., Loffeld, J. and Capeluto, I.G.
year 2000
title An Expandable Software Model for Collaborative Decision-Making During the Whole Building Life Cycle
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 19-28
doi https://doi.org/10.52842/conf.acadia.2000.019
summary Decisions throughout the life cycle of a building, from design through construction and commissioning to operation and demolition, require the involvement of multiple interested parties (e.g., architects, engineers, owners, occupants and facility managers). The performance of alternative designs and courses of action must be assessed with respect to multiple performance criteria, such as comfort, aesthetics, energy, cost and environmental impact. Several stand-alone computer tools are currently available that address specific performance issues during various stages of a building’s life cycle. Some of these tools support collaboration by providing means for synchronous and asynchronous communications, performance simulations, and monitoring of a variety of performance parameters involved in decisions about a building during building operation. However, these tools are not linked in any way, so significant work is required to maintain and distribute information to all parties. In this paper we describe a software model that provides the data management and process control required for collaborative decision-making throughout a building’s life cycle. The requirements for the model are delineated addressing data and process needs for decision making at different stages of a building’s life cycle. The software model meets these requirements and allows addition of any number of processes and support databases over time. What makes the model infinitely expandable is that it is a very generic conceptualization (or abstraction) of processes as relations among data. The software model supports multiple concurrent users, and facilitates discussion and debate leading to decision-making. The software allows users to define rules and functions for automating tasks and alerting all participants to issues that need attention. It supports management of simulated as well as real data and continuously generates information useful for improving performance prediction and understanding of the effects of proposed technologies and strategies.
keywords Decision Making, Integration, Collaboration, Simulation, Building Life Cycle, Software.
series ACADIA
email
last changed 2022/06/07 08:00
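
The software model in the summary above is characterised as "a very generic conceptualization of processes as relations among data", with user-defined rules that alert participants. A minimal sketch of such a data/process registry follows; all names, the daylight relation and the alert rule are illustrative assumptions, not the published model.

from typing import Callable, Dict, List

class BuildingModel:
    def __init__(self):
        self.data: Dict[str, float] = {}        # shared data store for all parties
        self.processes: List[tuple] = []        # (input names, output name, function)
        self.rules: List[tuple] = []            # (predicate over data, alert message)

    def add_process(self, inputs, output, fn: Callable):
        self.processes.append((inputs, output, fn))

    def add_rule(self, predicate: Callable, message: str):
        self.rules.append((predicate, message))

    def run(self):
        # Each process is a relation among data: compute outputs, then fire alerts.
        for inputs, output, fn in self.processes:
            if all(k in self.data for k in inputs):
                self.data[output] = fn(*[self.data[k] for k in inputs])
        return [msg for pred, msg in self.rules if pred(self.data)]

model = BuildingModel()
model.data.update({"window_area": 12.0, "floor_area": 80.0})
# A hypothetical process relating data items: a crude daylight ratio.
model.add_process(["window_area", "floor_area"], "daylight_ratio", lambda w, f: w / f)
model.add_rule(lambda d: d.get("daylight_ratio", 1.0) < 0.2,
               "Alert: daylight ratio below target -- notify architect and engineer")
print(model.run(), model.data)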

_id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions; there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simple closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques. Figure 3 Trellis interpreted with "graphic ivy". Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric" 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
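
Of the three Gliftic colour-scheme types listed in the summary, the HSV scheme with a user-chosen amount of variation is easy to sketch. The snippet below is an illustration under assumed parameter names, not Gliftic's actual code.

import colorsys
import random

def hsv_scheme(hue, saturation, value, variation, n=8):
    """Return n RGB colours scattered around (hue, saturation, value) by +/- variation."""
    colours = []
    for _ in range(n):
        h = (hue + random.uniform(-variation, variation)) % 1.0
        s = min(1.0, max(0.0, saturation + random.uniform(-variation, variation)))
        v = min(1.0, max(0.0, value + random.uniform(-variation, variation)))
        colours.append(colorsys.hsv_to_rgb(h, s, v))
    return colours

# A small variation gives a near-monochrome scheme; a large one lets colours
# roam far from the chosen HSV settings, as the summary describes.
print(hsv_scheme(hue=0.08, saturation=0.7, value=0.8, variation=0.05))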

_id eef4
authors Senagala, Mahesh
year 2000
title Architecture, Speed, and Relativity: On the Ethics of Eternity, Infinity, and Virtuality
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 29-37
doi https://doi.org/10.52842/conf.acadia.2000.029
summary The main purpose of this essay is to provide a critical framework and raise a debate to understand the spatial and temporal impact of information technologies on architecture. As the world moves from geopolitics to chronopolitics, architecture, with its traditional boundaries still vociferously guarded, is becoming further marginalized into sectors of mere infrastructure. The essay begins by clarifying the notions of space, time, and speed through a phenomenological interpretation of the Minkowskian/Einsteinian notion of relativistic space-time. Drawing from the cultural critiques offered by Paul Virilio, Marshall McLuhan, and Jacques Ellul, the essay argues that we are at the end of the reign of space-based institutions and are transitioning rapidly into a time-based culture.
keywords Space-time, Virtuality, Critical Theory, Ethics
series ACADIA
email
last changed 2022/06/07 07:56

_id ga0101
authors Tanzini, Luca
year 2000
title Universal City
source International Conference on Generative Art
summary "Universal City" is a multimedia performance that documents the evolution of the city in history. Whereas in the past the city was symbolically the world, today the world has become a city. The city rose up in an area once scattered and disorganized for so long that most of its ancient elements of culture were destroyed. It absorbed and re synthesized the remnants of this culture, cultivating power and efficiency. By means of this concentration of physical and cultural power, the city accelerated the rhythm of human relationships and converted their products into forms that are easily stockpiled and reproduced. Along with monuments, written documents and ordered associative organizations amplified the impact of all human activities, extending backwards and forwards over time. Since the beginning however, law and order stood alongside brute force, and power was always determined by these new institutions. Written law served to produce a canon of justice and equality that claimed a higher principle: the king's will, synonymous with divine command. The Urban Neolithic Revolution is comparable only to the Industrial Revolution, and the Media Technology in our own era. There is of course a substantial difference: ours is an era of immeasurable technological progress as an end in itself, which leads to the explosion of the city, and the consequent dissemination of its structure across the countryside. The old walled city has not only fallen, it's buried its foundations. Our civilization flees from every possibility of control, by means of its own extra resources not controllable by the egregious ambitions of man. The image of modern industrialization that Charlie Chaplin resurrected from the past in "Modern Times" is the exact opposite of contemporary metropolitan reality. He figured the worker as a slave chained to his machine and fed by machinery as he continued to work at maintaining the machine itself. Today the workplace is not so brutal, but automation has made it much more oppressive. Energy and dedication once directed towards the production process are today shifted towards consumption. The metropolis in the final phase of its evolution, is becoming a collective mechanism for maintaining the function of this system, and for giving the illusion of power, wealth, happiness, and total success, to those who are, in actuality, its victims. It is a concept foreign to the modern metropolitan mentality that life should be an occasion to Live, and not an excuse for generating newspaper articles, television interviews, or mass spectacles for those who know nothing better. Instead the process continues, until people prefer the simulacrum to the real, where image dominates over object, the copy over the original, representation over reality, appearance over Being. The first phase of the Economy's domination over social life brought about the visible degradation of every human accomplishment from "Being" into "Having". The present phase of social life's total occupation by the accumulated effects of the Economy is leading to a general downslide from "Having" into "Seeming". The performance is based on the instantaneous interaction between video and music: the video component is assembled in real time with RandomCinema a software that I developed and projected on a screen. The music-noise is the product of human radical improvisation togheter automatic-computer process. Everything is based on the consideration of the element of chance as a stimulus for the construction of the most options. 
The unpredictable helps to reveal things as they happen. The montage, the music, and their interaction, are born and die and the same moment: there are no stage directions or scripts.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 90ad
authors Voigt, A., Walchhofer, H.P. and Linzer, H.
year 1999
title The Historico-cultural Past as Spatial-related Cognition Archives: Computer-assisted Methods in the History of Urban Development, Archeology and History of Art
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 672-677
doi https://doi.org/10.52842/conf.ecaade.1999.672
summary The implementation of computer-assisted visualizing methods in the study of historico-cultural facts provides archeological and historico-cultural research with a tool that helps consolidate knowledge derived from assumptions. The visualizing methods presently available through the use of computers have advanced to an extent that justifies their implementation in the field of archeological and historico-cultural research. The present contribution covers the above matters by means of a variety of applied examples, performed at the Institute for Local Planning at the Vienna University of Technology, dealing with the history of urban development, archeology and the history of art.
keywords Historico-cultural Past, Reconstruction, Visualizing Methods
series eCAADe
email
last changed 2022/06/07 07:58
