CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 726

_id sigradi2006_e183a
id sigradi2006_e183a
authors Costa Couceiro, Mauro
year 2006
title La Arquitectura como Extensión Fenotípica Humana - Un Acercamiento Basado en Análisis Computacionales [Architecture as human phenotypic extension – An approach based on computational explorations]
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 56-60
summary The study describes some of the aspects tackled within current Ph.D. research in which architectural applications of constructive, structural and organizational processes existing in biological systems are considered. The present information-processing capacity of computers and specific software developments have made it possible to create a bridge between two disciplines of a holistic nature: architecture and biology. The crossover between these disciplines entails a methodological paradigm shift towards one based on the dynamical aspects of forms and compositions. Recent studies on artificial-natural intelligence (Hawkins, 2004) and developmental-evolutionary biology (Maturana, 2004) have added fundamental knowledge about the role of analogy in the creative process and the relationship between forms and functions. The dimensions and restrictions of Evo-Devo concepts are analyzed, developed and tested by software that combines parametric geometries, L-systems (Lindenmayer, 1990), shape grammars (Stiny and Gips, 1971) and evolutionary algorithms (Holland, 1975) as a way of testing new architectural solutions within computable environments. Lamarck's (1744-1829) and Weismann's (1834-1914) theoretical approaches to evolution, in which significant opposing views can be found, are also considered. Lamarck's theory assumes that an individual effort towards a specific evolutionary goal can cause change in descendants. Weismann, on the other hand, held that the germ cells are not affected by anything the body learns or any ability it acquires during its life, and cannot pass this information on to the next generation; this is called the Weismann barrier. Lamarck's widely rejected theory has recently found a new place in artificial and natural intelligence research as a valid explanation of some aspects of the evolution of human knowledge, that is, the deliberate change of paradigms in the intentional search for solutions.
Just as the analogy between genetics and architecture (Estévez and Shu, 2000) is useful for understanding and programming emergent complexity phenomena (Hopfield, 1982) in architectural solutions, so too the consideration of architecture as a product of the human extended phenotype can help us better understand its cultural dimension.
keywords evolutionary computation; genetic architectures; artificial/natural intelligence
series SIGRADI
email
last changed 2016/03/10 09:49
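As a concrete illustration of one technique named in the abstract above, here is a minimal deterministic L-system rewriter in the spirit of Lindenmayer's formalism; the axiom and rules are the classic algae example, not ones taken from the thesis itself:

```python
# Minimal deterministic L-system rewriter (illustrative sketch; the
# axiom and rules below are the textbook algae example, not rules
# from the thesis described above).
def rewrite(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        # Apply every production rule in parallel; symbols without a
        # rule are copied unchanged.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "AB", "B": "A"}
print(rewrite("A", rules, 4))  # ABAABABA
```

An evolutionary layer, as the abstract suggests, would then mutate and select among rule sets rather than hand-writing them.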

_id 349e
authors Durmisevic, Sanja
year 2002
title Perception Aspects in Underground Spaces using Intelligent Knowledge Modeling
source Delft University of Technology
summary The intensification, combination and transformation are the main strategies for the future spatial development of the Netherlands, as stated in the Fifth Bill regarding Spatial Planning. These strategies indicate that in the future, space should be utilized in a more compact and more efficient way, requiring, at the same time, re-evaluation of the existing built environment and finding ways to improve it. In this context, the concept of multiple space usage is accentuated, which would focus on intensive 4-dimensional spatial exploration. The underground space is acknowledged as an important part of multiple space usage. In the document 'Spatial Exploration 2000', the underground space is recognized by policy makers as an important new 'frontier' that could provide a significant contribution to future spatial requirements. In a relatively short period, the underground space has become an important research area. Although among specialists there is appreciation of what underground space could provide for densely populated urban areas, there are still reserved feelings among the public, which mostly relate to the poor quality of these spaces. Many realized underground projects, namely subways, have resulted in poor user satisfaction. Today, there is still a significant knowledge gap related to the perception of underground space. There is also a lack of detailed documentation on actual applications of the theories, followed by research results and applied techniques. This is the case in different areas of architectural design, but for underground spaces it is perhaps most evident due to their infancy in general architectural practice. In order to create better designs, diverse aspects, which are very often of a qualitative nature, should be considered in perspective, with the final goal of improving the quality and image of underground space.
In the architectural design process, one has to establish certain relations among design information in advance, so that the design is backed by a sound rationale. The main difficulty at this point is that such relationships may not be determinable for various reasons. One example may be the vagueness of architectural design data due to the linguistic qualities in them. Another may be vaguely defined design qualities. In this work, the problem was not only the initial fuzziness of the information but also the determination of the desired relevancy among all the pieces of information given. Presently, determining the existence of such relevancy is more or less a matter of subjective architectural judgement rather than systematic, non-subjective decision-making based on an existing design. This implies that the invocation of certain tools dealing with fuzzy information is essential for enhanced design decisions. Efficient methods and tools to deal with qualitative, soft data are scarce, especially in the architectural domain. Traditionally well-established methods, such as statistical analysis, have been used mainly for the analysis of data of types similar to those in the present research. These methods mainly fall into the category of pattern recognition. Statistical regression methods are the most common approaches towards this goal. One essential drawback of these methods is the inability to deal efficiently with non-linear data. With statistical analysis, linear relationships are established by regression analysis, while dealing with non-linearity is mostly evaded. Concerning the presence of multi-dimensional data sets, it is evident that the assumption of linear relationships among all pieces of information would be a gross approximation, which one has no basis to assume. A starting point in this research was that there may be both linearity and non-linearity present in the data, and therefore appropriate methods should be used in order to deal with that non-linearity.
Therefore, some other commensurate methods were adopted for knowledge modeling. In that respect, soft computing techniques proved to match the quality of the multi-dimensional data-set subject to analysis, which is deemed to be 'soft'. There is yet another reason why soft-computing techniques were applied, which is related to the automation of knowledge modeling. In this respect, traditional models such as Decision Support Systems and Expert Systems have drawbacks. One important drawback is that the development of these systems is a time-consuming process. The programming part, in which various deliberations are required to form a consistent if-then rule knowledge-based system, is also a time-consuming activity. For these reasons, the methods and tools from other disciplines which also deal with soft data should be integrated into architectural design. With fuzzy logic, the imprecision of data can be dealt with in a way similar to how humans do it. Artificial neural networks are deemed to some extent to model the human brain and to simulate its functions in the form of parallel information processing. They are considered important components of Artificial Intelligence (AI). With neural networks, it is possible to learn from examples, or more precisely to learn from input-output data samples. The combination of the neural and fuzzy approaches proved to be powerful for dealing with qualitative data. The problem of automated knowledge modeling is efficiently solved by the employment of machine learning techniques. Here, the expertise of prof. dr. Ozer Ciftcioglu in the field of soft computing was crucial for tool development. By combining knowledge from two different disciplines, a unique tool could be developed that enables intelligent modeling of the soft data needed to support the building design process. In this respect, this research is a starting point in that direction.
It is multidisciplinary and on the cutting edge between the field of Architecture and the field of Artificial Intelligence. From the architectural viewpoint, the perception of space is considered through the relationship between a human being and the built environment. Techniques from the field of Artificial Intelligence are employed to model that relationship. Such an efficient combination of two disciplines makes it possible to extend our knowledge boundaries in the field of architecture and improve design quality. With additional techniques, meta-knowledge, or in other words "knowledge about knowledge", can be created. Such techniques involve sensitivity analysis, which determines the amount of dependency of the output of a model (comfort and public safety) on the information fed into the model (input). Another technique is functional relationship modeling between aspects, which is the derivation of the dependency of a design parameter as a function of users' perceptions. With this technique, it is possible to determine functional relationships between dependent and independent variables. This thesis is a contribution to a better understanding of users' perception of underground space, through the prism of public safety and comfort, which was achieved by means of intelligent knowledge modeling. In this respect, this thesis demonstrated an application of ICT (Information and Communication Technology) as a partner in the building design process by employing advanced modeling techniques. The method explained throughout this work is very generic and can be applied not only to different areas of architectural design, but also to other domains that involve qualitative data.
keywords Underground Space; Perception; Soft Computing
series thesis:PhD
email
last changed 2003/02/12 22:37
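The fuzzification step that the abstract above describes can be sketched in miniature. This is a generic triangular-membership example, with linguistic terms and breakpoints invented for illustration, not taken from the thesis:

```python
# Tiny fuzzification sketch: map a qualitative rating (0-10) of, say,
# perceived lighting to membership degrees in linguistic terms, as the
# input stage of a neuro-fuzzy model might. Terms and breakpoints are
# hypothetical illustrations, not values from the thesis above.
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(rating):
    return {
        "poor":    triangular(rating, -1, 0, 5),
        "average": triangular(rating, 2, 5, 8),
        "good":    triangular(rating, 5, 10, 11),
    }

print(fuzzify(6.0))
```

In a neuro-fuzzy system, a neural network would then learn the rule weights that connect such membership degrees to outputs like perceived safety or comfort.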

_id c6db
authors Heylighen, Ann
year 2000
title In Case of Architectural Design. Critique and Praise of Case-Based Design in Architecture
source Dissertation - Doct. Toegepaste wetenschappen, KU Leuven, Fac. Toegepaste wetenschappen, Dep. architectuur, stedebouw en ruimtelijke ordening (ISBN 90-5682-248-9)
summary Architects are said to learn design by experience. Learning design by experience is the essence of Case-Based Design (CBD), a sub-domain of Artificial Intelligence. Part I critically explores the CBD approach from an architectural point of view, tracing its origins in the Theory of Dynamic Memory and highlighting its potential for architectural design. Seven CBD systems are analysed, experienced architects and design teachers are interviewed, and an experiment is carried out to examine how cases affect the design performance of architecture students. The results of this exploration show that despite its sound view on how architects acquire (design) knowledge, CBD is limited in important respects: it reduces architectural design to problem solving, is difficult to implement and has to contend with prejudices among the target group. With a view to stretching these limits, part II covers the design, implementation and evaluation of DYNAMO (Dynamic Architectural Memory On-line). This Web-based design tool tailors the CBD approach to the complexity of architectural design by effecting three transformations: extending the concern with design products towards design processes, turning static case bases into dynamic memories and upgrading users from passive case consumers to active case-based designers.
keywords Architectural Design; Case-Based Design
series thesis:PhD
email
last changed 2002/12/14 19:29

_id f345
authors Mustoe, Julian E. H. and Silva, Neander F.
year 2000
title The Teaching of Knowledge Management Systems in Architecture: a Domain Oriented Approach
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 350-351
summary The teaching of artificial intelligence techniques in architecture has generally adopted a computer-science-oriented approach. However, most of these teaching experiments have failed to raise enthusiasm among students or long-term interest in the subject. It is argued in this paper that the main cause of this failure is the approach adopted. A different approach, that is, a domain-oriented one, will then be described as a promising teaching strategy.
series SIGRADI
email
last changed 2016/03/10 09:55

_id d8df
authors Naticchia, Berardo
year 1999
title Physical Knowledge in Patterns: Bayesian Network Models for Preliminary Design
doi https://doi.org/10.52842/conf.ecaade.1999.611
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 611-619
summary Computer applications in design have pursued two main development directions: analytical modelling and information technology. The former line has produced a large number of tools for simulating reality (e.g. finite element models); the latter is producing an equally large number of advances in conceptual design support (e.g. artificial intelligence tools). Nevertheless, we can trace only rare interactions between the computational models related to these different approaches. This lack of integration is the main reason for the difficulty of applying CAAD to the preliminary stage of design, where logical and quantitative reasoning are closely related in a process that we often call 'qualitative evaluation'. This paper briefly surveys the current development of qualitative physical models applied in design and proposes a general approach for modelling physical behaviour by means of the Bayesian networks we are employing to develop VENTPad, a tutoring and coaching system for the preliminary design of natural ventilation in halls. This tool explores the possibility of modelling the causal mechanisms that operate in real systems in order to allow a number of integrated logical and quantitative inferences about the fluid-dynamic behaviour of a hall. This application could be an interesting connecting tool between logical and analytical procedures in preliminary design aiding, able to help students or unskilled architects, both by guiding them through the analysis of numerical data (e.g. obtained with sophisticated Computational Fluid Dynamics software) or experimental data (e.g. obtained with laboratory test models) and by suggesting improvements to the design.
keywords Qualitative Physical Modelling, Preliminary Design, Bayesian Networks
series eCAADe
email
last changed 2022/06/07 07:59
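The kind of Bayesian-network reasoning the abstract above describes can be sketched with exact inference by enumeration over a two-node model. The structure and all probabilities here are invented for illustration and are not taken from VENTPad:

```python
# Minimal Bayesian-network inference by enumeration over the structure
# Opening -> Airflow. Both the variables and the numbers are hypothetical,
# chosen only to show the mechanics of posterior computation.
p_open = {True: 0.3, False: 0.7}             # prior on a large opening
p_flow_given_open = {True: 0.8, False: 0.1}  # P(good airflow | opening)

def posterior_open(flow_observed=True):
    """P(opening = True | airflow = flow_observed), by enumeration."""
    joint = {}
    for o in (True, False):
        pf = p_flow_given_open[o]
        # Joint probability of this opening state and the observation.
        joint[o] = p_open[o] * (pf if flow_observed else 1 - pf)
    z = joint[True] + joint[False]           # normalizing constant
    return joint[True] / z

print(round(posterior_open(True), 3))  # 0.774
```

Larger networks of this kind support exactly the mixed logical-and-quantitative reasoning the paper is after: observing a qualitative outcome updates belief over design causes.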

_id 6072
authors Orzechowski, M.A., Timmermans, H.J.P. and De Vries, B.
year 2000
title Measuring user satisfaction for design variations through virtual reality
source Timmermans, H.J.P. & Vries, B. de (eds.) Design & Decision Support Systems in Architecture - Proceedings of the 5th International Conference, August 22-25 2000, Nijkerk, pp. 278-288
summary Virtual Reality (VR) and Artificial Intelligence (AI) technology have become increasingly common in all disciplines of modern life. These new technologies range from simple software assistants to sophisticated modeling of human behavior. In this research project, we are creating an AI agent environment that helps architects to identify user preferences through a Virtual Reality interface. At the current stage of development, the research project has resulted in a VR application, MuseV2, that allows users to modify an architectural design instantly. The distinctive feature of this application is that a space is considered as a base for all user modifications and as a connection between all design elements. In this paper we provide some technical information about MuseV2. Presentation of a design through VR allows AI agents to observe user-induced modifications and to gather preference information. In addition to allowing for an individualized design, this information, generalized across a sample of users, should provide the basis for developing basic designs for particular market segments and for predicting the market potential of those designs. The system that we envision should not become an automated design tool, but an adviser and viewer for users who have limited knowledge, or no knowledge at all, about CAD systems and architectural design. This tool should help investors to assess preferences for new community housing in order to meet the needs of future inhabitants.
series other
email
last changed 2003/04/23 15:50

_id a4e9
authors Petrovic, Igor and Svetel, Igor
year 1999
title From Number Cruncher to Digital Being: The Changing Role of Computer in CAAD
doi https://doi.org/10.52842/conf.ecaade.1999.033
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 33-39
summary The paper reflects on a thirteen-year period of CAAD research and development by a small group of researchers and practitioners. Starting with simple algorithmic drafting programmes, the work progressed to expert systems and distributed artificial intelligence, using computers as tools. The research cycle is about to begin afresh; computers in the next century shall not be detached entities but extensions of man. The computer shall be the medium that will enable a designer to be what he/she really is. This future has already begun.
keywords History of CAAD, CAAD Design Paradigms, CAADfuture
series eCAADe
email
last changed 2022/06/07 08:00

_id 1bb0
authors Russell, S. and Norvig, P.
year 1995
title Artificial Intelligence: A Modern Approach
source Prentice Hall, Englewood Cliffs, NJ
summary Humankind has given itself the scientific name homo sapiens--man the wise--because our mental capacities are so important to our everyday lives and our sense of self. The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. But unlike philosophy and psychology, which are also concerned with intelligence, AI strives to build intelligent entities as well as understand them. Another reason to study AI is that these constructed intelligent entities are interesting and useful in their own right. AI has produced many significant and impressive products even at this early stage in its development. Although no one can predict the future in detail, it is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization. AI addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain, whether biological or electronic, to perceive, understand, predict, and manipulate a world far larger and more complicated than itself? How do we go about making something with those properties? These are hard questions, but unlike the search for faster-than-light travel or an antigravity device, the researcher in AI has solid evidence that the quest is possible. All the researcher has to do is look in the mirror to see an example of an intelligent system. AI is one of the newest disciplines. It was formally initiated in 1956, when the name was coined, although at that point work had been under way for about five years. Along with modern genetics, it is regularly cited as the "field I would most like to be in" by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study before one can contribute new ideas.
AI, on the other hand, still has openings for a full-time Einstein. The study of intelligence is also one of the oldest disciplines. For over 2000 years, philosophers have tried to understand how seeing, learning, remembering, and reasoning could, or should, be done. The advent of usable computers in the early 1950s turned the learned but armchair speculation concerning these mental faculties into a real experimental and theoretical discipline. Many felt that the new "Electronic Super-Brains" had unlimited potential for intelligence. "Faster Than Einstein" was a typical headline. But as well as providing a vehicle for creating artificially intelligent entities, the computer provides a tool for testing theories of intelligence, and many theories failed to withstand the test--a case of "out of the armchair, into the fire." AI has turned out to be more difficult than many at first imagined, and modern ideas are much richer, more subtle, and more interesting as a result. AI currently encompasses a huge variety of subfields, from general-purpose areas such as perception and logical reasoning, to specific tasks such as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases. Often, scientists in other fields move gradually into artificial intelligence, where they find the tools and vocabulary to systematize and automate the intellectual tasks on which they have been working all their lives. Similarly, workers in AI can choose to apply their methods to any area of human intellectual endeavor. In this sense, it is truly a universal field.
series other
last changed 2003/04/23 15:14

_id 4e0a
authors Bouchlaghem, N., Sher, W. and Beacham, N.
year 2000
title Computer Imagery and Visualization in Civil Engineering Education
source Journal of Computing in Civil Engineering, Vol. 14, No. 2, April 2000, pp. 134-140
summary Higher education institutions in the United Kingdom have invested significantly in the implementation of communication and information technology in teaching, learning, and assessment of civil and building engineering—with mixed results. This paper focuses on the use of digital imagery and visualization materials to improve student understanding. It describes ways in which these materials are being used in the civil and building engineering curriculum, and, in particular, how distributed performance support systems (DPSS) can be applied to make more effective use of digital imagery and visualization material. This paper centers on the extent to which DPSS can be used in a civil and building vocational and continuing professional development context by tutors in the form of an electronic course delivery tool and by students in the form of an open-access student information system. This paper then describes how a DPSS approach to education is being adopted at Loughborough University as part of the CAL-Visual project. After highlighting the main aims and objectives of the project and describing the system, this paper discusses some of the issues encountered during the design and implementation of a DPSS and presents some preliminary results from initial trials.
keywords Computer Aided Instruction; Engineering Education; Imaging Techniques; Information Systems; Professional Development
series journal paper
last changed 2003/05/15 21:45

_id 93ff
authors Chateau, H.B., Alvarado, R.G., Vergara, R.L. and Parra Márquez, J.C.
year 2000
title Un Modelo Experimental em el Espacio-Tiempo de la Realidad Virtual (An Experimental Model in the Space-Time of Virtual Reality)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 251-253
summary Virtual environments are a convergence between communicational media and computational capacities, which are progressively integrating into interactive and global systems. This technological evolution has progressively created artificial contexts that find their latest and most integral expression in virtual environments. The influence of virtual worlds on our culture puts questions to architecture, and raises the challenge of understanding the approach that architecture should take towards virtual reality. This paper consists of an experimental exercise in virtual time-space oriented to news information (a News Information Centre), recognising that a relevant architectural event of our time is that virtual worlds represent a meeting between communicational technologies and the interest of contemporary society in always being informed (on line). This project is basically an exploration of virtual design that widens the professional field of architectural study towards new technological and cultural challenges that will probably have a significant influence on the relations between architecture and urban culture.
series SIGRADI
email
last changed 2016/03/10 09:48

_id 42dd
authors Kobayashi, Yoshihiro and Terzidis, Kostas
year 2000
title Extracting the Geometry of Buildings from Satellite Images using Fuzzy Multiple Layer Perceptrons
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 236-239
summary This paper presents a Computer Aided Architectural Design (CAAD) system utilizing technologies from artificial intelligence (AI) and image processing. The goal is to create a CAAD system that detects buildings in satellite images and produces computer city models, allowing the system's users to manipulate the models using machine-learning technology. The flexibility and usability of the system were evaluated with case studies. Soft-computing technologies, including neural networks and fuzzy systems, are mainly applied and tested as the system's methodology.
series SIGRADI
email
last changed 2016/03/10 09:54
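The perceptron half of the fuzzy multiple layer perceptron named in the title above can be sketched as a plain forward pass. The weights and the choice of inputs here are arbitrary illustrations, not parameters trained on satellite imagery:

```python
import math

# One-hidden-layer MLP forward pass with sigmoid units, the neural part
# of a fuzzy MLP. All weights and the input features (e.g. fuzzified
# brightness and edge strength of an image patch) are hypothetical.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    # Each hidden unit computes a weighted sum of the inputs, squashed.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    # The output unit combines the hidden activations the same way.
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))

score = forward([0.9, 0.2], [[1.5, -1.0], [0.5, 2.0]], [2.0, -1.0])
print(round(score, 3))
```

In the fuzzy variant, the crisp inputs would first pass through membership functions, so the network reasons over degrees of "bright" or "edge-like" rather than raw pixel values.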

_id ga0008
id ga0008
authors Koutamanis, Alexander
year 2000
title Redirecting design generation in architecture
source International Conference on Generative Art
summary Design generation has been the traditional culmination of computational design theory in architecture. Motivated either by programmatic and functional complexity (as in space allocation) or by the elegance and power of representational analyses (shape grammars, rectangular arrangements), research has produced generative systems capable of producing new designs that satisfied certain conditions or of reproducing exhaustively entire classes (such as all possible Palladian villas), comprising known and plausible new designs. Most generative systems aimed at a complete spatial design (detailing being an unpopular subject), with minimal if any intervention by the human user / designer. The reason for doing so was either to give a demonstration of the elegance, power and completeness of a system or simply that the replacement of the designer with the computer was the fundamental purpose of the system. In other words, the problem was deemed either already resolved by the generative system or too complex for the human designer. The ongoing democratization of the computer stimulates reconsideration of the principles underlying existing design generation in architecture. While the domain analysis upon which most systems are based is insightful and interesting, jumping to a generative conclusion was almost always based on a very sketchy understanding of human creativity and of the computer's role in designing and creativity. Our current perception of such matters suggests a different approach, based on the augmentation of intuitive creative capabilities with computational extensions. The paper proposes that architectural generative design systems can be redirected towards design exploration, including the development of alternatives and variations. Human designers are known to follow inconsistent strategies when confronted with conflicts in their designs. These strategies are not made more consistent by the emerging forms of design analysis. 
The use of analytical means such as simulation, coupled with the necessity of considering a rapidly growing number of aspects, means that the designer is confronted with huge amounts of information that have to be processed and integrated in the design. Generative design exploration that can combine the analysis results in directed and responsive redesigning seems an effective method for the early stages of the design process, as well as for partial (local) problems in later stages. The transformation of generative systems into feedback support and background assistance for the human designer presupposes re-orientation of design generation with respect to the issues of local intelligence and autonomy. Design generation has made extensive use of local intelligence but has always kept it subservient to global schemes that tended to be holistic, rigid or deterministic. The acceptance of local conditions as largely independent structures (local coordinating devices) affords a more flexible attitude that permits not only the emergence of internal conflicts but also the resolution of such conflicts in a transparent manner. The resulting autonomy of local coordinating devices can be expanded to practically all aspects and abstraction levels. The ability to have intelligent behaviour built into components of the design representation, as well as into the spatial and building elements they signify, means that we can create the new, sharper tools required by the complexity resulting from the interpretation of the built environment as a dynamic configuration of co-operating yet autonomous parts that have to be considered independently and in conjunction with each other. P.S. The content of the paper will be illustrated by a couple of computer programs that demonstrate the principles of local intelligence and autonomy in redesigning.
It is possible that these programs could be presented as independent interactive exhibits, but it all depends upon the time we can free up for the development of self-sufficient, self-running demonstrations before December.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 60a7
authors Monedero, Javier
year 2000
title Parametric design: a review and some experiences
source Automation in Construction 9 (4) (2000) pp. 369-377
summary During the last few years there has been an extraordinary development of computer-aided tools intended to present or communicate the results of architectural projects. But there has not been comparable progress in the development of tools intended to assist design, generating architectural forms in an easy and interactive way. Even worse, architects who use the powerful means provided by computers as a direct tool to create architectural forms are still the exception. Architecture continues to be produced by traditional means, using the computer as little more than a drafting tool. The main reasons that may explain this situation can be identified rather easily, although there will be significant differences of opinion. In my opinion, it is a mistake to try to advance too rapidly by, for instance, proposing integrated design methods using expert systems and artificial intelligence while no adequate tools to generate and modify simple 3D models are available. The modeling tools we have at the present moment are unsatisfactory. Their principal limitation is the lack of appropriate instruments to modify the model interactively once it has been created. This is a fundamental aspect of any design activity, where the designer is constantly going forwards and backwards, re-elaborating again and again some particular aspect of the model, or its general layout, or even coming back to a previous solution that had been temporarily abandoned. This paper presents a general summary of the current situation and of recent developments that may be incorporated into architectural design tools in the near future, together with some critical remarks about their relevance to architecture.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
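Monedero's central complaint — that models cannot be modified interactively after creation — is the defining property of parametric design. A minimal, hypothetical sketch (not from the paper; all names are illustrative) shows the idea: derived geometry follows the driving parameters, so editing a parameter updates everything downstream automatically.

```python
from dataclasses import dataclass

@dataclass
class ParametricRoom:
    """A toy parametric model: derived quantities follow the driving parameters."""
    width: float
    depth: float
    height: float = 3.0

    @property
    def floor_area(self) -> float:
        # Derived, never stored: always consistent with the current parameters.
        return self.width * self.depth

    @property
    def volume(self) -> float:
        return self.floor_area * self.height

room = ParametricRoom(width=4.0, depth=5.0)
print(room.floor_area)  # 20.0
room.width = 6.0        # the "interactive modification" Monedero asks for
print(room.floor_area)  # 30.0
```

The design choice is simply to store only the independent parameters and compute everything else on demand, so no edit can leave the model internally inconsistent.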

_id ga0010
id ga0010
authors Moroni, A., Zuben, F. Von and Manzolli, J.
year 2000
title ArTbitrariness in Music
source International Conference on Generative Art
summary Evolution is now considered powerful enough not only to bring about biological entities as complex as humans and consciousness, but also useful in simulation to create algorithms and structures of higher levels of complexity than could easily be built by design. In the context of artistic domains, the process of human-machine interaction is analyzed as a good framework to explore creativity and to produce results that could not be obtained without this interaction. When evolutionary computation and other computational intelligence methodologies are involved, we denote every attempt to improve aesthetic judgement as ArTbitrariness, interpreted as an interactive, iterative optimization process. ArTbitrariness is also suggested as an effective way to produce art through an efficient manipulation of information and a proper use of computational creativity to increase the complexity of the results without neglecting the aesthetic aspects [Moroni et al., 2000]. Our emphasis will be on an approach to interactive music composition. The problem of computer generation of musical material has received extensive attention, and a subclass of the field of algorithmic composition includes those applications which use the computer as something in between an instrument, which a user "plays" through the application's interface, and a compositional aid, which a user experiments with in order to generate stimulating and varied musical material. This approach was adopted in Vox Populi, a hybrid made up of an instrument and a compositional environment. Unlike other genetic-algorithm or evolutionary-computation systems, in which people have to listen to and judge the musical items, Vox Populi uses the computer and the mouse as real-time music controllers, acting as a new interactive computer-based musical instrument. The interface is designed to be flexible, allowing the user to modify the music being generated.
It explores evolutionary computation in the context of algorithmic composition and provides a graphical interface that allows the user to modify the tonal center and the voice range, changing the evolution of the music with the mouse [Moroni et al., 1999]. A piece of music consists of several sets of musical material that are manipulated and exposed to the listener, for example pitches, harmonies, rhythms, timbres, etc. These sets are composed of a finite number of elements, and the aim of a composer is basically to organize those elements in an aesthetic way. Modeling a piece as a dynamic system implies a view in which the composer draws trajectories or orbits using the elements of each set [Manzolli, 1991]. Nonlinear iterative mappings are associated with interface controls. The mappings may give rise to attractors, defined as geometric figures that represent the set of stationary states of a nonlinear dynamic system, or simply trajectories to which the system is attracted. The relevance of this approach goes beyond music applications per se. Computer music systems that are built on the basis of a solid theory can be coherently embedded into multimedia environments. The richness and specialty of the music domain are likely to initiate new thinking and ideas, which will have an impact on areas such as knowledge representation and planning, and on the design of visual formalisms and human-computer interfaces in general. The original paper depicts the Vox Populi interface together with two examples of nonlinear iterative mappings and their resulting musical pieces. References: [Manzolli, 1991] J. Manzolli, Harmonic Strange Attractors, CEM Bulletin, Vol. 2, No. 2, pp. 4-7, 1991. [Moroni et al., 1999] A. Moroni, J. Manzolli, F. Von Zuben, R. Gudwin, Evolutionary Computation Applied to Algorithmic Composition, Proceedings of CEC99 - IEEE International Conference on Evolutionary Computation, Washington D.C., pp. 807-811, 1999. [Moroni et al., 2000] A. Moroni, F. Von Zuben, J. Manzolli, ArTbitration, Proceedings of the 2000 Genetic and Evolutionary Computation Conference Workshop Program - GECCO, Las Vegas, USA, pp. 143-145, 2000.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
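The abstract's "nonlinear iterative mappings" can be illustrated with the classic logistic map. This is a generic sketch of the technique, not the actual Vox Populi code; the mapping of orbit values onto a tonal center and voice range mirrors the interface controls the abstract describes, but the function names and parameters are assumptions.

```python
def logistic_orbit(r: float, x0: float, n: int) -> list:
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def orbit_to_pitches(orbit, center=60, voice_range=12):
    """Map orbit values in (0, 1) onto MIDI pitches around a tonal center."""
    return [round(center - voice_range / 2 + v * voice_range) for v in orbit]

# r = 3.9 puts the map in its chaotic regime, so the melody never settles.
pitches = orbit_to_pitches(logistic_orbit(r=3.9, x0=0.5, n=8))
```

Moving a control that changes `r` shifts the system between fixed points, periodic cycles, and chaos — which is exactly what makes such mappings musically interesting as attractors.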

_id ddssar0021
id ddssar0021
authors Orzechowski, M.A., Timmermans, H.J.P. and Vries, B. de
year 2000
title Measuring user satisfaction for design variations through virtual reality
source Timmermans, Harry (Ed.), Fifth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Nijkerk, the Netherlands)
summary This paper describes Virtual Reality (VR) as an environment for collecting information about user satisfaction. Because VR combines visualization with interactivity, this form of representation has particular advantages when presenting new designs. The paper reports on the development of a VR system that supports architects in collecting opinions about their design alternatives in terms of user preferences. It is developed as an alternative to conjoint analysis, which uses statistical choice variations to estimate user preference functions. Artificial Intelligence (AI) agent technology will be implemented to build a model for data collection, prediction, and learning processes.
series DDSS
last changed 2003/08/07 16:36
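The conjoint-analysis idea the abstract contrasts itself with can be sketched very simply: show users systematically varied design alternatives, collect ratings, and estimate how much each attribute level contributes to satisfaction. The sketch below is a crude main-effects estimate (mean rating per attribute level, assuming a balanced design) with invented data; it is an illustration of the general technique, not the authors' method.

```python
# Hypothetical rated design variants: (attributes, user rating on a 1-10 scale).
variants = [
    ({"layout": "open",     "facade": "glass"}, 8),
    ({"layout": "open",     "facade": "brick"}, 6),
    ({"layout": "cellular", "facade": "glass"}, 5),
    ({"layout": "cellular", "facade": "brick"}, 3),
]

def level_means(variants, attribute):
    """Average rating per level of one attribute (a main-effects estimate)."""
    totals = {}
    for attrs, rating in variants:
        totals.setdefault(attrs[attribute], []).append(rating)
    return {level: sum(r) / len(r) for level, r in totals.items()}

print(level_means(variants, "layout"))  # {'open': 7.0, 'cellular': 4.0}
```

In a VR setting the same estimates would be fed by preferences expressed while users walk through and modify the alternatives, rather than by static questionnaires.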

_id 1a3d
authors Willey, David
year 1999
title Sketchpad to 2000: From Computer Systems to Digital Environments
doi https://doi.org/10.52842/conf.ecaade.1999.526
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 526-532
summary It can be argued that over the last thirty-five years computer-aided architectural design (CAAD) has made little impact in terms of aiding design. The paper provides a broad-brush review of the last 35 years of CAAD research and suggests that the SKETCHPAD notion that has dominated CAAD since 1963 is now a flawed concept. At that time the discipline was replete with Modernist concepts of optimal solutions, objective design criteria and universal design standards. Now CAAD needs to proceed on the basis of the Post-Modern ways of thinking and designing opened up by digital techniques: the Internet, multimedia, virtual reality, electronic games, distance learning. Computers facilitate information flow and storage. In the late seventies and eighties, the CAAD research community's response to the difficulties it had identified with the construction of integrated digital building models was to attempt to improve the intelligence of computer systems to better match the understanding of designers. Now it is clear that the future could easily lie with CAAD systems that have almost no intelligence and make no attempt to aid the designer. Communication is much more central to designing than computing.
keywords History, Intelligence, Interface, Sketchpad, Web
series eCAADe
email
last changed 2022/06/07 07:56

_id 958e
authors Coppola, Carlo and Ceso, Alessandro
year 2000
title Computer Aided Design and Artificial Intelligence in Urban and Architectural Design
doi https://doi.org/10.52842/conf.ecaade.2000.301
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 301-307
summary In general, computer-aided design is still limited to a rather elementary use of the medium, as it is mainly used for the representation/simulation of a design idea which is not itself computer-generated. The procedures used to date have basically been those of an electronic drawing-table. At the first stage of development the objective was to find a different and better means of communication, to give form to an idea so as to show its quality. The procedures used were 2D design and 3D simulation models, usually applied when the design was already defined. The second stage is when solid 3D modelling is used to define the formal design at the conception stage, using virtual models instead of study models in wood, plastic, etc. At the same time, in other connected fields, the objective is to evaluate the feasibility of the formal idea by means of structural and technological analysis. The third stage, in my opinion, should aim to develop procedures capable of contributing both to the generation of the formal idea and to the simultaneous study of technical feasibility, by means of a decision-making support system aided by an Artificial Intelligence procedure, leading to what I would describe as the definition of the design in its totality. The approach to architectural and urban design has been strongly influenced by the first two stages, though these have developed independently and with very specific objectives. It is my belief that architectural design is now increasingly the result of a structured and complex process, not a simple act of pure artistic invention. Consequently, I feel that the way forward is a procedure able to virtually represent all the features of the designed object, not only in its definitive configuration but also, and more importantly, in the interactions which determine the design process as it develops. Thus A.I. becomes the means of synthesis for hierarchically subordinated models which together determine the design object in its developmental process, supporting decision-making by applying processing criteria which generative modelling has already identified. This approach is currently being tested experimentally, giving rise to interesting results from process design in the field of industrial production.
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:56

_id ec4d
authors Croser, J.
year 2001
title GDL Object
source The Architect’s Journal, 14 June 2001, pp. 49-50
summary It is all too common for technology companies to seek a new route to solving the same problem, but for the most part the solutions address the effect and not the cause. The good old-fashioned pencil is the perfect example, where inventors have sought to design out the effect of the inherent brittleness of lead. Traditionally, different methods of sharpening were suggested; more recently the propelling pencil has reigned king, the lead being supported by the dispensing sleeve, thus reducing the likelihood of breakage. Developers convinced by the Single Building Model approach to design development have each embarked on a difficult journey to create an easy-to-use, feature-packed application. Unfortunately it seems that the two are not mutually compatible, if we are to believe what we see emanating from technology giant Autodesk in the guise of Architectural Desktop 3. The effect of their development is a feature-rich environment, but the cost, and in this case the cause, is a tool which is far from easy to use. However, this is only a small part of a much bigger problem: interoperability. When one designer develops a model with one tool, the information is typically locked in that environment. Of course the geometry can be distributed and shared amongst the team for use with their tools, but the properties, or as often misquoted, the intelligence, are lost along the way. The effect is the technological version of rubble; the cause is the low quality of data translation available to us. Fortunately there is one company which is making rapid advancements on the whole issue of collaboration and data sharing. An old-timer (Graphisoft - famous for ArchiCAD) has just donned a smart new suit, set up a new company called GDL Technology and stepped into the ring to do battle, with a difference.
The difference is that GDL Technology does not rely on conquering the competition; quite the opposite, in fact: their success relies upon the continued success of all the major CAD platforms, including AutoCAD, MicroStation and ArchiCAD (of course). GDL Technology have created a standard data format for manufacturers called GDL Objects. Product manufacturers such as Velux are now able to develop product libraries using GDL Objects, which can then be placed in a CAD model or drawing using almost any CAD tool. The product libraries can be stored on the web or on CD, giving easy download access to any building industry professional. These objects are created using scripts, which makes them tiny to download from the web. Each object contains three important types of information: parametric scale-dependent 2D plan symbols; full 3D geometric data; and manufacturer's information such as material, colour and price. Whilst manufacturers are racing to GDL Technology's door to sign up, developers and clients are quick to see the benefit too. Porsche are using GDL Objects to manage their brand identity as they build over 300 new showrooms worldwide. Having defined the building style and interior, Porsche, in conjunction with the product suppliers, have produced a CD-ROM with all of the selected building components such as cladding, doors, furniture and finishes. Designing and detailing the various schemes will therefore be as straightforward as using Lego. To ease the process of accessing, sizing and placing the product libraries, GDL Technology have developed a product called GDL Object Explorer, a free-standing application which can be placed on the CD with the product libraries. Furthermore, whilst the Object Explorer gives access to the GDL Objects, it also enables the user to save the object in one of many file formats, including DWG, DGN, DXF, 3DS and even the IAI's IFC.
However, if you are an AutoCAD user there is another tool which has been designed especially for you: the Object Adapter, which works inside AutoCAD 14 and 2000. The Object Adapter dynamically converts all GDL Objects to AutoCAD blocks during placement, which means that they can be controlled with standard AutoCAD commands. Furthermore, each object can be linked to an online document on the manufacturer's web site, which is ideal for more extensive product information. Other tools which have been developed to make the most of the objects are the Web Plug-in and SalesCAD. The Plug-in enables objects to be dynamically modified and displayed on web pages, and SalesCAD is an easy-to-learn design tool for sales teams to explore, develop and cost designs on a notebook PC whilst sitting in the architect's office. All sales quotations are extracted directly from the model and presented in HTML format as a mixture of product images, product descriptions and tables identifying quantities and costs. With full lifecycle information stored in each GDL Object, it is no surprise that GDL Technology see their objects as the future for building design. Indeed they are not alone: the IAI have already said that they are going to explore the possibility of associating GDL Objects with their own data-sharing format, the IFC. So, down to the dirty stuff: money, and how much it costs. Well, at the risk of sounding like a market trader in Petticoat Lane, "To you guv? Nuffin". That's right: as a user of this technology it will cost you nothing. Not a penny; it is gratis, free. The product manufacturer pays for the license to host their libraries on the web or on CD, and even then the costs are small, from as little as 50p for each CD filled with objects. GDL Technology has come up trumps with their GDL Objects. They have developed a new way to solve old problems.
If CAD were a pencil, then GDL Objects would be ballistic lead, which would never break or lose its point: a much better alternative to the strategy used by many of their competitors, who seek to avoid breaking the pencil by persuading the artist not to press down so hard. If you are still reading, and you have not already dropped the magazine and run off to find out whether your favorite product supplier has already signed up, then I suggest you check out the following web sites: www.gdlcentral.com and www.gdltechnology.com. If you do not see them there, pick up the phone and ask them why.
series journal paper
email
last changed 2003/04/23 15:14
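The article says each GDL Object carries three kinds of information: a scale-dependent 2D plan symbol, full 3D geometry, and manufacturer data. A hypothetical sketch of such an object (illustrative only; GDL itself is a scripting language, and all names here are assumptions):

```python
from dataclasses import dataclass

@dataclass
class ProductObject:
    """A toy product object carrying the three kinds of data the article lists."""
    name: str
    parameters: dict    # driving dimensions, e.g. {"width": 1.14, "height": 1.40}
    manufacturer: dict  # material, colour, price, ...

    def plan_symbol(self, scale: float) -> str:
        # Scale-dependent 2D symbol: a coarse outline at small drawing scales,
        # a detailed symbol at large ones (threshold chosen arbitrarily here).
        return "outline" if scale < 1 / 100 else "detailed"

    def solid(self) -> dict:
        # Placeholder 3D geometry derived from the parameters.
        return {"box": (self.parameters["width"], self.parameters["height"])}

window = ProductObject("roof window", {"width": 1.14, "height": 1.40},
                       {"material": "pine", "price": 450.0})
```

Because the object is a small script plus parameters rather than baked geometry, it stays tiny to download and can be re-exported to DWG, DGN, or IFC by regenerating the geometry on the target platform.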

_id d59a
authors Zarnowiecka, Jadwiga C.
year 1999
title AI and Regional Architecture
doi https://doi.org/10.52842/conf.ecaade.1999.584
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 584-588
summary In 1976 Richard Foqué identified periods in the development of design methods. In the first stage (the 1950s and early 1960s) - the automation of the design process - a properly defined description language that a machine can understand is vital; Christopher Alexander publishes 'Pattern Language'. The second stage (late 1960s) brings the use of research techniques such as the interview, the questionnaire and active observation; ergonomic aspects are also taken into consideration. The third stage (starting at the turn of the 1960s and 1970s) involves the co-participation of all the parties in the design process, and especially the user. The design process becomes more complex but at the same time more intelligible to a non-professional - Alexander's 'Pattern Language' returns. It has now been over 20 years since the publication of this work. In the mid-1970s, prototypes of integrated building descriptions were created. We are now dealing with the next stage in the development of design methods: unquestionable progress has been made in the computer optimisation of technical and economic solutions. It is forecast that the next stage will use the computer as a simulator of the design process, and this stage may be combined with the development of AI. (Already in 1950 Alan Turing had formulated the theoretical grounds of Artificial Intelligence.) Can the development of AI influence the creation of present-day regional architecture? Hereby I risk the conclusion that the development of AI can contribute to the creation of modern regional architecture.
keywords Design Process, Artificial Intelligence, Regional Architecture
series eCAADe
email
last changed 2022/06/07 07:57

_id ddssar0001
id ddssar0001
authors Achten, Henri and Leeuwen, Jos van
year 2000
title Towards generic representations of designs formalised as features
source Timmermans, Harry (Ed.), Fifth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Nijkerk, the Netherlands)
summary Feature-Based Modelling (FBM) is an information modelling technique that allows the formalisation of design concepts and the use of these formal definitions in design modelling. The dynamic nature of design and design information calls for a specialised approach to FBM that takes into account the flexibility and extensibility of Feature models of designs. Research work in Eindhoven has led to an FBM framework and implementation that can be used to support design. Building Feature models of a design process has demonstrated the feasibility of using this information modelling technique. To develop the work on FBM in design, three tracks have been initiated: Feature model descriptions of design processes, automated recognition of generic representations in graphic representations, and Feature models of generic representations. The paper shows the status of the work in the first two tracks and presents the results of the research work.
series DDSS
last changed 2003/11/21 15:15
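The flexibility and extensibility the abstract calls for can be pictured with a minimal feature sketch (hypothetical; not the Eindhoven framework): a design concept is a named set of properties that can grow as the design evolves, rather than a fixed schema.

```python
class Feature:
    """A design concept formalised as a named, extensible set of properties."""

    def __init__(self, name, **properties):
        self.name = name
        self.properties = dict(properties)

    def extend(self, **new_properties):
        # Extensibility: the model gains properties as the design develops,
        # without a schema change.
        self.properties.update(new_properties)
        return self

wall = Feature("load_bearing_wall", thickness=0.3, material="concrete")
wall.extend(acoustic_rating=52)  # added later, when acoustics become relevant
```

The key design decision is that properties are data, not class members, so a running model can absorb concepts the original modeller never anticipated.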
