CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 632

_id a0b6
authors Bhavnani, S., John, B. and Flemming, U.
year 1999
title The Strategic Use of CAD: An Empirically Inspired, Theory-Based Course
source Proceedings of ACM CHI 99 Conference on Human Factors in Computing Systems 1999 v.1 pp. 183-190
summary The inefficient use of complex computer systems has been widely reported. These studies show the persistence of inefficient methods despite many years of experience and formal training. To counteract this phenomenon, we present the design of a new course, called the Strategic Use of CAD. The course aims at teaching students efficient strategies to use a computer-aided drafting system through a two-pronged approach. Learning to See teaches students to recognize opportunities to use efficient strategies by studying the nature of the task, and Learning to Do teaches students to implement the strategies. Results from a pilot experiment show that this approach had a positive effect on the strategic behavior of students who did not exhibit knowledge of efficient strategies before the class, and had no effect on the strategic behavior of those who did. Strategic training can thus assist users in recognizing opportunities to use efficient strategies. We present the ramifications of these results on the design of training and future experiments.
keywords Strategy; Training; GOMS; Learning; Efficiency
series other
email
last changed 2003/11/21 15:16

_id 7546
authors Coyne, R.
year 1999
title Technoromanticism - digital narrative, holism, and the romance of the real
source MIT Press
summary It's no secret that contemporary culture romanticizes digital technologies. In books, articles, and movies about virtual community, virtual reality, artificial intelligence, artificial life, and other wonders of the digital age, breathless anticipation of vast and thrilling changes has become a running theme. But as Richard Coyne makes clear in Technoromanticism: Digital Narrative, Holism, and the Romance of the Real, a dense but rewarding piece of academic criticism, we also get romantic about the new technologies in a more rigorous sense of the word. Whether heralding an electronic return to village communalism or celebrating cyberspace as a realm of pure mind, today's utopian thinking about the digital, Coyne argues, essentially replays the 18th- and 19th-century cultural movement called Romanticism, with its powerful yearnings for transcendence and wholeness. And this apparently is not a good thing. Romanticism, like the more sober Enlightenment rationalism against which it rebelled, has outlived its usefulness as a way of understanding the world, Coyne argues. And so he spends the duration of the book bombarding both the romantic and the rationalist tendencies in cyberculture with every weapon in the arsenal of 20th-century critical theory: poststructuralism, Freudianism, postmodern pragmatism, Heideggerian phenomenology, surrealism--Coyne uses each in turn to whack away at conventional wisdoms about digital tech. Whether the conventional wisdoms remain standing at the end is an open question, but Coyne's tour of the contemporary intellectual landscape is a tour de force, and never before has digital technology's place in that landscape been mapped so thoroughly. --
series other
email
last changed 2003/04/23 15:14

_id b9d3
authors Galán, B., Argumedo, C. and Paganini, A.
year 1999
title Possibilities of the Computer for the Simulation of the Designer's Constructive Strategies
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 74-78
summary Dynamic (prospective) analysis of products and systems is a methodological resource of design that makes it possible, synthetically and with great economy of research resources and time, to reveal tendencies in the evolution of the object. Design strategies are then defined as positions taken with respect to these tendencies in the evolution of the significant variables over the cycle of the product. With systems theory as the theoretical context, we explored the dynamic analysis of products and systems, following their evolution along a time series that covers a complete cycle, from the birth of the object to its maturation in the period of market saturation. Starting from the analysis of the evolution of the various subsystems, and of the conflicts between the world of needs (as pressure exerted by the context) and the technical agreement, the study shows the evolutionary dynamics and the conflicts underlying the logic of the system for each product. These are revealed to design as a cultural operation that should keep in mind the processes of transformation of the mental representations of the object, whose evolution should respect certain rules, such as the well-known MAYA threshold (most advanced, yet accepted).
series SIGRADI
email
last changed 2016/03/10 09:52

_id avocaad_2001_22
id avocaad_2001_22
authors Jos van Leeuwen, Joran Jessurun
year 2001
title XML for Flexibility and Extensibility of Design Information Models
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary The VR-DIS research programme aims at the development of a Virtual Reality – Design Information System. This is a design and decision support system for collaborative design that provides a VR interface for interaction with both the geometric representation of a design and the non-geometric information concerning the design throughout the design process. The major part of the research programme focuses on the early stages of design. The programme is carried out by a large number of researchers from a variety of disciplines in the domain of construction and architecture, including architectural design, building physics, structural design, construction management, etc.
Management of design information is at the core of this design and decision support system. Much effort in the development of the system has been, and still is, dedicated to the underlying theory for information management and its implementation in an Application Programming Interface (API) that the various modules of the system use. The theory is based on a so-called Feature-based modelling approach and is described in the PhD thesis by [first author, 1999] and in [first author et al., 2000a]. This information modelling approach provides three major capabilities: (1) it allows for extensibility of conceptual schemas, which is used to enable a designer to define new typologies to model with; (2) it supports sharing of conceptual schemas, called type-libraries; and (3) it provides a high level of flexibility that offers the designer the opportunity to easily reuse design information and to model information constructs that are not foreseen in any existing typologies. The latter aspect involves the capability to expand information entities in a model with relationships and properties that are not typologically defined but applicable to a particular design situation only; this helps the designer to represent the actual design concepts more accurately.
The functional design of the information modelling system is based on a three-layered framework. In the bottom layer, the actual design data is stored in so-called Feature Instances. The middle layer defines the typologies of these instances in so-called Feature Types. The top layer is called the meta-layer because it provides the class definitions for both the Types layer and the Instances layer; both Feature Types and Feature Instances are objects of the classes defined in the top layer. This top layer ensures that types can be defined on the fly and that instances can be created from these types, as well as expanded with non-typological properties and relationships, while still conforming to the information structures laid out in the meta-layer.
The VR-DIS system consists of a growing number of modules for different kinds of functionality in relation to the design task. These modules access the design information through the API that implements the meta-layer of the framework. This API had previously been implemented using an Object-Oriented Database (OODB), but this implementation had a number of disadvantages. The dependency on the OODB, a commercial software library, was considered the most problematic. Not only are licenses for the OODB library rather expensive; the fact that this library is not common technology that can easily be shared among a wide range of applications, including existing applications, also reduces its suitability for a system with the aforementioned specifications. In addition, the OODB approach required a relatively large effort to implement the desired functionality. It lacked adequate support for generating unique identifications for worldwide information sources that were understandable for human interpretation. This strongly limited the capabilities of the system to share conceptual schemas.
The approach that is currently being implemented for the core of the VR-DIS system is based on the eXtensible Markup Language (XML). Rather than implementing the meta-layer of the framework in classes of Feature Types and Feature Instances, this level of meta-definitions is provided in a document type definition (DTD). The DTD is complemented with a set of rules that are implemented in a parser API based on the Document Object Model (DOM). The advantages of the XML approach for the modelling framework are immediate. Type-libraries distributed through the Internet are now supported through the mechanisms of namespaces and XLink. The implementation of the API is no longer dependent on a particular database system. This provides much more flexibility in the implementation of the various modules of the VR-DIS system. Being based on XML, which is expected to become a standard, the implementation is much more versatile in its future usage, specifically in a distributed, Internet-based environment.
These immediate advantages of the XML approach opened the door to a wide range of applications that are and will be developed on top of the VR-DIS core. Examples of these are the VR-based 3D sketching module [VR-DIS ref., 2000]; the VR-based information-modelling tool that allows the management and manipulation of information models for design in a VR environment [VR-DIS ref., 2000]; and a design-knowledge capturing module that is now under development [first author et al., 2000a and 2000b]. The latter module aims to assist the designer in the recognition and utilisation of existing and new typologies in a design situation. The replacement of the OODB implementation of the API by the XML implementation enables these modules to use distributed Feature databases through the Internet, without many changes to their own code, and without loss of the flexibility and extensibility of conceptual schemas that are implemented as part of the API. Research in the near future will result in Internet-based applications that support designers in the utilisation of distributed libraries of product information, design knowledge, case bases, etc.
The paper roughly follows the outline of the abstract, starting with an introduction to the VR-DIS project, its objectives, and the developed theory of the Feature-modelling framework that forms its core. It briefly discusses the necessity of schema evolution, flexibility and extensibility of conceptual schemas, and how these capabilities have been addressed in the framework. The major part of the paper describes how the previously mentioned aspects of the framework are implemented in the XML-based approach, providing details on the so-called meta-layer, its definition in the DTD, and the parser rules that complement it. The impact of the XML approach on the functionality of the VR-DIS modules and the system as a whole is demonstrated by a discussion of these modules and scenarios of their usage for design tasks. The paper is concluded with an overview of future work on the sharing of Internet-based design information and design knowledge. A minimal illustrative sketch of the XML/DOM idea follows this entry.
series AVOCAAD
email
last changed 2005/09/09 10:48
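
The following is a minimal, hypothetical sketch of the idea described above, not the VR-DIS API or its actual DTD: an XML fragment in which a type-library namespace supplies a Feature Type, a Feature Instance references that type and also carries one non-typological property, and a small DOM-based reader walks the instances. All element names and the namespace URL are illustrative assumptions.

# Sketch only: a Feature Type from a type library plus a Feature Instance,
# read through the DOM as the paper's parser-based approach suggests.
from xml.dom.minidom import parseString

DOC = """<featureModel xmlns:lib="http://example.org/type-library">
  <lib:featureType name="Wall">
    <lib:property name="height" datatype="float"/>
  </lib:featureType>
  <featureInstance type="lib:Wall" id="wall-01">
    <property name="height">2.7</property>
    <!-- a non-typological property, possible because the meta-layer is flexible -->
    <property name="clientRemark">keep the view to the garden</property>
  </featureInstance>
</featureModel>"""

dom = parseString(DOC)
for inst in dom.getElementsByTagName("featureInstance"):
    print(inst.getAttribute("id"), "is a", inst.getAttribute("type"))
    for prop in inst.getElementsByTagName("property"):
        print("  ", prop.getAttribute("name"), "=", prop.firstChild.data)

The output simply lists the instance, its type reference and both properties; the point is only that type definitions, typed instances and ad-hoc properties can coexist in one document read through the DOM.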

_id ga0010
id ga0010
authors Moroni, A., Zuben, F. Von and Manzolli, J.
year 2000
title ArTbitrariness in Music
source International Conference on Generative Art
summary Evolution is now considered not only powerful enough to bring about biological entities as complex as humans and consciousness, but also useful in simulation to create algorithms and structures of higher levels of complexity than could easily be built by design. In the context of artistic domains, the process of human-machine interaction is analyzed as a good framework to explore creativity and to produce results that could not be obtained without this interaction. When evolutionary computation and other computational intelligence methodologies are involved, we denote every attempt to improve aesthetic judgement as ArTbitrariness, interpreted as an interactive, iterative optimization process. ArTbitrariness is also suggested as an effective way to produce art through an efficient manipulation of information and a proper use of computational creativity to increase the complexity of the results without neglecting the aesthetic aspects [Moroni et al., 2000]. Our emphasis is on an approach to interactive music composition. The problem of computer generation of musical material has received extensive attention, and a subclass of the field of algorithmic composition includes those applications which use the computer as something in between an instrument, which a user "plays" through the application's interface, and a compositional aid, which a user experiments with in order to generate stimulating and varied musical material. This approach was adopted in Vox Populi, a hybrid made up of an instrument and a compositional environment. Unlike other systems based on genetic algorithms or evolutionary computation, in which people have to listen to and judge the musical items, Vox Populi uses the computer and the mouse as real-time music controllers, acting as a new interactive computer-based musical instrument. The interface is designed to be flexible for the user to modify the music being generated. It explores evolutionary computation in the context of algorithmic composition and provides a graphical interface that allows the user to modify the tonal center and the voice range, changing the evolution of the music by using the mouse [Moroni et al., 1999]. A piece of music consists of several sets of musical material manipulated and exposed to the listener, for example pitches, harmonies, rhythms, timbres, etc. They are composed of a finite number of elements and, basically, the aim of a composer is to organize those elements in an aesthetic way. Modeling a piece as a dynamic system implies a view in which the composer draws trajectories or orbits using the elements of each set [Manzolli, 1991]. Nonlinear iterative mappings are associated with interface controls; in the paper, two examples of nonlinear iterative mappings with their resulting musical pieces are shown. The mappings may give rise to attractors, defined as geometric figures that represent the set of stationary states of a non-linear dynamic system, or simply trajectories to which the system is attracted. The relevance of this approach goes beyond music applications per se. Computer music systems that are built on the basis of a solid theory can be coherently embedded into multimedia environments. The richness and specialty of the music domain are likely to initiate new thinking and ideas, which will have an impact on areas such as knowledge representation and planning, and on the design of visual formalisms and human-computer interfaces in general. A minimal illustrative sketch of such a mapping follows this entry.
The paper's figures depict the Vox Populi interface, showing two nonlinear iterative mappings with their resulting musical pieces. References: [Manzolli, 1991] J. Manzolli, Harmonic Strange Attractors, CEM Bulletin, Vol. 2, No. 2, pp. 4-7, 1991. [Moroni et al., 1999] Moroni, J. Manzolli, F. Von Zuben, R. Gudwin, Evolutionary Computation Applied to Algorithmic Composition, Proceedings of CEC99 - IEEE International Conference on Evolutionary Computation, Washington D.C., pp. 807-811, 1999. [Moroni et al., 2000] Moroni, A., Von Zuben, F. and Manzolli, J., ArTbitration, Las Vegas, USA: Proceedings of the 2000 Genetic and Evolutionary Computation Conference Workshop Program - GECCO, pp. 143-145, 2000.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
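
As a minimal sketch of the kind of nonlinear iterative mapping described above (the abstract does not name the mapping, so the logistic map and the pitch-mapping below are assumptions, not the Vox Populi code), an orbit can be folded into a pitch range around a tonal centre:

# Sketch only: iterate a nonlinear mapping and fold its orbit into MIDI-like pitches.
def logistic_orbit(x0: float, r: float, steps: int):
    """Iterate x -> r*x*(1-x) and yield the trajectory."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        yield x

def orbit_to_pitches(orbit, centre: int = 60, span: int = 24):
    """Map orbit values in [0, 1] onto pitches around a tonal centre (MIDI numbers)."""
    return [centre - span // 2 + int(v * span) for v in orbit]

if __name__ == "__main__":
    pitches = orbit_to_pitches(logistic_orbit(0.37, 3.9, 16))
    print(pitches)  # a short melodic trajectory drawn from the mapping's orbit

Steering the parameter r, the centre and the span in real time, as the mouse does in Vox Populi's interface, would change the character of the resulting trajectory.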

_id f9f7
authors Mullins, Michael
year 1999
title Forming, Planning, Imaging and Connecting
doi https://doi.org/10.52842/conf.ecaade.1999.178
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 178-185
summary This paper sets out to define aspects of the architectural design process, using historical precedent and architectural theory, and tests the relationship of those aspects to the application of computers in architectural design, particularly in an educational context. The design process sub-sets are defined as: Forming, Planning, Imaging and Connecting. Historical precedents are uncovered in Classical, Modern, Postmodern and Contemporary architecture. The defined categories of the design process are related to current uses of computers in architectural education, towards elucidating the strengths and weaknesses of digital media in those areas. Indications of their concurrent usage in digital design are demonstrated through analysis of design studio programs presented at recent ACADIA conferences. An example is given of a current design studio programme at the School of Architecture, University of Natal, South Africa, in which the categories described above give an underlying structure to the introduction of 3D digital modelling to undergraduates through the design process. The definition of this set of design activities may offer a useful method for other educators in assessing existing and future design programs where digital tools are used.
keywords Design-Process, Digital-Media, Design-Programmes
series eCAADe
email
last changed 2022/06/07 07:59

_id 2e50
authors Ozersay, Fevzi and Szalapaj, Peter
year 1999
title Theorising a Sustainable Computer Aided Architectural Education Model
doi https://doi.org/10.52842/conf.ecaade.1999.186
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 186-195
summary The dogmatic structure of architectural education has meant that the production and application of new educational theories, leading to educational models that use computer technology as their central medium of education, is still a relatively under-explored area. Partial models cannot deliver the expected bigger steps, but only bits and pieces. Curricula developments at many schools of architecture have been carried out within the closed-circuit manner of architectural education, by expanding the traditional curricula and integrating computers into them. There is still no agreed curriculum in schools of architecture that defines, at least conceptually, the use of computers within it. Do we really know what we are doing? In the words of Aart Bijl: 'If I want to know what I am doing, I need a separate description of my doing it, a theory' [Bijl, 1989]. The word 'sustainability' is defined as understanding the past and responding to the present with concern for the future. Applying this definition to architectural education, this paper aims to outline the necessity and the principles for the construction of a theory of a sustainable computer-aided architectural education model, which could lead to an architectural education that is lasting.
keywords Architectural Education, Educational Theories, Computers, Sustainable Models
series eCAADe
email
last changed 2022/06/07 08:00

_id ga9911
id ga9911
authors Riley, Howard
year 1999
title Semiotics and Generative Art
source International Conference on Generative Art
summary The paper begins with a brief explanation of David Marr’s computational theory of visual perception and his key terms. Marr argued that vision consists in the algorithmic transformation of retinal images, producing viewer-centred and object-centred representations as output from the input at the retinae. Those two kinds of output, the viewer-centred and the object-centred representations, enable us to negotiate the physical world. The paper goes on to suggest that the activity of Drawing is a comparable process of transformation: a picture is a transformation from either viewer-centred or object-centred descriptions, or a combination of both types of representation, to a two-dimensional drawn representation. These pictures may be described as resulting from algorithmic transformations, since picture-making utilises specific geometric procedures for transforming input (our perceptions) into output (our drawings). However, a key point is made about such algorithms: they are culturally determined. They may be defined in terms of the procedure of selecting and combining choices from the matrix of semiotic systems available within a particular social context. These systems are presented in the paper as a chart, and are further correlated with the social functions of a communication system such as Drawing. Thus, the paper proposes a systemic-functional semiotics of Drawing, within which algorithms operate to realise specific cultural values in material form. Familiar algorithms are illustrated, such as those governing the transformation of the physics of an array of light at the eye into the set of representations known as perspective projection systems; also illustrated in the paper are less familiar algorithms devised by artists such as Kenneth Martin and Sol LeWitt. A minimal illustrative sketch of one such algorithm follows this entry.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
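
As a minimal sketch of the most familiar of these algorithms, a simple pinhole (central) perspective projection maps viewer-centred 3D points onto a 2D picture plane; the function name and the focal length below are illustrative assumptions, not taken from the paper:

# Sketch only: central projection of camera-space points onto the image plane z = f.
from typing import Tuple

def project(point: Tuple[float, float, float], f: float = 1.0) -> Tuple[float, float]:
    """Project (x, y, z) onto the picture plane; z is depth along the viewing direction."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the viewer")
    return (f * x / z, f * y / z)

# Points along a receding edge: their images converge as depth grows.
for depth in (2.0, 4.0, 8.0):
    print(project((1.0, 1.0, depth)))

The printed coordinates shrink towards a vanishing point as depth increases, the geometric fact that perspective projection systems encode.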

_id bacd
authors Abadí Abbo, Isaac
year 1999
title APPLICATION OF SPATIAL DESIGN ABILITY IN A POSTGRADUATE COURSE
source Full-scale Modeling and the Simulation of Light [Proceedings of the 7th European Full-scale Modeling Association Conference / ISBN 3-85437-167-5] Florence (Italy) 18-20 February 1999, pp. 75-82
summary Spatial Design Ability (SDA) has been defined by the author (1983) as the capacity to anticipate the effects (psychological impressions) that architectural spaces or their components produce in observers or users. This concept, which requires the evaluation of spaces by the people who use them, was proposed as a guideline for a Master's degree course in Architectural Design at the Universidad Autonoma de Aguascalientes in Mexico. The theory and the exercises required for the experience needed a model that could simulate spaces in terms of all the variables involved. Full-scale modeling, as tested in previous research, offered the most effective means of experimenting with space. A simple, primitive model was designed and built: an articulated ceiling that allows variation in height and shape, and a series of wooden panels for the walls and structure. Several exercises were carried out, mainly to experience cause-effect relationships between spaces and the psychological impressions they produce. Students researched spatial taxonomy, intentional sequences of space and spatial character. Results showed that students achieved the expected anticipation of space and that full-scale modeling, even with a simple model, proved to be an effective tool for this purpose. The low cost of the model and the short time it took to build open up an important possibility for institutions involved in architectural studies, both as a research and as a learning tool.
keywords Spatial Design Ability, Architectural Space, User Evaluation, Learning, Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 11:27

_id 5cba
authors Anders, Peter
year 1999
title Beyond Y2k: A Look at Acadia's Present and Future
doi https://doi.org/10.52842/conf.acadia.1999.x.o3r
source ACADIA Quarterly, vol. 18, no. 1, p. 10
summary The sky may not be falling, but it sure is getting closer. Where will you be when the last three zeros of our millennial odometer click into place? Computer scientists tell us that Y2K will bring the world’s computer infrastructure to its knees. Maybe, maybe not. But it is interesting that Y2K is an issue at all. Speculating on the future is simultaneously a magnifying glass for examining our technologies and a looking glass for what we become through them. "The future" is nothing new. Orwell's vision of totalitarian mass media did come true, if only as Madison Avenue rather than Big Brother. Future boosters of the '50s were convinced that each garage would house a private airplane by the year 2000. But world citizens of the '60s and '70s feared a nuclear catastrophe that would replace the earth with a smoking crater. Others - perhaps more optimistically - predicted that computers were going to drive all our activities by the year 2000. And, in fact, they may not be far off... The year 2000 is a symbolic marker, a point of reflection and assessment. And - as this date is approaching rapidly - this may be a good time to come to grips with who we are and where we want to be.
series ACADIA
email
last changed 2022/06/07 07:49

_id d5c8
authors Angelo, C.V., Bueno, A.P., Ludvig, C., Reis, A.F. and Trezub, D.
year 1999
title Image and Shape: Two Distinct Approaches
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 410-415
summary This paper is the result of two research studies carried out in the district of Campeche, Florianópolis, by the Grupo PET/ARQ/UFSC/CAPES. Different aspects and conceptual approaches were used to study the spatial attributes of this district, located in the southern part of Santa Catarina Island. The readings and analyses of the two studies were based on graphic pictures built with Corel 7.0 and AutoCAD R14. The first study - "Urban Development in the Island of Santa Catarina: Public Space Study" - examined the urban structures of Campeche based on the Space Syntax theory developed by Hillier and Hanson (1984), which relates form and the social appropriation of public spaces. The second study - "Topoceptive Characterisation of Campeche: The Image of a Locality in Expansion in the Island of Santa Catarina" - based on the methodology developed by Kohlsdorf (1996) and also on the visual analysis proposed by Lynch (1960), identified characteristics of this locality with the specific goal of selecting attributes that contributed to the ideas of the place held by its population. The paper consists of an initial exercise of linking these two methods in order to test the complementarity of their analytical tools. Exemplifying the analytical procedures undertaken in the two approaches, the readings carried out - global (of the locality as a whole) and partial (of parts of the settlement) - are presented and compared.
series SIGRADI
email
last changed 2016/03/10 09:47

_id ga9926
id ga9926
authors Antonini, Riccardo
year 1999
title Let's Improvise Together
source International Conference on Generative Art
summary The creators of ‘Let's Improvise Together’ adhere to the idea that, while a multitude of online games is now available in cyberspace, relatively few are focused on providing a positive, friendly and productive experience for the user. Producing this kind of experience is one of the goals of our Amusement Project. To this end, the creation of ‘Let's Improvise Together’ has been guided by dedication to the importance of three themes: the importance of cooperation, the importance of creativity, and the importance of emotion. Description of the game: the avatar arrives in an area where there are many sound-blocks/objects. The avatar can add new objects at will, or add a sound "property" to existing ones. Each object may represent a different sound, though it does not have to. The avatar walks around, chooses the objects it likes, makes copies of these, adds sounds or changes the sounds on existing ones, and then, with all of the sound-blocks combined, makes its personalized "instrument". Any player can then make sounds on the instrument by approaching or bumping into a sound-block, and the way the avatar makes sounds on the instrument can vary. At the end of the improvising session, the ‘composition’ is saved on the instrument site, along with the personalized instrument. In this way, each user of the Amusement Center leaves behind a unique instrumental creation that others who visit the Center later will be able to play and listen to. The fully creative experience of making a new instrument can be had by connecting to the Active Worlds worlds ‘Amuse’ and ‘Amuse2’. Animated, colorful, sounding objects can be assembled by the user in the Virtual Environment as a sort of sounding instrument. We deliberately refrain from using the words musical instrument, because the level of control we have over the sound in terms of rhythm and melody, among other parameters, is very limited. It instead closely resembles the primitive instruments used by humans in some civilizations, or the experience of children making sound out of ordinary objects. The dimension of cooperation is of paramount importance in the process of building and using the virtual sounding instrument. The instrument can be built through one's own effort, but preferably by a team of cooperating users. The cooperation has an important corollary: the sharing of the experience. The shared experience finds its permanence in the collective memory of the sounding instruments built. The sounding instrument can also be seen as a virtual sculpture; indeed, this sculpture is a multimedia one. The objects have properties that range from video animation to sound to virtual physical properties such as solidity. The role of the user's representation in the Virtual World, called an avatar, is important because it conveys, among other things, the user’s emotions. It is worth pointing out that the avatar has no emotions of its own but simply expresses the emotions of the user behind it. In a way, it could be considered a sort of actor performing the script that the user gives it in real time while playing. The other important element of the integration is related to the memory of the experience left by the user in the Virtual World. The new layout is explored and experienced; the layout is a permanent, editable memory. The generative aspects of Let's Improvise Together are the following. The multimedia virtual sculpture left behind by any participating avatar is not the creation of a single author/artist.
The outcome of the synergic interaction of various authors is neither deterministic nor predictable. The authors can indeed use generative algorithms in order to create the textures to be used on the objects. Usually, in our experience, the visitors of the Amuse worlds use shareware programs to generate their textures; in most cases these are simple fractal generators. In principle, it is also possible to generate the shape of an object in a generative way. Taking into account the usual audience of our worlds, we expected visitors to use very simple algorithms that could generate shapes as .rwx files; indeed, no one has attempted to do so so far. As far as the music is concerned, the availability of shareware programs that allow simple generation of sound sequences has made it possible for some users to generate sound sequences to be put into our world. In conclusion, the Let's Improvise section of the Amuse worlds could be open for experimentation on generative art as a very simple entry-point platform. We will be very happy to help anybody who, for educational purposes, would like to use our platform to create and exhibit generative forms of art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 1071
authors Asanowicz, Aleksander
year 1999
title Evolution of Computer Aided Design: Three Generations of CAD
doi https://doi.org/10.52842/conf.ecaade.1999.094
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 94-100
summary This paper describes three generations of CAD systems. The first generation of (primarily analytical) computer programmes really aided designing. These programmes were tools for finding a functional solution in different areas of designing, from flat plans to the spatial organisation of a hospital. One of the shortcomings of these programmes was the lack of a graphic interface. With time, however, this kind of interface was developed. As a result, in this second generation of CAD systems the computer was transformed into a drafting machine and CAD came to mean Computer Aided Drafting. The main thesis of this consideration is that only now do we have the chance to return to the idea of Computer Aided Design. One example of these trends is the AVOCAAD programme, in which the Added Value of CAAD is analysed. The development of the third generation of CAD systems will be possible in the near future. Aiding the process of designing will demand the elaboration of new methods of using the computer at the early stages of this process. The computer should be used not only for generating variants of functional solutions but also for the creation of 3D forms by 3D sketching. For this, the computer should be transformed from a tool into a medium; only then will designing become true Designing in Cyber Space.
keywords Generations of CAAD, Design Process, Creation, Medium
series eCAADe
email
last changed 2022/06/07 07:54

_id 0f1e
authors Barrionuevo, Luis F.
year 1999
title Posicionamiento de Volúmenes Arquitectónicos Mediante Algoritmos Evolucionistas (Positioning of Architectural Volumes by Means of Evolutionist Algorithms)
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 176-181
summary In architectural design, configurational studies involve groups of elements fulfilling restrictions defined by the designer. According to his or her needs and intentions, the planner distributes the components of the group in a certain three-dimensional arrangement, establishing a composition. This operative procedure implies a classification system according to typologies that respond to a larger system, and this in turn to another, until the whole is obtained. From the beginning, the pattern should satisfy form restrictions, as well as dimensional and positional restrictions, for each part that makes up the whole. Functional requirements are attended to for each object by satisfying relationships of connectivity and adjacency among them. In this work, the parts are restricted by their position relative to a central element. Evolutionary Algorithms (EAs) are used to solve this type of problem. Using evolutionary metaphors, they give rise to concepts such as "genes", "chromosomes", "mutation", "crossover" and "population" (among others), which bring the search closer to one of the solutions looked for by the designer, under stochastic combinatorial methods. The most appropriate use of EAs corresponds to NP-complete problems (for example, problems of generating cases of composition), allowing an efficient although not exhaustive analysis. Applying this technique to the generation of architectural volumes, some of the results obtained are exemplified. A minimal illustrative sketch of such an evolutionary positioning loop follows this entry.
series SIGRADI
email
last changed 2016/03/10 09:47
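
The following is a minimal, hypothetical sketch of an evolutionary loop of this kind, not the authors' implementation: candidate layouts place a few volumes around a fixed central element, the fitness rewards adjacency to the centre and penalises crowding, and mutation and crossover breed the next population. All parameters, the fitness terms and the 2D simplification are assumptions.

# Sketch only: evolutionary positioning of volumes around a central element.
import random

N_VOLUMES, POP, GENS = 5, 30, 60
CENTRE = (0.0, 0.0)

def random_layout():
    return [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(N_VOLUMES)]

def fitness(layout):
    # reward closeness (adjacency) to the central element...
    score = -sum(abs(x - CENTRE[0]) + abs(y - CENTRE[1]) for x, y in layout)
    # ...and penalise volumes that crowd each other (a crude proxy for overlap)
    for i, (xi, yi) in enumerate(layout):
        for xj, yj in layout[i + 1:]:
            if abs(xi - xj) < 2 and abs(yi - yj) < 2:
                score -= 20
    return score

def mutate(layout):
    return [(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5)) for x, y in layout]

def crossover(a, b):
    cut = random.randrange(1, N_VOLUMES)
    return a[:cut] + b[cut:]

population = [random_layout() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(POP - len(parents))]

print("best layout:", [(round(x, 1), round(y, 1)) for x, y in max(population, key=fitness)])

A real configurational study would score true 3D overlap and explicit connectivity requirements; the selection-crossover-mutation loop is the part the abstract describes.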

_id 7921
authors Carp, John
year 1999
title Discovering Theory
source AVOCAAD Second International Conference [AVOCAAD Conference Proceedings / ISBN 90-76101-02-07] Brussels (Belgium) 8-10 April 1999, pp. 347-348
summary This abstract describes a course in design theory and methods for students in architecture. It deals with the principles of Morphogenetic Design in a didactical setting. The course is special in that theory and methods are not taught formally, but discovered by the students themselves. The mechanism for acquiring this knowledge is a series of exercises the students have to carry out. Each exercise is put before the students with a minimum of explanation. The purpose of the exercise is explained afterwards, making maximal use of group discussions on the results of the exercise. The discussion focuses on comparing the similarities and differences in the variants the students have produced. The aim of the discussion is firstly to discover the structural properties of these variants (the theory) and secondly to discover the most appropriate way of establishing these structural properties (the methods). The variants come onto the table through the group effort; the habitual reluctance of architects, when required to produce variants, is thus overcome. Morphogenetic themes like creation, evaluation and selection are dealt with in a natural way.
series AVOCAAD
last changed 2005/09/09 10:48

_id a9b0
authors Cha, Myung Yeol and Gero, John
year 1999
title Style Learning: Inductive Generalisation of Architectural Shape Patterns
doi https://doi.org/10.52842/conf.ecaade.1999.629
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 629-644
summary Art historians and critics have defined style as common features appearing in a class of objects. Common features abstracted from a set of objects have been used as a benchmark for dating and locating original works. Common features in shapes are identified through relationships as well as physical properties in shape descriptions. This paper focuses on how the computer recognises common shape properties from a class of shape objects in order to learn style. Shape representation using schema theory has been explored and possible inductive generalisation from shape descriptions has been investigated. A minimal illustrative sketch of such a generalisation follows this entry.
keywords Style, Inductive Generalisation, Knowledge Representation, Shape
series eCAADe
email
last changed 2022/06/07 07:55
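
As a minimal sketch of inductive generalisation over shape descriptions (the attribute-value encoding below is an assumption, not the authors' schema-based representation), the common features of a class can be taken as whatever property values all examples share:

# Sketch only: keep the attribute-value pairs shared by every example in the class.
from functools import reduce

# Hypothetical shape descriptions for facades of one stylistic class.
examples = [
    {"outline": "rectangle", "symmetry": "bilateral", "bays": 3, "roof": "gable"},
    {"outline": "rectangle", "symmetry": "bilateral", "bays": 5, "roof": "gable"},
    {"outline": "rectangle", "symmetry": "bilateral", "bays": 3, "roof": "hip"},
]

def generalise(a: dict, b: dict) -> dict:
    """Keep only the attribute-value pairs on which two descriptions agree."""
    return {k: v for k, v in a.items() if b.get(k) == v}

# The induced 'style' is whatever survives generalisation across all examples.
style = reduce(generalise, examples)
print(style)  # {'outline': 'rectangle', 'symmetry': 'bilateral'}

In this toy class the induced 'style' keeps the rectangular outline and bilateral symmetry while dropping the properties on which the examples disagree.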

_id 1ea1
authors Cheng, Nancy Yen-wen
year 1999
title Digital Design at UO
doi https://doi.org/10.52842/conf.acadia.1999.x.l0k
source ACADIA Quarterly, vol. 18, no. 4, p. 18
summary The University of Oregon Architecture Department has developed a spectrum of digital design, from introductory methods courses to advanced design studios. With a computing curriculum that stresses a variety of tools, architectural issues such as form-making, communication, collaboration, theory-driven design, and presentation are explored. During the first year, all entering students are required to learn 3D modeling, rendering, image-processing and web-authoring in our Introduction to Architectural Computer Graphics course. Through the use of cross-platform software, the two hundred beginning students are able to choose to work in either MacOS or Windows. Students begin learning the software by ‘playing’ with geometric elements and further develop their control by describing assigned architectural monuments. In describing the monuments, they begin with 2D diagrams and work up to complete 3D compositions, refining their models with symbol libraries. By visualizing back and forth between the drafting and modeling modes, the students quickly connect orthogonal plans and sections with their spatial counterparts. Such connections are an essential foundation for further learning.
series ACADIA
email
last changed 2022/06/07 07:49

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A digital Procedure of Building Construction: A practical project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times, when computers had not yet been well developed, there was already some research regarding representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). With the appearance of 2D drawings, these could express only abstract visual thinking and a visually conceptualized vocabulary (Goldschmidt, 1999). Then, with the massive use of physical models in the Renaissance, the form and space of architecture were given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) observed that humans increasingly rely on other specialists, computational agents, and reference materials to augment their cognitive abilities. This discourse was verified by recent research on the conception of design and its expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architectural design gives rise to a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), the Internet, and the restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most applications of computers to construction are restricted to simulations of the building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage during the process of building construction (Madrazo, 2000; Dave, 2000) and to communication (Haymaker, 2000).
In architectural design, concept design was achieved through drawings and models (Mitchell, 1997), while working drawings and even shop drawings were developed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend towards 3D visualization (Johnson and Clayton, 1998) and the difference between designing in the physical environment and in the virtual environment (Maher et al., 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process in the near future (just as physical models played a critical role in the early design process of the Renaissance). This research is combined with two practical building projects, following the progress of construction by using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems.
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings and some imagination. The purpose of this paper is to describe all the related elements according to precision and correctness, to discuss every possibility of different thinking in the design of electrical-mechanical engineering, to receive feedback from construction projects in the real world, and to compare the digital models with conventional drawings.
Through the application of this research, the subtle relations between conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and standard process is proposed, using conventional drawings, digital models and physical buildings. By introducing digital media into the design process of working drawings and shop drawings, there is an opportunity to use digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id ga9907
id ga9907
authors Ciao, Quinsan
year 1999
title Breeds of Artificial Design: Design Thinking in Computing Creation
source International Conference on Generative Art
summary There are many different paradigms or breeds of artificial design schemes, and each addresses artificial design from a different perspective. For instance, design by optimization emphasizes the iterative "trial-and-error" process of alternating generation and evaluation. Design by argumentation addresses the need to objectify and communicate design thinking. Design by rules attempts to summarize design knowledge into recipes. Design by simulation and electronic media offers a forum for evaluating design trials. Case-based design emphasizes experience-based design thinking. Fuzzy reasoning systems provide a computing medium to model and execute design reasoning. Although different, all of these paradigms are related and complement each other. Unification or collaboration of these different paradigms may lie ahead for future research and practice in artificial design.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 5716
authors Cohen Egler, Tamara Tania
year 1999
title Cyberspace: New Forms of Social Interaction
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 253-258
summary Cyberspace gives rise to new forms of communication that transform and expand interaction among people. The objective of our reflection is to understand how space-time relations are changed by the new technologies of communication and information. The starting point of this analysis is the historical dimension of the production, interaction and appropriation of space-time processes, proceeding in the sense of resolving their contemporary forms as defined by the growing technology of daily life. It is possible to notice how communication expands the interaction among companies, institutions and society, because processes and procedures are publicized, reducing disorder and uncertainty. It is a way of making social complexities more accessible and clearer, more easily read by individuals, so that they are able to deal with the complex of opportunities and responsibilities that make up the social system. The fundamental constitution of cybernetic spaces lies in their capacity to make accessible the processes of communication and information, which expand interaction by eliminating intermediaries. The condition of material localization dissolves to give place to communicative interaction. The essence of the question can be stated in the theory that social practices are the result of a cognitive system. That statement sends us to the heart of an analysis of the importance of comprehension as a moment that precedes action, in which societies can be read through a body of knowledge condensed along their social and cultural development. The development of new technologies of communication and information makes nations capable of producing, accumulating and diffusing knowledge, leading to the action of intelligent individuals who write social development.
series SIGRADI
email
last changed 2016/03/10 09:49
