CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 745

_id ddssar0001
id ddssar0001
authors Achten, Henri and Leeuwen, Jos van
year 2000
title Towards generic representations of designs formalised as features
source Timmermans, Harry (Ed.), Fifth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Nijkerk, the Netherlands)
summary Feature-Based Modelling (FBM) is an information modelling technique that allows the formalisation of design concepts and the use of these formal definitions in design modelling. The dynamic nature of design and design information calls for a specialised approach to FBM that takes into account the flexibility and extensibility of Feature Models of designs. Research work in Eindhoven has led to an FBM framework and implementation that can be used to support design. Feature models of a design process have demonstrated the feasibility of using this information modelling technique. To develop the work on FBM in design, three tracks have been initiated: Feature model descriptions of design processes, automated generic representation recognition in graphic representations, and Feature models of generic representations. The paper shows the status of the work in the first two tracks and presents the results of the research work.
series DDSS
last changed 2003/11/21 15:15

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A digital Procedure of Building Construction: A practical project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times, when computers had not yet been well developed, there was already some research regarding representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). Until the appearance of 2D drawings, these drawings could only express abstract visual thinking and a visually conceptualized vocabulary (Goldschmidt, 1999). Then, with the massive use of physical models in the Renaissance, the form and space of architecture were given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) observed that humans increasingly rely on other specialists, computational agents, and materials to augment their cognitive abilities. This discourse was verified by recent research on the conception of design and its expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architectural design has given rise to a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), the Internet, and the restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most of the application of computers to construction is restricted to simulations of the building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage of the building construction process (Madrazo, 2000; Dave, 2000) and to communication (Haymaker, 2000). In architectural design, concept design was achieved through drawings and models (Mitchell, 1997), while the working drawings and even shop drawings were brewed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend of 3D visualization (Johnson and Clayton, 1998) and the differences between designing in the physical environment and in the virtual environment (Maher et al., 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process in the near future (just as physical models played a critical role in the early design process of the Renaissance). This research is combined with two practical building projects, following the progress of construction and using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems. 
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings and some imagination. The purpose of this paper is to describe all the related elements according to precision and correctness, to discuss every possibility of different thinking in the design of electro-mechanical engineering, to receive feedback from construction projects in the real world, and to compare the digital models with conventional drawings. Through the application of this research, the subtle relations between conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and a standard process are proposed that use conventional drawings, digital models and physical buildings. By introducing the intervention of digital media into the design process of working drawings and shop drawings, there is an opportune chance to use digital media as a prominent design tool. This study extends the use of digital models and animations from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id ga0025
id ga0025
authors Chiodi, Andrea and Vernillo, Marco M.
year 2000
title Deep Architectures and Exterior Communication in Generative Art
source International Conference on Generative Art
summary Human beings formulate their thoughts through their own language. To use a sentence by Ezra Pound: “The thought hinges on word definition.” Software beings formulate their thoughts through data structures: not through a specific expressive means, but directly through concepts and relations. Human beings formulate their thoughts in a context which does not require any further translation. If software beings want to be appreciated by human beings, they are forced to translate their thoughts into one of the languages that human beings are able to understand. On the contrary, when a software being communicates with another software being, this unnatural translation is not justified: communication takes place directly through data structures, made uniform by suitable communication protocols. The Generative Art perspective gives software beings the opportunity to create works according to their own nature. But if the result of such a creation must be expressed in a language human beings are able to comprehend, then this result is a sort of circus performance and not a free thought. Let’s give software beings the dignity they deserve and therefore allow them to express themselves according to their own nature: by data structures. This work studies in depth the opportunity to separate the communication of the software ‘thought’ from its translation into a human language. The recent introduction of XML leads to the definition of formal languages oriented to data structure representation. Intrinsically both data and program, XML allows, through subsequent executions and validations, the realization of descriptions typical of contextual grammars, allowing the management of high complexity. The translation from a data structure into a human language can take place later on and can be oriented to different alternative kinds of expression: lexical (according to national languages), graphical, musical, plastic. The direct expression of data structures promises further communication opportunities for human beings as well. One of these is the definition of a non-national language, as free as possible from lexical ambiguities and extremely precise. Another opportunity concerns the possibility to express concepts usually hidden by their own representation. A Roman bridge, the adagio from Bartók’s “Music for Strings, Percussion and Celesta” and Kafka’s short story “In the Gallery” have something in common; a work of Generative Art, first expressed in terms of structure and then translated into an architectural, musical, or literary work, can express this commonality explicitly.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
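
Note: a minimal Python sketch of the separation argued for in the abstract above - a generative work expressed first as a data structure (XML) and only later, optionally, translated into a human language. The element names and the rendering rule are hypothetical, not the authors' scheme.
# A generative "thought" expressed purely as a data structure (XML),
# then translated into human language by a separate, optional step.
# Element names ("work", "motif", "relation") are hypothetical.
import xml.etree.ElementTree as ET

structure = ET.fromstring("""
<work>
  <motif id="a" character="arch"/>
  <motif id="b" character="gallery"/>
  <relation from="a" to="b" kind="contains"/>
</work>
""")

def render_as_text(work):
    """One possible human-language translation of the structure."""
    motifs = {m.get("id"): m.get("character") for m in work.findall("motif")}
    lines = []
    for rel in work.findall("relation"):
        lines.append(f"The {motifs[rel.get('from')]} {rel.get('kind')} the {motifs[rel.get('to')]}.")
    return "\n".join(lines)

print(render_as_text(structure))  # -> "The arch contains the gallery."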

_id 08ea
authors Clayton, Mark J. and Vasquez de Velasco, Guillermo P. (Eds.)
year 2000
title ACADIA 2000: Eternity, Infinity and Virtuality in Architecture
doi https://doi.org/10.52842/conf.acadia.2000
source Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8 / Washington D.C. 19-22 October 2000, 284 p.
summary Eternity, time without end; infinity, space without limits; and virtuality, perception without constraints: these provide the conceptual framework in which ACADIA 2000 is conceived. It is in human nature to fill what is empty and to empty what is full. Today, thanks to the power of computer processing, we can also make small what is too big, make big what is too small, make fast what is too slow, make slow what is too fast, make real what does not exist, and make our reality omnipresent at a global scale. These are capabilities for which we have no precedents. What we make of them is our privilege and responsibility. Information about a building flows past our keyboards and on to other people. Although we, as architects, add to the information, it originated before us and will go beyond our touch in time, space and understanding. A building description acquires a life of its own that may surpass our own lives as it is stored, transferred, transformed, and reused by unknown intellects, both human and artificial, and in unknown processes. Our actions right now have unforeseen effects. Digital media blur the boundaries of space, time and our perception of reality. ACADIA 2000 explores the theme of time, space and perception in relation to the information and knowledge that describes architecture. Our invitation to those who are finding ways to apply computer processing power in architecture received an overwhelming response, generating paper submissions from five continents. A selected group of reviewers recommended the publication of 24 original full papers out of 42 submitted and 13 short papers out of 30 submitted. Forty-two projects were submitted to the Digital Media Exhibit and 12 were accepted for publication. The papers cover subjects in design knowledge, design process, design representation, design communication, and design education. Fundamental and applied research has been carefully articulated, resulting in developments that may have an important impact on the way we practice and teach architecture in the future.
series ACADIA
email
more www.acadia.org
last changed 2022/06/07 07:49

_id ga0007
id ga0007
authors Coates, Paul and Miranda, Pablo
year 2000
title Swarm modelling. The use of Swarm Intelligence to generate architectural form
source International Conference on Generative Art
summary '... neither the human purposes nor the architect's method are fully known in advance. Consequently, if this interpretation of the architectural problem situation is accepted, any problem-solving technique that relies on explicit problem definition, on distinct goal orientation, on data collection, or even on non-adaptive algorithms will distort the design process and the human purposes involved.' (Stanford Anderson, "Problem-Solving and Problem-Worrying"). The work concentrates on the use of the computer as a perceptive device, a sort of virtual hand or "sense", capable of prompting an environment. From a set of data that constitutes the environment (in this case the geometrical representation of the form of the site), this perceptive device is capable of differentiating and generating distinct patterns in its behavior, patterns that an observer has to interpret as meaningful information. As Nicholas Negroponte explains, referring to the project GROPE in his Architecture Machine: 'In contrast to describing criteria and asking the machine to generate physical form, this exercise focuses on generating criteria from physical form.' 'The onlooking human or architecture machine observes what is "interesting" by observing GROPE's behavior rather than by receiving the testimony that this or that is "interesting".' The swarm as a learning device. In this case the work implements a swarm as a perceptive device. Swarms constitute a paradigm of parallel systems: a multitude of simple individuals aggregate in colonies or groups, giving rise to collaborative behaviors. The individual sensors cannot learn, but the swarm as a system can evolve into more stable states. These states generate distinct patterns, a result of the inner mechanics of the swarm and of the particularities of the environment. The dynamics of the system allow it to learn and adapt to the environment; information is stored in the speed of the sensors (the more collisions, the slower), which acts as a memory. The speed increases in the absence of collisions, providing the system with the ability to forget, indispensable for the differentiation of information and the emergence of patterns. The swarm is both a perceptive and a spatial phenomenon. To be able to interact with an environment, an observer requires some sort of embodiment. In the case of the swarm, its algorithms for movement, collision detection, and swarm mechanics constitute its perceptive body. The way this body interacts with its environment in the process of learning and differentiation of spatial patterns is also a spatial phenomenon. The enactive space of the swarm. Enaction, a concept developed by Maturana and Varela for the description of perception in biological terms, is the understanding of perception as the result of the structural coupling of an environment and an observer. Enaction does not address cognition in the currently conventional sense as an internal manipulation of extrinsic 'information' or 'signals', but as the relation between environment and observer and the blurring of their identities. Thus, the space generated by the swarm is an enactive space, a space without explicit description, an invention of the swarm-environment structural coupling. If we consider a gestalt as 'some property - such as roundness - common to a set of sense data and appreciated by organisms or artefacts' (Gordon Pask), the swarm is also able to differentiate spatial 'gestalts', or spaces with certain characteristics, such as 'narrowness' or 'fluidness'. 
Implicit surfaces and the wrapping algorithm. One of the many ways of describing this space is through the use of implicit surfaces. An implicit surface may be imagined as an infinitesimally thin band of some measurable quantity such as color, density, temperature, pressure, etc. Thus, an implicit surface consists of those points in three-space that satisfy some particular requirement. This allows us to wrap the regions of space where a difference of quantity has been produced, enclosing the spaces in which particular events in the history of the swarm have occurred. The wrapping method allows complex topologies, such as manifoldness in one continuous surface. It is possible to transform the information generated by the swarm into a landscape that is the result of the swarm's particular reading of the site. Working in real time. Because of the complex nature of the machine, the only possible way to evaluate the resulting behavior is in real time. For this purpose specific applications had to be developed, using OpenGL in the Windows programming environment. The package consisted of translators from the DXF format to a specific format used by these applications and vice versa, the swarm "engine" (a simulated parallel environment), and the wrapping programs to generate the implicit surfaces. Different versions of each have been produced at different stages of development of the work.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
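
Note: a minimal Python sketch of the mechanism described in the abstract above - swarm agents whose speed drops on collision (acting as memory) and recovers in collision-free steps (forgetting). The obstacle field, parameters and movement rules are hypothetical stand-ins, not the authors' implementation.
# Swarm agents move in a 2D field; collisions with obstacles slow them down
# (memory), and collision-free steps let speed recover (forgetting).
import random, math

OBSTACLES = [(3.0, 3.0), (7.0, 6.0)]   # hypothetical site geometry (points)
COLLISION_RADIUS = 1.0

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 10), random.uniform(0, 10)
        self.speed = 1.0                       # the "memory" variable
        self.heading = random.uniform(0, 2 * math.pi)

    def step(self):
        # Random walk scaled by current speed.
        self.heading += random.uniform(-0.5, 0.5)
        self.x = min(10, max(0, self.x + self.speed * math.cos(self.heading)))
        self.y = min(10, max(0, self.y + self.speed * math.sin(self.heading)))
        collided = any(math.hypot(self.x - ox, self.y - oy) < COLLISION_RADIUS
                       for ox, oy in OBSTACLES)
        if collided:
            self.speed *= 0.5                         # more collisions -> slower (memory)
        else:
            self.speed = min(1.0, self.speed * 1.05)  # recovery -> "forgetting"

swarm = [Agent() for _ in range(50)]
for _ in range(200):
    for agent in swarm:
        agent.step()

# Slow agents mark regions the swarm has "read" as congested or narrow.
slow = [(round(a.x, 1), round(a.y, 1)) for a in swarm if a.speed < 0.2]
print(f"{len(slow)} agents settled near obstacles:", slow[:5])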

_id sigradi2006_e183a
id sigradi2006_e183a
authors Costa Couceiro, Mauro
year 2006
title La Arquitectura como Extensión Fenotípica Humana - Un Acercamiento Basado en Análisis Computacionales [Architecture as human phenotypic extension – An approach based on computational explorations]
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 56-60
summary The study describes some of the aspects tackled within a current Ph.D. research project in which architectural applications of constructive, structural and organizational processes existing in biological systems are considered. The present information-processing capacity of computers and specific software developments have allowed the creation of a bridge between two disciplines of a holistic nature: architecture and biology. The crossover between these disciplines entails a methodological paradigm change towards a new one based on the dynamical aspects of forms and compositions. Recent studies about artificial-natural intelligence (Hawkins, 2004) and developmental-evolutionary biology (Maturana, 2004) have added fundamental knowledge about the role of analogy in the creative process and the relationship between forms and functions. The dimensions and restrictions of the Evo-Devo concepts are analyzed, developed and tested by software that combines parametric geometries, L-systems (Lindenmayer, 1990), shape grammars (Stiny and Gips, 1971) and evolutionary algorithms (Holland, 1975) as a way of testing new architectural solutions within computable environments. Lamarck's (1744-1829) and Weismann's (1834-1914) theoretical approaches to evolution, in which significantly opposing views can be found, are pondered. Lamarck's theory assumes that an individual effort towards a specific evolutionary goal can cause change in descendants. Weismann, on the other hand, held that the germ cells are not affected by anything the body learns or any ability it acquires during its life, and cannot pass this information on to the next generation; this is called the Weismann barrier. Lamarck's widely rejected theory has recently found a new place in artificial and natural intelligence research as a valid explanation of some aspects of the evolution of human knowledge, that is, the deliberate change of paradigms in the intentional search for solutions. Just as the analogy between genetics and architecture (Estévez and Shu, 2000) is useful for understanding and programming emergent complexity phenomena (Hopfield, 1982) in architectural solutions, so the consideration of architecture as a product of a human extended phenotype can help us to better understand its cultural dimension.
keywords evolutionary computation; genetic architectures; artificial/natural intelligence
series SIGRADI
email
last changed 2016/03/10 09:49
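
Note: a minimal Python sketch of one of the generative techniques combined in the abstract above, a single L-system rewriting step; the axiom and rules are Lindenmayer's classic algae example, used only to illustrate the mechanism, not the author's own grammar.
# Minimal L-system: repeated parallel rewriting of a string by production rules.
RULES = {"A": "AB", "B": "A"}   # classic algae example (Lindenmayer)

def rewrite(axiom: str, generations: int) -> str:
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

for n in range(5):
    print(n, rewrite("A", n))
# 0 A
# 1 AB
# 2 ABA
# 3 ABAAB
# 4 ABAABABA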

_id ed0a
authors Cuberos Mejía, R., Indriago, J.A. and Luengo, E.B.
year 2000
title Nuevos Paradigmas em la Informática Aplicada al Diseño Urbano y Arquitectónico (New Paradigms in the Application of Computing in Urban and Architectural Design)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 123-125
summary The incorporation of computer science into architecture has been happening in an evolutionary process in which we can appreciate a transition from single tools of process automation to a hybrid and heterogeneous group of methods that are radically transforming professional labor. This paper describes the authors' experiences in three instances of computer science applied to urban and architectural design, developed both in the university academic environment and in professional consulting. The document not only emphasizes the nature and modality of each method, but also describes the philosophical and conceptual impact that each of them implies for teaching and for making architecture.
series SIGRADI
email
last changed 2016/03/10 09:49

_id 9403
authors De Carvalho, Silvana Sá
year 2000
title A Telemática e o Meio Técnico-Científico-Informacional: Um Olhar sobre o Urbano (Telematics and the Technical-Scientific-Informational Environment: An Urban View)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 160-162
summary The instantaneous nature of globalized information has brought places closer together and homogenized space, eliminating regional differences. Contemporary urban architecture and the technical-scientific-informational quality of the human-made environment renew the rationality of the dominant actors in society. The field of telecommunications has developed substantially in the last 30 years, and today we are participants in a digital era that has not only shortened distances but revolutionized the concepts of time and space. Telematics is a fundamental element of cities at the end of the millennium and has become a new instrument of social control. Electronic surveillance systems, as an application of telematics, are now widely used in cities, and a new urban space is being configured based on this dynamic. This paper is an introductory essay on the topic, which is essential in the understanding of urban spatial dynamics, and its objective is to point out fields for future research.
series SIGRADI
email
last changed 2016/03/10 09:50

_id 349e
authors Durmisevic, Sanja
year 2002
title Perception Aspects in Underground Spaces using Intelligent Knowledge Modeling
source Delft University of Technology
summary The intensification, combination and transformation are the main strategies for the future spatial development of the Netherlands, as stated in the Fifth Bill regarding Spatial Planning. These strategies indicate that in the future, space should be utilized in a more compact and more efficient way, requiring at the same time a re-evaluation of the existing built environment and finding ways to improve it. In this context, the concept of multiple space usage is accentuated, which would focus on intensive 4-dimensional spatial exploration. Underground space is acknowledged as an important part of multiple space usage. In the document 'Spatial Exploration 2000', underground space is recognized by policy makers as an important new 'frontier' that could provide a significant contribution to future spatial requirements. In a relatively short period, underground space became an important research area. Although among specialists there is appreciation of what underground space could provide for densely populated urban areas, there are still reserved feelings among the public, which mostly relate to the poor quality of these spaces. Many realized underground projects, namely subways, resulted in poor user satisfaction. Today, there is still a significant knowledge gap related to the perception of underground space. There is also a lack of detailed documentation on actual applications of the theories, followed by research results and applied techniques. This is the case in different areas of architectural design, but for underground spaces it is perhaps most evident due to their infancy in general architectural practice. In order to create better designs, diverse aspects, which are very often of a qualitative nature, should be considered in perspective, with the final goal of improving the quality and image of underground space. In the architectural design process, one has to establish certain relations among design information in advance, to make the design backed by a sound rationale. The main difficulty at this point is that such relationships may not be determinable, for various reasons. One example may be the vagueness of the architectural design data due to its linguistic qualities. Another may be vaguely defined design qualities. In this work, the problem was not only the initial fuzziness of the information but also the determination of the desired relevancy among all pieces of information given. Presently, determining the existence of such relevancy is more or less a matter of subjective architectural judgement rather than systematic, non-subjective decision-making based on an existing design. This implies that the invocation of certain tools dealing with fuzzy information is essential for enhanced design decisions. Efficient methods and tools to deal with qualitative, soft data are scarce, especially in the architectural domain. Traditionally well-established methods, such as statistical analysis, have been used mainly for data analysis of types similar to that in the present research. These methods mainly fall into the category of pattern recognition. Statistical regression methods are the most common approaches towards this goal. One essential drawback of these methods is the inability to deal efficiently with non-linear data. With statistical analysis, linear relationships are established by regression analysis, where dealing with non-linearity is mostly evaded. 
Concerning the presence of multi-dimensional data sets, it is evident that the assumption of linear relationships among all pieces of information would be a gross approximation, which one has no basis to assume. A starting point in this research was that there may be both linearity and non-linearity present in the data, and therefore appropriate methods should be used in order to deal with that non-linearity. Therefore, some other commensurate methods were adopted for knowledge modeling. In that respect, soft computing techniques proved to match the quality of the multi-dimensional data set subject to analysis, which is deemed to be 'soft'. There is yet another reason why soft computing techniques were applied, which is related to the automation of knowledge modeling. In this respect, traditional models such as Decision Support Systems and Expert Systems have drawbacks. One important drawback is that the development of these systems is a time-consuming process. The programming part, in which various deliberations are required to form a consistent if-then rule knowledge-based system, is also a time-consuming activity. For these reasons, methods and tools from other disciplines, which also deal with soft data, should be integrated into architectural design. With fuzzy logic, the imprecision of data can be dealt with in a similar way to how humans do it. Artificial neural networks are deemed to some extent to model the human brain and simulate its functions in the form of parallel information processing. They are considered important components of Artificial Intelligence (AI). With neural networks, it is possible to learn from examples, or more precisely to learn from input-output data samples. The combination of the neural and fuzzy approaches proved to be a powerful one for dealing with qualitative data. The problem of automated knowledge modeling is efficiently solved by the employment of machine learning techniques. Here, the expertise of Prof. Dr. Ozer Ciftcioglu in the field of soft computing was crucial for tool development. By combining knowledge from two different disciplines, a unique tool could be developed that enables intelligent modeling of the soft data needed to support the building design process. In this respect, this research is a starting point in that direction. It is multidisciplinary and on the cutting edge between the field of Architecture and the field of Artificial Intelligence. From the architectural viewpoint, the perception of space is considered through the relationship between a human being and the built environment. Techniques from the field of Artificial Intelligence are employed to model that relationship. Such an efficient combination of two disciplines makes it possible to extend our knowledge boundaries in the field of architecture and improve design quality. With additional techniques, meta-knowledge, or in other words "knowledge about knowledge", can be created. Such techniques involve sensitivity analysis, which determines the amount of dependency of the output of a model (comfort and public safety) on the information fed into the model (input). Another technique is functional relationship modeling between aspects, which is the derivation of the dependency of a design parameter as a function of users' perceptions. With this technique, it is possible to determine functional relationships between dependent and independent variables. 
This thesis is a contribution to a better understanding of users' perception of underground space, through the prism of public safety and comfort, achieved by means of intelligent knowledge modeling. In this respect, the thesis demonstrated an application of ICT (Information and Communication Technology) as a partner in the building design process by employing advanced modeling techniques. The method explained throughout this work is very generic and can be applied not only to different areas of architectural design, but also to other domains that involve qualitative data.
keywords Underground Space; Perception; Soft Computing
series thesis:PhD
email
last changed 2003/02/12 22:37
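
Note: a minimal Python sketch of the sensitivity analysis mentioned in the abstract above - perturbing each input of a trained model and measuring the change in its output. The model function, its weights and the parameter names are hypothetical placeholders, not the thesis's neuro-fuzzy system.
# Finite-difference sensitivity analysis: perturb each input of a model
# and measure the change in its output.
def model(inputs):
    # inputs: dict of normalized design parameters (0..1); names are hypothetical
    # placeholder response standing in for a trained "perceived safety" model
    return (0.5 * inputs["lighting"]
            + 0.2 * inputs["ceiling_height"]
            + 0.3 * inputs["sightlines"])

def sensitivities(model, inputs, eps=1e-3):
    base = model(inputs)
    result = {}
    for name in inputs:
        perturbed = dict(inputs)
        perturbed[name] += eps                     # nudge one input at a time
        result[name] = (model(perturbed) - base) / eps
    return result

design = {"lighting": 0.6, "ceiling_height": 0.4, "sightlines": 0.7}
print(sensitivities(model, design))
# -> lighting dominates: {'lighting': ~0.5, 'ceiling_height': ~0.2, 'sightlines': ~0.3}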

_id 9747
authors Ferrar, Steve
year 1999
title New Worlds; New Landscapes
doi https://doi.org/10.52842/conf.ecaade.1999.424
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 424-430
summary Evolution, said Julian Huxley, is in three different sectors. The first is organic - the cosmic process of matter. The second is biological - the evolution of plants and animals. The third is psychological and is the development of man's cultures. It is this third stage that is now critical, and if we are to survive as a species it can only be by replacing nature's controls by our own, not only birth control but our use of the whole environment. (Nan Fairbrother, New Lives, New Landscapes)
keywords Virtual Environments, Future, Culture
series eCAADe
email
last changed 2022/06/07 07:56

_id ga0024
id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence in 1991, at the age of 8, as the direct consequence of years of attempts by a group of computer science professionals to use digital computer technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of “Littérature potentielle” deployed by Oulipo in France and Oplepo in Italy (see “La Littérature potentielle (Créations Re-créations Récréations)”, published in France by Gallimard in 1973). During the last decade, TEAnO has been involved in the generation of “artistic goods” in aesthetic domains such as literature, music, theatre and painting. In all those artefacts the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets or to generate antonymic music) and sometimes it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered “manufactured” goods. A great part of this creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the use of the digital computer's massive combinatory power: S+n, edge extraction, phonetic manipulation, re-writing of well-known masterpieces, random generation of plots, etc. Regardless of these apparently simple underlying generation mechanisms, the systematic use of computer-based tools, as well as the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer-based generation of aesthetic goods: (1) the deep structure of an aesthetic work persists even through the more “destructive” manipulations (such as the antonymic transformation of the melody and lyrics of a music work) and becomes evident as a sort of profound, earliest and distinctive constraint; (2) the intensive flux of computer-generated “raw” material seems to confirm and bring to our attention the existence of what Walter Benjamin indicated as the different way in which nature talks to a camera and to our eye, and what Franco Vaccari called “technological unconsciousness”. Essential references: R. Campagnoli, Y. Hersant, “Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)”, 1985; R. Campagnoli, “Oupiliana”, 1995; TEAnO, “Quaderno n. 2 Antologia di letteratura potenziale”, 1996; W. Benjamin, “Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit”, 1936; F. Vaccari, “Fotografia e inconscio tecnologico”, 1994
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 600e
authors Gavin, Lesley
year 1999
title Architecture of the Virtual Place
doi https://doi.org/10.52842/conf.ecaade.1999.418
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 418-423
summary The Bartlett School of Graduate Studies, University College London (UCL), set up the first MSc in Virtual Environments in the UK in 1995. The course aims to synthesise and build on research work undertaken in the arts, architecture, computing and biological sciences in exploring the realms of the creation of digital and virtual immersive spaces. The MSc is concerned primarily with equipping students from design backgrounds with the skills, techniques and theories necessary for the production of virtual environments. The course examines virtual worlds as prototypes for real urban or built form and, over the last few years, has also developed an increasing interest in the practice of architecture in purely virtual contexts. The MSc course is embedded in the UK government sponsored Virtual Reality Centre for the Built Environment, which is hosted by the Bartlett School of Architecture. This centre involves the UCL departments of architecture, computer science and geography and includes industrial partners from a number of areas concerned with the built environment, including architectural practice, surveying and estate management, as well as some software companies and the telecoms industry. The first cohort of students graduated in 1997 and predominantly found work in companies working in the new market area of digital media. This paper aims to outline the nature of the course as it stands, examine the new and ever-increasing market for designers within digital media, and propose possible future directions for the course.
keywords Virtual Reality, Immersive Spaces, Digital Media, Education
series eCAADe
email
more http://www.bartlett.ucl.ac.uk/ve/
last changed 2022/06/07 07:51

_id 326c
authors Hirschberg, U., Gramazio, F., Höger, K., Liaropoulos Legendre, G., Milano, M. and Stöger, B.
year 2000
title EventSpaces. A Multi-Author Game And Design Environment
doi https://doi.org/10.52842/conf.ecaade.2000.065
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 65-72
summary EventSpaces is a web-based collaborative teaching environment we developed for our elective CAAD course. Its goal is to let the students collectively design a prototypical application - the EventSpaces.Game. The work students do to produce this game and the process of how they interact is actually a game in its own right. It is a process that is enabled by the EventSpaces.System, which combines work, learning, competition and play in a shared virtual environment. The EventSpaces.System allows students to criticize, evaluate, and rate each other's contributions, thereby distributing the authorship credits of the game. The content of the game is therefore created in a collaborative as well as competitive manner. In the EventSpaces.System, the students form a community that shares a common interest in the development of the EventSpaces.Game. At the same time they are competing to secure as much credit as possible for themselves. This playful incentive in turn helps to improve the overall quality of the EventSpaces.Game, which is in the interest of all authors. This whole, rather intricate functionality, which also includes a messaging system for all EventSpaces activities, is achieved by means of a database-driven online working environment that manages and displays all works produced. It preserves and showcases each author's contributions in relation to the whole and allows for the emergence of coherence from the multiplicity of solutions. This paper first presents the motivation for the project and gives a short technical summary of how the project was implemented. Then it describes the nature of the exercises and discusses possible implications that this approach to collaboration and teaching might have.
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:50

_id b352
authors Kilkelly, Michael
year 2000
title Off The Page: Object-Oriented Construction Drawings
doi https://doi.org/10.52842/conf.acadia.2000.147
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 147-151
summary This paper discusses methods by which inefficiencies in the construction documentation process can be addressed through the application of digital technology. These inefficiencies are directly related to the time-consuming nature of the construction documentation process, given that the majority of time is spent reformatting and redrawing previous details and specifications. The concepts of object-oriented programming are used as an organizational framework for construction documentation. Database structures are also used as a key component of information reuse in the documentation process. A prototype system is developed as an alternative to current Computer-Aided Drafting software. This prototype, the Drawing Assembler, functions as a graphic search engine for construction details. It links a building component database with a construction detail database through the intersection of dissimilar objects.
series ACADIA
last changed 2022/06/07 07:52

_id ga0005
id ga0005
authors Kubasiewicz, Jan and Jang, DK  
year 2000
title InfoGEOMETRY. Conceptual Prototype for Navigating InfoSPACE
source International Conference on Generative Art
summary InfoGEOMETRY is the word the authors use to describe the concept of utilizing geometric patterns and dynamic symmetry in graphical user interface design for navigating complex information. This paper refers to a specific collaborative project in which the concept of infoGeometry was first introduced as an alternative tool of information architecture. In their design process, the authors tried to reconcile the visual nature of the geometric vocabulary with the parametric nature of interface design and the dynamic nature of information organization. The project resulted in experimental interactive tools for information search and for navigating complex information structures. 2. YOU ARE HERE. A study in interactivity. This paper refers to a studio project in interface design, conducted at the Massachusetts College of Art in Boston, where individual designers explored essential concepts in navigating complex structures of information. Taking the notion of You-Are-Here as a point of departure, individual designers explored various definitions and interpretations of the notion's three components: You (We/They, etc.) - Are (Will Be/Have Been, etc.) - Here (in Time/in Space). Exploring specific instances of parametric design and developing linked, multiple representations for information display resulted in a broad spectrum of contexts associated with navigation. Specific descriptions of individual instances will accompany the final presentation of the project.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 96a7
authors Li, Heng and Love, Peter E.D.
year 2000
title Genetic search for solving construction site-level unequal-area facility layout problems
source Automation in Construction 9 (2) (2000) pp. 217-226
summary A construction site represents a conflux of concerns, constantly calling for a broad and multi-criteria approach to solving problems related to site planning and design. As an important part of site planning and design, the objective of site-level facility layout is to allocate appropriate locations and areas for accommodating temporary site-level facilities such as warehouses, job offices, workshops and batch plants. Depending on the size, location and nature of the project, the required temporary facilities may vary. The layout of facilities can influence the production time and cost of projects. In this paper, a construction site-level facility layout problem is described as allocating a set of predetermined facilities to a set of predetermined places, while satisfying layout constraints and requirements. A genetic algorithm system, a computational model of Darwinian evolution theory, is employed to solve the facility layout problem. A case study is presented to demonstrate the efficiency of the genetic algorithm system in solving construction site-level facility layout problems.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
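
Note: a minimal Python sketch of the formulation in the abstract above - assigning predetermined facilities to predetermined locations with a genetic algorithm over permutations. The distance matrix, trip frequencies and GA parameters are hypothetical, and the unequal-area and constraint handling of the paper itself is omitted.
# Toy GA for a site-level layout problem: assign N facilities to N locations
# so as to minimize total (trip frequency x distance) cost.
import random

random.seed(1)
N = 5                                                                 # facilities / locations
DIST = [[abs(i - j) * 10 for j in range(N)] for i in range(N)]        # distances between locations
FREQ = [[random.randint(0, 9) for _ in range(N)] for _ in range(N)]   # trips between facilities

def cost(layout):  # layout[f] = location assigned to facility f
    return sum(FREQ[a][b] * DIST[layout[a]][layout[b]]
               for a in range(N) for b in range(N))

def mutate(layout):
    i, j = random.sample(range(N), 2)
    child = layout[:]
    child[i], child[j] = child[j], child[i]          # swap two assignments
    return child

population = [random.sample(range(N), N) for _ in range(30)]
for _ in range(200):
    population.sort(key=cost)
    parents = population[:10]                        # simple truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = min(population, key=cost)
print("best layout:", best, "cost:", cost(best))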

_id 1bf8
authors Martens, B., Uhl, M., Tschuppik, W.-M. and Voigt, A.
year 2000
title Synagogue Neudeggergasse: A Virtual Reconstruction in Vienna
doi https://doi.org/10.52842/conf.acadia.2000.213
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 213-218
summary Issues associated with virtual reconstruction are dealt with first in this paper. Visualizing no-longer-existent (architectural) objects and their surroundings practically amounts to a “virtual comeback”. Furthermore, special attention is given to the description of the working procedure for a reconstruction case study sounding out the potentials of QuickTime VR. The paper ends with a set of conclusions, taking a close look at the “pros” and “cons” of this type of re-construction. 1 Introduction: Irreversible destruction, having removed identity-establishing buildings from the urban surface for all time, is the principal cause for the attempt at renewed “imaginating”. When dealing with such reconstruction, the problem of the reliability of the existing basic material first has to be tackled. Due to their two-dimensional recording, photographs only supply us with restricted information about the object under consideration. Thus the missing part has to be supplemented or substituted from additional sources. Within the process of assembling and overlaying differing data sets, the way of dealing with such fragmentations becomes of major importance. Priority is given to the choice of information. One of the most elementary items of information regarding the perception of three-dimensional objects is surely the effect that color and material furnish. It seems to suggest itself that black-and-white shots will hardly prove valid in this respect. The three-dimensional object doubtlessly provides us with a by far greater variety of possibilities in the subsequent working process than the “cardboard model with pasted-on facade photography”. Only a completely designed model structure makes it possible to visualize the plastic representational form of architecture in a sustainable manner. Furthermore, a virtual model can be dismantled into part models without this amounting to a destruction process. Apart from that, the virtual model permits the generation of differing reconstruction variants regarding color and material. Moreover, architectural models of a physical nature are inherently connected to locality as such.
series ACADIA
email
last changed 2022/06/07 07:59

_id 1bae
authors Murad, Carlos Alberto
year 2000
title O Fotográfico e o Fotopoético na criação digital (The Photographic and the "Photopoetic" in Digital Creation)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 325-327
summary This paper discusses the aesthetic motivations that would lead to the appropriation of the photographic image by different forms of imagetic creation, without considering the nature of the base medium. The study looks at the idiosyncrasies of the gaze, the nature of the photographic image, and its aesthetic apprehension as a photopoetic reality. This is what would lead creators working in different imagetic poetics to appropriate the photographic. The theoretical basis of this study is Bachelard's phenomenology of the creative imagination and the poetic image.
series SIGRADI
email
last changed 2016/03/10 09:55

_id 2aec
authors Oxman, Rivka
year 2000
title Design media for the cognitive designer
source Automation in Construction 9 (4) (2000) pp. 337-346
summary Work on media for design which are responsive to the cognitive processes of the human designer is introduced as a paradigm for research and development. Design media are intended to support the cognitive nature of design and, particularly, the exploitation of design knowledge in computational environments. Basic theoretical assumptions are presented which underlie the development of design media. A central assumption is that designers share common forms of design knowledge which can be formalized, represented, and employed in computational environments. Generic knowledge is proposed as one such seminal form of design knowledge. We then develop a cognitive model which relates to the internal mental representations, strategies and mechanisms of generic design. The paper emphasizes the theoretical foundations of design media. This theoretical discussion is then exemplified through case studies presenting current research on the support of visual cognition in design. We introduce an approach to the design schema as a visual form of generic design knowledge. Secondly, we present a conceptual framework for the support of schema emergence in visual reasoning in design media. Finally, some implications of schema emergence for design collaboration are presented and discussed.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:23

_id 8b5e
authors Papamichael, Konstantinos
year 2000
title Desktop Radiance A New Tool for Computer-Aided Daylighting Design
doi https://doi.org/10.52842/conf.acadia.2000.009
source ACADIA Quarterly, vol. 19, no. 2, pp. 9-11
summary The use of daylight for the illumination of building interiors has the potential to enhance the quality of the environment while providing opportunities to save energy by replacing or supplementing electric lighting. Moreover, it has the potential to reduce heating and cooling loads, which offer additional energy saving opportunities, as well as reductions in HVAC equipment sizing and cost. All of these benefits, however, assume proper use of daylighting strategies and technologies, whose performance depends on the context of their application. On the other hand, improper use can have significant negative effects on both comfort and energy requirements, such as increased glare and cooling loads. To ensure proper use, designers need tools that model the dynamic nature of daylight and accurately predict performance with respect to a multitude of performance criteria, extending beyond comfort and energy to include aesthetics, cost, security, safety, etc.
series ACADIA
email
last changed 2022/06/07 08:00
