CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures

_id ec4d
authors Croser, J.
year 2001
title GDL Object
source The Architect’s Journal, 14 June 2001, pp. 49-50
summary It is all too common for technology companies to seek a new route to solving the same problem, but for the most part the solutions address the effect and not the cause. The good old-fashioned pencil is the perfect example, where inventors have sought to design out the effect of the inherent brittleness of lead. Traditionally, different methods of sharpening were suggested; more recently the propelling pencil has reigned king, the lead being supported by the dispensing sleeve, thus reducing the likelihood of breakage. Developers convinced by the Single Building Model approach to design development have each embarked on a difficult journey to create an easy-to-use, feature-packed application. Unfortunately, it seems that the two are not mutually compatible if we are to believe what we see emanating from technology giant Autodesk in the guise of Architectural Desktop 3. The effect of their development is a feature-rich environment, but the cost, and in this case the cause, is a tool which is far from easy to use. However, this is only a small part of a much bigger problem: interoperability. You see, when one designer develops a model with one tool, the information is typically locked in that environment. Of course the geometry can be distributed and shared amongst the team for use with their tools, but the properties, or as often misquoted, the intelligence, is lost along the way. The effect is the technological version of rubble; the cause is the low quality of data translation available to us. Fortunately there is one company which is making rapid advancements on the whole issue of collaboration and data sharing. An old-timer (Graphisoft, famous for ArchiCAD) has just donned a smart new suit, set up a new company called GDL Technology and stepped into the ring to do battle, with a difference.
The difference is that GDL Technology does not rely on conquering the competition; quite the opposite, in fact: their success relies upon the continued success of all the major CAD platforms, including AutoCAD, MicroStation and ArchiCAD (of course). GDL Technology have created a standard data format for manufacturers called GDL Objects. Product manufacturers such as Velux are now able to develop product libraries using GDL Objects, which can then be placed in a CAD model or drawing using almost any CAD tool. The product libraries can be stored on the web or on CD, giving easy download access to any building industry professional. These objects are created using scripts, which makes them tiny for downloading from the web. Each object contains three important types of information: parametric, scale-dependent 2D plan symbols; full 3D geometric data; and manufacturer's information such as material, colour and price. Whilst manufacturers are racing to GDL Technology's door to sign up, developers and clients are quick to see the benefit too. Porsche are using GDL Objects to manage their brand identity as they build over 300 new showrooms worldwide. Having defined the building style and interior, Porsche, in conjunction with the product suppliers, have produced a CD-ROM with all of the selected building components, such as cladding, doors, furniture and finishes. Designing and detailing the various schemes will therefore be as straightforward as using Lego. To ease the process of accessing, sizing and placing the product libraries, GDL Technology have developed a product called GDL Object Explorer, a free-standing application which can be placed on the CD with the product libraries. Furthermore, whilst the Object Explorer gives access to the GDL Objects, it also enables the user to save the object in one of many file formats, including DWG, DGN, DXF, 3DS and even the IAI's IFC.
However, if you are an AutoCAD user there is another tool designed especially for you, called the Object Adapter, which works inside AutoCAD 14 and 2000. The Object Adapter dynamically converts all GDL Objects to AutoCAD Blocks during placement, which means that they can be controlled with standard AutoCAD commands. Furthermore, each object can be linked to an online document on the manufacturer's web site, which is ideal for more extensive product information. Other tools which have been developed to make the most of the objects are the Web Plug-in and SalesCAD. The Plug-in enables objects to be dynamically modified and displayed on web pages, and SalesCAD is an easy-to-learn and easy-to-use design tool for sales teams to explore, develop and cost designs on a notebook PC whilst sitting in the architect's office. All sales quotations are directly extracted from the model and presented in HTML format as a mixture of product images, product descriptions and tables identifying quantities and costs. With full lifecycle information stored in each GDL Object, it is no surprise that GDL Technology see their objects as the future for building design. Indeed they are not alone: the IAI have already said that they are going to explore the possibility of associating GDL Objects with their own data-sharing format, the IFC. So, down to the dirty stuff: money, and how much does it cost? Well, at the risk of sounding like a market trader in Petticoat Lane, "To you guv? Nuffin". That's right: as a user of this technology it will cost you nothing! Not a penny; it is gratis, free. The product manufacturer pays for the license to host their libraries on the web or on CD, and even then their costs are small, from as little as 50p for each CD filled with objects. GDL Technology has come up trumps with their GDL Objects. They have developed a new way to solve old problems.
If CAD were a pencil then GDL Objects would be ballistic lead, which would never break or lose its point: a much better alternative to the strategy used by many of their competitors, who seek to avoid breaking the pencil by persuading the artist not to press down so hard. If you are still reading and have not already dropped the magazine and run off to find out whether your favorite product supplier has already signed up, then I suggest you check out the following web sites: www.gdlcentral.com and www.gdltechnology.com. If you do not see them there, pick up the phone and ask them why.
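The three information types the article attributes to a GDL Object (a scale-dependent 2D plan symbol, full 3D geometry and manufacturer data, all driven by a tiny script) can be sketched loosely. This is a toy Python analogue, not actual GDL code; the class name, dimensions, scale threshold and manufacturer fields are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ParametricObject:
    """A toy stand-in for a GDL-style object: one small, script-sized
    record bundling 2D symbolism, 3D geometry and manufacturer data."""
    name: str
    width: float      # the same parameters drive every representation
    height: float
    depth: float
    manufacturer: dict = field(default_factory=dict)

    def plan_symbol(self, drawing_scale: int) -> str:
        # Scale-dependent 2D symbol: coarse at small scales, detailed at large.
        if drawing_scale >= 100:
            return f"rect {self.width}x{self.depth}"
        return f"rect {self.width}x{self.depth} + swing-arc"

    def solid(self) -> tuple:
        # Full 3D geometric data, derived from the same parameters.
        return (self.width, self.depth, self.height)

door = ParametricObject("example door", 0.9, 2.1, 0.05,
                        manufacturer={"material": "oak", "price": 320.0})
print(door.plan_symbol(200))   # coarse symbol at 1:200
print(door.solid())
```

Because the object is a parametric script rather than baked geometry, exporting to DWG, DGN or IFC amounts to evaluating these methods into each target format.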
series journal paper
email
last changed 2003/04/23 15:14

_id 9bc4
authors Bhavnani, S.K. and John, B.E.
year 2000
title The Strategic Use of Complex Computer Systems
source Human-Computer Interaction 15 (2000), 107-137
summary Several studies show that despite experience, many users with basic command knowledge do not progress to an efficient use of complex computer applications. These studies suggest that knowledge of tasks and knowledge of tools are insufficient to lead users to become efficient. To address this problem, we argue that users also need to learn strategies in the intermediate layers of knowledge lying between tasks and tools. These strategies are (a) efficient because they exploit specific powers of computers, (b) difficult to acquire because they are suggested by neither tasks nor tools, and (c) general in nature, having wide applicability. The above characteristics are first demonstrated in the context of aggregation strategies that exploit the iterative power of computers. A cognitive analysis of a real-world task reveals that even though such aggregation strategies can have large effects on task time, errors, and the quality of the final product, they are not often used by even experienced users. We identify other strategies beyond aggregation that can be efficient and useful across computer applications and show how they were used to develop a new approach to training, with promising results. We conclude by suggesting that a systematic analysis of strategies in the intermediate layers of knowledge can lead not only to more effective ways to design training but also to more principled approaches to system design. These advances should lead users to make more efficient use of complex computer systems.
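The aggregation strategies the abstract describes (acting once on a whole selection rather than once per element) can be illustrated by a loose coding analogy. This is a hypothetical sketch of the idea, not an example from the paper, which studies interactive application use rather than programming.

```python
# Two ways to uppercase every heading in a document model:
# element-by-element (one manual operation per item) versus
# aggregate (select everything, then act once on the whole set).

headings = ["intro", "methods", "results"]

# Sequential strategy: the user repeats the same operation per element.
out_sequential = []
for h in headings:
    out_sequential.append(h.upper())

# Aggregation strategy: one operation over the whole selection,
# exploiting the computer's iterative power.
out_aggregate = [h.upper() for h in headings]

assert out_sequential == out_aggregate == ["INTRO", "METHODS", "RESULTS"]
```

The point of the paper is that the aggregate form, though equivalent in result, is far cheaper in user time and errors, yet is suggested by neither the task nor the tool.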
series other
email
last changed 2003/11/21 15:16

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A digital Procedure of Building Construction: A practical project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times, when computers were not yet well developed, there were studies of representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). This continued until the appearance of 2D drawings, which could only express abstract visual thinking and a visually conceptualized vocabulary (Goldschmidt, 1999). Then, with the massive use of physical models in the Renaissance, the form and space of architecture were given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) observed that humans increasingly rely on other specialists, computational agents and materials to augment their cognitive abilities. This discourse was verified by recent research on the conception of design and its expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation did (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architectural design has aroused a new design thinking: digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), the Internet, and the restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most of the application of computers to construction is restricted to simulations of the building process (Halpin, 1990).
It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage of the building construction process (Madrazo, 2000; Dave, 2000) and to communication (Haymaker, 2000). In architectural design, concept design is achieved through drawings and models (Mitchell, 1997), while the working drawings and even shop drawings are developed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend towards 3D visualization (Johnson and Clayton, 1998) and the difference between designing in the physical environment and the virtual environment (Maher et al., 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process in the near future (just as physical models played a critical role in the early design process in the Renaissance). This research involves two practical building projects, following the progress of construction and using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems. Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings with some imagination.
The purpose of this paper is to describe all the related elements according to precision and correctness, to discuss every possibility of different thinking in the design of electro-mechanical engineering, to receive feedback from the construction projects in the real world, and to compare the digital models with conventional drawings. Through the application of this research, the subtle relations between conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and a standard process are proposed, using conventional drawings, digital models and physical buildings. By introducing digital media into the design process of working drawings and shop drawings, there is an opportune chance to use digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions which are not discussed in this paper; these limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id ga0025
id ga0025
authors Chiodi, Andrea and Vernillo, Marco M.
year 2000
title Deep Architectures and Exterior Communication in Generative Art
source International Conference on Generative Art
summary Human beings formulate their thoughts through their own language. To use a sentence by Ezra Pound: “The thought hinges on word definition.” Software beings formulate their thoughts through data structures: not through a specific expressive means, but directly through concepts and relations. Human beings formulate their thoughts in a context which does not require any further translation. If software beings want to be appreciated by human beings, they are forced to translate their thoughts into one of the languages that human beings are able to understand. On the contrary, when a software being communicates with another software being, this unnatural translation is not justified: communication takes place directly through data structures, made uniform by suitable communication protocols. The Generative Art prospect gives software beings the opportunity to create works according to their own nature. But if the result of such a creation must be expressed in a language human beings are able to comprehend, then this result is a sort of circus performance and not a free thought. Let's give software beings the dignity they deserve and therefore allow them to express themselves according to their own nature: through data structures. This work studies in depth the opportunity to separate the communication of the software 'thought' from its translation into a human language. The recent introduction of XML leads to the definition of formal languages oriented to data-structure representation. Intrinsically both data and program, XML allows, through subsequent executions and validations, the realization of descriptions typical of contextual grammars, allowing the management of high complexity. The translation from a data structure into a human language can take place later on and be oriented to different alternative kinds of expression: lexical (according to national languages), graphical, musical, plastic.
The direct expression of data structures promises further communication opportunities for human beings as well. One of these is the definition of a non-national language, as free as possible from lexical ambiguities and extremely precise. Another opportunity concerns the possibility of expressing concepts usually hidden by their own representation. A Roman bridge, the adagio of Bartok's “Music for Strings, Percussion and Celesta” and Kafka's short story “In the Gallery” have something in common; a work of Generative Art, first expressed in terms of structure and then translated into an architectural, musical or literary work, can express this commonality explicitly.
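The separation the abstract argues for, keeping the work as a pure data structure (for instance XML) and deferring translation into alternative surface languages, can be sketched roughly. The XML schema, element names and renderers below are invented for illustration and are not taken from the paper.

```python
import xml.etree.ElementTree as ET

# A generative work expressed first as a pure data structure (XML),
# then rendered later into one of several alternative "surface" forms.
xml_work = """
<work>
  <element kind="arch" span="12" material="stone"/>
  <element kind="column" span="0.6" material="stone"/>
</work>
"""

# Each renderer is one possible human-facing translation of the same structure.
RENDERERS = {
    "en": lambda e: f"a {e.get('material')} {e.get('kind')} "
                    f"spanning {e.get('span')} m",
    "graphic": lambda e: f"[{e.get('kind').upper()}:{e.get('span')}]",
}

def translate(xml_text: str, target: str) -> list:
    """Translate the data structure into the chosen surface language."""
    root = ET.fromstring(xml_text)
    render = RENDERERS[target]
    return [render(e) for e in root.findall("element")]

print(translate(xml_work, "en"))
print(translate(xml_work, "graphic"))
```

The structure itself never changes; only the late-bound renderer decides whether the expression is lexical, graphical, or something else, which is the abstract's point about software-to-software communication needing no translation at all.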
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 08ea
authors Clayton, Mark J. and Vasquez de Velasco, Guillermo P. (Eds.)
year 2000
title ACADIA 2000: Eternity, Infinity and Virtuality in Architecture
doi https://doi.org/10.52842/conf.acadia.2000
source Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8 / Washington D.C. 19-22 October 2000, 284 p.
summary Eternity, time without end; infinity, space without limits; and virtuality, perception without constraints: these provide the conceptual framework in which ACADIA 2000 is conceived. It is in human nature to fill what is empty and to empty what is full. Today, thanks to the power of computer processing, we can also make small what is too big, make big what is too small, make fast what is too slow, make slow what is too fast, make real what does not exist, and make our reality omnipresent at a global scale. These are capabilities for which we have no precedents. What we make of them is our privilege and responsibility. Information about a building flows past our keyboards and on to other people. Although we, as architects, add to the information, it originated before us and will go beyond our touch in time, space and understanding. A building description acquires a life of its own that may surpass our own lives as it is stored, transferred, transformed, and reused by unknown intellects, both human and artificial, and in unknown processes. Our actions right now have unforeseen effects. Digital media blur the boundaries of space, time and our perception of reality. ACADIA 2000 explores the theme of time, space and perception in relation to the information and knowledge that describes architecture. Our invitation to those who are finding ways to apply computer processing power in architecture received an overwhelming response, generating paper submissions from five continents. A selected group of reviewers recommended the publication of 24 original full papers out of 42 submitted and 13 short papers out of 30 submitted. Forty-two projects were submitted to the Digital Media Exhibit and 12 were accepted for publication. The papers cover subjects in design knowledge, design process, design representation, design communication, and design education.
Fundamental and applied research has been carefully articulated, resulting in developments that may have an important impact on the way we practice and teach architecture in the future.
series ACADIA
email
more www.acadia.org
last changed 2022/06/07 07:49

_id ga0007
id ga0007
authors Coates, Paul and Miranda, Pablo
year 2000
title Swarm modelling. The use of Swarm Intelligence to generate architectural form
source International Conference on Generative Art
summary '...neither the human purposes nor the architect's method are fully known in advance. Consequently, if this interpretation of the architectural problem situation is accepted, any problem-solving technique that relies on explicit problem definition, on distinct goal orientation, on data collection, or even on non-adaptive algorithms will distort the design process and the human purposes involved.' Stanford Anderson, "Problem-Solving and Problem-Worrying". The work concentrates on the use of the computer as a perceptive device, a sort of virtual hand or "sense", capable of probing an environment. From the set of data that constitutes the environment (in this case the geometrical representation of the form of the site), this perceptive device is capable of differentiating and generating distinct patterns in its behavior, patterns that an observer has to interpret as meaningful information. As Nicholas Negroponte explains, referring to the project GROPE in his Architecture Machine: 'In contrast to describing criteria and asking the machine to generate physical form, this exercise focuses on generating criteria from physical form.' 'The onlooking human or architecture machine observes what is "interesting" by observing GROPE's behavior rather than by receiving the testimony that this or that is "interesting".' The swarm as a learning device. In this case the work implements a swarm as a perceptive device. Swarms constitute a paradigm of parallel systems: a multitude of simple individuals aggregate in colonies or groups, giving rise to collaborative behaviors. The individual sensors cannot learn, but the swarm as a system can evolve into more stable states. These states generate distinct patterns, a result of the inner mechanics of the swarm and of the particularities of the environment. The dynamics of the system allow it to learn and adapt to the environment; information is stored in the speed of the sensors (the more collisions, the slower), which acts as a memory.
The speed increases in the absence of collisions, thus providing the system with the ability to forget, which is indispensable for the differentiation of information and the emergence of patterns. The swarm is both a perceptive and a spatial phenomenon. To be able to interact with an environment, an observer requires some sort of embodiment. In the case of the swarm, its algorithms for movement, collision detection and swarm mechanics constitute its perceptive body. The way this body interacts with its environment in the process of learning and differentiating spatial patterns also constitutes a spatial phenomenon. The enactive space of the swarm. Enaction, a concept developed by Maturana and Varela for the description of perception in biological terms, is the understanding of perception as the result of the structural coupling of an environment and an observer. Enaction does not address cognition in the currently conventional sense, as an internal manipulation of extrinsic 'information' or 'signals', but as the relation between environment and observer and the blurring of their identities. Thus, the space generated by the swarm is an enactive space: a space without explicit description, an invention of the swarm-environment structural coupling. If we consider a gestalt as 'some property -such as roundness- common to a set of sense data and appreciated by organisms or artefacts' (Gordon Pask), the swarm is also able to differentiate spatial 'gestalts', or spaces with certain characteristics, such as 'narrowness' or 'fluidness'. Implicit surfaces and the wrapping algorithm. One of the many ways of describing this space is through the use of implicit surfaces. An implicit surface may be imagined as an infinitesimally thin band of some measurable quantity such as color, density, temperature or pressure. Thus, an implicit surface consists of those points in three-space that satisfy some particular requirement.
This allows us to wrap the regions of space where a difference of quantity has been produced, enclosing the spaces in which particular events in the history of the swarm have occurred. The wrapping method allows complex topologies, such as manifoldness in one continuous surface. It is possible to transform the information generated by the swarm into a landscape that is the result of the swarm's particular reading of the site. Working in real time. Because of the complex nature of the machine, the only possible way to evaluate the resulting behavior is in real time. For this purpose, specific applications had to be developed, using OpenGL for the Windows programming environment. The package consisted of translators from DXF format to a specific format used by these applications and vice versa; the swarm 'engine', a simulated parallel environment; and the wrapping programs, which generate the implicit surfaces. Different versions of each have been produced at different stages of the development of the work.
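The speed-as-memory mechanism described above (collisions slow a sensor down and store information; quiet steps let it speed back up and forget) can be sketched as a minimal simulation. The world size, obstacle layout and speed constants below are assumptions for illustration, not the authors' OpenGL implementation.

```python
import random

class Sensor:
    """One swarm individual. Its speed doubles as its memory:
    collisions slow it down, collision-free steps let it recover."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.speed = 1.0

    def step(self, obstacles, world=20.0):
        # Random walk scaled by the current speed (toroidal world).
        self.x = (self.x + random.uniform(-1, 1) * self.speed) % world
        self.y = (self.y + random.uniform(-1, 1) * self.speed) % world
        hit = any(abs(self.x - ox) < 1 and abs(self.y - oy) < 1
                  for ox, oy in obstacles)
        if hit:
            self.speed *= 0.5                         # remember: congested here
        else:
            self.speed = min(1.0, self.speed * 1.05)  # slowly forget

random.seed(0)
swarm = [Sensor(random.uniform(0, 20), random.uniform(0, 20))
         for _ in range(50)]
obstacles = [(5, 5), (5, 6), (6, 5), (6, 6)]  # a small "narrow" region
for _ in range(200):
    for s in swarm:
        s.step(obstacles)

# Sensors that stay slow mark regions the swarm has "perceived" as occupied;
# wrapping an implicit surface around them would give the landscape described.
slow = [s for s in swarm if s.speed < 0.5]
print(len(slow), "of", len(swarm), "sensors remember collisions")
```

The distinct patterns the abstract mentions emerge only at the level of the whole population: no individual sensor learns, but the distribution of speeds over space becomes a reading of the site.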
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id sigradi2006_e183a
id sigradi2006_e183a
authors Costa Couceiro, Mauro
year 2006
title La Arquitectura como Extensión Fenotípica Humana - Un Acercamiento Basado en Análisis Computacionales [Architecture as human phenotypic extension – An approach based on computational explorations]
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 56-60
summary The study describes some of the aspects tackled within a current Ph.D. research project in which architectural applications of constructive, structural and organizational processes existing in biological systems are considered. The present information-processing capacity of computers and specific software developments have made it possible to create a bridge between two disciplines of a holistic nature: architecture and biology. The crossover between these disciplines entails a methodological paradigm change towards one based on the dynamical aspects of forms and compositions. Recent studies on artificial-natural intelligence (Hawkins, 2004) and developmental-evolutionary biology (Maturana, 2004) have added fundamental knowledge about the role of analogy in the creative process and the relationship between forms and functions. The dimensions and restrictions of the Evo-Devo concepts are analyzed, developed and tested by software that combines parametric geometries, L-systems (Lindenmayer, 1990), shape grammars (Stiny and Gips, 1971) and evolutionary algorithms (Holland, 1975) as a way of testing new architectural solutions within computable environments. Lamarck's (1744-1829) and Weismann's (1834-1914) theoretical approaches to evolution, in which significantly opposing views can be found, are considered. Lamarck's theory assumes that an individual's effort towards a specific evolutionary goal can cause change in its descendants. Weismann, on the other hand, held that the germ cells are not affected by anything the body learns or any ability it acquires during its life, and cannot pass this information on to the next generation; this is called the Weismann barrier. Lamarck's widely rejected theory has recently found a new place in artificial- and natural-intelligence research as a valid explanation of some aspects of the evolution of human knowledge, that is, the deliberate change of paradigms in the intentional search for solutions.
Just as the analogy between genetics and architecture (Estévez and Shu, 2000) is useful for understanding and programming emergent complexity phenomena (Hopfield, 1982) for architectural solutions, so the consideration of architecture as a product of the human extended phenotype can help us better understand its cultural dimension.
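Of the generative formalisms the abstract combines, the L-system component is the simplest to sketch. The rewriter below uses Lindenmayer's classic two-symbol algae system as an illustration; it is not code from the research, where such strings would be interpreted geometrically and bred by the evolutionary algorithms mentioned.

```python
# Minimal L-system rewriter: repeatedly apply production rules in
# parallel to every symbol of the current string.

def rewrite(axiom: str, rules: dict, generations: int) -> str:
    s = axiom
    for _ in range(generations):
        # Symbols without a rule are copied unchanged.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A.
rules = {"A": "AB", "B": "A"}
print(rewrite("A", rules, 5))   # -> "ABAABABAABAAB"
```

String lengths grow as Fibonacci numbers (1, 2, 3, 5, 8, 13, ...), a small example of how a compact rule set encodes unbounded, structured growth, which is what makes such grammars attractive for generating architectural form.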
keywords evolutionary computation; genetic architectures; artificial/natural intelligence
series SIGRADI
email
last changed 2016/03/10 09:49

_id 1ead
authors Dinand, Munevver Ozgur and Ozersay, Fevzi
year 1999
title CAAD Education under the Lens of Critical Communication Theories and Critical Pedagogy: Towards a Critical Computer Aided Architectural Design Education (CCAADE)
doi https://doi.org/10.52842/conf.ecaade.1999.086
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 86-93
summary Understanding the dominant ethos of our age is imperative but not easy. However, it is quite evident that new technologies have altered our times. Every discipline is now forced to be critical in developing new concepts according to the realities of our times. Implementing a critical worldview and consciousness is now more essential than ever. The latest changes in information technology are creating pressure for change in both societal and cultural terms. With its direct relation to these technologies, computer aided architectural design education (CAADE) is obviously a prominent case within the contemporary debate. This paper aims to name some critical points related to CAADE from the perspective of critical communication studies and critical education theories. It tries to relate these three areas by introducing their common concepts to each other. In this way, it hopes to open a path for a language of critique: one that supports and promotes experimentation, negotiation, creativity, social consciousness and active participation in architectural education in general, and in CAADE in particular. It suggests that CAADE might become critical and produce meta-discourses [1] in two ways: firstly, by being critical about the context it exists in, that is to say, its relationships to the existing institutional and social structures; and secondly, by being critical about the content it handles, in other words by questioning its ideological dimensions. This study considers that analysing the role of CAADE in this scheme can provide architectural education with the opportunity to make healthy projections for the future.
keywords Critical Theories, Critical Pedagogy, Critical CAADE
series eCAADe
email
last changed 2022/06/07 07:55

_id 349e
authors Durmisevic, Sanja
year 2002
title Perception Aspects in Underground Spaces using Intelligent Knowledge Modeling
source Delft University of Technology
summary Intensification, combination and transformation are the main strategies for the future spatial development of the Netherlands stated in the Fifth Bill regarding Spatial Planning. These strategies indicate that in the future, space should be utilized in a more compact and more efficient way, requiring at the same time re-evaluation of the existing built environment and ways to improve it. In this context, the concept of multiple space usage is accentuated, with a focus on intensive 4-dimensional spatial exploration. The underground space is acknowledged as an important part of multiple space usage. In the document 'Spatial Exploration 2000', the underground space is recognized by policy makers as an important new 'frontier' that could provide a significant contribution to future spatial requirements. In a relatively short period, underground space has become an important research area. Although among specialists there is appreciation of what underground space could provide for densely populated urban areas, there are still reserved feelings among the public, which mostly relate to the poor quality of these spaces. Many realized underground projects, namely subways, have resulted in poor user satisfaction. Today, there is still a significant knowledge gap related to the perception of underground space. There is also a lack of detailed documentation on actual applications of the theories, followed by research results and applied techniques. This is the case in different areas of architectural design, but for underground spaces it is perhaps most evident due to their infancy in general architectural practice. In order to create better designs, diverse aspects, which are very often of a qualitative nature, should be considered in perspective, with the final goal of improving the quality and image of underground space.
In the architectural design process, one has to establish certain relations among design information in advance to make the design backed by a sound rationale. The main difficulty at this point is that such relationships may not be determinable, for various reasons. One example may be the vagueness of architectural design data due to the linguistic qualities in them; another may be vaguely defined design qualities. In this work, the problem was not only the initial fuzziness of the information but also determining the desired relevancy among all the pieces of information given. Presently, determining the existence of such relevancy is more or less a matter of subjective architectural judgement rather than systematic, non-subjective decision-making based on an existing design. This implies that the invocation of certain tools dealing with fuzzy information is essential for enhanced design decisions. Efficient methods and tools to deal with qualitative, soft data are scarce, especially in the architectural domain. Traditionally well-established methods, such as statistical analysis, have been used mainly for data analysis of a type similar to that of the present research. These methods mainly fall into the category of pattern recognition, with statistical regression methods being the most common approaches. One essential drawback of such methods is their inability to deal efficiently with non-linear data: with statistical analysis, linear relationships are established by regression analysis, and dealing with non-linearity is mostly evaded. Concerning the presence of multi-dimensional data sets, it is evident that the assumption of linear relationships among all pieces of information would be a gross approximation, for which one has no basis. A starting point in this research was that there may be both linearity and non-linearity present in the data, and therefore appropriate methods should be used to deal with that non-linearity.
Therefore, some other commensurate methods were adopted for knowledge modeling. In that respect, soft computing techniques proved to match the quality of the multi-dimensional data-set subject to analysis, which is deemed to be 'soft'. There is yet another reason why soft-computing techniques were applied, which is related to the automation of knowledge modeling. In this respect, traditional models such as Decision Support Systems and Expert Systems have drawbacks. One important drawback is that the development of these systems is a time-consuming process. The programming part, in which various deliberations are required to form a consistent if-then rule knowledge based system, is also a time-consuming activity. For these reasons, the methods and tools from other disciplines, which also deal with soft data, should be integrated into architectural design. With fuzzy logic, the imprecision of data can be dealt with in a similar way to how humans do it. Artificial neural networks are deemed to some extent to model the human brain, and simulate its functions in the form of parallel information processing. They are considered important components of Artificial Intelligence (Al). With neural networks, it is possible to learn from examples, or more precisely to learn from input-output data samples. The combination of the neural and fuzzy approach proved to be a powerful combination for dealing with qualitative data. The problem of automated knowledge modeling is efficiently solved by employment of machine learning techniques. Here, the expertise of prof. dr. Ozer Ciftcioglu in the field of soft computing was crucial for tool development. By combining knowledge from two different disciplines a unique tool could be developed that would enable intelligent modeling of soft data needed for support of the building design process. In this respect, this research is a starting point in that direction. 
It is multidisciplinary and on the cutting edge between the field of Architecture and the field of Artificial Intelligence. From the architectural viewpoint, the perception of space is considered through relationship between a human being and a built environment. Techniques from the field of Artificial Intelligence are employed to model that relationship. Such an efficient combination of two disciplines makes it possible to extend our knowledge boundaries in the field of architecture and improve design quality. With additional techniques, meta know/edge, or in other words "knowledge about knowledge", can be created. Such techniques involve sensitivity analysis, which determines the amount of dependency of the output of a model (comfort and public safety) on the information fed into the model (input). Another technique is functional relationship modeling between aspects, which is derivation of dependency of a design parameter as a function of user's perceptions. With this technique, it is possible to determine functional relationships between dependent and independent variables. This thesis is a contribution to better understanding of users' perception of underground space, through the prism of public safety and comfort, which was achieved by means of intelligent knowledge modeling. In this respect, this thesis demonstrated an application of ICT (Information and Communication Technology) as a partner in the building design process by employing advanced modeling techniques. The method explained throughout this work is very generic and is possible to apply to not only different areas of architectural design, but also to other domains that involve qualitative data.
keywords Underground Space; Perception; Soft Computing
series thesis:PhD
email
last changed 2003/02/12 22:37

_id 9747
authors Ferrar, Steve
year 1999
title New Worlds; New Landscapes
doi https://doi.org/10.52842/conf.ecaade.1999.424
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 424-430
summary Evolution, said Julian Huxley, is in three different sectors. The first is organic - the cosmic process of matter. The second is biological - the evolution of plants and animals. The third is psychological and is the development of man's cultures. It is this third stage that is now critical, and if we are to survive as a species it can only be by replacing nature's controls by our own, not only birth control but our use of the whole environment. (Nan Fairbrother, New Lives, New Landscapes)
keywords Virtual Environments, Future, Culture
series eCAADe
email
last changed 2022/06/07 07:56

_id ga0024
id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence, in 1991, at the age of 8, being the direct consequence of years of attempts by a group of computer science professionals to use the digital computers technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of “Littérature potentielle” deployed by Oulipo in France and Oplepo in Italy (see “La Littérature potentielle (Créations Re-créations Récréations) published in France by Gallimard in 1973). During the last decade, TEAnO has been involving in the generation of “artistic goods” in aesthetic domains such as literature, music, theatre and painting. In all those artefacts in the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets of to generate antonymic music) and, sometimes it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered as “manufactured” goods. A great part of such creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the use of the digital computer massive combinatory power: S+n, edge extraction, phonetic manipulation, re-writing of well known masterpieces, random generation of plots, etc. Regardless this apparently simple underlying generation mechanisms, the systematic use of computer based tools, as weel the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer based generation of aesthetic goods: ? 
the deep structure of an aesthetic work persists even through the more “desctructive” manipulations, (such as the antonymic transformation of the melody and lyrics of a music work) and become evident as a sort of profound, earliest and distinctive constraint; ? the intensive flux of computer generated “raw” material seems to confirm and to bring to our attention the existence of what Walter Benjamin indicated as the different way in which the nature talk to a camera and to our eye, and Franco Vaccari called “technological unconsciousness”. Essential references R. Campagnoli, Y. Hersant, “Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)”, 1985 R. Campagnoli “Oupiliana”, 1995 TEAnO, “Quaderno n. 2 Antologia di letteratura potenziale”, 1996 W. Benjiamin, “Das Kunstwerk im Zeitalter seiner technischen Reprodizierbarkeit”, 1936 F. Vaccari, “Fotografia e inconscio tecnologico”, 1994
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 600e
authors Gavin, Lesley
year 1999
title Architecture of the Virtual Place
doi https://doi.org/10.52842/conf.ecaade.1999.418
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 418-423
summary The Bartlett School of Graduate Studies, University College London (UCL), set up the first MSc in Virtual Environments in the UK in 1995. The course aims to synthesise and build on research work undertaken in the arts, architecture, computing and biological sciences in exploring the realms of the creation of digital and virtual immersive spaces. The MSc is concerned primarily with equipping students from design backgrounds with the skills, techniques and theories necessary in the production of virtual environments. The course examines both virtual worlds as prototypes for real urban or built form and, over the last few years, has also developed an increasing interest in the the practice of architecture in purely virtual contexts. The MSc course is embedded in the UK government sponsored Virtual Reality Centre for the Built Environment which is hosted by the Bartlett School of Architecture. This centre involves the UCL departments of architecture, computer science and geography and includes industrial partners from a number of areas concerned with the built environment including architectural practice, surveying and estate management as well as some software companies and the telecoms industry. The first cohort of students graduated in 1997 and predominantly found work in companies working in the new market area of digital media. This paper aims to outline the nature of the course as it stands, examines the new and ever increasing market for designers within digital media and proposes possible future directions for the course.
keywords Virtual Reality, Immersive Spaces, Digital Media, Education
series eCAADe
email
more http://www.bartlett.ucl.ac.uk/ve/
last changed 2022/06/07 07:51

_id 326c
authors Hirschberg, U., Gramazio, F., H¾ger, K., Liaropoulos Legendre, G., Milano, M. and Stöger, B.
year 2000
title EventSpaces. A Multi-Author Game And Design Environment
doi https://doi.org/10.52842/conf.ecaade.2000.065
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 65-72
summary EventSpaces is a web-based collaborative teaching environment we developed for our elective CAAD course. Its goal is to let the students collectively design a prototypical application - the EventSpaces.Game. The work students do to produce this game and the process of how they interact is actually a game in its own right. It is a process that is enabled by the EventSpaces.System, which combines work, learning, competition and play in a shared virtual environment. The EventSpaces.System allows students to criticize, evaluate, and rate each otherÕs contributions, thereby distributing the authorship credits of the game. The content of the game is therefore created in a collaborative as well as competitive manner. In the EventSpaces.System, the students form a community that shares a common interest in the development of the EventSpaces.Game. At the same time they are competing to secure as much credit as possible for themselves. This playful incentive in turn helps to improve the overall quality of the EventSpaces.Game, which is in the interest of all authors. This whole, rather intricate functionality, which also includes a messaging system for all EventSpaces activities, is achieved by means of a database driven online working environment that manages and displays all works produced. It preserves and showcases each authorÕs contributions in relation to the whole and allows for the emergence of coherence from the multiplicity of solutions. This Paper first presents the motivation for the project and gives a short technical summary of how the project was implemented. Then it describes the nature of the exercises and discusses possible implications that this approach to collaboration and teaching might have.
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:50

_id d5b3
authors Knight, Michael and Brown, Andre
year 1999
title Working in Virtual Environments through appropriate Physical Interfaces
doi https://doi.org/10.52842/conf.ecaade.1999.431
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 431-436
summary The work described here is aimed at contributing towards the debate and development relating to the construction of interfaces to explore buildings and their environs through virtual worlds. We describe a particular hardware and software configuration which is derived by the use of low cost games software to create the Virtual Environment. The Physical Interface responds to the work of other researchers, in this area, in particular Shaw (1994) and Vasquez de Velasco & Trigo (1997). Virtual Evironments might have the potential to be "a magical window into other worlds, from molecules to minds" (Rheingold, 1992), but what is the nature of that window? Currently it is often a translucent opening which gives a hazy and distorted (disembodied) view. And many versions of such openings are relatively expensive. We consider ways towards clearing the haze without too much expense, adapting techniques proposed by developers of low cost virtual reality systems (Hollands, 1995) for use in an architectural setting.
keywords Virtual Environments, Games Software
series eCAADe
email
last changed 2022/06/07 07:51

_id c97f
authors Kvan, Thomas and Candy, Linda
year 2000
title Designing Collaborative Environments for Strategic Knowledge in Design
source Knowledge-Based Systems, 13:6, November 2000, pp. 429-438
summary This paper considers aspects of strategic knowledge in design and some implications for designing in collaborative environments. Two key questions underline the concerns. First; how can strategic knowledge for collaborative design be taught and second; what kind of computer-based collaborative designing might best support the learning of strategic knowledge? We argue that the support of learning of strategic knowledge in collaborative design by computer-mediated means must be based upon empirical evidence about the nature of learning and design practice in the real world. This evidence suggests different ways of using computer-support for design learning and acquistion of strategic design knowledge. Examples of research by the authors that seeks to provide that evidence are described and an approach to computer system design and evaluation proposed.
keywords Collaborative Design; Strategic Knowledge; Empirical Studies; Computer Support
series journal paper
email
last changed 2002/11/15 18:29

_id ga0009
id ga0009
authors Lewis, Matthew
year 2000
title Aesthetic Evolutionary Design with Data Flow Networks
source International Conference on Generative Art
summary For a little over a decade, software has been created which allows for the design of visual content by aesthetic evolutionary design (AED) [3]. The great majority of these AED systems involve custom software intended for breeding entities within one fairly narrow problem domain, e.g., certain classes of buildings, cars, images, etc. [5]. Only a very few generic AED systems have been attempted, and extending them to a new design problem domain can require a significant amount of custom software development [6][8]. High end computer graphics software packages have in recent years become sufficiently robust to allow for flexible specification and construction of high level procedural models. These packages also provide extensibility, allowing for the creation of new software tools. One component of these systems which enables rapid development of new generative models and tools is the visual data flow network [1][2][7]. One of the first CG packages to employ this paradigm was Houdini. A system constructed within Houdini which allows for very fast generic specification of evolvable parametric prototypes is described [4]. The real-time nature of the software, when combined with the interlocking data networks, allows not only for vertical ancestor/child populations within the design space to be explored, but also allows for fast "horizontal" exploration of the potential population surface. Several example problem domains will be presented and discussed. References: [1] Alias | Wavefront. Maya. 2000, http://www.aliaswavefront.com [2] Avid. SOFTIMAGE. 2000, http://www.softimage.com [3] Bentley, Peter J. Evolutionary Design by Computers. Morgan Kaufmann, 1999. [4] Lewis, Matthew. "Metavolve Home Page". 2000, http://www.cgrg.ohio-state.edu/~mlewis/AED/Metavolve/ [5] Lewis, Matthew. "Visual Aesthetic Evolutionary Design Links". 2000, http://www.cgrg.ohio-state.edu/~mlewis/aed.html [6] Rowley, Timothy. "A Toolkit for Visual Genetic Programming". 
Technical Report GCG-74, The Geometry Center, University of Minnesota, 1994. [7] Side Effects Software. Houdini. 2000, http://www.sidefx.com [8] Todd, Stephen and William Latham. "The Mutation and Growth of Art by Computers" in Evolutionary Design by Computers, Peter Bentley ed., pp. 221-250, Chapter 9, Morgan Kaufmann, 1999.    
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 80b9
authors Madrazo, Leandro
year 2000
title Computers and architectural design: going beyond the tool
source Automation in Construction 9 (1) (2000) pp. 5-17
summary More often than not, discussions taking place in specialised conferences dealing with computers and design tend to focus mostly on the tool itself. What the computer can do that other tools cannot, how computers might improve design and whether a new aesthetic would result from the computer; these are among the most recurrent issues addressed in those forums. But, by placing the instrument at the center of the debate, we might be distorting the nature of design. In the course KEYWORDS, carried out in the years 1992 and 1993 at the ETH Zurich, the goal was to transcend the discourses that concentrate on the computer, integrating it in a wider theoretical framework including principles of modern art and architecture. This paper presents a summary of the content and results of this course.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 1bf8
authors Martens, B., Uhl, M., Tschuppik, W.-M. and Voigt, A.
year 2000
title Synagogue Neudeggergasse: A Virtual Reconstruction in Vienna
doi https://doi.org/10.52842/conf.acadia.2000.213
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 213-218
summary Issues associated with virtual reconstruction are first dealt within this paper. Visualizing of no longer existent (architecture-) objects and their surroundings practically amounts to a “virtual comeback”. Furthermore, special attention is given to the description of the working procedure for a case study of reconstruction sounding out the potentials of QuickTime VR. The paper ends up with a set of conclusions, taking a close look at the “pros” and “cons” of this type of re-construction. 1 Introduction Irreversible destruction having removed identity-establishing buildings from the urban surface for all times is the principal cause for the attempt of renewed “imaginating.” When dealing with such reconstruction first the problem of reliability concerning the existing basic material has to be tackled. Due to their two-dimensional recording photographs only supply us with restricted information content of the object under consideration. Thus the missing part has to be supplemented or substituted by additional sources. Within the process of assembling and overlaying of differing data sets the way of dealing with such fragmentations becomes of major importance. Priority is given to the choice of information. One of the most elementary items of information regarding perception of three-dimensional objects surely is the effect that color and material furnishes. It seems to suggest itself that black-and-white shots hardly will prove valid in this respect. The three-dimensional object doubtlessly provides us with a by far greater variety of possibilities in the following working process than the “cardboard model with pasted-on facade photography”. Only the completely designed model structure makes for visualizing the plastic representation form of architecture in a sustainable manner. Furthermore, a virtual model can be dismantled into part models without amounting to a destruction process thereof. 
Apart therefrom the virtual model permits the generation of differing reconstruction variants regarding color and material. Moreover, architecture models of a physical nature are inherently connected to locality as such.
series ACADIA
email
last changed 2022/06/07 07:59

_id e6fb
authors McFadzean, Jeanette
year 1999
title Computational Sketch Analyser (CSA): Extending the Boundaries of Knowledge in CAAD
doi https://doi.org/10.52842/conf.ecaade.1999.503
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 503-510
summary This paper focuses on the cognitive problem-solving strategies of professional architectural designers and their use of external representations for the production of creative ideas. Using a new form of protocol analysis (Computational Sketch Analysis), the research has analysed five architects' verbal descriptions of their cognitive reasoning strategies during conceptual designing. It compares these descriptions to a computational analysis of the architects' sketches and sketching behaviour. The paper describes how the current research is establishing a comprehensive understanding of the mapping between conceptualisation, cognition, drawing, and complex problem solving. The paper proposes a new direction for Computer Aided Architectural Design tools (CAAD). It suggests that in order to extend the boundaries of knowledge in CAAD an understanding of the complex nature of architectural conceptual problem-solving needs to be incorporated into and supported by future conceptual design tools.
keywords Computational Sketch Analysis, Conceptual Design
series eCAADe
email
last changed 2022/06/07 07:58

_id ga0014
id ga0014
authors McGuire, Kevin
year 2000
title Controlling Chaos: a Simple Deterministic System for Creating Complex Organic Shapes
source International Conference on Generative Art
summary It is difficult and frustrating to create complex organic shapes using the current set of computer graphic programs. One reason is because the geometry of nature is different from that of our tools. Its self-similarity and fine detail are derived from growth processes that are very different from the working process imposed by drawing programs. This mismatch makesit difficult to create natural looking artifacts. Drawing programs provide a palette of shapes that may be manipulated in a variety ways, but the palette is limited and based on a cold Euclidean geometry. Clouds, rivers, and rocks are not lines or circles. Paint programs provide interesting filters and effects, but require great skill and effort. Always, the details must be arduously managed by the artist. This limits the artist's expressive power. Fractals have stunning visual richness, but the artist's techniques are limited to those of choosing colours and searching the fractal space. Genetic algorithms provide a powerful means for exploring a space of variations, but the artist's skill is limited by the very difficult ability to arrive at the correct fitness function. It is hard to get the picture you wanted. Ideally, the artist should have macroscopic control over the creation while leaving the computer to manage the microscopic details. For the result to feel organic, the details should be rich, consistent and varied, cohesive but not repetitious. For the results to be reproducible, the system should be deterministic. For it to be expressive there should be a cause-effect relationship between the actions in the program and change in the resulting picture. Finally, it would be interesting if the way we drew was more closely related to the way things grew. We present a simple drawing program which provides this mixture of macroscopic control with free microscopic detail. 
Through use of an accretion growth model, the artist controls large scale structure while varied details emerge naturally from senstive dependence in the system. Its algorithms are simple and deterministic, so its results are predictable and reproducible. The overall resulting structure can be anticipated, but it can also surprise. Despite its simplicity, it has been used to generate a surprisingly rich assortment of complex organic looking pictures.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

For more results click below:

this is page 0show page 1show page 2show page 3show page 4show page 5... show page 21HOMELOGIN (you are user _anon_548297 from group guest) CUMINCAD Papers Powered by SciX Open Publishing Services 1.002