CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 619

_id ga9926
id ga9926
authors Antonini, Riccardo
year 1999
title Let's Improvise Together
source International Conference on Generative Art
summary The creators of ‘Let's-Improvise-Together’ adhere to the idea that while there is a multitude of online games now available in cyberspace, it appears that relatively few are focused on providing a positive, friendly and productive experience for the user. Producing this kind of experience is one of the goals of our Amusement Project. To this end, the creation of ‘Let's Improvise Together’ has been guided by dedication to the importance of three themes: the importance of cooperation, the importance of creativity, and the importance of emotion. Description of the game: The avatar arrives in a certain area where there are many sound-blocks/objects. He can add new objects at will, or add a sound "property" to existing ones. Each object may represent a different sound, though it does not have to. The avatar walks around and chooses which objects he likes, makes copies of these and adds sounds, or changes the sounds on existing ones; with all of the sound-blocks combined he makes his personalized "instrument". Now any player can make sounds on the instrument by approaching or bumping into a sound-block. The way that the avatar makes sounds on the instrument can vary. At the end of the improvising session, the ‘composition’ will be saved on the instrument site, along with the personalized instrument. In this way, each user of the Amusement Center will leave behind him a unique instrumental creation, which others who visit the Center later will be able to play on and listen to. The fully creative experience of making a new instrument can be obtained by connecting to the Active Worlds worlds ‘Amuse’ and ‘Amuse2’. Animated, colorful, sounding objects can be assembled by the user in the Virtual Environment as a sort of sounding instrument. We deliberately refrain here from using the words musical instrument, because the level of control we have over the sound, in terms of rhythm and melody among other parameters, is very limited. It resembles instead, very closely, the primitive instruments used by humans in some civilizations, or the experience of children making sound out of ordinary objects. The dimension of cooperation is of paramount importance in the process of building and using the virtual sounding instrument. The instrument can be built through one's own effort, but preferably by a team of cooperating users. The cooperation has an important corollary: the sharing of the experience. The shared experience finds its permanence in the collective memory of the sounding instruments built. The sounding instrument can also be seen as a virtual sculpture; indeed, this sculpture is a multimedia one. The objects have properties that range from video animation to sound to virtual physical properties like solidity. The role of the user's representation in the Virtual World, called an avatar, is important because it conveys, among other things, the user's emotions. It is worth pointing out that the avatar has no emotions of its own but simply expresses the emotions of the user behind it. In a way it could be considered a sort of actor performing, in real time while playing, the script that the user gives it. The other important element of the integration is related to the memory of the experience left by the user in the Virtual World. The new layout is explored and experienced. The layout is a permanent, editable memory. The generative aspects of Let's Improvise Together are the following. The multimedia virtual sculpture left behind by any participating avatar is not the creation of a single author/artist.
The outcome of the synergic interaction of various authors is neither deterministic nor predictable. The authors can indeed use generative algorithms in order to create the textures to be used on the objects. Usually, in our experience, the visitors of the Amuse worlds use shareware programs in order to generate their textures. In most cases the shareware programs are simple fractal generators. In principle, it is also possible to generate the shape of the objects in a generative way. Taking into account the usual audience of our world, we expected visitors to use very simple algorithms that could generate shapes as .rwx files. Indeed, no one has attempted to do so so far. As far as the music is concerned, the availability of shareware programs that allow simple generation of sound sequences has made it possible for some users to generate sound sequences to be put in our world. In conclusion, the Let's Improvise section of the Amuse worlds could be open for experimentation on generative art as a very simple entry-point platform. We will be very happy to help anybody who, for educational purposes, would like to use our platform in order to create and exhibit generative forms of art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
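As an aside for readers who want to experiment, the bump-to-play behaviour described in the abstract above can be prototyped in a few lines. This is only an illustrative sketch, not the Amuse/Active Worlds implementation; the class, field and sound names are invented for the example.

```python
# Illustrative sketch only (not the Amuse/Active Worlds code): sound-blocks
# that "play" their attached sound when an avatar comes within a trigger radius.
from dataclasses import dataclass
from math import dist

@dataclass
class SoundBlock:
    name: str                    # label of the block in the world
    position: tuple              # (x, y, z) world coordinates
    sound: str                   # sound clip a user attached to the block
    trigger_radius: float = 1.5  # how close an avatar must come to trigger it

def sounds_triggered(avatar_pos, blocks):
    """Return the sounds of every block the avatar is close enough to bump."""
    return [b.sound for b in blocks if dist(avatar_pos, b.position) <= b.trigger_radius]

# A tiny "instrument" built from two user-placed blocks.
instrument = [
    SoundBlock("red cube", (0.0, 0.0, 0.0), "drum.wav"),
    SoundBlock("blue sphere", (4.0, 0.0, 1.0), "bell.wav"),
]
print(sounds_triggered((0.5, 0.0, 0.5), instrument))  # ['drum.wav']
```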

_id b4d2
authors Caldas, Luisa G. and Norford, Leslie K.
year 1999
title A Genetic Algorithm Tool for Design Optimization
doi https://doi.org/10.52842/conf.acadia.1999.260
source Media and Design Process [ACADIA ‘99 / ISBN 1-880250-08-X] Salt Lake City 29-31 October 1999, pp. 260-271
summary Much interest has been recently devoted to generative processes in design. Advances in computational tools for design applications, coupled with techniques from the field of artificial intelligence, have led to new possibilities in the way computers can inform and actively interact with the design process. In this paper we use the concepts of generative and goal-oriented design to propose a computer tool that can help the designer to generate and evaluate certain aspects of a solution towards an optimized behavior of the final configuration. This work focuses mostly on those aspects related to the environmental performance of the building. Genetic Algorithms are applied as a generative and search procedure to look for optimized design solutions in terms of thermal and lighting performance in a building. The Genetic Algorithm (GA) is first used to generate possible design solutions, which are then evaluated in terms of lighting and thermal behavior using a detailed thermal analysis program (DOE2.1E). The results from the simulations are subsequently used to further guide the GA search towards finding low-energy solutions to the problem under study. Solutions can be visualized using an AutoLisp routine. The specific problem addressed in this study is the placing and sizing of windows in an office building. The same method is applicable to a wide range of design problems, such as the choice of construction materials, the design of shading elements, or the sizing of lighting and mechanical systems for buildings.
series ACADIA
email
last changed 2022/06/07 07:54
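The generate-evaluate loop the abstract describes (a GA proposing window configurations, an energy simulation scoring them) can be sketched compactly. The paper couples the GA to DOE2.1E; here a toy energy_use function stands in for the simulator, and all parameter names, bounds and operator choices are assumptions made for illustration.

```python
# Minimal GA sketch of the generate-evaluate loop described above. The real
# system evaluated candidates with DOE2.1E; `energy_use` below is a toy stand-in.
import random

N_WINDOWS = 4                     # windows to size on one facade (assumed)
GENOME_BOUNDS = (0.5, 3.0)        # window width range in metres (assumed)

def random_genome():
    return [random.uniform(*GENOME_BOUNDS) for _ in range(N_WINDOWS)]

def energy_use(genome):
    # Placeholder cost: penalise both small glazing (more artificial lighting)
    # and large glazing (more thermal losses). Lower is better.
    glazing = sum(genome)
    lighting = 50.0 / (1.0 + glazing)
    thermal = 2.0 * glazing
    return lighting + thermal

def evolve(pop_size=30, generations=40, mutation_rate=0.2):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=energy_use)          # evaluate and rank
        parents = population[: pop_size // 2]    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_WINDOWS) # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:  # mutate one gene
                i = random.randrange(N_WINDOWS)
                child[i] = random.uniform(*GENOME_BOUNDS)
            children.append(child)
        population = parents + children
    return min(population, key=energy_use)

best = evolve()
print("best window widths:", [round(w, 2) for w in best], "->", round(energy_use(best), 1))
```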

_id ga9916
id ga9916
authors Elzenga, R. Neal and Pontecorvo, Michael S.
year 1999
title Arties: Meta-Design as Evolving Colonies of Artistic Agents
source International Conference on Generative Art
summary Meta-design, the act of designing a system or species of design instead of a design instance, is an important concept in modern design practice and in the generative design paradigm. For meta-design to be a useful tool, the designer must have more formal support for both design species definition/expression and the abstract attributes which the designer is attempting to embody within a design. Arties is an exploration of one possible avenue for supporting meta-design. Arties is an artistic system emphasizing the co-evolution of colonies of Artificial Life design or artistic agents (called arties) and the environment they inhabit. Generative design systems have concentrated on biological genetics metaphors where a population of design instances is evolved directly from a set of ‘parent’ designs in a succession of generations. In Arties, the a-life agent which is evolved produces the design instance as a byproduct of interacting with its environment. Arties utilize an attraction potential curve as their primary dynamic. They sense the relative attraction of entities in their environment, using multiple sensory channels. Arties then associate an attractiveness score with each entity. This attractiveness score is combined with a 'taste' function built into the artie that is sensitized to that observation channel, entity, and distance by a transfer function. Arties use this attraction to guide decisions and behaviors. A community of arties, with independently evolving attraction criteria, can pass collective judgement on each point in an art space. As the artie moves within this space it modifies the environment in reaction to what it senses. Arties' support for meta-design lies in (A) the process of evolving arties, breeding their attraction potential curve parameters using a genetic algorithm, and (B) their use of sensory channels to support abstract attribute geometries. Adjustment of these parameters tunes the attraction of the artie along various sensing channels. The multi-agent co-evolution of Arties is one approach to creating a system for supporting meta-design. Arties is part of an on-going exploration of how to support meta-design in computer augmented design systems. Our future work with Arties-like systems will be concerned with applications in areas such as modeling adaptive directives in Architecture, Object Structure Design, spatio-temporal behavior design (for games and simulations), virtual ambient spaces, and the representation and computation of abstract design attributes.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
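The attraction mechanism summarised above (per-channel stimuli weighted by an evolved 'taste' and attenuated by a distance transfer function) can be illustrated with a small sketch. The functional form, falloff constant and channel names below are assumptions, not the authors' implementation.

```python
# Hedged sketch of per-channel attraction scoring for an "artie"-like agent.
import math

def transfer(distance, falloff=2.0):
    """Attraction decays with distance (exponential falloff is assumed here)."""
    return math.exp(-distance / falloff)

def attraction(entity_stimuli, distance, tastes):
    """Combine sensory channels into a single attractiveness score."""
    return sum(tastes.get(channel, 0.0) * level * transfer(distance)
               for channel, level in entity_stimuli.items())

# One agent's evolved taste weights over two sensory channels (assumed names).
tastes = {"colour": 0.8, "sound": 0.3}

# Two entities in the environment, each emitting per-channel stimuli.
entities = [
    {"stimuli": {"colour": 0.9, "sound": 0.1}, "distance": 1.0},
    {"stimuli": {"colour": 0.2, "sound": 1.0}, "distance": 0.5},
]

scores = [attraction(e["stimuli"], e["distance"], tastes) for e in entities]
print([round(s, 3) for s in scores])  # the agent moves toward the higher score
```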

_id e336
authors Achten, H., Roelen, W., Boekholt, J.-Th., Turksma, A. and Jessurun, J.
year 1999
title Virtual Reality in the Design Studio: The Eindhoven Perspective
doi https://doi.org/10.52842/conf.ecaade.1999.169
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 169-177
summary Since 1991 Virtual Reality has been used in student projects in the Building Information Technology group. It started as an experimental tool to assess the impact of VR technology in design, using the environment of the associated Calibre Institute. The technology was further developed in Calibre to become an important presentation tool for assessing design variants and final design solutions. However, it was only sporadically used in student projects. A major shift occurred in 1997 with a number of student projects in which various computer technologies including VR were used in the whole of the design process. In 1998, the new Design Systems group started a design studio with the explicit aim to integrate VR in the whole design process. The teaching effort was combined with the research program that investigates VR as a design support environment. This has led to an increasing number of innovative student projects. The paper describes the context and history of VR in Eindhoven and presents the current set-up of the studio. It discusses the impact of the technology on the design process and outlines pedagogical issues in the studio work.
keywords Virtual Reality, Design Studio, Student Projects
series eCAADe
email
last changed 2022/06/07 07:54

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A digital Procedure of Building Construction: A practical project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times, when computers had not yet been well developed, there was some research regarding representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). With the appearance of 2D drawings, these drawings could only express abstract visual thinking and a visually conceptualized vocabulary (Goldschmidt, 1999). Then, with the massive use of physical models in the Renaissance, the form and space of architecture was given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) pointed out that humans increasingly rely on other specialists, computational agents, and materials to augment their cognitive abilities. This discourse was verified by recent research on the conception of design and its expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation did (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architectural design has given rise to a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), the Internet, and the restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most of the application of computers to construction is restricted to simulations of the building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage during the process of building construction (Madazo, 2000; Dave, 2000) and communication (Haymaker, 2000). In architectural design, concept design was achieved through drawings and models (Mitchell, 1997), while the working drawings and even shop drawings were developed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend towards 3D visualization (Johnson and Clayton, 1998) and the difference between designing in the physical environment and in the virtual environment (Maher et al. 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process in the near future (just as the critical role that physical models played in the early design process of the Renaissance). This research is combined with two practical building projects, following the progress of construction by using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems.
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings supplemented by imagination. The purpose of this paper is to describe all the related elements with precision and correctness, to discuss every possibility of different thinking in the design of electrical and mechanical engineering, to receive feedback from construction projects in the real world, and to compare the digital models with conventional drawings. Through the application of this research, the subtle relations between conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and standard process is proposed by using conventional drawings, digital models and physical buildings. By introducing digital media into the design process of working drawings and shop drawings, there is an opportunity to use digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id b78f
authors Clayton, M.J., Warden, Robert B., Parker, Th.W.
year 1999
title Virtual Construction of Architecture Using 3D CAD and Simulation
doi https://doi.org/10.52842/conf.acadia.1999.316
source Media and Design Process [ACADIA ‘99 / ISBN 1-880250-08-X] Salt Lake City 29-31 October 1999, pp. 316-324
summary 3D modeling and computer simulations provide new ways for architecture students to study the relationship between the design and construction of buildings. Digital media help to integrate and expand the content of courses in drafting, construction and design. This paper describes computer-based exercises that intensify the students’ experience of construction in several courses from sophomore to senior level. The courses integrate content from drafting and design communication, construction, CAD, and design. Several techniques are used to strengthen students’ awareness and ability in construction. These include:
· Virtual design-build projects in which students construct 3D CAD models that include all elements that are used in construction.
· A virtual office in which several students must collaborate under the supervision of a student acting as project architect to create a 3D CAD model and design development documents.
· Virtual sub-contracting in which each student builds a trade-specific 3D CAD model of a building and all of the trade-specific models must be combined into a single model.
· Construction simulations (4D CAD) in which students build 3D CAD models showing all components and then animate them to illustrate the assembly process.
· Cost estimating using spreadsheets.
These techniques are applied and reapplied at several points in the curriculum in both technical laboratory courses and design studios. This paper compares virtual construction methods to physical design-build projects and provides our pedagogical arguments for the use of digital media for understanding construction.
series ACADIA
email
last changed 2022/06/07 07:56

_id 837b
authors Elger, Dietrich and Russell, Peter
year 2000
title Using the World Wide Web as a Communication and Presentation Forum for Students of Architecture
doi https://doi.org/10.52842/conf.ecaade.2000.061
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 61-64
summary Since 1997, the Institute for Industrial Building Production (ifib) has been carrying out upper-level design studios under the framework of the Netzentwurf or Net-Studio. The Netzentwurf is categorized as a virtual design studio in that the environment for presentation, criticism and communication is web based. This allows lessons learned from research into Computer Supported Cooperative Work (CSCW) to be adapted to the special conditions indigenous to the architectural design studio. Indeed, an aim of the Netzentwurf is the creation and evolution of a design studio planning platform. In the winter semester 1999-2000, ifib again carried out two Netzentwurf studios involving approximately 30 students from the Faculty of Architecture, University of Karlsruhe. The projects differed from previous net studios in that both studios encompassed an inter-university character in addition to the established framework of the Netzentwurf. The first project, the re-use of Fort Kleber in Wolfisheim near Strasbourg, was carried out as part of the Virtual Upperrhine University of Architecture (VuuA), involving over 140 students from various disciplines in six institutions from five universities in France, Switzerland and Germany. The second project, entitled "Future, Inc.", involved the design of an office building for a scenario 20 years hence. This project was carried out in parallel with the Technical University Cottbus using the same methodology and program for two separate building sites.
keywords Virtual Design Studios, Architectural Graphics, Presentation Techniques
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:55

_id 933a
authors GVU
year 1999
title Conceptual Design Space Project
source Virtual Environments Group, GVU Center, Georgia Tech, Atlanta, Georgia, USA
summary The Conceptual Design Space (CDS) is a real-time, interactive virtual environments application which attempts to address the issue of 3D design in general and immersive design in particular. We are researching innovative tools and interface elements for virtual worlds. The first application of these techniques is an architectural one. Graduate students from Georgia Tech's College of Architecture will be using CDS to create conceptual building designs. The students will not only be able to inspect and "inhabit" their buildings, but will also have the ability to modify them, add details, or create new designs, all while immersed in the virtual world.
series other
last changed 2003/04/23 15:50

_id avocaad_2001_22
id avocaad_2001_22
authors Jos van Leeuwen, Joran Jessurun
year 2001
title XML for Flexibility and Extensibility of Design Information Models
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary The VR-DIS research programme aims at the development of a Virtual Reality – Design Information System. This is a design and decision support system for collaborative design that provides a VR interface for the interaction with both the geometric representation of a design and the non-geometric information concerning the design throughout the design process. The major part of the research programme focuses on early stages of design. The programme is carried out by a large number of researchers from a variety of disciplines in the domain of construction and architecture, including architectural design, building physics, structural design, construction management, etc.Management of design information is at the core of this design and decision support system. Much effort in the development of the system has been and still is dedicated to the underlying theory for information management and its implementation in an Application Programming Interface (API) that the various modules of the system use. The theory is based on a so-called Feature-based modelling approach and is described in the PhD thesis by [first author, 1999] and in [first author et al., 2000a]. This information modelling approach provides three major capabilities: (1) it allows for extensibility of conceptual schemas, which is used to enable a designer to define new typologies to model with; (2) it supports sharing of conceptual schemas, called type-libraries; and (3) it provides a high level of flexibility that offers the designer the opportunity to easily reuse design information and to model information constructs that are not foreseen in any existing typologies. The latter aspect involves the capability to expand information entities in a model with relationships and properties that are not typologically defined but applicable to a particular design situation only; this helps the designer to represent the actual design concepts more accurately.The functional design of the information modelling system is based on a three-layered framework. In the bottom layer, the actual design data is stored in so-called Feature Instances. The middle layer defines the typologies of these instances in so-called Feature Types. The top layer is called the meta-layer because it provides the class definitions for both the Types layer and the Instances layer; both Feature Types and Feature Instances are objects of the classes defined in the top layer. This top layer ensures that types can be defined on the fly and that instances can be created from these types, as well as expanded with non-typological properties and relationships while still conforming to the information structures laid out in the meta-layer.The VR-DIS system consists of a growing number of modules for different kinds of functionality in relation with the design task. These modules access the design information through the API that implements the meta-layer of the framework. This API has previously been implemented using an Object-Oriented Database (OODB), but this implementation had a number of disadvantages. The dependency of the OODB, a commercial software library, was considered the most problematic. Not only are licenses of the OODB library rather expensive, also the fact that this library is not common technology that can easily be shared among a wide range of applications, including existing applications, reduces its suitability for a system with the aforementioned specifications. 
In addition, the OODB approach required a relatively large effort to implement the desired functionality. It lacked adequate support to generate unique identifications for worldwide information sources that were understandable for human interpretation. This strongly limited the capabilities of the system to share conceptual schemas.The approach that is currently being implemented for the core of the VR-DIS system is based on eXtensible Markup Language (XML). Rather than implementing the meta-layer of the framework into classes of Feature Types and Feature Instances, this level of meta-definitions is provided in a document type definition (DTD). The DTD is complemented with a set of rules that are implemented into a parser API, based on the Document Object Model (DOM). The advantages of the XML approach for the modelling framework are immediate. Type-libraries distributed through Internet are now supported through the mechanisms of namespaces and XLink. The implementation of the API is no longer dependent of a particular database system. This provides much more flexibility in the implementation of the various modules of the VR-DIS system. Being based on the (supposed to become) standard of XML the implementation is much more versatile in its future usage, specifically in a distributed, Internet-based environment.These immediate advantages of the XML approach opened the door to a wide range of applications that are and will be developed on top of the VR-DIS core. Examples of these are the VR-based 3D sketching module [VR-DIS ref., 2000]; the VR-based information-modelling tool that allows the management and manipulation of information models for design in a VR environment [VR-DIS ref., 2000]; and a design-knowledge capturing module that is now under development [first author et al., 2000a and 2000b]. The latter module aims to assist the designer in the recognition and utilisation of existing and new typologies in a design situation. The replacement of the OODB implementation of the API by the XML implementation enables these modules to use distributed Feature databases through Internet, without many changes to their own code, and without the loss of the flexibility and extensibility of conceptual schemas that are implemented as part of the API. Research in the near future will result in Internet-based applications that support designers in the utilisation of distributed libraries of product-information, design-knowledge, case-bases, etc.The paper roughly follows the outline of the abstract, starting with an introduction to the VR-DIS project, its objectives, and the developed theory of the Feature-modelling framework that forms the core of it. It briefly discusses the necessity of schema evolution, flexibility and extensibility of conceptual schemas, and how these capabilities have been addressed in the framework. The major part of the paper describes how the previously mentioned aspects of the framework are implemented in the XML-based approach, providing details on the so-called meta-layer, its definition in the DTD, and the parser rules that complement it. The impact of the XML approach on the functionality of the VR-DIS modules and the system as a whole is demonstrated by a discussion of these modules and scenarios of their usage for design tasks. The paper is concluded with an overview of future work on the sharing of Internet-based design information and design knowledge.
series AVOCAAD
email
last changed 2005/09/09 10:48
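To make the Feature Type / Feature Instance idea concrete, the sketch below shows how such a model might look in XML and how a DOM parser reads it back. The element and attribute names are invented for illustration; the paper defines its own DTD and parser rules.

```python
# Illustrative sketch only: Feature Types and Feature Instances expressed in
# XML and read back through a DOM parser. The vocabulary here is hypothetical.
from xml.dom.minidom import parseString

document = """
<featureModel>
  <featureType name="Wall">
    <property name="height" datatype="float"/>
  </featureType>
  <featureInstance type="Wall" id="w1">
    <property name="height" value="3.2"/>
    <!-- a non-typological property attached to this instance only -->
    <property name="fireRating" value="REI60"/>
  </featureInstance>
</featureModel>
"""

dom = parseString(document)
for inst in dom.getElementsByTagName("featureInstance"):
    props = {p.getAttribute("name"): p.getAttribute("value")
             for p in inst.getElementsByTagName("property")}
    print(inst.getAttribute("id"), inst.getAttribute("type"), props)
# w1 Wall {'height': '3.2', 'fireRating': 'REI60'}
```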

_id a70b
authors Jung, Th., Do, E.Y.-L. and Gross, M.D.
year 1999
title Immersive Redlining and Annotation of 3D Design Models on the Web
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, pp. 81-98
summary The Web now enables people in different places to view three-dimensional models of buildings and places in a collaborative design discussion. Already design firms with offices around the world are exploiting this capability. In a typical application, design drawings and models are posted by one party for review by others, and a dialogue is carried out either synchronously, using online streamed video and audio, or asynchronously, using email, chat room, and bulletin board software. However, most of these systems do not allow designers to embed annotations and proposed design changes in the three-dimensional design model under discussion. We present a working prototype of a system that has these capabilities and describe the configuration of Web technologies we used to construct it.
keywords VRML, Immersive Environment, Virtual Annotation, Computer-aided Design, Building Models
series CAAD Futures
email
last changed 2006/11/07 07:22

_id 53a4
authors Vélez Jahn, Gonzalo
year 1999
title The MUMOVIAR (Museum for Modeling Virtual Architecture) - A Proposal for a Research Theme
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 379-383
summary One of the most interesting areas in the forefront of non-immersive virtual reality (VRML) applications to architecture is the one that concerns the design, construction and exploration of on-line multi-access worlds using the Internet-WWW. However, and despite the great proliferation of earlier single-access models built in VRML, attempts to collect, classify and provide accessibility to that type of model have proved almost nil. On the other hand, one of the architectural typologies that promises the greatest transformation potential in the virtual architecture area in cyberspace is the one that concerns virtual museums and galleries. This paper seeks to provide a bridge between the two aforementioned approaches by formulating a conceptual basis for the creation of a virtual, on-line, multi-access museum intended to house collections of VRML building models. Such models, initially shown at a conventional model scale, would be accessed by visitors through an interface intended to transport those visitors into the models’ environments, where changes in scale could provide navigation access to interior and exterior views of the buildings. Accordingly, the museum would act as a sort of "spaceport" toward different routes of exploration. This modelistic cascading seems to offer interesting possibilities as regards future virtual architecture applications.
series SIGRADI
email
last changed 2016/03/10 10:02

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen.
1. The history of Repligator and Gliftic
1.1 Repligator
In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) ease of use, 2) ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful.
1.2 Getting to Gliftic
Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example, if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes simply as closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons (Figure 1: Mandala bred with an array of regular polygons). I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation).
1.3 Gliftic today
Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic (Figure 2: Mandala interpreted with arabesques; Figure 3: Trellis interpreted with "graphic ivy"; Figure 4: Regular dots interpreted as "sparks").
1.4 Forms in Gliftic V1
Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons.
1.5 Color Schemes in Gliftic V1
When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image.
1.6 Interpretations in Gliftic V1
Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag.
1.7 Applications of Gliftic
Currently Gliftic is mostly used for creating Web graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later.
2. The future of Gliftic: three possibilities
Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in the future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them.
2.1 Continue the current development "linearly"
Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations.
2.2 Allow the artist to program Gliftic
It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his Web site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic
This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric."
3. References
1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art.
2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999.
3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
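The 'genes as a point list' idea discussed in the abstract above can be tried out directly. The sketch below resamples a form as a fixed number of vertices and blends two forms coordinate-wise; it is one naive crossover of the kind the author experimented with (and found unsatisfying), with all details assumed for illustration.

```python
# Sketch of one naive way to 'cross' two closed polygonal forms represented as
# point lists. Resampling and blending choices here are assumptions.
import math

def regular_polygon(n_sides, radius=1.0, n_points=100):
    """Sample a regular polygon (a 100-gon approximates a circle) as n_points vertices."""
    pts = []
    for i in range(n_points):
        t = i / n_points * n_sides            # position along the perimeter
        k = int(t)                            # which edge we are on
        f = t - k                             # fraction along that edge
        a0 = 2 * math.pi * k / n_sides
        a1 = 2 * math.pi * (k + 1) / n_sides
        x = (1 - f) * math.cos(a0) + f * math.cos(a1)
        y = (1 - f) * math.sin(a0) + f * math.sin(a1)
        pts.append((radius * x, radius * y))
    return pts

def crossover(parent_a, parent_b, weight=0.5):
    """Child vertex i is a weighted blend of the parents' vertices i."""
    return [((1 - weight) * ax + weight * bx, (1 - weight) * ay + weight * by)
            for (ax, ay), (bx, by) in zip(parent_a, parent_b)]

circle = regular_polygon(100)     # 'circle' as a 100-sided polygon
triangle = regular_polygon(3)
child = crossover(circle, triangle, weight=0.5)
print(len(child), child[0])       # 100 blended vertices
```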

_id 975b
authors Sato, I., Sato, Y. and Ikeuchi, K.
year 1999
title Acquiring a Radiance Distribution to Superimpose Virtual Objects onto a Real Scene
source IEEE Transactions on Visualization and Computer Graphics, vol. 5, no. 1, pp. 1-12, March 1999
summary This paper describes a new method for superimposing virtual objects with correct shadings onto an image of a real scene. Unlike the previously proposed methods, our method can measure a radiance distribution of a real scene automatically and use it for superimposing virtual objects appropriately onto a real scene. First, a geometric model of the scene is constructed from a pair of omnidirectional images by using an omnidirectional stereo algorithm. Then, radiance of the scene is computed from a sequence of omnidirectional images taken with different shutter speeds and mapped onto the constructed geometric model. The radiance distribution mapped onto the geometric model is used for rendering virtual objects superimposed onto the scene image. As a result, even for a complex radiance distribution, our method can superimpose virtual objects with convincing shadings and shadows cast onto the real scene. We successfully tested the proposed method by using real images to show its effectiveness.
series journal paper
last changed 2003/04/23 15:50
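A much-simplified version of the multi-exposure radiance recovery referred to in the abstract can be sketched as follows, assuming a linear camera response and ignoring the omnidirectional geometry; it is not the authors' procedure.

```python
# Minimal sketch: per-pixel radiance from images at several shutter speeds,
# assuming a linear sensor response and discarding under/over-exposed pixels.
import numpy as np

def recover_radiance(images, shutter_times, low=5, high=250):
    """images: list of uint8 arrays (same shape); shutter_times: seconds."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, shutter_times):
        z = img.astype(np.float64)
        w = ((z > low) & (z < high)).astype(np.float64)  # usable pixels only
        num += w * z / t                                  # radiance estimate from this exposure
        den += w
    return num / np.maximum(den, 1e-9)                   # weighted average over exposures

# Toy example: the same scene captured at three shutter speeds.
true_radiance = np.array([[300.0, 1200.0], [5000.0, 20000.0]])
times = [1 / 30, 1 / 250, 1 / 2000]
shots = [np.clip(true_radiance * t, 0, 255).astype(np.uint8) for t in times]
print(recover_radiance(shots, times))   # roughly recovers true_radiance
```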

_id cf2011_p109
id cf2011_p109
authors Abdelmohsen, Sherif; Lee Jinkook, Eastman Chuck
year 2011
title Automated Cost Analysis of Concept Design BIM Models
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 403-418.
summary AUTOMATED COST ANALYSIS OF CONCEPT DESIGN BIM MODELS Interoperability: BIM models and cost models This paper introduces the automated cost analysis developed for the General Services Administration (GSA) and the analysis results of a case study involving a concept design courthouse BIM model. The purpose of this study is to investigate interoperability issues related to integrating design and analysis tools; specifically BIM models and cost models. Previous efforts to generate cost estimates from BIM models have focused on developing two necessary but disjoint processes: 1) extracting accurate quantity take off data from BIM models, and 2) manipulating cost analysis results to provide informative feedback. Some recent efforts involve developing detailed definitions, enhanced IFC-based formats and in-house standards for assemblies that encompass building models (e.g. US Corps of Engineers). Some commercial applications enhance the level of detail associated to BIM objects with assembly descriptions to produce lightweight BIM models that can be used by different applications for various purposes (e.g. Autodesk for design review, Navisworks for scheduling, Innovaya for visual estimating, etc.). This study suggests the integration of design and analysis tools by means of managing all building data in one shared repository accessible to multiple domains in the AEC industry (Eastman, 1999; Eastman et al., 2008; authors, 2010). Our approach aims at providing an integrated platform that incorporates a quantity take off extraction method from IFC models, a cost analysis model, and a comprehensive cost reporting scheme, using the Solibri Model Checker (SMC) development environment. Approach As part of the effort to improve the performance of federal buildings, GSA evaluates concept design alternatives based on their compliance with specific requirements, including cost analysis. Two basic challenges emerge in the process of automating cost analysis for BIM models: 1) At this early concept design stage, only minimal information is available to produce a reliable analysis, such as space names and areas, and building gross area, 2) design alternatives share a lot of programmatic requirements such as location, functional spaces and other data. It is thus crucial to integrate other factors that contribute to substantial cost differences such as perimeter, and exterior wall and roof areas. These are extracted from BIM models using IFC data and input through XML into the Parametric Cost Engineering System (PACES, 2010) software to generate cost analysis reports. PACES uses this limited dataset at a conceptual stage and RSMeans (2010) data to infer cost assemblies at different levels of detail. Functionalities Cost model import module The cost model import module has three main functionalities: generating the input dataset necessary for the cost model, performing a semantic mapping between building type specific names and name aggregation structures in PACES known as functional space areas (FSAs), and managing cost data external to the BIM model, such as location and construction duration. The module computes building data such as footprint, gross area, perimeter, external wall and roof area and building space areas. This data is generated through SMC in the form of an XML file and imported into PACES. Reporting module The reporting module uses the cost report generated by PACES to develop a comprehensive report in the form of an excel spreadsheet. 
This report consists of a systems-elemental estimate that shows the main systems of the building in terms of UniFormat categories, escalation, markups, overhead and conditions, a UniFormat Level III report, and a cost breakdown that provides a summary of material, equipment, labor and total costs. Building parameters are integrated in the report to provide insight on the variations among design alternatives.
keywords building information modeling, interoperability, cost analysis, IFC
series CAAD Futures
email
last changed 2012/02/11 19:21
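A toy version of the concept-stage takeoff-and-costing flow described above is sketched below. The quantities mirror those the abstract lists (gross area, perimeter, exterior wall and roof areas); the assembly names and unit rates are invented placeholders, not PACES or RSMeans data.

```python
# Hedged sketch of concept-stage quantity takeoff and assembly costing.
# Unit rates and assembly breakdown are assumptions made for illustration.
def concept_cost(gross_area, perimeter, storeys, storey_height=3.5, rates=None):
    rates = rates or {               # assumed cost per m2 of assembly
        "floors": 180.0,
        "exterior_wall": 350.0,
        "roof": 220.0,
    }
    quantities = {
        "floors": gross_area,                            # all floor plates
        "exterior_wall": perimeter * storey_height * storeys,
        "roof": gross_area / storeys,                    # roof ~ one floor plate
    }
    lines = {k: quantities[k] * rates[k] for k in quantities}
    return lines, sum(lines.values())

lines, total = concept_cost(gross_area=12_000, perimeter=160, storeys=6)
for assembly, cost in lines.items():
    print(f"{assembly:15s} {cost:12,.0f}")
print(f"{'total':15s} {total:12,.0f}")
```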

_id 4d95
authors Alvarado, Rodrigo Garcia and Maver, Tom
year 1999
title Virtual Reality in Architectural Education: Defining Possibilities
doi https://doi.org/10.52842/conf.acadia.1999.007
source ACADIA Quarterly, vol. 18, no. 4, pp. 7-9
summary Introduction: virtual reality in architecture. Virtual Reality (VR) is an emergent computer technology for full 3D simulations, which has a natural application in architectural work, since that activity involves the complete definition of buildings prior to their construction. Although the profession has a long tradition and expertise in the use of 2D plans for the design of buildings, the increasing complexity of projects and social participation requires better media of representation. However, the technological promise of Virtual Reality involves many sophisticated software and hardware developments. It is based on techniques of 3D modelling currently incorporated in the majority of drawing software used in architecture, and there are also several tools for rendering, animation and panoramic views, which provide visual realism. But other capabilities like interactivity and a sense of immersion are still complex, expensive and under research. These require stereoscopic helmets, 3D pointers and trackers with complicated configurations and uncomfortable use. The most advanced installations of Virtual Reality, like CAVEs, involve much hardware, building space and restrictions for users. Nevertheless, diverse developers are working on user-friendly Virtual Reality techniques, and there are many initial experiences of architectural walk-throughs showing advantages in the communication and development of designs. Thus we may expect an increasing use of Virtual Reality in architecture.
series ACADIA
email
last changed 2022/06/07 07:54

_id 7ccd
authors Augenbroe, Godfried and Eastman, Chuck
year 1999
title Computers in Building: Proceedings of the CAADfutures '99 Conference
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, 398 p.
summary This is the eighth CAADfutures conference. Each of these biennial conferences identifies the state of the art in computer applications in architecture. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. Early conferences, for example, addressed project work, either for real construction or done in academic studios, that approached the teaching or use of CAD tools in innovative ways. By the early 1990s, such project-based examples of CAD use disappeared from the conferences, as this area was no longer considered a research contribution. Computer-based design has become a basic way of doing business. This conference is marked by a similar evolutionary change. More papers were submitted about Web-based applications than about any other area. Rather than having multiple sessions on Web-based applications and communications, we instead came to the conclusion that the Web is now an integral part of digital computing, as are CAD applications. Using the conference as a sample, Web-based projects have been integrated into most research areas. This does not mean that the application of the Web is not a research area, but rather that the Web itself is an integral tool in almost all areas of CAAD research.
series CAAD Futures
email
last changed 2006/11/07 07:22

_id bfc2
authors Bessone, Miriam and Mantovani, Graciela
year 1999
title Integración del Medio Digital a la Enseñanza del Diseño Arquitectónico. Huellas de un Taller Experimental (Integration of Digital Media in the Teaching of Architectural Design. Tracks of an Experimental Studio)
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 289-294
summary This paper presents the search for new ways of building design knowledge in curriculum workshops at the Facultad de Arquitectura, Diseño y Urbanismo of the Universidad Nacional del Litoral. The proposed "research-action" program is articulated longitudinally across the three cycles of the degree, understanding architecture as metaknowledge within a new paradigm of subjectivity, complexity and multidimensionality. In other words, a new scenario is recognized that tends to modify didactic relations. This experimental field looks for a conscious equilibrium between "written culture/audiovisual culture" and "analog instruments/digital media". We focus our interest on the "machine interacting with and for men", looking for a harmonious synthesis through a new way of thinking, to allow "real progress". To turn this idea into action, we organized an alternative and plural form of teamwork in architecture, which we called the "experimental workshop". At this first level the students worked on a preliminary plan for a kindergarten. They developed a divergent process through 3D simulations (using the software 3DS MAX v2), scale models and sensitive sketches. In conclusion, the paper addresses the characteristics of the pedagogic model used and the results achieved.
series SIGRADI
email
last changed 2016/03/10 09:47

_id 7229
authors Brenner, C. and Haala, N.
year 1999
title Towards Virtual Maps: On the Production of 3D City Models
source GeoInformatics 2(5), pp. 10–13
summary The growing demand for detailed city models has stimulated research on efficient 3D data acquisition. Over the past years, it has become evident that the automatic reconstruction of urban scenes is most promising if different types of data, possibly originating from different data sources are combined. In the approach presented in this paper the geometric reconstruction of urban areas is based on height data from airborne laser scanning and 2D GIS, which provides the ground plan geometry of buildings. Both data sources are used to estimate the type and parameter of basic primitives which in turn are combined to obtain complex building structures. The final output consists of 3D CAD models for the buildings. Using the reconstructed geometry, terrestrial images are mapped onto building facades to generate virtual city models.
keywords 3D City modeling
series journal paper
last changed 2003/11/21 15:16
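The primitive-estimation step described above (laser-scan heights inside a GIS ground plan used to select and parameterise a roof primitive) can be illustrated with a least-squares toy. Only two candidate primitives are compared here, and all names and data are assumptions, not the authors' reconstruction pipeline.

```python
# Simplified sketch of roof-primitive selection from height samples inside one
# building footprint: compare a flat and a single-slope fit, keep the better one.
import numpy as np

def fit_flat(xy, z):
    """Flat roof: z = h. Returns (params, squared residual)."""
    h = z.mean()
    return {"height": h}, float(((z - h) ** 2).sum())

def fit_slope(xy, z):
    """Single-slope roof: z = a*x + b*y + c, by least squares."""
    A = np.c_[xy, np.ones(len(z))]
    coeff, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = float(((A @ coeff - z) ** 2).sum())
    return {"a": coeff[0], "b": coeff[1], "c": coeff[2]}, resid

def select_primitive(xy, z):
    candidates = {"flat": fit_flat(xy, z), "slope": fit_slope(xy, z)}
    best = min(candidates, key=lambda k: candidates[k][1])
    return best, candidates[best][0]

# Toy data: laser points inside a 10 m x 6 m footprint, roof rising along x.
rng = np.random.default_rng(0)
xy = rng.uniform([0, 0], [10, 6], size=(200, 2))
z = 6.0 + 0.4 * xy[:, 0] + rng.normal(0, 0.05, 200)   # eaves 6 m, slope 0.4
print(select_primitive(xy, z))                         # ('slope', {...})
```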

_id 165b
authors Brenner, C. and Haala, N.
year 1999
title Rapid Production of Virtual Reality City Models
source GIS - Geo-Informationssysteme 12(3), pp. 22–28
summary The growing demand for detailed city models has stimulated research on efficient 3D data acquisition. Over the past years, it has become evident that the automatic reconstruction of urban scenes is most promising if different types of data, possibly originating from different data sources are combined. In the approach presented in this paper the geometric reconstruction of urban areas is based on height data from airborne laser scanning and 2D GIS, which provides the ground plan geometry of buildings. Both data sources are used to estimate the type and parameter of basic primitives which in turn are combined to obtain complex building structures. The final output consists of 3D CAD models for the buildings. Using the reconstructed geometry, terrestrial images are mapped onto building facades to generate virtual city models.
keywords 3D City modeling
series other
last changed 2003/11/21 15:16

_id 48a7
authors Brooks
year 1999
title What's Real About Virtual Reality
source IEEE Computer Graphics and Applications, Vol. 19, no. 6, Nov/Dec, 27
summary As is usual with infant technologies, the realization of the early dreams for VR and harnessing it to real work has taken longer than the wild hype predicted, but it is now happening. I assess the current state of the art, addressing the perennial questions of technology and applications. By 1994, one could honestly say that VR "almost works." Many workers at many centers could do quite exciting demos. Nevertheless, the enabling technologies had limitations that seriously impeded building VR systems for any real work except entertainment and vehicle simulators. Some of the worst problems were end-to-end system latencies, low-resolution head-mounted displays, limited tracker range and accuracy, and costs. The technologies have made great strides. Today one can get satisfying VR experiences with commercial off-the-shelf equipment. Moreover, technical advances have been accompanied by dropping costs, so it is both technically and economically feasible to do significant applications. VR really works. That is not to say that all the technological problems and limitations have been solved. VR technology today "barely works." Nevertheless, coming over the mountain pass from "almost works" to "barely works" is a major transition for the discipline. I have sought out applications that are now in daily productive use, in order to find out exactly what is real. Separating these from prototype systems and feasibility demos is not always easy. People doing daily production applications have been forthcoming about lessons learned and surprises encountered. As one would expect, the initial production applications are those offering high value over alternative approaches. These applications fall into a few classes. I estimate that there are about a hundred installations in daily productive use worldwide.
series journal paper
email
last changed 2003/04/23 15:14
