CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. 
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator however the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques. Figure 3 Trellis interpreted with "graphic ivy". Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. 
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
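The shape-breeding problem described in the abstract above (treating the coordinate list of a closed polygon as its "genes" and crossing, say, a circle with the outline of the UK) can be made concrete with a small sketch. The following Python is an editorial illustration only, not Ransen's code: the perimeter resampling step, the point-wise linear blend, and all function names are assumptions about one plausible crossover scheme.

```python
import math

def resample(polygon, n):
    """Resample a closed polygon (list of (x, y) points) to n points
    spaced evenly along its perimeter."""
    # Close the loop so the last segment back to the start is included.
    pts = list(polygon) + [polygon[0]]
    seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(seg)
    out, i, travelled = [], 0, 0.0
    for k in range(n):
        target = total * k / n
        while travelled + seg[i] < target:
            travelled += seg[i]
            i += 1
        t = (target - travelled) / seg[i] if seg[i] else 0.0
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def breed(parent_a, parent_b, n=100, weight=0.5):
    """Cross two closed polygons by blending corresponding resampled
    points; weight=0 returns parent_a's shape, weight=1 parent_b's."""
    a, b = resample(parent_a, n), resample(parent_b, n)
    return [(ax + weight * (bx - ax), ay + weight * (by - ay))
            for (ax, ay), (bx, by) in zip(a, b)]

if __name__ == "__main__":
    circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
              for k in range(100)]
    square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
    child = breed(circle, square, n=100, weight=0.5)
    print(len(child), child[0])
```

Notably, a blend like this is exactly the kind of operator the abstract reports as unsatisfying: applied over several generations it tends toward amorphous, averaged shapes, and it does nothing by itself to preserve the symmetry of symmetrical parents.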

_id avocaad_2001_22
id avocaad_2001_22
authors Jos van Leeuwen, Joran Jessurun
year 2001
title XML for Flexibility and Extensibility of Design Information Models
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary The VR-DIS research programme aims at the development of a Virtual Reality – Design Information System. This is a design and decision support system for collaborative design that provides a VR interface for the interaction with both the geometric representation of a design and the non-geometric information concerning the design throughout the design process. The major part of the research programme focuses on early stages of design. The programme is carried out by a large number of researchers from a variety of disciplines in the domain of construction and architecture, including architectural design, building physics, structural design, construction management, etc.Management of design information is at the core of this design and decision support system. Much effort in the development of the system has been and still is dedicated to the underlying theory for information management and its implementation in an Application Programming Interface (API) that the various modules of the system use. The theory is based on a so-called Feature-based modelling approach and is described in the PhD thesis by [first author, 1999] and in [first author et al., 2000a]. This information modelling approach provides three major capabilities: (1) it allows for extensibility of conceptual schemas, which is used to enable a designer to define new typologies to model with; (2) it supports sharing of conceptual schemas, called type-libraries; and (3) it provides a high level of flexibility that offers the designer the opportunity to easily reuse design information and to model information constructs that are not foreseen in any existing typologies. The latter aspect involves the capability to expand information entities in a model with relationships and properties that are not typologically defined but applicable to a particular design situation only; this helps the designer to represent the actual design concepts more accurately.The functional design of the information modelling system is based on a three-layered framework. In the bottom layer, the actual design data is stored in so-called Feature Instances. The middle layer defines the typologies of these instances in so-called Feature Types. The top layer is called the meta-layer because it provides the class definitions for both the Types layer and the Instances layer; both Feature Types and Feature Instances are objects of the classes defined in the top layer. This top layer ensures that types can be defined on the fly and that instances can be created from these types, as well as expanded with non-typological properties and relationships while still conforming to the information structures laid out in the meta-layer.The VR-DIS system consists of a growing number of modules for different kinds of functionality in relation with the design task. These modules access the design information through the API that implements the meta-layer of the framework. This API has previously been implemented using an Object-Oriented Database (OODB), but this implementation had a number of disadvantages. The dependency of the OODB, a commercial software library, was considered the most problematic. Not only are licenses of the OODB library rather expensive, also the fact that this library is not common technology that can easily be shared among a wide range of applications, including existing applications, reduces its suitability for a system with the aforementioned specifications. 
In addition, the OODB approach required a relatively large effort to implement the desired functionality. It lacked adequate support to generate unique identifications for worldwide information sources that were understandable for human interpretation. This strongly limited the capabilities of the system to share conceptual schemas. The approach that is currently being implemented for the core of the VR-DIS system is based on eXtensible Markup Language (XML). Rather than implementing the meta-layer of the framework into classes of Feature Types and Feature Instances, this level of meta-definitions is provided in a document type definition (DTD). The DTD is complemented with a set of rules that are implemented into a parser API, based on the Document Object Model (DOM). The advantages of the XML approach for the modelling framework are immediate. Type-libraries distributed through the Internet are now supported through the mechanisms of namespaces and XLink. The implementation of the API is no longer dependent on a particular database system. This provides much more flexibility in the implementation of the various modules of the VR-DIS system. Being based on XML, a format that is expected to become a standard, the implementation is much more versatile in its future usage, specifically in a distributed, Internet-based environment. These immediate advantages of the XML approach opened the door to a wide range of applications that are and will be developed on top of the VR-DIS core. Examples of these are the VR-based 3D sketching module [VR-DIS ref., 2000]; the VR-based information-modelling tool that allows the management and manipulation of information models for design in a VR environment [VR-DIS ref., 2000]; and a design-knowledge capturing module that is now under development [first author et al., 2000a and 2000b]. The latter module aims to assist the designer in the recognition and utilisation of existing and new typologies in a design situation. The replacement of the OODB implementation of the API by the XML implementation enables these modules to use distributed Feature databases through the Internet, without many changes to their own code, and without the loss of the flexibility and extensibility of conceptual schemas that are implemented as part of the API. Research in the near future will result in Internet-based applications that support designers in the utilisation of distributed libraries of product-information, design-knowledge, case-bases, etc. The paper roughly follows the outline of the abstract, starting with an introduction to the VR-DIS project, its objectives, and the developed theory of the Feature-modelling framework that forms the core of it. It briefly discusses the necessity of schema evolution, flexibility and extensibility of conceptual schemas, and how these capabilities have been addressed in the framework. The major part of the paper describes how the previously mentioned aspects of the framework are implemented in the XML-based approach, providing details on the so-called meta-layer, its definition in the DTD, and the parser rules that complement it. The impact of the XML approach on the functionality of the VR-DIS modules and the system as a whole is demonstrated by a discussion of these modules and scenarios of their usage for design tasks. The paper is concluded with an overview of future work on the sharing of Internet-based design information and design knowledge.
series AVOCAAD
email
last changed 2005/09/09 10:48
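As a reading aid for the framework described above, here is a minimal, hypothetical sketch of the Feature Type / Feature Instance split expressed in XML and checked by a small parser rule. The element names, the "adhoc" attribute, and the validation rule are invented for illustration; the actual VR-DIS DTD, DOM-based parser API, namespace and XLink mechanics are not reproduced here.

```python
import xml.etree.ElementTree as ET

# Hypothetical document: a small type library plus one instance, standing in
# for the DTD-governed meta-layer described in the abstract.
DOCUMENT = """
<featureModel>
  <featureType name="Wall">
    <property name="height" datatype="float"/>
  </featureType>
  <featureInstance type="Wall" id="w1">
    <property name="height">2.7</property>
    <!-- non-typological property, tolerated by the flexible meta-layer -->
    <property name="clientRemark" adhoc="true">keep glazed</property>
  </featureInstance>
</featureModel>
"""

def check_instances(root):
    """Parser-rule sketch: every typed property of an instance must be
    declared by its Feature Type; ad-hoc properties are allowed through."""
    types = {t.get("name"): {p.get("name") for p in t.findall("property")}
             for t in root.findall("featureType")}
    for inst in root.findall("featureInstance"):
        declared = types.get(inst.get("type"), set())
        for prop in inst.findall("property"):
            if prop.get("adhoc") == "true":
                continue  # flexibility: not foreseen in any typology
            if prop.get("name") not in declared:
                raise ValueError(f"{inst.get('id')}: undeclared property "
                                 f"{prop.get('name')!r}")
    return types

if __name__ == "__main__":
    root = ET.fromstring(DOCUMENT)
    print(check_instances(root))
```

The point of the sketch is the division of labour the paper argues for: the meta-level fixes only what a type and an instance look like, concrete typologies are ordinary data that can be defined on the fly, and an instance may still carry non-typological properties without breaking validation.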

_id f11d
authors Brown, K. and Petersen, D.
year 1999
title Ready-to-Run Java 3D
source Wiley Computer Publishing
summary Written for the intermediate Java programmer and Web site designer, Ready-to-Run Java 3D provides sample Java applets and code using Sun's new Java 3D API. This book provides a worthy jump-start for Java 3D that goes well beyond the documentation provided by Sun. Coverage includes downloading the Java 2 plug-in (needed by Java 3D) and basic Java 3D classes for storing shapes, matrices, and scenes. A listing of all Java 3D classes shows off its considerable richness. Generally, this book tries to cover basic 3D concepts and how they are implemented in Java 3D. (It assumes a certain knowledge of math, particularly with matrices, which are a staple of 3D graphics.) Well-commented source code is printed throughout (though there is little additional commentary). An applet for orbiting planets provides an entertaining demonstration of transforming objects onscreen. You'll learn to add processing for fog effects and texture mapping and get material on 3D sound effects and several public domain tools for working with 3D artwork (including converting VRML [Virtual Reality Modeling Language] files for use with Java 3D). In all, this book largely succeeds at being accessible for HTML designers while being useful to Java programmers. With Java 3D, Sun is betting that 3D graphics shouldn't require a degree in computer science. This book reflects that philosophy, though advanced Java developers will probably want more detail on this exciting new graphics package. --Richard Dragan. Topics covered: Individual applets for morphing, translation, rotation, and scaling; support for light and transparency; adding motion and interaction to 3D objects (with Java 3D classes for behaviors and interpolators); and Java 3D classes used for event handling.
series other
last changed 2003/04/23 15:14

_id 85ab
authors Corrao, Rossella and Fulantelli, Giovanni
year 1999
title Architects in the Information Society: The Role of New Technologies
doi https://doi.org/10.52842/conf.ecaade.1999.665
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 665-671
summary New Technologies (NTs) offer us tools with which to deal with the new challenges that a changing society or workplace presents. In particular, new design strategies and approaches are required by the emerging Information Society, and NTs offer effective solutions to the designers in the different stages of their professional life, and in different working situations. In this paper some meaningful scenarios of the use of the NTs in Architecture and Urban Design are introduced; the scenarios have been selected in order to understand how the role of architects in the Information Society is changing, and what new opportunities NTs offer them. It will be underlined how the telematic networks play an essential role in the activation of virtual studios that are able to compete in an increasingly global market; examples will be given of the use of the Web to support activities related to Urban Planning and Management; it will be shown how the Internet may be used to access strategic resources for education and training, and sustain lifelong learning. The aforesaid considerations derive from a Web-Based Instruction system we have developed to support University students in the definition of projects that can concern either single buildings or whole parts of a city. The system can easily be adopted in the other scenarios introduced.
keywords Architecture, Urban Planning, New Technologies, World Wide Web, Education
series eCAADe
email
last changed 2022/06/07 07:56

_id bd13
authors Martens, B., Turk, Z. and Cerovsek, T.
year 2001
title Digital Proceedings: Experiences regarding Creating and Using
doi https://doi.org/10.52842/conf.ecaade.2001.025
source Architectural Information Management [19th eCAADe Conference Proceedings / ISBN 0-9523687-8-1] Helsinki (Finland) 29-31 August 2001, pp. 25-29
summary This paper describes the development of the CUMINCAD database since 1999, when it was first presented, and gives some statistical information on how the service is being used. CUMINCAD started as a bibliographic database storing meta-information about CAAD-related publications. Recently, full texts have been added. The process of creating electronic copies of papers in PDF format is described, as well as the decisions taken in this context. Over the last two years 20,000 users visited CUMINCAD. We present a brief analysis of their behavior and interaction patterns. This, and the forthcoming possibility of full-text search, will open up a new perspective for CAAD research.
keywords CAAD-Related Publications, Web-Based Bibliographic Database, Searchable Index, Retrospective CAAD Research, Purpose Analysis
series eCAADe
email
last changed 2022/06/07 07:59

_id f02b
authors Mitchell, W.
year 1999
title E-topia: urban life, Jim – but not as we know it
source MIT press
summary The global digital network is not just a delivery system for email, Web pages, and digital television. It is a whole new urban infrastructure--one that will change the forms of our cities as dramatically as railroads, highways, electric power supply, and telephone networks did in the past. In this lucid, invigorating book, William J. Mitchell examines this new infrastructure and its implications for our future daily lives. Picking up where his best-selling City of Bits left off, Mitchell argues that we must extend the definitions of architecture and urban design to encompass virtual places as well as physical ones, and interconnection by means of telecommunication links as well as by pedestrian circulation and mechanized transportation systems. He proposes strategies for the creation of cities that not only will be sustainable but will make economic, social, and cultural sense in an electronically interconnected and global world. The new settlement patterns of the twenty-first century will be characterized by live/work dwellings, 24-hour pedestrian-scale neighborhoods rich in social relationships, and vigorous local community life, complemented by far-flung configurations of electronic meeting places and decentralized production, marketing, and distribution systems. Neither digiphile nor digiphobe, Mitchell advocates the creation of e-topias--cities that work smarter, not harder.
series other
last changed 2003/04/23 15:14

_id 4827
authors Sasada, Tsuyoshi
year 1999
title Computer Graphics and Design: Presentation, Design Development and Conception
doi https://doi.org/10.52842/conf.caadria.1999.021
source CAADRIA '99 [Proceedings of The Fourth Conference on Computer Aided Architectural Design Research in Asia / ISBN 7-5439-1233-3] Shanghai (China) 5-7 May 1999, pp. 21-29
summary Computer graphics is a powerful medium for presentation and design. In the early days of its usage it was used mainly for presentation. Later, computer graphics came to be used in the design development stage, and nowadays you can even draw inspiration from it. By tracing this change, we can see a centripetal movement of usage from the fringe to the core of the design field. This paper describes how this change occurred and through what kind of effort.
series CAADRIA
email
last changed 2022/06/07 07:57

_id 1419
authors Spitz, Rejane
year 1999
title Dirty Hands on the Keyboard: In Search of Less Aseptic Computer Graphics Teaching for Art & Design
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 13-18
summary In recent decades our society has witnessed a level of technological development that has not been matched by that of educational development. Far from the forefront in the process of social change, education has been trailing behind transformations occurring in industrial sectors, passively and sluggishly assimilating their technological innovations. Worse yet, educators have taken the technology and logic of innovations deriving predominantly from industry and attempted to transpose them directly into the classroom, without either analyzing them in terms of demands from the educational context or adjusting them to the specificities of the teaching/learning process. In the 1970s - marked by the effervescence of Educational Technology - society witnessed the extensive proliferation of audio-visual resources for use in education, yet with limited development in teaching theories and educational methods and procedures. In the 1980s, when Computers in Education emerged as a new area, the discussion focused predominantly on the issue of how the available computer technology could be used in the school, rather than tackling the question of how it could be developed in such a way as to meet the needs of the educational proposal. What, then, will the educational legacy of the 1990s be? In this article we focus on the issue from the perspective of undergraduate and graduate courses in Arts and Design. Computer Graphics slowly but surely has gained ground and consolidated as part of the Art & Design curricula in recent years, but in most cases as a subject in the curriculum that is not linked to the others. Computers are usually allocated in special laboratories, inside and outside Departments, but invariably isolated from the dust, clay, varnish, and paint and other wastes, materials, and odors impregnating - and characterizing - other labs in Arts and Design courses.In spite of its isolation, computer technology coexists with centuries-old practices and traditions in Art & Design courses. This interesting meeting of tradition and innovation has led to daring educational ideas and experiments in the Arts and Design which have had a ripple effect in other fields of knowledge. We analyze these issues focusing on the pioneering experience of the Núcleo de Arte Eletrônica – a multidisciplinary space at the Arts Department at PUC-Rio, where undergraduate and graduate students of technological and human areas meet to think, discuss, create and produce Art & Design projects, and which constitutes a locus for the oxygenation of learning and for preparing students to face the challenges of an interdisciplinary and interconnected society.
series SIGRADI
email
last changed 2016/03/10 10:01

_id 29c6
authors Shaw, N. and Kimber, W.E.
year 1999
title STEP and SGML/XML: what it means, how it works
source XML Europe ‘99 Conference Proceedings, Graphic Communication Association, 1999, pp. 267-70
summary The STEP standard, ISO 10303, is the primary standard for data representation and interchange in the product design and manufacturing world. Originally designed to enable the interchange of 3-D CAD models between different systems, STEP, like SGML, has defined and uses a general mechanism for representing and managing complex data of any type. Increasingly, products are defined as solid models that are stored in product databases. These databases are not limited to shape but contain a considerable wealth of other information, such as materials, failure modes, task descriptions, product-related meta-data such as approvals, and much more. The product world is of course also replete with documents, from requirements through specifications to user manuals. These documents both act as input to the product development processes and are output as well. Indeed in some cases documents form part of the product and are given part numbers. It is therefore not surprising to find that there are many companies with very real requirements to interact and interoperate between the product data and documents, specifically in the form of SGML-based data. This paper reports on work in progress to bring the two worlds together. This is primarily being done through the SGML and Industrial Data Preliminary Work Item under ISO TC184/SC4. The need for common capabilities for using STEP and SGML together has been obvious for a long time, as can be seen from the inclusion of product data and SGML-based data within initiatives such as CALS. However, until recently, this requirement was never satisfied, for various reasons. For the last year or more, a small group has been actively pursuing this area and gaining the necessary understandings across the different standards. It is this work that is reported here. The basic thrust of the work is to answer the questions: can STEP and SGML be used together and, if so, how?
series other
last changed 2003/04/23 15:50
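The abstract asks whether STEP and SGML/XML can be used together. A purely syntactic, editorial illustration of one direction of that bridge is sketched below: mapping ISO 10303-21 (Part 21) instance records onto XML elements. The toy entities, the regular expression and the element layout are assumptions; this is not the mapping developed under ISO TC184/SC4.

```python
import re
import xml.etree.ElementTree as ET

# Toy STEP Part 21 data-section lines (ISO 10303-21 style instance records).
STEP_DATA = """
#10=CARTESIAN_POINT('origin',(0.,0.,0.));
#11=DIRECTION('z-axis',(0.,0.,1.));
"""

INSTANCE = re.compile(r"#(\d+)\s*=\s*([A-Z_0-9]+)\s*\((.*)\);")

def step_to_xml(text):
    """Map each instance record to an XML element: the entity name becomes
    the tag, the instance number an attribute, the raw parameter list a
    child element. A real STEP/SGML integration would map the EXPRESS
    schema, not just the surface syntax."""
    root = ET.Element("stepData")
    for line in text.strip().splitlines():
        m = INSTANCE.match(line.strip())
        if not m:
            continue
        iid, entity, params = m.groups()
        el = ET.SubElement(root, entity.lower(), id=f"i{iid}")
        ET.SubElement(el, "parameters").text = params
    return root

if __name__ == "__main__":
    print(ET.tostring(step_to_xml(STEP_DATA), encoding="unicode"))
```

A real integration would work at the schema level (EXPRESS entities and SGML/XML DTDs) rather than on the surface syntax, which is precisely the kind of question the work item reported here addresses.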

_id ga9926
id ga9926
authors Antonini, Riccardo
year 1999
title Let's Improvise Together
source International Conference on Generative Art
summary The creators of ‘Let's-Improvise-Together’ adhere to the idea that while there is a multitude of online games now available in cyberspace, it appears that relatively few are focused on providing a positive, friendly and productive experience for the user. Producing this kind of experience is one of the goals of our Amusement Project. To this end, the creation of ‘Let's Improvise Together’ has been guided by dedication to the importance of three themes: the importance of cooperation, the importance of creativity, and the importance of emotion. Description of the game: The avatar arrives in a certain area where there are many sound-blocks/objects. He can add new objects at will, or he may add a sound "property" to existing ones. Each object may represent a different sound, though it does not have to. The avatar walks around and chooses which objects he likes, makes copies of these and adds sounds or changes the sounds on existing ones, and then, with all of the sound-blocks combined, makes his personalized "instrument". Now any player can make sounds on the instrument by approaching or bumping into a sound-block. The way that the avatar makes sounds on the instrument can vary. At the end of the improvising session, the ‘composition’ will be saved on the instrument site, along with the personalized instrument. In this way, each user of the Amusement Center will leave behind him a unique instrumental creation that others who visit the Center later will be able to play on and listen to. The fully creative experience of making a new instrument can be obtained by connecting to the Active Worlds worlds ‘Amuse’ and ‘Amuse2’. Animated, colorful sounding objects can be assembled by the user in the Virtual Environment as a sort of sounding instrument. We deliberately refrain from using the words musical instrument, because the level of control we have over the sound, in terms of rhythm and melody among other parameters, is very limited. It resembles instead, very closely, the primitive instruments used by humans in some civilizations, or the experience of children making sound out of ordinary objects. The dimension of cooperation is of paramount importance in the process of building and using the virtual sounding instrument. The instrument can be built through one's own effort, but preferably by a team of cooperating users. The cooperation has an important corollary: the sharing of the experience. The shared experience finds its permanence in the collective memory of the sounding instruments built. The sounding instrument can also be seen as a virtual sculpture; indeed, this sculpture is a multimedia one. The objects have properties that range from video animation to sound to virtual physical properties like solidity. The role of the user representation in the Virtual World, called an avatar, is important because it conveys, among other things, the user’s emotions. It is worth pointing out that the avatar has no emotions of its own but simply expresses the emotions of the user behind it. In a way it could be considered a sort of actor performing the script that the user gives it in real time while playing. The other important element of the integration is related to the memory of the experience left by the user in the Virtual World. The new layout is explored and experienced. The layout is a permanent, editable memory. The generative aspects of Let's Improvise Together are the following. The multimedia virtual sculpture left behind by any participating avatar is not the creation of a single author/artist. 
The outcome of the synergic interaction of various authors is neither deterministic nor predictable. The authors can indeed use generative algorithms in order to create the textures to be used on the objects. Usually, in our experience, the visitors of the Amuse worlds use shareware programs in order to generate their textures. In most cases the shareware programs are simple fractal generators. In principle, it is also possible to generate the shape of the object in a generative way. Taking into account the usual audience of our world, we expected visitors to use very simple algorithms that could generate shapes as .rwx files. Indeed, no one has attempted to do so so far. As far as the music is concerned, the availability of shareware programs that allow simple generation of sound sequences has made it possible for some users to generate sound sequences to be put in our world. In conclusion, the Let's Improvise section of the Amuse worlds could be open for experimentation on generative art as a very simple entry-point platform. We will be very happy to help anybody who, for educational purposes, would try to use our platform in order to create and exhibit generative forms of art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
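The core interaction described above, where a sound fires when an avatar approaches or bumps into a sound-block, reduces to a proximity test against the blocks making up the shared instrument. The sketch below is an editorial illustration with invented class names, fields and thresholds; it is not the Active Worlds implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class SoundBlock:
    x: float
    y: float
    sound: str               # e.g. a sample name attached by the builder
    trigger_radius: float = 1.0

def sounds_triggered(avatar_xy, blocks):
    """Return the sounds of every block the avatar is close enough to
    'bump' -- the basic Let's-Improvise-Together interaction."""
    ax, ay = avatar_xy
    return [b.sound for b in blocks
            if math.hypot(b.x - ax, b.y - ay) <= b.trigger_radius]

if __name__ == "__main__":
    instrument = [SoundBlock(0, 0, "drum"), SoundBlock(3, 0, "bell")]
    print(sounds_triggered((0.5, 0.2), instrument))   # ['drum']
```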

_id 4805
authors Bentley, P.
year 1999
title Evolutionary Design by Computers
source Morgan Kaufmann, San Francisco, CA
summary Computers can only do what we tell them to do. They are our blind, unconscious digital slaves, bound to us by the unbreakable chains of our programs. These programs instruct computers what to do, when to do it, and how it should be done. But what happens when we loosen these chains? What happens when we tell a computer to use a process that we do not fully understand, in order to achieve something we do not fully understand? What happens when we tell a computer to evolve designs? As this book will show, what happens is that the computer gains almost human-like qualities of autonomy, innovative flair, and even creativity. These 'skills'which evolution so mysteriously endows upon our computers open up a whole new way of using computers in design. Today our former 'glorified typewriters' or 'overcomplicated drawing boards' can do everything from generating new ideas and concepts in design, to improving the performance of designs well beyond the abilities of even the most skilled human designer. Evolving designs on computers now enables us to employ computers in every stage of the design process. This is no longer computer aided design - this is becoming computer design. The pages of this book testify to the ability of today's evolutionary computer techniques in design. Flick through them and you will see designs of satellite booms, load cells, flywheels, computer networks, artistic images, sculptures, virtual creatures, house and hospital architectural plans, bridges, cranes, analogue circuits and even coffee tables. Out of all of the designs in the world, the collection you see in this book have a unique history: they were all evolved by computer, not designed by humans.
series other
last changed 2003/04/23 15:14
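The blurb above describes "telling a computer to evolve designs". To make that concrete, here is a generic, minimal genetic-algorithm loop of the kind surveyed in the book; it is not taken from the book itself, and the encoding (a flat vector of floats scored by a caller-supplied fitness function) plus all parameter values are assumptions.

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=50,
           mutation_rate=0.1, seed=0):
    """Minimal generational GA: tournament selection, one-point crossover,
    per-gene Gaussian mutation. `fitness` maps a list of floats to a score
    (higher is better)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            return max(rng.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = tournament(), tournament()
            cut = rng.randrange(1, genome_len)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + rng.gauss(0, 0.2) if rng.random() < mutation_rate
                     else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

if __name__ == "__main__":
    # Toy 'design brief': find a parameter vector whose values sum to 4.
    best = evolve(lambda genes: -abs(sum(genes) - 4.0))
    print(round(sum(best), 3))
```

Any of the artefacts listed in the blurb (satellite booms, flywheels, circuits, plans) differs only in how a genome is decoded into a candidate design and how its fitness is measured; the evolutionary loop itself can stay this simple.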

_id 9e00
authors Bridges, Alan
year 1999
title Progress? What Progress?
doi https://doi.org/10.52842/conf.ecaade.1999.321
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 321-326
summary This paper briefly reviews some of the history of computer graphics standardisation and then presents two specific case studies: one comparing HTML with SGML and Troff and the other comparing VRML with the Tektronix® Interactive Graphics Language implementation of the ACM Core Standard. In each case, it will be shown how the essential intellectual work carried out twenty years ago still lies at the foundations of the newer applications.
keywords SGML, HTML, VRML
series eCAADe
email
last changed 2022/06/07 07:54

_id aef9
id aef9
authors Brown, A., Knight, M. and Berridge, P. (Eds.)
year 1999
title Architectural Computing from Turing to 2000 [Conference Proceedings]
doi https://doi.org/10.52842/conf.ecaade.1999
source eCAADe Conference Proceedings / ISBN 0-9523687-5-7 / Liverpool (UK) 15-17 September 1999, 773 p.
summary The core theme of this book is the idea of looking forward to where research and development in Computer Aided Architectural Design might be heading. The contention is that we can do so most effectively by using the developments that have taken place over the past three or four decades in Computing and Architectural Computing as our reference point; the past informing the future. The genesis of this theme is the fact that a new millennium is about to arrive. If we are ruthlessly objective the year 2000 holds no more significance than any other year; perhaps we should, instead, be preparing for the year 2048 (2k). In fact, whatever the justification, it is now timely to review where we stand in terms of the development of Architectural Computing. This book aims to do that. It is salutary to look back at what writers and researchers have said in the past about where they thought that the developments in computing were taking us. One of the common themes picked up in the sections of this book is the developments that have been spawned by the global linkup that the worldwide web offers us. In the past decade the scale and application of this new medium of communication has grown at a remarkable rate. There are few technological developments that have become so ubiquitous, so quickly. As a consequence there are particular sections in this book on Communication and the Virtual Design Studio which reflect the prominence of this new area, but examples of its application are scattered throughout the book. In 'Computer-Aided Architectural Design' (1977), Bill Mitchell did suggest that computer network accessibility from expensive centralised locations to affordable common, decentralised computing facilities would become more commonplace. But most pundits have been taken by surprise by just how powerful the explosive cocktail of networks, email and hypertext has proven to be. Each of the ingredients is interesting in its own right but together they have presented us with genuinely new ways of working. Perhaps, with foresight we can see what the next new explosive cocktail might be.
series eCAADe
email
more http://www.ecaade.org
last changed 2022/06/07 07:49

_id 0dc3
authors Chambers, Tom and Wood, John B.
year 1999
title Decoding to 2000 CAD as Mediator
doi https://doi.org/10.52842/conf.ecaade.1999.210
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 210-216
summary This paper will present examples of current practice in the Design Studio course of the BDE, University of Strathclyde. The paper will demonstrate an integrated approach to teaching design, which includes CAD among other visual communication techniques as a means to exploring design concepts and the presentation of complex information as part of the design process. It will indicate how the theoretical dimension is used to direct the student in their areas of independent study. Projects illustrated will include design precedents that have involved students in the review and assessment of landmarks in the history of design. There will be evidence of how students integrate DTP in the presentation of site analysis, research of appropriate design precedents and presentation of their design solutions. CADET underlines the importance both of considering design solutions within our European cultural context and of assessing the environmental impact of design options, for which CAD is eminently suited. As much as a critical method is essential to the development of the design process, a historical perspective and an appreciation of the sophistication of communicative media will inform the analysis of structural form and meaning in a modern urban context. Conscious of the dynamic of social and historical influences in design practice, the student is enabled "to take a critical stand against the dogmatism of the school" (Gadamer, 1988) that inevitably insinuates itself in learning institutions and professional practice.
keywords Design Studio, Communication, Integrated Teaching
series eCAADe
email
last changed 2022/06/07 07:56

_id acac
authors Chan, Chiu-Shui, and Browning, Todd R.
year 1999
title Design Simulation
doi https://doi.org/10.52842/conf.caadria.1999.243
source CAADRIA '99 [Proceedings of The Fourth Conference on Computer Aided Architectural Design Research in Asia / ISBN 7-5439-1233-3] Shanghai (China) 5-7 May 1999, pp. 243-252
summary This paper intends to explore methods of constructing a design simulator. Two methodologies, approached differently, imitate human design processes. The first component is an algorithmic method with an embedded cognitive model. This cognitive model hypothesizes that human design applies a certain design logic. The design rationales are based on knowledge stored in a designer's memory. Each time a similar design task is encountered, the same design procedures will be repeated for completion. What makes the results different is the design information used and the sequence in which it is processed. A kitchen design using procedural algorithms is developed to simulate this design aspect. The second component simulates an intuitive design approach. Intuition is defined as design by rules of thumb, or heuristic design. This study investigated how to simulate an intuitive design process. The method involves building up a set of inductive rules symbolizing cultural aspects that need to be addressed in a design. A residential foyer design is the simulation task. The driving force is the heuristics. Results of this study have shown that there are many variables to include, and that it is impossible to capture and simulate every aspect of the design process, which is why studies in this area are difficult.
series CAADRIA
email
more http://www.public.iastate.edu/~cschan
last changed 2022/06/07 07:56
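The paper's second component, intuitive design simulated as rules of thumb, can be pictured as a small set of condition-action heuristics applied to a design state. The rules, attribute names and values below are invented placeholders for illustration; they are not the cultural rules used in the residential-foyer study.

```python
# Hypothetical rule-of-thumb engine: each rule inspects the evolving design
# state and, if its condition holds, adds or adjusts a decision.
RULES = [
    ("screen entrance",
     lambda d: d["door_faces_street"],
     lambda d: d.setdefault("elements", []).append("entry screen")),
    ("add shoe storage",
     lambda d: d["household_size"] >= 4,
     lambda d: d.setdefault("elements", []).append("shoe cabinet")),
    ("widen foyer",
     lambda d: d["foyer_width_m"] < 1.5,
     lambda d: d.update(foyer_width_m=1.5)),
]

def apply_heuristics(design):
    """Fire every applicable rule once, recording which rules fired."""
    fired = []
    for name, condition, action in RULES:
        if condition(design):
            action(design)
            fired.append(name)
    return fired

if __name__ == "__main__":
    foyer = {"door_faces_street": True, "household_size": 5,
             "foyer_width_m": 1.2}
    print(apply_heuristics(foyer), foyer)
```

The first, procedural component would instead fix the sequence of design steps in code (the kitchen-design algorithm in the paper), with variation coming only from the design information fed in and the order in which it is processed.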

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A digital Procedure of Building Construction: A practical project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times in which computers have not yet been developed well, there has been some researches regarding representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). Till the appearance of 2D drawings, these drawings could only express abstract visual thinking and visually conceptualized vocabulary (Goldschmidt, 1999). Then with the massive use of physical models in the Renaissance, the form and space of architecture was given better precision (Millon, 1994). Researches continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) figured out that human increasingly relies on other specialists, computational agents, and materials referred to augment their cognitive abilities. This discourse was verified by recent research on conception of design and the expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architecture design arouses a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), Internet, and restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most of the application of computers to construction is restricted to simulations in building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to concept stage during the process of building construction (Madazo, 2000; Dave, 2000) and communication (Haymaker, 2000).In architectural design, concept design was achieved through drawings and models (Mitchell, 1997), while the working drawings and even shop drawings were brewed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend of 3D visualization (Johnson and Clayton, 1998) and the difference of designing between the physical environment and virtual environment (Maher et al. 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical media in the conceptual stage of building construction process in the near future (just as the critical role that physical models played in early design process in the Renaissance). This research is combined with two practical building projects, following the progress of construction by using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems. 
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings with some imagination. The purpose of this paper is to describe all the related elements according to precision and correctness, to discuss every possibility of different thinking in the design of electric-mechanical engineering, to receive feedback from the construction projects in the real world, and to compare the digital models with conventional drawings. Through the application of this research, the subtle relations between the conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and standard process is proposed by using conventional drawings, digital models and physical buildings. By introducing the intervention of digital media into the design process of working drawings and shop drawings, there is an opportune chance to use the digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 37d1
authors Corona Martínez, Alfonso and Vigo, Libertad
year 1999
title Before the Digital Design Studio
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 247-252
summary This paper contains some observations which derive from our work as Studio Professors. In recent years, studios have been in a transition phase with the progressive introduction of computers in later stages of the design process. The initiative generally belongs to students rather than to studio masters, since the former are aware that a knowledge of CAD systems will enable them to get work in architects' offices. It is the first few Studios that will guide the student in forming a conception of what architecture is. Therefore, we have observed more attentively the way in which he establishes his first competence as a designer. We believe it is useful to clarify design training before we can integrate computers into it. The ways we all learn to design and which we transmit in the Studio were obviously created a long time ago, when Architecture became a subject taught in Schools, no longer a craft to be acquired under a master. The conception of architecture that the student forms in his mind is largely dependent on a long tradition of Beaux-Arts training which survives (under different forms) in Modern Architecture. The methods he or she acquires will become the basis of his or her creative design process, also in professional life. Computer programmes are designed to fit into the stages of this design process simply as time-saving tools. We are interested in finding out how they can become an active part in the creative process and how to control this integration in teaching. Therefore, our work deals mainly with the tradition of the Studio and the conditioning it produces. The next step will be to explore the possibilities and restrictions that will inevitably issue from the introduction of new media.
series SIGRADI
email
last changed 2016/03/10 09:49

_id ga9916
id ga9916
authors Elzenga, R. Neal and Pontecorvo, Michael S.
year 1999
title Arties: Meta-Design as Evolving Colonies of Artistic Agents
source International Conference on Generative Art
summary Meta-design, the act of designing a system or species of design instead of a design instance, is an important concept in modern design practice and in the generative design paradigm. For meta-design to be a useful tool, the designer must have more formal support for both design species definition/expression and the abstract attributes which the designer is attempting to embody within a design. Arties is an exploration of one possible avenue for supporting meta-design. Arties is an artistic system emphasizing the co-evolution of colonies of Artificial Life design or artistic agents (called arties) and the environment they inhabit. Generative design systems have concentrated on biological genetics metaphors where a population of design instances is evolved directly from a set of ‘parent’ designs in a succession of generations. In Arties, the a-life agent, which is evolved, produces the design instance as a byproduct of interacting with its environment. Arties utilize an attraction potential curve as their primary dynamic. They sense the relative attraction of entities in their environment, using multiple sensory channels. Arties then associate an attractiveness score with each entity. This attractiveness score is combined with a 'taste' function built into the artie that is sensitized to that observation channel, entity, and distance by a transfer function. Arties use this attraction to guide decisions and behaviors. A community of arties, with independently evolving attraction criteria, can pass collective judgement on each point in an art space. As the artie moves within this space it modifies the environment in reaction to what it senses. Arties' support for meta-design lies in (A) the process of evolving arties, breeding their attraction potential curve parameters using a genetic algorithm, and (B) their use of sensory channels to support abstract attribute geometries. Adjustment of these parameters tunes the attraction of the artie along various sensing channels. The multi-agent co-evolution of Arties is one approach to creating a system for supporting meta-design. Arties is part of an on-going exploration of how to support meta-design in computer-augmented design systems. Our future work with Arties-like systems will be concerned with applications in areas such as modeling adaptive directives in Architecture, Object Structure Design, spatio-temporal behaviors design (for games and simulations), virtual ambient spaces, and representation and computation of abstract design attributes.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
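
The following is a minimal, hypothetical sketch (in Python) of the agent loop described in the abstract above: an evolvable per-channel 'taste' weighting combined with a distance-based transfer function into an attraction score, a movement step driven by that score, and genetic-algorithm style breeding of the curve parameters. All names (Artie, taste_weights, falloff, breed) and the specific functional forms are assumptions chosen for illustration, not the authors' implementation.

# Illustrative sketch only; names and functional forms are assumptions,
# not the Arties system's actual code.
import math
import random

class Artie:
    """An artificial-life agent that scores entities in its environment by
    combining per-channel sensed values with an evolvable 'taste' curve."""

    def __init__(self, taste_weights, falloff):
        # One weight per sensory channel; falloff shapes the distance response.
        self.taste_weights = list(taste_weights)
        self.falloff = falloff

    def attraction(self, entity, position):
        """Combine sensed channel values, taste weights, and a distance-based
        transfer function into a single attraction score (assumed form)."""
        distance = math.dist(position, entity["pos"])
        transfer = math.exp(-self.falloff * distance)  # assumed transfer function
        sensed = sum(w * s for w, s in zip(self.taste_weights, entity["signals"]))
        return sensed * transfer

    def step(self, position, entities):
        """Move a small step toward the most attractive entity and return the
        chosen target, i.e. the agent's local judgement of the space."""
        target = max(entities, key=lambda e: self.attraction(e, position))
        delta = [(t - p) * 0.1 for t, p in zip(target["pos"], position)]
        return [p + d for p, d in zip(position, delta)], target

def breed(parent_a, parent_b, mutation=0.1):
    """Genetic-algorithm style crossover and mutation of the attraction
    curve parameters, the part of the model the abstract says is evolved."""
    weights = [random.choice(pair) + random.gauss(0, mutation)
               for pair in zip(parent_a.taste_weights, parent_b.taste_weights)]
    falloff = random.choice([parent_a.falloff, parent_b.falloff])
    return Artie(weights, abs(falloff + random.gauss(0, mutation)))

if __name__ == "__main__":
    entities = [{"pos": (x, y), "signals": (random.random(), random.random())}
                for x, y in [(0, 0), (3, 4), (6, 1)]]
    colony = [Artie([random.random(), random.random()], 0.5) for _ in range(4)]
    child = breed(colony[0], colony[1])
    pos, chosen = child.step([1.0, 1.0], entities)
    print("moved to", pos, "attracted by entity at", chosen["pos"])

Under these assumptions, evolving only the taste weights and falloff (rather than the design instances themselves) mirrors the abstract's point that the design emerges as a byproduct of agent-environment interaction.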

_id acadia06_426
id acadia06_426
authors Garber, R., Robertson, N.
year 2006
title The Pleated Cape: From the Mass-Standardization of Levittown to Mass Customization Today
doi https://doi.org/10.52842/conf.acadia.2006.426
source Synthetic Landscapes [Proceedings of the 25th Annual Conference of the Association for Computer-Aided Design in Architecture] pp. 426-439
summary In the 1950s, the Levitts put mass production and the reverse assembly line into use in the building of thousands of single-family houses. However, the lack of variation that made their construction process so successful ultimately produced a mundane suburban landscape of sameness. While there were many attempts to differentiate these Levitt Cape Cods, none matched the ingenuity of their original construction process. The notion of mass customization has been heavily theorized since the 1990s, first appearing in the field of management and ultimately finding its way into the field of architecture. Greg Lynn used mass customization in his design for the Embryological House, in which thousands of unique houses could be generated using biological rules of differentiation (Lynn 1999). Other industries have embraced the premise that computer-numerically-controlled technologies allow for the production of variation, though this has not been thoroughly studied in architecture. While digital fabrication has been integral in the realization of several high-profile projects, the notion of large-scale mass customization in the spec-housing market has yet to become a reality. Through the execution of an addition to a Cape Cod-style house, we examine the intersection between prefabricated standardized panels and digital fabrication to produce a mass-customized approach to housing design. Through illustrations and a detailed description of our design process, we show how digital fabrication technologies allow for customization of mass-produced products.
series ACADIA
email
last changed 2022/06/07 07:50

_id ga9928
id ga9928
authors Goulthorpe
year 1999
title Hyposurface: from Autoplastic to Alloplastic Space
source International Conference on Generative Art
summary By way of immediate qualification to an essay which attempts to orient current technical developments in relation to a series of dECOi projects, I would suggest that the greatest liberation offered by new technology in architecture is not its formal potential as much as the patterns of creativity and practice it engenders. For increasingly in the projects presented here dECOi operates as an extended network of technical expertise: Mark Burry and his research team at Deakin University in Australia as architects and parametric/ programmatic designers; Peter Wood in New Zealand as programmer; Alex Scott in London as mathematician; Chris Glasow in London as systems engineer; and the engineers (structural/services) of David Glover’s team at Ove Arup in London. This reflects how we’re working in a new technical environment - a new form of practice, in a sense - a loose and light network which deploys highly specialist technical skill to suit a particular project. By way of a second disclaimer, I would suggest that the rapid technological development we're witnessing, which we struggle to comprehend given the sheer pace of change that overwhelms us, is somehow of a different order than previous technological revolutions. For the shift from an industrial society to a society of mass communication, which is the essential transformation taking place in the present, seems to be a subliminal and almost inexpressive technological transition - is formless, in a sense - which begs the question of how it may be expressed in form. If one holds that architecture is somehow the crystallization of cultural change in concrete form, one suspects that in the present there is no simple physical equivalent for the burst of communication technologies that colour contemporary life. But I think that one might effectively raise a series of questions apropos technology by briefly looking at 3 or 4 of our current projects, and which suggest a range of possibilities fostered by new technology. By way of a third doubt, we might qualify in advance the apparent optimism of architects for CAD technology by thinking back to Thomas More and his island ‘Utopia’, which marks in some way the advent of Modern rationalism. This was, if not quite a technological utopia, certainly a metaphysical one, More’s vision typically deductive, prognostic, causal. But which by the time of Francis Bacon’s New Atlantis is a technological utopia availing itself of all the possibilities put at humanity’s disposal by the known machines of the time. There’s a sort of implicit sanction within these two accounts which lies in their nature as reality optimized by rational DESIGN as if the very ethos of design were sponsored by Modern rationalist thought and its utopian leanings. The faintly euphoric ‘technological’ discourse of architecture at present - a sort of Neue Bauhaus - then seems curiously misplaced historically given the 20th century’s general anti-, dis-, or counter-utopian discourse. But even this seems to have finally run its course, dissolving into the electronic heterotopia of the present with its diverse opportunities of irony and distortion (as it’s been said) as a liberating potential.1 This would seem to mark the dissolution of design ethos into non-causal process(ing), which begs the question of ‘design’ itself: who 'designs' anymore? Or rather, has 'design' not become uncoupled from its rational, deterministic, tradition? 
The utopianism that attaches to technological discourse in the present seems blind to the counter-finality of technology's own accomplishments - that transparency has, as it were, by its own more and more perfect fulfillment, failed by its own success. For what we seem to have inherited is not the warped utopia depicted in countless visions of a singular and tyrannical technology (such as that in Orwell's 1984), but a rich and diverse heterotopia which has opened the possibility of countless channels of local dialect competing directly with the channels of power. Undoubtedly such multiplicitous and global connectivity has sent creative thought in multiple directions…
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
