CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design, supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.

Hits 1 to 20 of 625

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved: for example, if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes simply as closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And many of the methods I used, applied over several "generations", simply resulted in amorphous blobs with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques. Figure 3 Trellis interpreted with "graphic ivy". Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating Web graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: three possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character that others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his Web site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. Ransen, Owen: "From Ramon Llull to Image Idea Generation". Proceedings of the 1998 Milan First International Conference on Generative Art. 2. Aleksander, Igor: "How To Build A Mind". Weidenfeld and Nicolson, 1999. 3. Ward, Adrian and Cox, Geof: "How I Drew One of My Pictures: or, The Authorship of Generative Art". Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
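The abstract above describes shapes as closed polygons whose "genes" are the list of vertex points, bred by combining the parents' coordinates. The minimal Python sketch below illustrates one such coordinate-combination scheme (resampling both parents to a common vertex count, then interpolating point-wise); it is an illustrative assumption, not Ransen's actual Repligator/Gliftic code.

import math

def resample(points, n):
    # Crude index-based resampling of a closed polygon to n vertices.
    m = len(points)
    return [points[int(i * m / n) % m] for i in range(n)]

def crossover(parent_a, parent_b, weight=0.5, n=100):
    # Breed two closed polygonal shapes by point-wise interpolation of
    # their "genes" (vertex coordinates). One of many possible schemes;
    # it preserves vertex count but not necessarily symmetry.
    a, b = resample(parent_a, n), resample(parent_b, n)
    return [((1 - weight) * ax + weight * bx, (1 - weight) * ay + weight * by)
            for (ax, ay), (bx, by) in zip(a, b)]

# Example: cross a circle (regular 100-gon) with a square outline.
circle = [(math.cos(2 * math.pi * i / 100), math.sin(2 * math.pi * i / 100))
          for i in range(100)]
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
child = crossover(circle, square, weight=0.5)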
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 39cb
authors Kelleners, Richard H.M.C.
year 1999
title Constraints in object-oriented graphics
source Eindhoven University of Technology
summary In the area of interactive computer graphics, two important approaches to dealing with the complexity of designing and implementing graphics systems are object-oriented programming and constraint-based programming. From the literature, it appears that combining these two has clear advantages but has also proven to be difficult. One of the main problems is that constraint programming infringes the information hiding principle of object-oriented programming. The goal of the research project is to combine these two approaches to benefit from the strengths of both. Two research groups at the Eindhoven University of Technology investigate the use of constraints on graphics objects. At the Architecture department, constraints are applied in a virtual reality design environment. At the Computer Science department, constraints aid in modeling 3D animations. For these two groups, a constraint system for 3D graphical objects was developed. A conceptual model, called CODE (Constraints on Objects via Data flows and Events), is presented that enables integration of constraints and objects by separating the object world from the constraint world. In the design of this model, the main consideration is that the information hiding principle among objects must not be violated. Constraint solvers, however, should have direct access to an object’s internal data structure. Communication between the two worlds is done via a protocol orthogonal to the message passing mechanism of objects, namely, via events and data flows. This protocol ensures that the information hiding principle at the object-oriented programming level is not violated while constraints can directly access “hidden” data. Furthermore, CODE is built up of distinct elements, or entity types, like constraint, solver, event, data flow. This structure allows several special-purpose constraint solvers to be defined and made to cooperate in solving complex constraint problems. A prototype implementation was built to study the feasibility of CODE. Therefore, the implementation should correspond directly to the conceptual model. To this end, every entity (object, constraint, solver) of the conceptual model is represented by a separate process in the language MANIFOLD. The (concurrent) processes communicate by events and data flows. The implementation serves to validate the conceptual model and to demonstrate that it is a viable way of combining constraints and objects. After the feasibility study, the prototype was discarded. The experience gained was used to build an implementation of the conceptual model for the two research groups. This implementation encompassed a constraint system with multiple solvers and constraint types. The constraint system was built as an object-oriented library that can be linked to the applications in the respective research groups. Special constructs were designed to ensure information hiding among application objects while constraints and solvers have direct access to the object data. CODE manages the complexity of object-oriented constraint solving by defining a communication protocol to allow the two paradigms to cooperate. The prototype implementation demonstrates that CODE can be implemented into a working system. Finally, the implementation of an actual application shows that the model is suitable for the development of object-oriented software.
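The CODE model above separates the object world from the constraint world: solvers reach an object's hidden data through data flows and are triggered by events, leaving the objects' normal message-passing interface untouched. The short Python sketch below is only a rough illustration of that separation under assumed names (DataFlow, Point, EqualXConstraint); it is not the MANIFOLD-based implementation described in the thesis.

class DataFlow:
    # Channel exposing one hidden attribute of an object to solvers only.
    def __init__(self, obj, attr):
        self._obj, self._attr = obj, attr
    def read(self):
        return getattr(self._obj, self._attr)
    def write(self, value):
        setattr(self._obj, self._attr, value)

class Point:
    # Graphics object; _x and _y are hidden from other objects.
    def __init__(self, x, y):
        self._x, self._y = x, y
        self.moved = []                  # "moved" event listeners
    def move_to(self, x, y):             # ordinary message-passing interface
        self._x, self._y = x, y
        for listener in self.moved:
            listener(self)               # raise the event

class EqualXConstraint:
    # Keeps q's x coordinate equal to p's, reacting to "moved" events.
    def __init__(self, p, q):
        self.src, self.dst = DataFlow(p, "_x"), DataFlow(q, "_x")
        p.moved.append(lambda _obj: self.solve())
    def solve(self):
        self.dst.write(self.src.read())

p, q = Point(0, 0), Point(5, 5)
EqualXConstraint(p, q)
p.move_to(3, 1)   # the event fires and the solver copies p's x into q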
keywords Computer Graphics; Object Oriented Programming; Constraint Programming
series thesis:PhD
last changed 2003/02/12 22:37

_id 1419
authors Spitz, Rejane
year 1999
title Dirty Hands on the Keyboard: In Search of Less Aseptic Computer Graphics Teaching for Art & Design
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 13-18
summary In recent decades our society has witnessed a level of technological development that has not been matched by that of educational development. Far from the forefront in the process of social change, education has been trailing behind transformations occurring in industrial sectors, passively and sluggishly assimilating their technological innovations. Worse yet, educators have taken the technology and logic of innovations deriving predominantly from industry and attempted to transpose them directly into the classroom, without either analyzing them in terms of demands from the educational context or adjusting them to the specificities of the teaching/learning process. In the 1970s - marked by the effervescence of Educational Technology - society witnessed the extensive proliferation of audio-visual resources for use in education, yet with limited development in teaching theories and educational methods and procedures. In the 1980s, when Computers in Education emerged as a new area, the discussion focused predominantly on the issue of how the available computer technology could be used in the school, rather than tackling the question of how it could be developed in such a way as to meet the needs of the educational proposal. What, then, will the educational legacy of the 1990s be? In this article we focus on the issue from the perspective of undergraduate and graduate courses in Arts and Design. Computer Graphics has slowly but surely gained ground and become consolidated as part of the Art & Design curricula in recent years, but in most cases as a subject in the curriculum that is not linked to the others. Computers are usually allocated in special laboratories, inside and outside Departments, but invariably isolated from the dust, clay, varnish, paint and other wastes, materials, and odors impregnating - and characterizing - other labs in Arts and Design courses. In spite of its isolation, computer technology coexists with centuries-old practices and traditions in Art & Design courses. This interesting meeting of tradition and innovation has led to daring educational ideas and experiments in the Arts and Design which have had a ripple effect in other fields of knowledge. We analyze these issues focusing on the pioneering experience of the Núcleo de Arte Eletrônica – a multidisciplinary space at the Arts Department at PUC-Rio, where undergraduate and graduate students from technological and humanities areas meet to think, discuss, create and produce Art & Design projects, and which constitutes a locus for the oxygenation of learning and for preparing students to face the challenges of an interdisciplinary and interconnected society.
series SIGRADI
email
last changed 2016/03/10 10:01

_id d54b
authors Thomas, N.J.T.
year 1999
title Are theories of imagery theories of imagination? An active perception approach to conscious mental content
source Cognitive Science 23(2): 207-245
summary Can theories of mental imagery, conscious mental contents, developed within cognitive science throw light on the obscure (but culturally very significant) concept of imagination? Three extant views of mental imagery are considered: quasi-pictorial, description, and perceptual activity theories. The first two face serious theoretical and empirical difficulties. The third is (for historically contingent reasons) little known, theoretically underdeveloped, and empirically untried, but has real explanatory potential. It rejects the "traditional" symbolic computational view of mental contents, but is compatible with recent approaches in robotics. This theory is developed and elucidated. Three related key aspects of imagination (non-discursiveness, creativity, and seeing as) raise difficulties for the other theories. Perceptual activity theory presents imagery as non-discursive and relates it closely to seeing as. It is thus well placed to be the basis for a general theory of imagination and its role in creative thought.
series journal paper
last changed 2003/04/23 15:50

_id cf2011_p109
id cf2011_p109
authors Abdelmohsen, Sherif; Lee Jinkook, Eastman Chuck
year 2011
title Automated Cost Analysis of Concept Design BIM Models
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 403-418.
summary AUTOMATED COST ANALYSIS OF CONCEPT DESIGN BIM MODELS Interoperability: BIM models and cost models This paper introduces the automated cost analysis developed for the General Services Administration (GSA) and the analysis results of a case study involving a concept design courthouse BIM model. The purpose of this study is to investigate interoperability issues related to integrating design and analysis tools; specifically BIM models and cost models. Previous efforts to generate cost estimates from BIM models have focused on developing two necessary but disjoint processes: 1) extracting accurate quantity take off data from BIM models, and 2) manipulating cost analysis results to provide informative feedback. Some recent efforts involve developing detailed definitions, enhanced IFC-based formats and in-house standards for assemblies that encompass building models (e.g. US Corps of Engineers). Some commercial applications enhance the level of detail associated to BIM objects with assembly descriptions to produce lightweight BIM models that can be used by different applications for various purposes (e.g. Autodesk for design review, Navisworks for scheduling, Innovaya for visual estimating, etc.). This study suggests the integration of design and analysis tools by means of managing all building data in one shared repository accessible to multiple domains in the AEC industry (Eastman, 1999; Eastman et al., 2008; authors, 2010). Our approach aims at providing an integrated platform that incorporates a quantity take off extraction method from IFC models, a cost analysis model, and a comprehensive cost reporting scheme, using the Solibri Model Checker (SMC) development environment. Approach As part of the effort to improve the performance of federal buildings, GSA evaluates concept design alternatives based on their compliance with specific requirements, including cost analysis. Two basic challenges emerge in the process of automating cost analysis for BIM models: 1) At this early concept design stage, only minimal information is available to produce a reliable analysis, such as space names and areas, and building gross area, 2) design alternatives share a lot of programmatic requirements such as location, functional spaces and other data. It is thus crucial to integrate other factors that contribute to substantial cost differences such as perimeter, and exterior wall and roof areas. These are extracted from BIM models using IFC data and input through XML into the Parametric Cost Engineering System (PACES, 2010) software to generate cost analysis reports. PACES uses this limited dataset at a conceptual stage and RSMeans (2010) data to infer cost assemblies at different levels of detail. Functionalities Cost model import module The cost model import module has three main functionalities: generating the input dataset necessary for the cost model, performing a semantic mapping between building type specific names and name aggregation structures in PACES known as functional space areas (FSAs), and managing cost data external to the BIM model, such as location and construction duration. The module computes building data such as footprint, gross area, perimeter, external wall and roof area and building space areas. This data is generated through SMC in the form of an XML file and imported into PACES. Reporting module The reporting module uses the cost report generated by PACES to develop a comprehensive report in the form of an excel spreadsheet. 
This report consists of a systems-elemental estimate that shows the main systems of the building in terms of UniFormat categories, escalation, markups, overhead and conditions, a UniFormat Level III report, and a cost breakdown that provides a summary of material, equipment, labor and total costs. Building parameters are integrated in the report to provide insight on the variations among design alternatives.
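The workflow described above computes footprint, gross area, perimeter, exterior wall and roof areas, and space areas from the IFC model, then passes them as XML to the cost engine. The sketch below, using only Python's standard library, shows what such an export step might look like; the quantity values, element names and attributes are invented for illustration and do not reflect the actual SMC/PACES schema.

import xml.etree.ElementTree as ET

# Hypothetical quantities as they might be taken off a concept-design model
# (areas in square metres, perimeter in metres); real values would come
# from the IFC data extracted in Solibri Model Checker.
quantities = {
    "gross_area": 12500.0,
    "footprint": 2500.0,
    "perimeter": 210.0,
    "exterior_wall_area": 5400.0,
    "roof_area": 2500.0,
}
spaces = [("Courtroom", 450.0), ("Lobby", 300.0), ("Office", 180.0)]

root = ET.Element("CostModelInput", location="Atlanta", duration_months="24")
building = ET.SubElement(root, "Building")
for name, value in quantities.items():
    ET.SubElement(building, "Quantity", name=name, value=str(value))
for space_name, area in spaces:
    ET.SubElement(building, "Space", name=space_name, area=str(area))

# Write the dataset that the cost analysis tool would import.
ET.ElementTree(root).write("cost_input.xml", encoding="utf-8",
                           xml_declaration=True)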
keywords building information modeling, interoperability, cost analysis, IFC
series CAAD Futures
email
last changed 2012/02/11 19:21

_id ga9926
id ga9926
authors Antonini, Riccardo
year 1999
title Let's Improvise Together
source International Conference on Generative Art
summary The creators of ‘Let's-Improvise-Together’ adhere to the idea that while there is a multitude of online games now available in cyberspace, it appears that relatively few are focused on providing a positive, friendly and productive experience for the user. Producing this kind of experience is one of the goals of our Amusement Project. To this end, the creation of ‘Let's Improvise Together’ has been guided by dedication to the importance of three themes: the importance of cooperation, the importance of creativity, and the importance of emotion. Description of the Game: The avatar arrives in a certain area where there are many sound-blocks/objects. He can add new objects at will, or add sound "properties" to existing ones. Each object may represent a different sound, though it does not have to. The avatar walks around and chooses which objects he likes, makes copies of these and adds sounds or changes the sounds on existing ones, then combines all of the sound-blocks into his personalized "instrument". Now any player can make sounds on the instrument by approaching or bumping into a sound-block. The way that the avatar makes sounds on the instrument can vary. At the end of the improvising session, the ‘composition’ will be saved on the instrument site, along with the personalized instrument. In this way, each user of the Amusement Center will leave behind him a unique instrumental creation that others who visit the Center later will be able to play on and listen to. The fully creative experience of making a new instrument can be obtained by connecting to the Active Worlds worlds ‘Amuse’ and ‘Amuse2’. Animated colorful sounding objects can be assembled by the user in the Virtual Environment as a sort of sounding instrument. We deliberately refrain from using the words musical instrument, because the level of control we have over the sound in terms of rhythm and melody, among other parameters, is very limited. It resembles instead, very closely, the primitive instruments used by humans in some civilizations, or the experience of children making sound out of ordinary objects. The dimension of cooperation is of paramount importance in the process of building and using the virtual sounding instrument. The instrument can be built by one's own effort, but preferably by a team of cooperating users. Cooperation has an important corollary: the sharing of the experience. The shared experience finds its permanence in the collective memory of the sounding instruments built. The sounding instrument can also be seen as a virtual sculpture; indeed, this sculpture is a multimedia one. The objects have properties that range from video animation to sound to virtual physical properties like solidity. The role of the user's representation in the Virtual World, called an avatar, is important because it conveys, among other things, the user’s emotions. It is worth pointing out that the avatar has no emotions of its own but simply expresses the emotions of the user behind it. In a way it could be considered a sort of actor performing, in real time while playing, the script that the user gives it. The other important element of the integration is related to the memory of the experience left by the user in the Virtual World. The new layout is explored and experienced. The layout is a permanent, editable memory. The generative aspects of Let's Improvise Together are the following. The multi-media virtual sculpture left behind by any participating avatar is not the creation of a single author/artist.
The outcome of the synergic interaction of various authors is neither deterministic nor predictable. The authors can indeed use generative algorithms in order to create the textures to be used on the objects. Usually, in our experience, the visitors of the Amuse worlds use shareware programs in order to generate their textures. In most cases the shareware programs are simple fractal generators. In principle, it is possible also to generate the shape of the objects in a generative way. Taking into account the usual audience of our world, we expected visitors to use very simple algorithms that could generate shapes as .rwx files. Indeed, no one has attempted to do so so far. As far as the music is concerned, the availability of shareware programs that allow simple generation of sound sequences has made it possible for some users to generate sound sequences to be put in our world. In conclusion, the Let's Improvise section of the Amuse worlds could be open for experimentation on generative art as a very simple entry-point platform. We will be very happy to help anybody who, for educational purposes, would like to use our platform in order to create and exhibit generative forms of art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id a35a
authors Arponen, Matti
year 2002
title From 2D Base Map To 3D City Model
source UMDS '02 Proceedings, Prague (Czech Republic) 2-4 October 2002, I.17-I.28
summary Since 1997 the Helsinki City Survey Division has been experimenting with and developing methods for converting and supplementing its current digital 2D base maps at the scale 1:500 into a 3D city model. Since 1986 project areas have in fact been produced in 3D for city planning and construction projects, but work with the whole map database started in 1997 because of customer demands and competing 3D projects. A 3D map database needs new data modelling and structures, map update processes need new working procedures, and the draftsmen need to learn a new profession: the 3D modeller. Laser scanning and digital photogrammetry have been used in collecting 3D information on the map objects. During the years 1999-2000 laser-scanning experiments covering 45 km2 were carried out using the Swedish TopEye system. Simultaneous digital photography produces material for ortho photo mosaics. These have been applied in mapping outdated map features and in vectorizing 3D buildings manually, semi-automatically and automatically. In modelling we use the TerraScan, TerraPhoto and TerraModeler software, which are developed in Finland. The 3D city model project is at the same time partially a software development project. An accuracy and feasibility study was also completed and is briefly presented. The three scales of 3D models are also presented in this paper. Some new 3D products and some practical uses of 3D city models will be demonstrated in the actual presentation.
keywords 3D City modeling
series other
email
more www.udms.net
last changed 2003/11/21 15:16

_id 616c
authors Bentley, Peter J.
year 1999
title The Future of Evolutionary Design Research
source AVOCAAD Second International Conference [AVOCAAD Conference Proceedings / ISBN 90-76101-02-07] Brussels (Belgium) 8-10 April 1999, pp. 349-350
summary The use of evolutionary algorithms to optimise designs is now well known, and well understood. The literature is overflowing with examples of designs that bear the hallmark of evolutionary optimisation: bridges, cranes, electricity pylons, electric motors, engine blocks, flywheels, satellite booms - the list is extensive and ever-growing. But although the optimisation of engineering designs is perhaps the most practical and commercially beneficial form of evolutionary design for industry, such applications do not take advantage of the full potential of evolutionary design. Current research is now exploring how the related areas of evolutionary design such as evolutionary art, music and the evolution of artificial life can aid in the creation of new designs. By employing techniques from these fields, researchers are now moving away from straight optimisation, and are beginning to experiment with explorative approaches. Instead of using evolution as an optimiser, evolution is now beginning to be seen as an aid to creativity - providing new forms, new structures and even new concepts for designers.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id f11d
authors Brown, K. and Petersen, D.
year 1999
title Ready-to-Run Java 3D
source Wiley Computer Publishing
summary Written for the intermediate Java programmer and Web site designer, Ready-to-Run Java 3D provides sample Java applets and code using Sun's new Java 3D API. This book provides a worthy jump-start for Java 3D that goes well beyond the documentation provided by Sun. Coverage includes downloading the Java 2 plug-in (needed by Java 3D) and basic Java 3D classes for storing shapes, matrices, and scenes. A listing of all Java 3D classes shows off its considerable richness. Generally, this book tries to cover basic 3D concepts and how they are implemented in Java 3D. (It assumes a certain knowledge of math, particularly with matrices, which are a staple of 3D graphics). Well-commented source code is printed throughout (though there is little additional commentary). An applet for orbiting planets provides an entertaining demonstration of transforming objects onscreen. You'll learn to add processing for fog effects and texture mapping and get material on 3D sound effects and several public domain tools for working with 3D artwork (including converting VRML [Virtual Reality Markup Language] files for use with Java 3D). In all, this book largely succeeds at being accessible for HTML designers while being useful to Java programmers. With Java 3D, Sun is betting that 3D graphics shouldn't require a degree in computer science. This book reflects that philosophy, though advanced Java developers will probably want more detail on this exciting new graphics package. --Richard Dragan Topics covered: Individual applets for morphing, translation, rotation, and scaling; support for light and transparency; adding motion and interaction to 3D objects (with Java 3D classes for behaviors and interpolators); and Java 3D classes used for event handling.
series other
last changed 2003/04/23 15:14

_id 5a10
authors Cheng, Nancy Yen-Wen
year 1999
title Playing with Digital Media: Enlivening Computer Graphics Teaching
source Media and Design Process [ACADIA ‘99 / ISBN 1-880250-08-X] Salt Lake City 29-31 October 1999, pp. 96-109
doi https://doi.org/10.52842/conf.acadia.1999.096
summary Are there better ways of getting a student to learn? Getting students to play at learning can encourage comprehension by engaging their attention. Rather than having students' fascination with video games and entertainment limited to competing against learning, we can direct this interest towards learning computer graphics. We hypothesize that topics having a recreational component increase the learning curve for digital media instruction. To test this, we have offered design media projects with a playful element as a counterpart to more step-by-step descriptive exercises. Four kinds of problems, increasing in difficulty, are discussed in the context of computer aided architectural design education: 1) geometry play, 2) kit of parts, 3) dreams from childhood and 4) transformations. The problems engage the students in different ways: through playing with form, by capturing their imagination and by encouraging interaction. Each type of problem exercises specific design skills while providing practice with geometric modeling and rendering. The problems are sequenced from most constrained to most free, providing achievable milestones with focused objectives. Compared to descriptive assignments and more serious architectural problems, these design-oriented exercises invite experimentation by lowering risk, and neutralize stylistic questions by taking design out of the traditional architectural context. Used in conjunction with the modeling of case studies, they engage a wide range of students by addressing different kinds of issues. From examining the results of the student work, we conclude that play as a theme encourages greater degree of participation and comprehension.
series ACADIA
email
last changed 2022/06/07 07:55

_id 3db8
authors Clarke, Keith
year 1999
title Getting Started with GIS
source 2nd ed., Prentice Hall Series in Geographic Information Science, ed. Keith Clarke. Upper Saddle River, NJ: Prentice Hall, 1999, 2-3
summary This best-selling non-technical, reader-friendly introduction to GIS makes the complexity of this rapidly growing high-tech field accessible to beginners. It uses a "learn-by-seeing" approach that features clear, simple explanations, an abundance of illustrations and photos, and generic practice labs for use with any GIS software. What Is a GIS? GIS's Roots in Cartography. Maps as Numbers. Getting the Map into the Computer. What Is Where? Why Is It There? Making Maps with GIS. How to Pick a GIS. GIS in Action. The Future of GIS. For anyone interested in a hands-on introduction to Geographic Information Systems.
series other
last changed 2003/04/23 15:14

_id 7dcd
authors Cotton B. and Oliver, R.
year 1999
title Understanding Hypermedia
source Phaidon Press Ltd, London
summary Understanding Hypermedia 2000 is a wonderful read. It takes you on a journey tracing the origins of hypermedia from its very early beginnings way back in the 1700s with the birth of print, all the way through to the modern new media revolution. It charts the developments in technology, culture, science and the arts to give you a very broad understanding of just what hypermedia is and where it came from. Looking to the future, Understanding Hypermedia looks at the components of hypermedia - interface design, typography, text, animation, video, VRML, etc. - the processes of designing and building new media projects - including examples from the Web, CD-ROM and kiosks - and the future of the medium. From the hypermedia innovators to the visionaries of cyberspace, this book is a wonderful, rich and fascinating source of information and inspiration for anyone interested in or working with new media today.
series other
last changed 2003/04/23 15:14

_id ecaade2014_146
id ecaade2014_146
authors Davide Ventura and Matteo Baldassari
year 2014
title Grow: Generative Responsive Object for Web-based design - Methodology for generative design and interactive prototyping
source Thompson, Emine Mine (ed.), Fusion - Proceedings of the 32nd eCAADe Conference - Volume 2, Department of Architecture and Built Environment, Faculty of Engineering and Environment, Newcastle upon Tyne, England, UK, 10-12 September 2014, pp. 587-594
doi https://doi.org/10.52842/conf.ecaade.2014.2.587
wos WOS:000361385100061
summary This paper is part of the research on Generative Design and is inspired by the ideas spread by the following paradigms: the Internet of Things (Auto-ID Center, 1999) and the Pervasive/Ubiquitous Computing (Weiser, 1993). Particularly, the research describes a number of case studies and, in detail, the experimental prototype of an interactive-design object: “Grow-1”. The general assumptions of the study are as follows: a) Developing the experimental prototype of a smart-design object (Figure 1) in terms of interaction with man, with regard to the specific conditions of the indoor environment as well as in relation to the internet/web platforms. b) Setting up a project research based on the principles of Generative Design.c) Formulating and adopting a methodology where computational design techniques and interactive prototyping ones converge, in line with the principles spread by the new paradigms like the Internet of Things.
keywords Responsive environments and smart spaces; ubiquitous pervasive computing; internet of things; generative design; parametric modelling
series eCAADe
email
last changed 2022/06/07 07:55

_id 9b63
authors De Mesa, A., Quilez, J. and Regot, J.
year 1999
title Sunlight Energy Graphic and Analytic Control in 3D Modelling
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 733-738
doi https://doi.org/10.52842/conf.ecaade.1999.733
summary Linking solar positions with architecture is a traditional idea, but the use of graphical tools to control sunlight in urban surroundings or buildings is relatively recent. A three-dimensional working environment like the computer offers a new dimension in which to verify the relationships between the sun and the architecture. This paper shows a new way to calculate the incidence of solar energy in architectural environments using computer 3D modelling. The addition of virtual space visualisation to the analytic computation brings a new tool that simplifies the technical study of sunlight. We have developed several programs based upon the three-dimensional construction of the solar vault and the obstructing objects for a defined position. The first one draws the solar vault for a defined range of dates according to latitude, which is the basis of the energy calculation. The second program computes the obstruction, i.e. the solar regions that are obstructed by any object. Finally, the third one allows us to define an orientation and computes the energy that arrives at the analysed position. The last program returns the result of the calculation in several ways: it shows the amount of energy through colours and lists the solar hours according to their energy.
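The first of the three programs constructs the solar vault, i.e. the set of sun positions over a range of dates for a given latitude. As a point of reference, the short Python sketch below computes approximate solar altitude and azimuth from latitude, day of year and solar hour using a standard textbook declination formula; it is a generic approximation, not the authors' implementation.

import math

def sun_position(latitude_deg, day_of_year, solar_hour):
    # Approximate solar altitude and azimuth (degrees), using the common
    # declination formula delta = 23.45 * sin(360/365 * (284 + n)).
    lat = math.radians(latitude_deg)
    decl = math.radians(
        23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year))))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    altitude = math.asin(math.sin(lat) * math.sin(decl) +
                         math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    # Azimuth measured from south, positive towards west.
    cos_az = ((math.sin(altitude) * math.sin(lat) - math.sin(decl)) /
              (math.cos(altitude) * math.cos(lat)))
    azimuth = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle < 0:
        azimuth = -azimuth
    return math.degrees(altitude), azimuth

# Sample "solar vault": sun positions through the day on roughly the 21st
# of each month at the latitude of Barcelona (41.4 degrees north).
vault = [sun_position(41.4, 21 + 30 * month, hour)
         for month in range(12) for hour in range(6, 19)]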
keywords Sunlight, Energy, 3D modelling
series eCAADe
last changed 2022/06/07 07:56

_id 9e26
authors Do, Ellen Yi-Luen,
year 1999
title The right tool at the right time : investigation of freehand drawing as an interface to knowledge based design tools
source College of Architecture, Georgia Institute of Technology
summary Designers use different symbols and configurations in their drawings to explore alternatives and to communicate with each other. For example, when thinking about spatial arrangements, they draw bubble diagrams; when thinking about natural lighting, they draw a sun symbol and light rays. Given the connection between drawings and thinking, one should be able to infer design intentions from a drawing and ultimately use such inferences to program a computer to understand our drawings. This dissertation reports findings from empirical studies on drawings and explores the possibility of using the computer to automatically infer a designer's concerns from the drawings the designer makes. This dissertation consists of three parts: 1) a literature review of design studies, cognitive studies of drawing and computational sketch systems, and a set of pilot projects; 2) empirical studies of diagramming design intentions and a design drawing experiment; and 3) the implementation of a prototype system called Right-Tool-Right-Time. The main goal is to find out what is in design drawings that a computer program should be able to recognize and support. Experiments were conducted to study the relation between drawing conventions and the design tasks with which they are associated. It was found from the experiments that designers use certain symbols and configurations when thinking about certain design concerns. When thinking about allocating objects or spaces with required dimensions, designers wrote down numbers beside the drawing to reason about size and to calculate dimensions. When thinking about visual analysis, designers drew sight lines from a viewpoint on a floor plan. Based on the recognition that it is possible to associate symbols and spatial arrangements in a drawing with a designer's intention, or task context, the second goal is to find out whether a computer can be programmed to recognize these drawing conventions. Given an inferred intention and context, a program should be able to activate appropriate design tools automatically. For example, concerns about visual analysis can activate a visual simulation program, and number calculations can activate a calculator. The Right-Tool-Right-Time prototype program demonstrates how a freehand sketching system that infers intentions would support the automatic activation of different design tools based on a designer's drawing acts.
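The mapping described above, from recognized drawing conventions to the design tool they suggest (sight lines to a visual simulator, numbers beside a sketch to a calculator), is essentially a dispatch table. The Python sketch below illustrates that idea with hypothetical recognizer output and placeholder tool names; it is not the actual Right-Tool-Right-Time code.

# Hypothetical output of a sketch recognizer: symbols found in a drawing.
recognized = ["bubble_diagram", "number_annotation", "sight_line"]

# Map drawing conventions to the design tools they suggest
# (tool names are placeholders, not those of the real prototype).
tool_for_symbol = {
    "sight_line": "visual_simulation",
    "number_annotation": "calculator",
    "sun_symbol": "lighting_analysis",
    "bubble_diagram": "space_adjacency_checker",
}

def tools_to_activate(symbols):
    # Return the set of tools the inferred task context calls for.
    return {tool_for_symbol[s] for s in symbols if s in tool_for_symbol}

print(tools_to_activate(recognized))
# e.g. {'visual_simulation', 'calculator', 'space_adjacency_checker'}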
series thesis:PhD
email
more http://www.arch.gatech.edu/~ellen/thesis.html
last changed 2004/10/04 07:49

_id ga0021
id ga0021
authors Eacott, John
year 2000
title Generative music composition in practice - a critical evaluation
source International Conference on Generative Art
summary This critical evaluation will discuss four computer-based musical works which, for reasons I shall explain, I describe as non-linear or generative. The works have been constructed by me and publicly performed or exhibited during a two-year period from October 1998 to October 2000: ‘In the beginning…’, interactive music installation, strangeAttraction, Morley Gallery, London, July 1999; ‘jnrtv’, live generative dance music, May 1999 to Dec 2000; ‘jazz’, interactive music installation, another strangeAttraction, Morley Gallery, London, July 2000; ‘the street’, architectural interactive music installation, University of Westminster, Oct 2000. Introduction: I have always loved the practice of composing, particularly when it means scoring a work to be played by a live ensemble. There is something about taking a fresh sheet of manuscript, ruling the bar lines, adding clefs, key and time signatures and beginning the gradual process of adding notes, one at a time, to the score until it is complete that is gratifying and compensates for the enormous effort involved. The process of scoring, however, is actually one distinct act within the more general task of creating music. Recently, the notion of ‘composing’ has met challenges through an increased interest in non-linear compositional methods. It is actually the presence of chaotic or uncontrollable elements which adds real beauty to music and to many if not all of the things we value. If we think of a sunset, waves lapping on the shore, plants, trees, a human face and the sound of the human voice, these things are not perfect and, more importantly perhaps, they are transient, constantly changing and evolving. Last year and again this year, I have organised an exhibition of interactive, non-linear music installations called 'strangeAttraction'. The title refers to what Edward Lorenz called a ‘strange attractor’: the phenomenon that despite vast degrees of chaos and uncertainty within a system, there is a degree of predictability, a tendency for chaotic behaviour to ‘attract’ towards a probable set of outcomes. Composition that deals with 'attractors' or probable outcomes rather than specific details which are set in stone is an increasingly intriguing notion.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 54a6
authors Eastman, C. and Jeng, T.S.
year 1999
title A database supporting evolutionary product model development for design
source Automation in Construction 8 (3) (1999) pp. 305-323
summary This paper presents the facilities in the EDM-2 product modeling and database language that support model evolution. It reviews the need for model evolution as a system and/or language requirement to support product modeling. Four types of model evolution are considered: (1) translation between distinct models, (2) deriving views from a central model, (3) modification of an existing model, and (4) model evolution based on writable views associated with each application. While the facilities described support all four types of evolution, the last type is emphasized. The language-based modeling capabilities described in EDM-2 include: (a) mapping facilities for defining derivations and views within a single model or between different models; (b) procedural language capabilities supporting model addition, deletion and modification; (c) support for object instance migration so as to partition the set of class instances into multiple classes; (d) support for managing practical deletion of portions of a model; (e) explicit specification and automatic management of integrity between a building model and various views. The rationale and language features, and in some cases the implementation strategy for the features, are presented.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id db00
authors Espina, Jane J.B.
year 2002
title Base de datos de la arquitectura moderna de la ciudad de Maracaibo 1920-1990 [Database of the Modern Architecture of the City of Maracaibo 1920-1990]
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 133-139
summary The purpose of this report is to present the results obtained in using information and communication technologies in architecture, by means of the construction of a database that records information on the modern architecture of the city of Maracaibo from 1920 to 1990, with reference to the buildings located in the 5 de Julio sector and to the designers most outstanding for their work, through the representation of these buildings in digital format. The objective of this research was to develop a database for recording information on the modern architecture of Maracaibo in the period 1920-1990, by designing an automated tool to organize the data related to the buildings, parcels and designers of the city. The research was carried out in three methodological stages: a) gathering and classifying the information on the buildings and designers of the modern architecture in order to build the databases; b) designing the databases for the organization of the information; and c) designing the queries, reports and the start menu. For processing the data, files were generated in programs such as AutoCAD R14 and 2000, Microsoft Word, Microsoft PowerPoint, Microsoft Access 2000, CorelDRAW V9.0 and Corel PHOTO-PAINT V9.0. The research is related to the work developed since 1999 in the Graphic Calculation II course of the Department of Communication of the School of Architecture, Faculty of Architecture and Design of the University of Zulia (FADLUZ), using part of the information from student work generated with CAD systems for the three-dimensional representation of buildings of historical relevance in the modern architecture of Maracaibo, which are classified in the work The Other City, producing different types of isometric views, perspectives, photorealistic representations, floor plans and facades, among others. Concerning the subject of this research, no previous precedents are known in our context, this being the first time that digital graphics have been applied to the work carried out by the architects of “The Other City, the genesis of the oil city of Maracaibo” (1994); hence the value of this research for the fields of architecture and computer science. It should be pointed out that databases do exist in the fields of architecture and design, as well as web sites with information about architects and architectural works (Montagu, 1999). At the University of Zulia, specifically in the Faculty of Architecture and Design, two works related to the database theme were carried out, in 1995 and 1996: in the first, a system was designed to visualize, classify and analyze, from the architectural point of view, some historical buildings of Maracaibo; in the second, an automated documentary information system was created on the properties built inside the urban area of Maracaibo. At the international level, the first database developed in Argentina stands out: the database of Modern and Contemporary Architecture “Datarq 2000”, developed by Prof. Arturo Montagú of the University of Buenos Aires, whose general objective was the use of new technologies for processing in Architecture and Design (Montagú, op. cit.).
With the database he intends to provide a complementary and alternative methodology for using the information that is habitually employed in the teaching of architecture. On concluding this research, the following was achieved: 1) analysis of modern architecture projects, some of which form part of the historical patrimony of Maracaibo; 2) organized records of textual data (historical, formal, spatial and technical) and graphic data (floor plans, facades, perspectives and photographs, among others) on the periods of modern architecture in the city, general data and the most relevant characteristics of the buildings, and general data on the designers with their most important works, as well as information on the parcels where the buildings are located; 3) construction in digital format and development of photorealistic representations of architectural projects already built. It is relevant to highlight the importance of the use of information and communication technologies in this research, since it makes it possible to transfer to digital media part of the information on the modern architectural buildings that characterized the city of Maracaibo at the end of the 20th century, buildings which in recent decades have undergone changes, and some of which have disappeared, destroying part of the modern historical patrimony of the city; hence the need to record and systematize the graphic information on those buildings in digital format. It also demonstrates the importance of the computer and of computer science in the representation and comprehension of the buildings of modern architecture, through texts, images, mapping, 3D models and information organized in databases, and the relevance of the work from the pedagogical point of view, since it can be used in the teaching of computer science and history classes at university level, supporting learning through new ways of transmitting knowledge, starting from the visual information used by the students in building three-dimensional models or electronic scale models of modern architecture, and in the future serving as support material for virtual reconstructions of buildings that no longer exist or are almost destroyed. In summary, the research will make it possible to know and record the architecture of Maracaibo of recent decades, which arose under the parameters of modernity, and, through its organization and visualization in digital format, will allow students, professors and anyone interested to get to know it in a quicker and more efficient way, constituting a contribution to teaching in the areas of history and graphic calculation. It can also be very useful for the development of future research projects related to this theme and to the restoration of modern buildings in Maracaibo.
keywords database, digital format, modern architecture, model, mapping
series SIGRADI
email
last changed 2016/03/10 09:51

_id 99ce
authors Forowicz, T.
year 1999
title Modeling of energy demands for residential buildings with HTML interface
source Automation in Construction 8 (4) (1999) pp. 481-487
summary This paper presents a package for the calculation of energy and cost demands for heating, cooling and hot water. The package represents a new kind of approach to developing software, employing user (client) and server (program provider) computers connected by the Internet. It is mounted on the owner's server and is available worldwide through a Web browser. The package was developed as a simplified tool for estimating energy use in four types of new and old houses, located in 900 US cities. The computing engine utilizes the database that was compiled by LBL in support of the 'Affordable Housing through Energy Conservation' Project with over 10000 DOE-2.1 simulations. The package consists of 69 routines and scripts coded in four languages: HTML, Perl, C, and FORTRAN. The modeling, the programming, and the future perspectives of this new kind of computational tool are presented. The paper discusses further technical limitations, as well as suggestions for further improvements and development. Especially important is the problem of multi-user access; ways to solve it are proposed.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 993c
authors Fruchter, Renate
year 1999
title A/E/C Teamwork: A Collaborative Design and Learning Space
source Journal of Computing in Civil Engineering -- October 1999 -- Volume 13, Issue 4, pp. 261-269
summary This paper describes an ongoing effort focused on combined research and curriculum development for multidisciplinary, geographically distributed architecture/engineering/construction (A/E/C) teamwork. It presents a model for a distributed A/E/C learning environment and an Internet-based Web-mediated collaboration tool kit. The distributed learning environment includes six universities from Europe, Japan, and the United States. The tool kit is aimed to assist team members and owners (1) capture and share knowledge and information related to a specific project; (2) navigate through the archived knowledge and information; and (3) evaluate and explain the product's performance. The A/E/C course offered at Stanford University acts as a testbed for cutting-edge information technologies and a forum to teach new generations of professionals how to team up with practitioners from other disciplines and take advantage of information technology to produce a better, faster, more economical product. The paper presents new assessment metrics to monitor students' cross-disciplinary learning experience and track programmatic changes. The paper concludes with challenges and quandaries regarding the impact of information technologies on team performance and behavior.
series journal paper
last changed 2003/05/15 21:45
