CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 595

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A digital Procedure of Building Construction: A practical project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 90-76101-05-1
summary Before computers were well developed, research on representation focused on conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly in text (Hewitt, 1985; Cable, 1983), and it evolved from unselfconscious to conscious ways of working (Alexander, 1964). When 2D drawings appeared, they could only express abstract visual thinking and a visually conceptualized vocabulary (Goldschmidt, 1999). Then, with the extensive use of physical models in the Renaissance, the form and space of architecture were given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) observed that humans increasingly rely on other specialists, computational agents, and reference materials to augment their cognitive abilities. This discourse has been supported by recent research on the conception of design and its expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation did (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architectural design has given rise to a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The link between ideas and media is emphasized in various fields, such as architectural education (Radford, 2000), the Internet, and the restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most applications of computers to construction are restricted to simulations of the building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage of building construction (Madrazo, 2000; Dave, 2000) and to communication (Haymaker, 2000). In architectural design, concept design has been achieved through drawings and models (Mitchell, 1997), while working drawings and even shop drawings have been developed and communicated through drawings alone. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). Given the trend toward 3D visualization (Johnson and Clayton, 1998) and the differences between designing in physical and virtual environments (Maher et al., 2000), we intend to study the possibility of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process (much as physical models played a critical role in the early design process of the Renaissance). This research is combined with two practical building projects, following the progress of construction and using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems.
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings supplemented by imagination. The purpose of this paper is to describe all the related elements with precision and correctness, to discuss the possibilities of different thinking in electromechanical engineering design, to receive feedback from real-world construction projects, and to compare the digital models with conventional drawings. Through this research, the subtle relations between conventional drawings and digital models can be applied in the area of building construction. Moreover, a theoretical model and a standard process are proposed using conventional drawings, digital models and physical buildings. By introducing digital media into the design process for working drawings and shop drawings, there is an opportunity to use digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id cf2011_p109
id cf2011_p109
authors Abdelmohsen, Sherif; Lee, Jinkook; Eastman, Chuck
year 2011
title Automated Cost Analysis of Concept Design BIM Models
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 403-418.
summary This paper introduces the automated cost analysis developed for the General Services Administration (GSA) and the analysis results of a case study involving a concept design courthouse BIM model. The purpose of this study is to investigate interoperability issues related to integrating design and analysis tools, specifically BIM models and cost models. Previous efforts to generate cost estimates from BIM models have focused on developing two necessary but disjoint processes: 1) extracting accurate quantity takeoff data from BIM models, and 2) manipulating cost analysis results to provide informative feedback. Some recent efforts involve developing detailed definitions, enhanced IFC-based formats and in-house standards for assemblies that encompass building models (e.g. US Corps of Engineers). Some commercial applications enhance the level of detail associated with BIM objects with assembly descriptions to produce lightweight BIM models that can be used by different applications for various purposes (e.g. Autodesk for design review, Navisworks for scheduling, Innovaya for visual estimating, etc.). This study suggests the integration of design and analysis tools by means of managing all building data in one shared repository accessible to multiple domains in the AEC industry (Eastman, 1999; Eastman et al., 2008; authors, 2010). Our approach aims at providing an integrated platform that incorporates a quantity takeoff extraction method from IFC models, a cost analysis model, and a comprehensive cost reporting scheme, using the Solibri Model Checker (SMC) development environment. As part of the effort to improve the performance of federal buildings, GSA evaluates concept design alternatives based on their compliance with specific requirements, including cost analysis. Two basic challenges emerge in the process of automating cost analysis for BIM models: 1) at this early concept design stage, only minimal information is available to produce a reliable analysis, such as space names and areas, and building gross area; 2) design alternatives share many programmatic requirements such as location, functional spaces and other data. It is thus crucial to integrate other factors that contribute to substantial cost differences, such as perimeter and exterior wall and roof areas. These are extracted from BIM models using IFC data and input through XML into the Parametric Cost Engineering System (PACES, 2010) software to generate cost analysis reports. PACES uses this limited dataset at a conceptual stage, together with RSMeans (2010) data, to infer cost assemblies at different levels of detail. The cost model import module has three main functionalities: generating the input dataset necessary for the cost model, performing a semantic mapping between building type specific names and name aggregation structures in PACES known as functional space areas (FSAs), and managing cost data external to the BIM model, such as location and construction duration. The module computes building data such as footprint, gross area, perimeter, external wall and roof area and building space areas. This data is generated through SMC in the form of an XML file and imported into PACES. The reporting module uses the cost report generated by PACES to develop a comprehensive report in the form of an Excel spreadsheet.
This report consists of a systems-elemental estimate that shows the main systems of the building in terms of UniFormat categories, escalation, markups, overhead and conditions, a UniFormat Level III report, and a cost breakdown that provides a summary of material, equipment, labor and total costs. Building parameters are integrated in the report to provide insight into the variations among design alternatives.
keywords building information modeling, interoperability, cost analysis, IFC
series CAAD Futures
email
last changed 2012/02/11 19:21
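The concept-stage costing described in the record above boils down to rolling a handful of quantities extracted from the IFC model up into systems-level costs. The sketch below is purely illustrative and not the GSA/PACES workflow: the unit rates, system names, location factor and markup are all assumed values standing in for the UniFormat assemblies that PACES prices with RSMeans data.

```python
# Hypothetical unit rates (cost per m2); placeholders for priced assemblies.
UNIT_RATES = {
    "foundations": 180.0,      # applied to footprint area
    "superstructure": 420.0,   # applied to gross floor area
    "exterior_walls": 650.0,   # applied to exterior wall area
    "roofing": 230.0,          # applied to roof area
    "interiors": 310.0,        # applied to gross floor area
}

def concept_cost_estimate(quantities, location_factor=1.0, markup=0.15):
    """Roll minimal concept-stage quantities (footprint, gross area, exterior
    wall and roof areas) up into a toy systems-level estimate."""
    lines = {
        "foundations": quantities["footprint_area"] * UNIT_RATES["foundations"],
        "superstructure": quantities["gross_area"] * UNIT_RATES["superstructure"],
        "exterior_walls": quantities["exterior_wall_area"] * UNIT_RATES["exterior_walls"],
        "roofing": quantities["roof_area"] * UNIT_RATES["roofing"],
        "interiors": quantities["gross_area"] * UNIT_RATES["interiors"],
    }
    subtotal = sum(lines.values()) * location_factor
    return {"systems": lines, "subtotal": subtotal, "total": subtotal * (1 + markup)}

report = concept_cost_estimate({
    "footprint_area": 2500.0, "gross_area": 10000.0,
    "exterior_wall_area": 4800.0, "roof_area": 2500.0,
})
print(round(report["total"], 0))
```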

_id e719
authors Achten, Henri and Turksma, Arthur
year 1999
title Virtual Reality in Early Design: the Design Studio Experiences
source AVOCAAD Second International Conference [AVOCAAD Conference Proceedings / ISBN 90-76101-02-07] Brussels (Belgium) 8-10 April 1999, pp. 327-335
summary The Design Systems group of the Eindhoven University of Technology started a new kind of design studio teaching. With the use of high-end equipment, students use Virtual Reality from the very start of the design process. Virtual Reality technology up to now was primarily used for giving presentations. We use the same technology in the design process itself by means of reducing the time span in which one gets results in Virtual Reality. The method is based on a very brief cycle of modelling in AutoCAD, assigning materials in 3DStudio Viz, and then making a walkthrough in Virtual Reality in a standard landscape. Due to this cycle, which takes about 15 seconds, the student gets immediate feedback on design decisions which facilitates evaluation of the design in three dimensions much faster than usual. Usually the learning curve of this kind of software is quite steep, but with the use of templates the number of required steps to achieve results is reduced significantly. In this way, the potential of Virtual Reality is not only explored in research projects, but also in education. This paper discusses the general set-up of the design studio and shows how, via short workshops, students acquire knowledge of the cycle in a short time. The paper focuses on the added value of using Virtual Reality technology in this manner: improved spatial reasoning, translation from two-dimensional to three-dimensional representations, and VR feedback on design decisions. It discusses the needs for new design representations in this design environment, and shows how fast feedback in Virtual Reality can improve the spatial design at an early stage of the design process.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id edf5
authors Arnold, J.A., Teicholz, P. and Kunz, J.
year 1999
title An approach for the interoperation of web-distributed applications with a design model
source Automation in Construction 8 (3) (1999) pp. 291-303
summary This paper defines the data and inference requirements for the integration of analysis applications with a product model described by a CAD/CAE application. Application input conditions often require sets of complex data that may be considered views of a product model database. We introduce a method that is compatible with the STEP and PLIB product description standards to define an intermediate model that selects, extracts, and validates views of information from a product model to serve as input for an engineering CAD/CAE application. The intermediate model framework was built and tested in a software prototype, the Internet Broker for Engineering Services (IBES). The first research case for IBES integrates applications that specify certain components, for example pumps and valves, with a CAD/CAE application. This paper therefore explores a sub-set of the general problem of integrating product data semantics between various engineering applications. The IBES integration method provides support for a general set of services that effectively assist interpretation and validate information from a product model for an engineering purpose. Such methods can enable application interoperation for the automation of typical engineering tasks, such as component specification and procurement.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
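The intermediate-model idea in the record above, selecting a view of product-model data and validating it before handing it to an analysis application, can be pictured with a small stand-in. This is not the IBES implementation: the component types, required attributes and dictionary-based product model are assumptions made for illustration.

```python
# Attributes a downstream analysis application is assumed to need per component type.
REQUIRED = {"pump": {"flow_rate", "head", "inlet_diameter"},
            "valve": {"nominal_diameter", "pressure_class"}}

def extract_view(product_model, component_type):
    """Select components of one type from a product model (a list of dicts with
    'id' and 'type' keys) and validate that the required attributes are present."""
    view, problems = [], []
    for item in product_model:
        if item.get("type") != component_type:
            continue
        missing = REQUIRED[component_type] - item.keys()
        if missing:
            problems.append((item["id"], sorted(missing)))   # reject with a reason
        else:
            view.append(item)                                # validated view member
    return view, problems

model = [
    {"id": "P-101", "type": "pump", "flow_rate": 12.0, "head": 30.0, "inlet_diameter": 0.08},
    {"id": "P-102", "type": "pump", "flow_rate": 8.0},       # incomplete record
    {"id": "V-201", "type": "valve", "nominal_diameter": 0.05, "pressure_class": "PN16"},
]
valid, issues = extract_view(model, "pump")
print(len(valid), issues)    # 1 [('P-102', ['head', 'inlet_diameter'])]
```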

_id b4d2
authors Caldas, Luisa G. and Norford, Leslie K.
year 1999
title A Genetic Algorithm Tool for Design Optimization
source Media and Design Process [ACADIA ‘99 / ISBN 1-880250-08-X] Salt Lake City 29-31 October 1999, pp. 260-271
doi https://doi.org/10.52842/conf.acadia.1999.260
summary Much interest has recently been devoted to generative processes in design. Advances in computational tools for design applications, coupled with techniques from the field of artificial intelligence, have led to new possibilities in the way computers can inform and actively interact with the design process. In this paper we use the concepts of generative and goal-oriented design to propose a computer tool that can help the designer to generate and evaluate certain aspects of a solution towards an optimized behavior of the final configuration. This work focuses mostly on those aspects related to the environmental performance of the building. Genetic Algorithms are applied as a generative and search procedure to look for optimized design solutions in terms of thermal and lighting performance in a building. The Genetic Algorithm (GA) is first used to generate possible design solutions, which are then evaluated in terms of lighting and thermal behavior using a detailed thermal analysis program (DOE2.1E). The results from the simulations are subsequently used to further guide the GA search towards finding low-energy solutions to the problem under study. Solutions can be visualized using an AutoLisp routine. The specific problem addressed in this study is the placing and sizing of windows in an office building. The same method is applicable to a wide range of design problems, such as the choice of construction materials, the design of shading elements, or the sizing of lighting and mechanical systems for buildings.
series ACADIA
email
last changed 2022/06/07 07:54
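The generate-evaluate loop described in the record above can be sketched in miniature. The code below is an illustrative toy, not the authors' tool: the facade dimensions, the surrogate fitness function (standing in for the DOE-2.1E simulation), and all GA parameters are assumptions.

```python
import random

FACADE_W, FACADE_H = 6.0, 3.0      # assumed facade bay dimensions in metres

def fitness(genome):
    """Toy objective standing in for the detailed energy simulation: reward
    daylight with diminishing returns, penalize glazing heat loss."""
    width, height = genome
    wwr = (width * height) / (FACADE_W * FACADE_H)   # window-to-wall ratio
    return min(wwr, 0.45) - 0.8 * wwr                 # optimum near wwr = 0.45

def random_genome(rng):
    return [rng.uniform(0.5, FACADE_W), rng.uniform(0.5, FACADE_H)]

def crossover(a, b, rng):
    """Blend crossover: each child gene is a random mix of the parents' genes."""
    return [rng.uniform(min(x, y), max(x, y)) for x, y in zip(a, b)]

def mutate(genome, rng, rate=0.2):
    limits = [(0.5, FACADE_W), (0.5, FACADE_H)]
    return [min(hi, max(lo, g + rng.gauss(0, 0.2))) if rng.random() < rate else g
            for g, (lo, hi) in zip(genome, limits)]

def run_ga(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    population = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]               # truncation selection
        population = parents + [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
    return max(population, key=fitness)

best = run_ga()
print("best window (w, h):", best, "fitness:", round(fitness(best), 3))
```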

_id 84e8
authors Cohen, J.M., Markosian, L., Zeleznik, R.C., Hughes, J.F. and Barzel, R.
year 1999
title An Interface for Sketching 3D Curves
source ACM Symposium on Interactive 3D Graphics, pp. 17-22 (April 1999). ACM SIGGRAPH. Edited by Jessica Hodgins and James D. Foley
summary The ability to specify nonplanar 3D curves is of fundamental importance in 3D modeling and animation systems. Effective techniques for specifying such curves using 2D input devices are desirable, but existing methods typically require the user to edit the curve from several viewpoints. We present a novel method for specifying 3D curves with 2D input from a single viewpoint. The user rst draws the curve as it appears from the current viewpoint, and then draws its shadow on the oor plane. The system correlates the curve with its shadow to compute the curve's 3D shape. This method is more natural than existing methods in that it leverages skills that many artists and designers have developed from work with pencil and paper.
series other
last changed 2003/04/23 15:14
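The curve-plus-shadow idea in the record above can be illustrated under a strong simplification: assume the two strokes come from matched orthographic front and top views and are correlated by normalized arc length, rather than the perspective correlation the paper actually describes. All names here are hypothetical.

```python
import numpy as np

def resample(points, n):
    """Resample a polyline to n points, evenly spaced by arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]                                      # normalize to [0, 1]
    tn = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(tn, t, points[:, k]) for k in range(points.shape[1])])

def reconstruct_curve(front_view, floor_shadow):
    """Combine a front-view stroke (x, y) with its floor shadow (x, z) into 3D points.
    The i-th resampled point of one stroke is paired with the i-th of the other."""
    n = max(len(front_view), len(floor_shadow))
    fv, sh = resample(front_view, n), resample(floor_shadow, n)
    x = (fv[:, 0] + sh[:, 0]) / 2.0                 # x appears in both strokes; average
    y = fv[:, 1]                                     # height from the front view
    z = sh[:, 1]                                     # depth from the shadow
    return np.column_stack([x, y, z])

front = [(0, 0), (1, 1), (2, 0.5)]
shadow = [(0, 0), (1, 0.5), (2, 2)]
print(reconstruct_curve(front, shadow))
```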

_id groot_ddssar0221
id groot_ddssar0221
authors De Groot, E.H.
year 1999
title Integrated Lighting System Assistant
source Eindhoven University of Technology
summary The aim of the design project described in this thesis is to design a tool to support the building design process. Developing a design is considered to be a wicked problem because it goes beyond reasonable or predictable limits. Consequently, in this design project we address two wicked problems simultaneously: a double wicked problem. The two wicked problems concerned are the design of a Design Decision Support System [DDSS] and the conceptual design of office lighting systems. To get a handle on the first wicked problem, two workshops were organised to meet the possible future users and to create a common basis for the tool to be developed. To tackle the wickedness of the second problem, an office lighting model and a performance evaluation method were developed and implemented in a new prototype computer system: the Integrated Lighting System Assistant [ILSA]. The workshops have proven to be a good source of feedback and an essential link to daily practice. The ILSA prototype shows that it is possible to implement the lighting model and evaluation method in a working prototype that can support architects in making decisions in the early design stage in the field of integrating daylight and artificial lighting.
series thesis:PhD
more http://www.bwk.tue.nl/fago/AIO/ellie/
last changed 2003/12/16 07:16

_id 762b
authors De Paoli, Giovanni and Bogdan, Marius
year 1999
title The Front of the Stage of Vitruvius' Roman Theatre - A new Approach of Computer Aided Design that Transforms Geometric Operators to Semantic Operators
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, pp. 321-333
summary The driving force of all research in which systems of computation are used is the utilization of an intelligent method for the representation of buildings. The use of computers in the design process is often limited to technical functions (tekhne), and what one usually calls computer-aided design is often no more than computer-aided drawing. In this research paper we continue a reflection on the architect's working methods, and suggest an approach to design based on the semantic properties of the object (i.e. semantic operators) rather than on geometric operators. We propose a method of computer-aided design using procedural models in which the initial state of the design is vague and undefined. We operate from a paradigm that leads to representing a building by means of parametric functions which, expressed algorithmically, give a procedural model that facilitates the design process. This approach opens new avenues that would permit adding the logos (semantic properties) and lead to a metaphorical representation. By means of procedural models, we show that from a generic model we can produce a four-dimensional model that encapsulates a volumetric model with semantic characteristics. We use a meta-functional language that allows us to model the actions and encapsulate detailed information about various building elements. This descriptive mechanism is extremely powerful. It helps to establish relations between the functions, contributes to a better understanding of the project's aim, and encapsulates the building properties by recalling characteristics of common classes which give rise to a new configuration and a completely original design. The scientific result of this experiment is the understanding and confirmation of the hypothesis that it is possible to encapsulate, by means of a computing process, the links between design moves during conceptual and figural decisions, and to transform geometric operators into semantic operators.
keywords Architecture, CAD, Function, Modeling, Semantic Operator, Geometric Operator
series CAAD Futures
last changed 2006/11/07 07:22

_id 6be9
authors Guo, Haoxu
year 1999
title The Realization of Intelligent Aid to CAD of Architectural Design with the Object-Oriented Method
source CAADRIA '99 [Proceedings of The Fourth Conference on Computer Aided Architectural Design Research in Asia / ISBN 7-5439-1233-3] Shanghai (China) 5-7 May 1999, pp. 443-454
doi https://doi.org/10.52842/conf.caadria.1999.443
summary Object-oriented analysis and design has been the principal technology of software development since the 1990s, and intellectualization has been the direction of development for CAD software in architectural design. An investigation is made into the application of object-oriented technology to the realization of intelligent aid to CAD for architectural design.
keywords Object-oriented; CAD for Architectural Design, Intelligent Technology, Design Expert System, Object, Visual-Computing Integration, Parameter Drive, Polymorphism, Inherit, Correlated Operation
series CAADRIA
last changed 2022/06/07 07:50

_id 4fa1
authors Lee, E., Ida, Y., Woo, S. and Sasada, T.
year 1999
title Environmental Design Using Fractals in Computer Graphics
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 533-538
doi https://doi.org/10.52842/conf.ecaade.1999.533
summary Computer graphics has developed efficient techniques for visualisation of the real world. Many of the algorithms have a physical basis, such as computational models for light and shadow, models of real objects (buildings, mountains, roads and so on) and the simulation of natural phenomena. Computer graphics techniques now provide the virtual world with a perception of three dimensions. The concept of the virtual world and its technology have been expanding and intensifying in recent years, and almost everything in the real world has been simulated in the virtual world. Building a terrain model, however, normally requires considerable labour and time. It is now possible to simulate terrain resembling the real world using fractals in computer graphics, with a very small program and a small data set. This study aims to show how to build a real-world impression in the virtual world. In this paper the authors suggest a landscape design method and show the results of its application.
keywords Fractals, Polygon-Reduction, Computer Graphics, Virtual World, Collaboration
series eCAADe
last changed 2022/06/07 07:51
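The claim in the record above, that convincing terrain can come from a very small program and data set, is commonly illustrated with midpoint displacement. The paper does not specify its algorithm, so the one-dimensional sketch below and its parameter values are assumptions chosen only to show the principle.

```python
import random

def midpoint_displacement(n_iterations, roughness=0.5, initial_range=1.0, seed=None):
    """Generate a 1D fractal terrain profile by recursive midpoint displacement."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]                 # the two endpoints of the profile
    displacement = initial_range
    for _ in range(n_iterations):
        refined = []
        for left, right in zip(heights[:-1], heights[1:]):
            mid = (left + right) / 2.0 + rng.uniform(-displacement, displacement)
            refined.extend([left, mid])  # keep the left point, insert a displaced midpoint
        refined.append(heights[-1])
        heights = refined
        displacement *= roughness        # smaller perturbations at finer scales
    return heights                       # 2**n_iterations + 1 samples

profile = midpoint_displacement(8, roughness=0.55, seed=42)
print(len(profile), round(min(profile), 3), round(max(profile), 3))
```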

_id 5919
authors Lentz, Uffe
year 1999
title Integrated Design with Form and Topology Optimizing
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 116-121
doi https://doi.org/10.52842/conf.ecaade.1999.116
summary This paper describes the ability of 3D CAD systems to integrate designers and engineers into a simultaneous process that develops a functional and aesthetic concept through close and equal interdisciplinary collaboration. We already have Finite Element Method (FEM) systems for analyzing the mechanical behavior of constructions. This technique is suitable for justifying design aspects in the final part of the design process. A new group of CAE systems, under the generic term topology optimization, has the potential to handle aspects of conceptual design and aesthetic criteria. Such interactive design tools do not eliminate the designer, but the relationship between the designer and other professions, and the professional consciousness of the designer, will change. It is necessary to develop common ideas able to connect the scientific and the artistic fields. The common aesthetic values must be clarified and the corresponding formal ideas developed. These tools could be called "construction tools for the intelligent user" (Olhoff, 1998), because the use of optimization is based on a profound knowledge of the techniques.
keywords Form, Topology, Optimizing
series eCAADe
email
last changed 2022/06/07 07:52

_id 2720
authors Magyar, Peter and Temkin, Aron
year 2000
title Developing an Algorithm for Topological Transformation
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 203-205
summary This research intends to test the architectural application of Jean Piaget’s clinical observations, described in the book The Child’s Conception of Space (Piaget, 1956), according to which topology is an ordering discipline active in the human psyche. Earlier attempts, based on the principles of graph theory, were able to cover only a narrow aspect of spatial relations, i.e. connectivity, and were mostly a-perceptional, visually mute. The “Spaceprint” method, explained and illustrated in the co-author’s book Thought Palaces (Magyar, 1999), investigates, through dimensional reduction, volumetric 3D characteristics and relationships with planar 2D configurations. These configurations, however, represent dual values: they are simultaneously the formal descriptors of both finite matter and (fragments of) infinite space. The so-called “Particular Spaceprint”, as a tool of design development at building, object, or urban scales, could, with the help of digital technology, express - again simultaneously - qualities of an idea-gram and the visual, even tactile aspects of material reality. With topological surface transformations, the “General Spaceprints”, these abstract yet visually active spatial formulas can be obtained.
series SIGRADI
email
last changed 2016/03/10 09:55

_id 6810
authors Makkonen, Petri
year 1999
title On multi body systems simulation in product design
source KTH Stockholm
summary The aim of this thesis is to provide a basis for efficient modelling and software use in simulation driven product development. The capabilities of modern commercial computer software for design are analysed experimentally and qualitatively. An integrated simulation model for design of mechanical systems, based on four different "simulation views" is proposed: An integrated CAE (Computer Aided Engineering) model using Solid Geometry (CAD), Finite Element Modelling (FEM), Multi Body Systems Modelling (MBS) and Dynamic System Simulation utilising Block System Modelling tools is presented. A theoretical design process model for simulation driven design based on the theory of product chromosome is introduced. This thesis comprises a summary and six papers. Paper A presents the general framework and a distributed model for simulation based on CAD, FEM, MBS and Block Systems modelling. Paper B outlines a framework to integrate all these models into MBS simulation for performance prediction and optimisation of mechanical systems, using a modular approach. This methodology has been applied to design of industrial robots of parallel robot type. During the development process, from concept design to detail design, models have been refined from kinematic to dynamic and to elastodynamic models, finally including joint backlash. A method for analysing the kinematic Jacobian by using MBS simulation is presented. Motor torque requirements are studied by varying major robot geometry parameters, in dimensionless form for generality. The robot TCP (Tool Center Point) path in time space, predicted from elastodynamic model simulations, has been transformed to the frequency space by Fourier analysis. By comparison of this result with linear (modal) eigen frequency analysis from the elastodynamic MBS model, internal model validation is obtained. Paper C presents a study of joint backlash. An impact model for joint clearance, utilised in paper B, has been developed and compared to a simplified spring-damper model. The impact model was found to predict contact loss over a wider range of rotational speed than the spring-damper model. Increased joint bearing stiffness was found to widen the speed region of chaotic behaviour, due to loss of contact, while increased damping will reduce the chaotic range. The impact model was found to have stable under- and overcritical speed ranges, around the loss of contact region. The undercritical limit depends on the gravitational load on the clearance joint. Papers D and E give examples of the distributed simulation model approach proposed in paper A. Paper D presents simulation and optimisation of linear servo drives for a 3-axis gantry robot, using block systems modelling. The specified kinematic behaviour is simulated with multi body modelling, while drive systems and control system are modelled using a block system model for each drive. The block system model has been used for optimisation of the transmission and motor selection. Paper E presents an approach for re-using CAD geometry for multi body modelling of a rock drilling rig boom. Paper F presents synthesis methods for mechanical systems. Joint and part number synthesis is performed using the Grübler and Euler equations. The synthesis is continued by applying the theory of generative grammar, from which the grammatical rules of planar mechanisms have been formulated. An example of topological synthesis of mechanisms utilising this grammar is presented. 
Finally, dimensional synthesis of the mechanism is carried out by utilising non-linear programming with addition of a penalty function to avoid singularities.
keywords Simulation; Optimisation; Control Systems; Computer Aided Engineering; Multi Body Systems; Finite Element Method; Backlash; Clearance; Industrial Robots; Parallel Robots
series thesis:PhD
last changed 2003/02/12 22:37
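The joint and part number synthesis mentioned in the thesis abstract above rests on the planar Kutzbach-Grübler mobility criterion, M = 3(n - 1) - 2*j1 - j2. A small sketch follows; the function name and examples are illustrative, not taken from the thesis.

```python
def gruebler_planar_dof(n_links, lower_pairs, higher_pairs=0):
    """Kutzbach-Grübler mobility criterion for planar mechanisms:
    M = 3(n - 1) - 2*j1 - j2, where n_links includes the fixed (ground) link,
    lower_pairs (j1) are 1-DOF joints such as revolutes and sliders, and
    higher_pairs (j2) are 2-DOF contacts such as cam or gear pairs."""
    return 3 * (n_links - 1) - 2 * lower_pairs - higher_pairs

# Four-bar linkage: 4 links, 4 revolute joints -> 1 degree of freedom.
print(gruebler_planar_dof(4, 4))   # 1
# Five-bar linkage: 5 links, 5 revolute joints -> 2 degrees of freedom (two inputs).
print(gruebler_planar_dof(5, 5))   # 2
```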

_id f9f7
authors Mullins, Michael
year 1999
title Forming, Planning, Imaging and Connecting
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 178-185
doi https://doi.org/10.52842/conf.ecaade.1999.178
summary This paper sets out to define aspects of the architectural design process, using historical precedent and architectural theory, and tests the relationship of those aspects to the application of computers in architectural design, particularly in an educational context. The design process sub-sets are defined as: Forming, Planning, Imaging and Connecting. Historical precedents are uncovered in Classical, Modern, Postmodern and Contemporary architecture. The defined categories of the design process are related to current usages of computers in architectural education in order to elucidate the strengths and weaknesses of digital media in those areas. Indications of their concurrent usage in digital design will be demonstrated in an analysis of design studio programs presented at recent ACADIA conferences. An example is given of a current design studio programme at the School of Architecture, University of Natal, South Africa, in which the above-described categories give an underlying structure to the introduction of 3D digital modelling to undergraduates through the design process. The definition of this set of design activities may offer a useful method for other educators in assessing existing and future design programs where digital tools are used.
keywords Design-Process, Digital-Media, Design-Programmes
series eCAADe
email
last changed 2022/06/07 07:59

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) ease of use, 2) ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example, if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometres along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And many of the methods I used, applied over several "generations", simply resulted in amorphous blobs with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons (Figure 1: Mandala bred with an array of regular polygons). I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic (Figure 2: Mandala interpreted with arabesques; Figure 3: Trellis interpreted with "graphic ivy"; Figure 4: Regular dots interpreted as "sparks"). 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating web graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: three possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his web site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialogue with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
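One of the shape-breeding schemes the author experimented with, combining the coordinate "genes" of two outlines, can be sketched as below: resample both parents to the same number of perimeter points and interpolate corresponding points. This is an assumed reconstruction of that kind of scheme, not Gliftic's code, and as the abstract notes, no single scheme gave good results for all shapes.

```python
import math
import random

def resample_polygon(poly, n):
    """Resample a closed polygon to n points, evenly spaced along its perimeter."""
    pts = list(poly) + [poly[0]]                     # close the loop
    seg = [math.dist(a, b) for a, b in zip(pts[:-1], pts[1:])]
    total = sum(seg)
    out, i, travelled = [], 0, 0.0
    for k in range(n):
        target = total * k / n                       # distance along the perimeter
        while travelled + seg[i] < target:
            travelled += seg[i]
            i += 1
        t = (target - travelled) / seg[i]
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def breed_shapes(parent_a, parent_b, n=100, weight=None, seed=None):
    """Child outline made by blending corresponding perimeter points of two parents."""
    rng = random.Random(seed)
    w = rng.random() if weight is None else weight
    a, b = resample_polygon(parent_a, n), resample_polygon(parent_b, n)
    return [((1 - w) * xa + w * xb, (1 - w) * ya + w * yb)
            for (xa, ya), (xb, yb) in zip(a, b)]

# Cross a square with a rough circle.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
circle = [(0.5 + 0.5 * math.cos(t), 0.5 + 0.5 * math.sin(t))
          for t in (2 * math.pi * k / 32 for k in range(32))]
child = breed_shapes(square, circle, weight=0.5)
```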

_id 9dfa
authors Ries, R. and Mahdavi, A.
year 1999
title Environmental Life Cycle Assessment in an Integrated CAD Environment: The Ecologue Approach
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, pp. 351-363
summary Construction and operation of buildings is a major cause of resource depletion and environmental pollution. Computational performance evaluation tools could support the decision-making process in environmentally responsive building design and play an important role in environmental impact assessment, especially when a life cycle assessment (LCA) approach is used. The building domain, however, presents notable challenges to the application of LCA methods. For comprehensive environmental impact analysis to be realized in a computational support tool for the building design domain, such a tool must a) have an analysis method that considers the life cycle of building construction, operation, and decommissioning, b) have a representation that is able to accommodate the data and computability requirements of the analysis method and the analysis tool, and c) be seamlessly integrated within a multi-aspect design analysis environment that can provide data on environmentally relevant building operation criteria. This paper reviews the current state of assessment methods and computational support tools for LCA, and their application to building design. Then, the implementation of an application (ECOLOGUE) for comprehensive computational assessment of environmental impact indicators over the building life cycle is presented. The application is a component in a multi-aspect space-based CAD and evaluation environment (SEMPER). The paper describes the use and typical results of the ECOLOGUE system via illustrative examples.
keywords Life Cycle Assessment, Integrated Computational Environmental Analysis
series CAAD Futures
email
last changed 2006/11/07 07:22

_id b587
authors Saux, E. and Daniel, M.
year 1999
title Data reduction of polygonal curves using B-splines
source Computer-Aided Design, Vol. 31 (8) (1999) pp. 507-515
summary We present a new method for data reduction of polygonal curves. Representation by means of a list of points does not provide fair curve models that may have complex and varying shapes. We suggest a different technique based on fitting B-spline curves. This algorithm reaches high data reduction rates while producing fair approximations even for the most complex curves. We apply our technique to cartographic data, but the method is suitable for any application where the number of data points must be greatly reduced.
keywords Data Reduction, Accuracy Criterion, B-Splines, Smoothing
series journal paper
email
last changed 2003/05/15 21:33
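The fitting step in the record above can be approximated with an off-the-shelf smoothing B-spline. The sketch below uses SciPy's parametric spline fit as a rough stand-in: the smoothing factor replaces the paper's explicit accuracy criterion, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def reduce_polyline(points, smoothing=0.5, n_out=50, degree=3):
    """Fit a smoothing B-spline through a dense 2D polyline and return a much
    sparser sampling of the fitted curve, together with the spline itself."""
    pts = np.asarray(points, dtype=float)
    # splprep expects one array per coordinate: [x, y].
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smoothing, k=degree)
    u_new = np.linspace(0.0, 1.0, n_out)
    x_new, y_new = splev(u_new, tck)
    return np.column_stack([x_new, y_new]), tck

# Example: 2000 noisy samples of a wavy curve reduced to 50 spline samples.
t = np.linspace(0, 2 * np.pi, 2000)
dense = np.column_stack([t, np.sin(t) + 0.01 * np.random.randn(t.size)])
sparse, tck = reduce_polyline(dense, smoothing=0.5, n_out=50)
print(dense.shape, "->", sparse.shape)
```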

_id 3d23
authors Sellgren, Ulf
year 1999
title Simulation-driven Design
source KTH Stockholm
summary Efficiency and innovative problem solving are contradictory requirements for product development (PD), and both requirements must be satisfied in companies that strive to remain or to become competitive. Efficiency is strongly related to ”doing things right”, whereas innovative problem solving and creativity is focused on ”doing the right things”. Engineering design, which is a sub-process within PD, can be viewed as problem solving or a decision-making process. New technologies in computer science and new software tools open the way to new approaches for the solution of mechanical problems. Product data management (PDM) technology and tools can enable concurrent engineering (CE) by managing the formal product data, the relations between the individual data objects, and their relation to the PD process. Many engineering activities deal with the relation between behavior and shape. Modern CAD systems are highly productive tools for concept embodiment and detailing. The finite element (FE) method is a general tool used to study the physical behavior of objects with arbitrary shapes. Since a modern CAD technology enables design modification and change, it can support the innovative dimension of engineering as well as the verification of physical properties and behavior. Concepts and detailed solutions have traditionally been evaluated and verified with physical testing. Numerical modeling and simulation is in many cases a far more time efficient method than testing to verify the properties of an artifact. Numerical modeling can also support the innovative dimension of problem solving by enabling parameter studies and observations of real and synthetic behavior. Simulation-driven design is defined as a design process where decisions related to the behavior and performance of the artifact are significantly supported by computer-based product modeling and simulation. A framework for product modeling, that is based on a modern CAD system with fully integrated FE modeling and simulation functionality provides the engineer with tools capable of supporting a number of engineering steps in all life-cycle phases of a product. Such a conceptual framework, that is based on a moderately coupled approach to integrate commercial PDM, CAD, and FE software, is presented. An object model and a supporting modular modeling methodology are also presented. Two industrial cases are used to illustrate the possibilities and some of the opportunities given by simulation-driven design with the presented methodology and framework.
keywords CAE; FE Method; Metamodel; Object Model; PDM; Physical Behavior, System
series thesis:PhD
email
last changed 2003/02/12 22:37

_id f154
authors Amor, Robert and Newnham, Leonard
year 1999
title CAD Interfaces to the ARROW Manufactured Product Server
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, pp. 1-11
summary The UK national project ARROW (Advanced Reusable Reliable Objects Warehouse) provides an Internet-based framework through which it is possible to identify any of a range of manufactured products meeting specific design criteria. This open framework (based upon the IAI's IFCs) provides a mechanism for users to search for products from any participating manufacturer or supplier, based either on specific attributes of a product or on any of its textual descriptions. The service returns the closest matching products and allows the user to navigate to related information including manufacturer, suppliers, CAD details, VR displays, installation instructions, certificates, health and safety information, promotional information, costings, etc. ARROW also provides a toolkit to enable manufacturers and suppliers to more easily map and publish their information in the format utilised by the ARROW system. As part of the ARROW project we have examined the ability to interface from a design tool through to ARROW to automatically retrieve information required by the tool. This paper describes the API developed to allow CAD and simulation tools to communicate directly with ARROW and identify appropriate manufactured product information. The demonstration system enables CAD systems to identify the manufactured product that most closely matches a designed product, replace the designed product with the details supplied by the manufacturer, and pull through product attributes utilised by the design application. This paper provides a description of the ARROW framework and the issues faced in providing information based upon standards while also containing information not currently modelled in public standards. The paper looks at the issues of enabling manufacturers and suppliers to move from their current world-view of product information to a more data-rich and user-accessible information repository (even though this enables a uniform comparison across a range of manufacturers' products). Finally, the paper comments on the likely way forward for ARROW-like systems in providing quality information to end users.
keywords Computer-aided Design, Product Retrieval
series CAAD Futures
email
last changed 2006/11/07 07:22
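The attribute-based "closest matching product" search described in the record above can be pictured as a simple scoring over catalogue entries. The sketch below is purely illustrative, with hypothetical attribute names and an assumed weighting scheme; the real ARROW service works over IFC-based product descriptions and free-text search.

```python
def closest_products(catalogue, target, weights=None, top_n=3):
    """Return the catalogue entries whose attributes best match the target spec."""
    weights = weights or {}
    def score(product):
        s = 0.0
        for attr, wanted in target.items():
            w = weights.get(attr, 1.0)
            have = product["attributes"].get(attr)
            if have is None:
                s += w                                           # missing attribute: full penalty
            elif isinstance(wanted, (int, float)):
                s += w * abs(have - wanted) / (abs(wanted) or 1.0)  # relative numeric distance
            else:
                s += 0.0 if have == wanted else w                # categorical mismatch
        return s                                                 # lower is better
    return sorted(catalogue, key=score)[:top_n]

catalogue = [
    {"name": "Window A", "attributes": {"u_value": 1.4, "width": 1.2, "frame": "aluminium"}},
    {"name": "Window B", "attributes": {"u_value": 1.1, "width": 1.2, "frame": "timber"}},
    {"name": "Window C", "attributes": {"u_value": 2.0, "width": 0.9, "frame": "timber"}},
]
print([p["name"] for p in closest_products(catalogue, {"u_value": 1.2, "frame": "timber"})])
```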

_id 48a7
authors Brooks, F.P.
year 1999
title What's Real About Virtual Reality
source IEEE Computer Graphics and Applications, Vol. 19, no. 6, Nov/Dec, 27
summary As is usual with infant technologies, the realization of the early dreams for VR and harnessing it to real work has taken longer than the wild hype predicted, but it is now happening. I assess the current state of the art, addressing the perennial questions of technology and applications. By 1994, one could honestly say that VR "almost works." Many workers at many centers could do quite exciting demos. Nevertheless, the enabling technologies had limitations that seriously impeded building VR systems for any real work except entertainment and vehicle simulators. Some of the worst problems were end-to-end system latencies, low-resolution head-mounted displays, limited tracker range and accuracy, and costs. The technologies have made great strides. Today one can get satisfying VR experiences with commercial off-the-shelf equipment. Moreover, technical advances have been accompanied by dropping costs, so it is both technically and economically feasible to do significant applications. VR really works. That is not to say that all the technological problems and limitations have been solved. VR technology today "barely works." Nevertheless, coming over the mountain pass from "almost works" to "barely works" is a major transition for the discipline. I have sought out applications that are now in daily productive use, in order to find out exactly what is real. Separating these from prototype systems and feasibility demos is not always easy. People doing daily production applications have been forthcoming about lessons learned and surprises encountered. As one would expect, the initial production applications are those offering high value over alternate approaches. These applications fall into a few classes. I estimate that there are about a hundred installations in daily productive use worldwide.
series journal paper
email
last changed 2003/04/23 15:14
