CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures

Hits 1 to 20 of 623

_id b5d9
authors González, Guillermo and Gutiérrez, Liliana
year 1999
title El TDE-AC: tecnología digital y estrategia pedagógica (The Tde-ac: Digital Technology and Pedagogical Strategy)
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 269-271
summary In 1995, the programming of CA-TSD, a specialized expert graphic software, began. The TSD acronym designates a graphic language derived from the theory of spatial delimitation; it systematizes all possibilities of selection and combination of flat and volumetric figures. It establishes the morphic and tactic dimensions necessary and sufficient to account for all possible relationships of selection and combination. TSD proposes a syntactic reading of those formal, pure design operations underlying traditional representations. Tracings and complex configurations described by tree-hierarchical structures of simple configurations allow for a coherent syntactic analysis of the design structure of any object. This allows the construction of a pure design formula for the conscious and unconscious prefiguration operations of an artist or style. In this presentation, we use our proprietary CA-TSD software, which allows for fast verification of what is stated, including examples from architecture and graphic design.
series SIGRADI
email
last changed 2016/03/10 09:52

_id avocaad_2001_22
id avocaad_2001_22
authors Jos van Leeuwen, Joran Jessurun
year 2001
title XML for Flexibility and Extensibility of Design Information Models
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary The VR-DIS research programme aims at the development of a Virtual Reality – Design Information System. This is a design and decision support system for collaborative design that provides a VR interface for the interaction with both the geometric representation of a design and the non-geometric information concerning the design throughout the design process. The major part of the research programme focuses on early stages of design. The programme is carried out by a large number of researchers from a variety of disciplines in the domain of construction and architecture, including architectural design, building physics, structural design, construction management, etc. Management of design information is at the core of this design and decision support system. Much effort in the development of the system has been and still is dedicated to the underlying theory for information management and its implementation in an Application Programming Interface (API) that the various modules of the system use. The theory is based on a so-called Feature-based modelling approach and is described in the PhD thesis by [first author, 1999] and in [first author et al., 2000a]. This information modelling approach provides three major capabilities: (1) it allows for extensibility of conceptual schemas, which is used to enable a designer to define new typologies to model with; (2) it supports sharing of conceptual schemas, called type-libraries; and (3) it provides a high level of flexibility that offers the designer the opportunity to easily reuse design information and to model information constructs that are not foreseen in any existing typologies. The latter aspect involves the capability to expand information entities in a model with relationships and properties that are not typologically defined but applicable to a particular design situation only; this helps the designer to represent the actual design concepts more accurately. The functional design of the information modelling system is based on a three-layered framework. In the bottom layer, the actual design data is stored in so-called Feature Instances. The middle layer defines the typologies of these instances in so-called Feature Types. The top layer is called the meta-layer because it provides the class definitions for both the Types layer and the Instances layer; both Feature Types and Feature Instances are objects of the classes defined in the top layer. This top layer ensures that types can be defined on the fly and that instances can be created from these types, as well as expanded with non-typological properties and relationships while still conforming to the information structures laid out in the meta-layer. The VR-DIS system consists of a growing number of modules for different kinds of functionality in relation to the design task. These modules access the design information through the API that implements the meta-layer of the framework. This API has previously been implemented using an Object-Oriented Database (OODB), but this implementation had a number of disadvantages. The dependency on the OODB, a commercial software library, was considered the most problematic. Not only are licenses of the OODB library rather expensive, but the fact that this library is not common technology that can easily be shared among a wide range of applications, including existing applications, also reduces its suitability for a system with the aforementioned specifications.
In addition, the OODB approach required a relatively large effort to implement the desired functionality. It lacked adequate support to generate unique identifications for worldwide information sources that were understandable for human interpretation. This strongly limited the capabilities of the system to share conceptual schemas. The approach that is currently being implemented for the core of the VR-DIS system is based on eXtensible Markup Language (XML). Rather than implementing the meta-layer of the framework into classes of Feature Types and Feature Instances, this level of meta-definitions is provided in a document type definition (DTD). The DTD is complemented with a set of rules that are implemented into a parser API, based on the Document Object Model (DOM). The advantages of the XML approach for the modelling framework are immediate. Type-libraries distributed through the Internet are now supported through the mechanisms of namespaces and XLink. The implementation of the API is no longer dependent on a particular database system. This provides much more flexibility in the implementation of the various modules of the VR-DIS system. Being based on XML, which is expected to become a standard, the implementation is much more versatile in its future usage, specifically in a distributed, Internet-based environment. These immediate advantages of the XML approach opened the door to a wide range of applications that are and will be developed on top of the VR-DIS core. Examples of these are the VR-based 3D sketching module [VR-DIS ref., 2000]; the VR-based information-modelling tool that allows the management and manipulation of information models for design in a VR environment [VR-DIS ref., 2000]; and a design-knowledge capturing module that is now under development [first author et al., 2000a and 2000b]. The latter module aims to assist the designer in the recognition and utilisation of existing and new typologies in a design situation. The replacement of the OODB implementation of the API by the XML implementation enables these modules to use distributed Feature databases through the Internet, without many changes to their own code, and without the loss of the flexibility and extensibility of conceptual schemas that are implemented as part of the API. Research in the near future will result in Internet-based applications that support designers in the utilisation of distributed libraries of product-information, design-knowledge, case-bases, etc. The paper roughly follows the outline of the abstract, starting with an introduction to the VR-DIS project, its objectives, and the developed theory of the Feature-modelling framework that forms the core of it. It briefly discusses the necessity of schema evolution, flexibility and extensibility of conceptual schemas, and how these capabilities have been addressed in the framework. The major part of the paper describes how the previously mentioned aspects of the framework are implemented in the XML-based approach, providing details on the so-called meta-layer, its definition in the DTD, and the parser rules that complement it. The impact of the XML approach on the functionality of the VR-DIS modules and the system as a whole is demonstrated by a discussion of these modules and scenarios of their usage for design tasks. The paper is concluded with an overview of future work on the sharing of Internet-based design information and design knowledge.
series AVOCAAD
email
last changed 2005/09/09 10:48
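
To make the abstract's XML-based meta-layer more concrete, here is a minimal sketch (not taken from the paper; the element names, attributes and the conformance check are invented stand-ins for the VR-DIS DTD and its parser rules): Feature Types are declared once, Feature Instances reference them, and a small DOM-based routine checks that every instance uses a declared type while still allowing instance-only properties.

```python
# Minimal illustrative sketch of an XML feature model: types declared once,
# instances reference them. Names are invented, not the VR-DIS schema.
from xml.dom.minidom import parseString

DOC = """<featureModel>
  <featureTypes>
    <featureType name="Wall"/>
    <featureType name="Space"/>
  </featureTypes>
  <featureInstances>
    <featureInstance id="w1" type="Wall"/>
    <featureInstance id="s1" type="Space"/>
    <featureInstance id="w2" type="Wall">
      <!-- a non-typological, instance-only property -->
      <property name="clientRemark" value="keep existing brickwork"/>
    </featureInstance>
  </featureInstances>
</featureModel>"""

def check_instances(xml_text):
    """Stand-in for a parser rule: every instance must reference a declared type."""
    dom = parseString(xml_text)
    types = {t.getAttribute("name") for t in dom.getElementsByTagName("featureType")}
    for inst in dom.getElementsByTagName("featureInstance"):
        declared = inst.getAttribute("type")
        if declared not in types:
            raise ValueError(f"instance {inst.getAttribute('id')} uses undeclared type {declared}")
    return len(dom.getElementsByTagName("featureInstance"))

print(check_instances(DOC), "feature instances conform to the declared types")
```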

_id 6810
authors Makkonen, Petri
year 1999
title On multi body systems simulation in product design
source KTH Stockholm
summary The aim of this thesis is to provide a basis for efficient modelling and software use in simulation driven product development. The capabilities of modern commercial computer software for design are analysed experimentally and qualitatively. An integrated simulation model for design of mechanical systems, based on four different "simulation views" is proposed: An integrated CAE (Computer Aided Engineering) model using Solid Geometry (CAD), Finite Element Modelling (FEM), Multi Body Systems Modelling (MBS) and Dynamic System Simulation utilising Block System Modelling tools is presented. A theoretical design process model for simulation driven design based on the theory of product chromosome is introduced. This thesis comprises a summary and six papers. Paper A presents the general framework and a distributed model for simulation based on CAD, FEM, MBS and Block Systems modelling. Paper B outlines a framework to integrate all these models into MBS simulation for performance prediction and optimisation of mechanical systems, using a modular approach. This methodology has been applied to design of industrial robots of parallel robot type. During the development process, from concept design to detail design, models have been refined from kinematic to dynamic and to elastodynamic models, finally including joint backlash. A method for analysing the kinematic Jacobian by using MBS simulation is presented. Motor torque requirements are studied by varying major robot geometry parameters, in dimensionless form for generality. The robot TCP (Tool Center Point) path in time space, predicted from elastodynamic model simulations, has been transformed to the frequency space by Fourier analysis. By comparison of this result with linear (modal) eigen frequency analysis from the elastodynamic MBS model, internal model validation is obtained. Paper C presents a study of joint backlash. An impact model for joint clearance, utilised in paper B, has been developed and compared to a simplified spring-damper model. The impact model was found to predict contact loss over a wider range of rotational speed than the spring-damper model. Increased joint bearing stiffness was found to widen the speed region of chaotic behaviour, due to loss of contact, while increased damping will reduce the chaotic range. The impact model was found to have stable under- and overcritical speed ranges, around the loss of contact region. The undercritical limit depends on the gravitational load on the clearance joint. Papers D and E give examples of the distributed simulation model approach proposed in paper A. Paper D presents simulation and optimisation of linear servo drives for a 3-axis gantry robot, using block systems modelling. The specified kinematic behaviour is simulated with multi body modelling, while drive systems and control system are modelled using a block system model for each drive. The block system model has been used for optimisation of the transmission and motor selection. Paper E presents an approach for re-using CAD geometry for multi body modelling of a rock drilling rig boom. Paper F presents synthesis methods for mechanical systems. Joint and part number synthesis is performed using the Grübler and Euler equations. The synthesis is continued by applying the theory of generative grammar, from which the grammatical rules of planar mechanisms have been formulated. An example of topological synthesis of mechanisms utilising this grammar is presented. 
Finally, dimensional synthesis of the mechanism is carried out by utilising non-linear programming with addition of a penalty function to avoid singularities.
keywords Simulation; Optimisation; Control Systems; Computer Aided Engineering; Multi Body Systems; Finite Element Method; Backlash; Clearance; Industrial Robots; Parallel Robots
series thesis:PhD
last changed 2003/02/12 22:37
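
The comparison in Paper C between an impact model and a simplified spring-damper model for joint clearance can be illustrated with a hedged sketch (all parameter values are assumed for illustration, not Makkonen's): the contact force is zero while the journal moves freely inside the radial clearance, so loss of contact shows up as intervals of zero force during the simulated motion.

```python
# Illustrative spring-damper model of a clearance joint (assumed values only):
# contact force acts only once the journal has crossed the radial clearance.
import math

def contact_force(x, v, clearance=1e-4, k=1e6, c=50.0):
    penetration = abs(x) - clearance      # how far beyond the clearance we are
    if penetration <= 0.0:
        return 0.0                        # free flight: contact is lost
    direction = -1.0 if x > 0 else 1.0    # push the journal back towards the centre
    return direction * k * penetration - c * v

# Explicit Euler integration of a 1 kg journal under a 40 Hz harmonic excitation.
x, v, dt, m, steps = 0.0, 0.0, 1e-5, 1.0, 200_000
free_flight = 0
for step in range(steps):
    t = step * dt
    f = 10.0 * math.sin(2 * math.pi * 40 * t) + contact_force(x, v)
    v += (f / m) * dt
    x += v * dt
    if contact_force(x, v) == 0.0:
        free_flight += 1
print(f"fraction of time without contact: {free_flight / steps:.2f}")
```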

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions; there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred-forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. 
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And many of the methods I used, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques   Figure 3 Trellis interpreted with "graphic ivy"   Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. 
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric" 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind" Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
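
Two of the mechanisms the abstract describes, shape "genes" as point lists and an HSV colour scheme with a variation setting, can be sketched as follows (a rough illustration with invented details; the per-vertex interpolation shown is just one of the several breeding strategies the author says he tried):

```python
# Illustrative sketch only: shape "genes" as closed point lists, bred by
# per-vertex interpolation, plus an HSV colour scheme with a variation setting.
import colorsys, math, random

def regular_polygon(n, radius=1.0):
    """A circle approximated by an n-sided regular polygon, as in the abstract."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def breed_shapes(a, b, bias=0.5):
    """Cross two shapes with equal vertex counts by blending their coordinates."""
    assert len(a) == len(b)
    return [((1 - bias) * ax + bias * bx, (1 - bias) * ay + bias * by)
            for (ax, ay), (bx, by) in zip(a, b)]

def hsv_scheme(hue, sat, val, variation, count=5):
    """Pick colours near (hue, sat, val); a larger variation departs further."""
    colours = []
    for _ in range(count):
        h = (hue + random.uniform(-variation, variation)) % 1.0
        s = min(max(sat + random.uniform(-variation, variation), 0.0), 1.0)
        v = min(max(val + random.uniform(-variation, variation), 0.0), 1.0)
        colours.append(colorsys.hsv_to_rgb(h, s, v))
    return colours

child = breed_shapes(regular_polygon(100), regular_polygon(100, radius=2.0))
print(len(child), "vertices;", hsv_scheme(0.08, 0.8, 0.9, variation=0.1)[0])
```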

_id 0c9c
authors Tweed, Christopher
year 1999
title Prescribing Designs
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 51-57
doi https://doi.org/10.52842/conf.ecaade.1999.051
summary Much of the debate and argument among CAAD researchers has turned on the degree to which CAAD systems limit the ways in which designers can express themselves. By defining representations for design objects and design functions, systems determine what it is possible to describe. Aart Bijl used the term 'prescriptiveness' to refer to this property of systems, and the need to overcome it was a major preoccupation of research at EdCAAD during the 1980s, including the development of the MOLE (Modelling Objects with Logic Expressions) system. But in trying to offer designers the freedom that was judged to be essential to evolving design practices, MOLE transferred much of the burden of programming from system developers to end-users - you can have any design objects you want, as long as you write the code. Close examination of MOLE's logic reveals that it too had to rely on fundamental definitions that, even if not domain-specific, are certainly historically contingent. This paper will return to the issue of prescriptiveness, summarising the lessons learned from the MOLE 'experiment,' and identifying new prescriptions that are deciding what designs can be. Looking beyond computer representations, we find that designs are shaped by much larger, and arguably more powerful, historical, social and cultural forces surrounding design practice. These forces are shaping the way CAAD is used and how new systems are conceived and developed.
keywords Bijl, Prescriptiveness
series eCAADe
email
last changed 2022/06/07 07:58

_id ffc7
authors Yakeley, Megan
year 1999
title Simultaneous Translation in Design: The Role of Computer Programming in Architectural Education
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 58-68
doi https://doi.org/10.52842/conf.ecaade.1999.058
summary In this paper it is proposed that architectural design involves simultaneous translation between several different languages and their corresponding systems of notation. The process of educating architects involves teaching fluency in these systems both separately and together. To improve pedagogical efficiency the physical manifestation of the languages - the graphical product - should be separated from the continuous expression of ideas in these languages - the conversational process. Digital media offer the opportunity to learn the process of translation between these systems, and thus form a strong foundation for the ability to design. Here a course taught at MIT by the author is described whose central theme is the development of design process through the use of the intermediary system of notation of a procedural programming language.
keywords Architectural Design Education, Emergent Rules, Systems of Notation, Grounded Theory
series eCAADe
email
last changed 2022/06/07 07:57

_id b7ff
authors Mullins, Michael and Van Zyl, Douw
year 2000
title Self-Selecting Digital Design Students
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 85-88
doi https://doi.org/10.52842/conf.ecaade.2000.085
summary Recent years have seen the increasing use of digital media in undergraduate architectural education at UND, which has been fuelled by students themselves taking up the tools available to practising architects. This process of self-selection may hold valuable lessons for the development of architectural curricula. An experimental design studio offered as an elective to UND undergraduates in 1999 has indicated that the design work produced therein most often differed remarkably from the previous work of the same students using only traditional media. In so far as digital environments rapidly provide new and strange objects and images for students to encounter, those students are driven to interpret, transform or customise that environment in innovative ways, thereby making it their own. It is clear that the full integration of digital environments into architectural education will profoundly affect the outcomes of student work. We have observed that some self-selecting students struggle in expressing ideas through representative form in traditional studios. The question arises whether these students are "onto something" which they intuitively understand as better suited to their abilities, or whether in fact they see digital tools as a means to avoid those areas in design in which they experience difficulties. Through observation of a group of "self-selectors" the authors attempt to draw useful generalisations; to develop a theory and method for facilitators to deal with specific students; and to work toward the development of suitable curricula for these cases.
keywords Architectural Education, Digital Media, Learning Styles
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:59

_id 9a1e
authors Clayton, Mark J. and Vasquez de Velasco, Guillermo
year 1999
title Stumbling, Backtracking, and Leapfrogging: Two Decades of Introductory Architectural Computing
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 151-158
doi https://doi.org/10.52842/conf.ecaade.1999.151
summary Our collective concept of computing and its relevance to architecture has undergone dramatic shifts in emphasis. A review of popular texts from the past reveals the biases and emphases that were current. In the seventies, architectural computing was generally seen as an elective for data processing specialists. In the early eighties, personal computers and commercial CAD systems were widely adopted. Architectural computing diverged from the "batch" world into the "interactive" world. As personal computing matured, introductory architectural computing courses turned away from a foundation in programming toward instruction in CAD software. By the late eighties, Graphic User Interfaces and windowing operating systems had appeared, leading to a profusion of architecturally relevant applications that needed to be addressed in introductory computing. The introduction of desktop 3D modeling in the early nineties led to increased emphasis upon rendering and animation. The past few years have added new emphases, particularly in the area of network communications, the World Wide Web and Virtual Design Studios. On the horizon are topics of electronic commerce and knowledge markets. This paper reviews these past and current trends and presents an outline for an introductory computing course that is relevant to the year 2000.
keywords Computer-Aided Architectural Design, Computer-Aided Design, Computing Education, Introductory Courses
series eCAADe
email
last changed 2022/06/07 07:56

_id ga9921
id ga9921
authors Coates, P.S. and Hazarika, L.
year 1999
title The use of genetic programming for applications in the field of spatial composition
source International Conference on Generative Art
summary Architectural design teaching using computers has been a preoccupation of CECA since 1991. All design tutors provide their students with a set of models and ways to form, and we have explored a set of approaches including cellular automata, genetic programming, agent-based modelling and shape grammars as additional tools with which to explore architectural (and architectonic) ideas. This paper discusses the use of genetic programming (G.P.) for applications in the field of spatial composition. CECA has been developing the use of Genetic Programming for some time (see references) and has covered the evolution of L-Systems production rules (Coates 1997, 1999b) and the evolution of generative grammars of form (Coates 1998, 1999a). The G.P. was used to generate three-dimensional spatial forms from a set of geometrical structures. The approach uses genetic programming with a Genetic Library (G.Lib). G.P. provides a way to genetically breed a computer program to solve a problem; G.Lib enables genetic programming to define potentially useful subroutines dynamically during a run. The work includes: * Exploring a shape grammar consisting of simple solid primitives and transformations. * Applying a simple fitness function to the solid breeding G.P. * Exploring a shape grammar of composite surface objects. * Developing grammars for existing buildings, and creating hybrids. * Exploring the shape grammar of a building within a G.P. We will report on new work using a range of different morphologies (boolean operations, surface operations and grammars of style) and describe the use of objective functions (natural selection) and the "eyeball test" (artificial selection) as ways of controlling and exploring the design spaces thus defined.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
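
In the spirit of the breeding-with-selection process described above, a toy evolutionary loop might look like the following sketch (the rule set, fitness function and parameters are invented placeholders, not CECA's genetic programming system):

```python
# Toy evolutionary loop over sequences of solid-placement "rules".
# Rule names, fitness and parameters are invented stand-ins, not CECA's G.P.
import random

RULES = ["move_x", "move_y", "move_z", "copy", "scale_up", "scale_down"]

def random_individual(length=8):
    return [random.choice(RULES) for _ in range(length)]

def fitness(individual):
    """Crude objective: reward copies spread along several axes."""
    axes_used = {rule for rule in individual if rule.startswith("move")}
    return individual.count("copy") * len(axes_used)

def mutate(individual, rate=0.2):
    return [random.choice(RULES) if random.random() < rate else rule
            for rule in individual]

population = [random_individual() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # "natural selection"
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print("best rule sequence:", best, "fitness:", fitness(best))
```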

_id ga9920
id ga9920
authors Daru, Roel
year 1999
title Hunting Design Memes in the Architectural Studio, student projects as a source of memetic analysis
source International Conference on Generative Art
summary The current practice in design programming is to generate forms based on preconceptions of what architectural design is supposed to be. But to offer adequate morphogenetic programs for architectural design processes, we should identify the diversity of types of cultural replicators applied by a variety of architectural designers. In order to explore the variety of replicators actually used, around one hundred 4th-year architecture students were asked to analyse two or three of their own past design assignments. The students were invited to look for the occurrence of evolutionary design processes. They were requested to try and find some traces of 'transmission', 'variation' and 'selection' in their own design assignments. The paper will present an overview of their answers, the arguments applied and the diversity of the types of verbal and visual design memes found as cultural replicators. A discussion about the applicability of the results found in the genotypes and phenotypes of morphogenetic design software will conclude the presentation.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 04fd
authors Karhu, V. and Lahdenpera, P.
year 1999
title A formalised process model of current Finnish design and construction practice
source The Int. Journal of Construction IT 7(1), pp. 51-71
summary There is a need for improved co-ordination to enhance the performance of the building process. The process involves many parties and the communication and interfaces need special attention. Conventionally, the processes of parties are carried out independently, each discipline having its own activities and limits. As a precursor to improving the overall process, formal process modelling may be used to clarify the activities, information flows and the responsibilities of the different parties. The model presented in this paper divides the Finnish construction process into six main stages: briefing, programming, global design, detailed design, construction and hand-over. In developing the model, all these stages were covered - the main focus being on the functions and flows of the process since these were found to be the most critical in the development of the building procedures. The IDEF0 method was used as the modelling technique. It is shown how the developed reference model can be subjected to various view-dependent examinations and that the modelling approach supports process re-engineering and improvement efforts as well as a new means of building process management, especially when combined with modern computer-aided applications.
series journal paper
last changed 2003/05/15 21:45
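
The kind of activity model the abstract refers to can be sketched in code as a set of IDEF0-style records (the six stage names come from the abstract; the example inputs and outputs are illustrative, not the published model): each activity lists its inputs, controls, outputs and mechanisms, so the flows between stages can be traced.

```python
# IDEF0-style activity records (ICOM: Inputs, Controls, Outputs, Mechanisms).
# Stage names follow the abstract; the example flows are invented.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    inputs: list = field(default_factory=list)
    controls: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    mechanisms: list = field(default_factory=list)

stages = [
    Activity("briefing", inputs=["client needs"], outputs=["brief"]),
    Activity("programming", inputs=["brief"], outputs=["building programme"]),
    Activity("global design", inputs=["building programme"], outputs=["global design"]),
    Activity("detailed design", inputs=["global design"], outputs=["detailed design"]),
    Activity("construction", inputs=["detailed design"], outputs=["building"]),
    Activity("hand-over", inputs=["building"], outputs=["accepted building"]),
]

def trace_flows(activities):
    """Print which stage's output feeds which other stage's input."""
    for producer in activities:
        for consumer in activities:
            for item in set(producer.outputs) & set(consumer.inputs):
                print(f"{producer.name} --[{item}]--> {consumer.name}")

trace_flows(stages)
```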

_id 39cb
authors Kelleners, Richard H.M.C.
year 1999
title Constraints in object-oriented graphics
source Eindhoven University of Technology
summary In the area of interactive computer graphics, two important approaches to deal with the complexity of designing and implementing graphics systems are object-oriented programming and constraint-based programming. From literature, it appears that combination of these two has clear advantages but has also proven to be difficult. One of the main problems is that constraint programming infringes the information hiding principle of object-oriented programming. The goal of the research project is to combine these two approaches to benefit from the strengths of both. Two research groups at the Eindhoven University of Technology investigate the use of constraints on graphics objects. At the Architecture department, constraints are applied in a virtual reality design environment. At the Computer Science department, constraints aid in modeling 3D animations. For these two groups, a constraint system for 3D graphical objects was developed. A conceptual model, called CODE (Constraints on Objects via Data flows and Events), is presented that enables integration of constraints and objects by separating the object world from the constraint world. In the design of this model, the main aspect being considered is that the information hiding principle among objects may not be violated. Constraint solvers, however, should have direct access to an object’s internal data structure. Communication between the two worlds is done via a protocol orthogonal to the message passing mechanism of objects, namely, via events and data flows. This protocol ensures that the information hiding principle at the object-oriented programming level is not violated while constraints can directly access “hidden” data. Furthermore, CODE is built up of distinct elements, or entity types, like constraint, solver, event, data flow. This structure enables that several special purpose constraint solvers can be defined and made to cooperate to solve complex constraint problems. A prototype implementation was built to study the feasibility of CODE. Therefore, the implementation should correspond directly to the conceptual model. To this end, every entity (object, constraint, solver) of the conceptual model is represented by a separate process in the language MANIFOLD. The (concurrent) processes communicate by events and data flows. The implementation serves to validate the conceptual model and to demonstrate that it is a viable way of combining constraints and objects. After the feasibility study, the prototype was discarded. The gained experience was used to build an implementation of the conceptual model for the two research groups. This implementation encompassed a constraint system with multiple solvers and constraint types. The constraint system was built as an object-oriented library that can be linked to the applications in the respective research groups. Special constructs were designed to ensure information hiding among application objects while constraints and solvers have direct access to the object data. CODE manages the complexity of object-oriented constraint solving by defining a communication protocol to allow the two paradigms to cooperate. The prototype implementation demonstrates that CODE can be implemented into a working system. Finally, the implementation of an actual application shows that the model is suitable for the development of object-oriented software.
keywords Computer Graphics; Object Oriented Programming; Constraint Programming
series thesis:PhD
last changed 2003/02/12 22:37
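
A rough sketch of the separation CODE describes, with all class and method names invented: application objects expose their state only through data-flow ports that raise change events, and a constraint reacts to those events and writes back through the flow, so solving never goes through the objects' ordinary message interface.

```python
# Sketch of constraints talking to objects via events and data flows, in the
# spirit of CODE. All names are invented; this is not the thesis implementation.
class Port:
    """A data-flow endpoint that also raises a change event."""
    def __init__(self, value=0.0):
        self._value = value
        self._listeners = []
    def subscribe(self, callback):
        self._listeners.append(callback)
    def write(self, value):
        if value != self._value:          # avoid endless propagation loops
            self._value = value
            for callback in self._listeners:
                callback(value)
    def read(self):
        return self._value

class GraphicsObject:
    """Object world: internal state is reached only through its ports."""
    def __init__(self, x):
        self.x_port = Port(x)

class EqualityConstraint:
    """Constraint world: keeps two ports equal by reacting to change events."""
    def __init__(self, a, b):
        a.subscribe(b.write)
        b.subscribe(a.write)

left, right = GraphicsObject(0.0), GraphicsObject(5.0)
EqualityConstraint(left.x_port, right.x_port)
left.x_port.write(3.0)
print(right.x_port.read())   # 3.0: the constraint propagated the change
```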

_id 20ab
authors Yakeley, Megan
year 2000
title Digitally Mediated Design: Using Computer Programming to Develop a Personal Design Process
source Massachusetts Institute of Technology, Department of Architecture
summary This thesis is based on the proposal that the current system of architectural design education confuses product and process. Students are assessed through, and therefore concentrate on, the former whilst the latter is left in many cases to chance. This thesis describes a new course taught by the author at MIT for the last three years whose aim is to teach the design process away from the complexities inherent in the studio system. This course draws a parallel between the design process and the Constructionist view of learning, and asserts that the design process is a constant learning activity. Therefore, learning about the design process necessarily involves learning the cognitive skills of this theoretical approach to education. These include concrete thinking and the creation of external artifacts to develop ideas through iterative, experimental, incremental exploration. The course mimics the Constructionist model of using the computer programming environment LOGO to teach mathematics. It uses computer programming in a CAD environment, and specifically the development of a generative system, to teach the design process. The efficacy of such an approach to architectural design education has been studied using methodologies from educational research. The research design used an emergent qualitative model, employing Maykut and Morehouse's interpretive descriptive approach (Maykut & Morehouse, 1994) and Glaser and Strauss's Constant Comparative Method of data analysis (Glaser & Strauss, 1967). Six students joined the course in the Spring 1999 semester. The experience of these students, what and how they learned, and whether this understanding was transferred to other areas of their educational process, were studied. The findings demonstrated that computer programming in a particular pedagogical framework can help transform the way in which students understand the process of designing. The following changes were observed in the students during the course of the year: Development of understanding of a personalized design process; move from using computer programming to solve quantifiable problems to using it to support qualitative design decisions; change in understanding of the paradigm for computers in the design process; awareness of the importance of intrapersonal and interpersonal communication skills; change in expectations of, their sense of control over, and appropriation of, the computer in the design process; evidence of transference of cognitive skills; change from a Behaviourist to a Constructionist model of learning. Thesis Supervisor: William J. Mitchell. Title: Professor of Architecture and Media Arts and Sciences, School of Architecture and Planning
series thesis:PhD
last changed 2003/02/12 22:37
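
The course's central device, developing a generative system through programming in a CAD environment, might be illustrated by a LOGO-like sketch (hypothetical, not from the thesis): a few lines of turtle-style code produce a family of related forms whose character changes as parameters are varied, the kind of external, incrementally refined artefact the Constructionist framing relies on.

```python
# Hypothetical LOGO-like generative sketch: a parametric spiral of polygon moves.
import math

def generate(sides, turns, growth):
    """Return the 2D points a turtle would trace for these parameters."""
    x = y = heading = 0.0
    step = 1.0
    points = [(x, y)]
    for _ in range(sides * turns):
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        points.append((round(x, 2), round(y, 2)))
        heading += 2 * math.pi / sides   # turn by the polygon's exterior angle
        step *= growth                   # let the figure grow with each move
    return points

# Varying the parameters yields a family of related forms.
for sides, growth in [(4, 1.05), (6, 1.10), (9, 1.02)]:
    print(sides, "sides, growth", growth, "->", len(generate(sides, 3, growth)), "points")
```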

_id 7674
authors Bourdakis, Vassilis and Charitos, Dimitrios
year 1999
title Virtual Environment Design - Defining a New Direction for Architectural Education
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 403-409
doi https://doi.org/10.52842/conf.ecaade.1999.403
summary This paper considers the design and development of virtual environments (VEs) and the way that it relates to traditional architectural education and practice. The need for practitioners who will contribute to the design of 3D content for multimedia and virtual reality applications is identified. The design of space in a VE is seen as being partly an architectural problem. Therefore, architectural design should play an important role in educating VE designers. Other disciplines, intrinsically related to the issue of VE design, are also identified. Finally, this paper aims at pointing out the need for a new direction within architectural education, which will lead towards a generation of VE architects.
keywords Virtual Environments, Architectural Design, Architectural Education
series eCAADe
email
last changed 2022/06/07 07:54

_id 2c1d
authors Castañé, D., Tessier, C., Álvarez, J. and Deho, C.
year 1999
title Patterns for Volumetric Recognition - Guidelines for the Creation of 3D-Models
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 171-175
summary This paper proposes new strategies and pedagogic methodologies applied to the recognition and study of the underlying measurements of the architectural projects to be created. This proposal is the product of pedagogic experience, stemming from the teaching team of the department of tri-dimensional electronic models. This program constitutes an elective track for the architecture major at the College of Architecture, Design and Urbanism of the University of Buenos Aires and is housed at the CAO center. One of the requirements that the students must complete, after doing research and analytical experimentation with the knowledge acquired in this course, is to practice the attained skills through exercises proposed by the department; in this case, the student is required to virtually rebuild a paradigmatic architectonic work by one of several sample architects. Usually at this point, students experience some difficulties when they analyze the existing documentation (plans, views, pictures, details, texts, etc.) that they have obtained from magazines, books and other sources. Afterwards, when they begin to digitally generate the basic measurements of the architectural work to be modeled, they realize that there are great limitations in their tri-dimensional understanding of the work. This issue has led us to investigate and develop proposals for volumetric understanding through patterns, using examples of works already analyzed and digitalized tri-dimensionally in the department. Through a careful study of the existing documentation for a particular work, the paths and bases to adopt, using alternative technologies, are evaluated in order to arrive at a clear reconstruction of the projected architectural work. The study is completed by implementing the proposal at the internet site http://www.datarq.fadu.uba.ar/catedra/dorcas
series SIGRADI
email
last changed 2016/03/10 09:48

_id 0dc3
authors Chambers, Tom and Wood, John B.
year 1999
title Decoding to 2000 CAD as Mediator
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 210-216
doi https://doi.org/10.52842/conf.ecaade.1999.210
summary This paper will present examples of current practice in the Design Studio course of the BDE, University of Strathclyde. The paper will demonstrate an integrated approach to teaching design, which includes CAD among other visual communication techniques as a means to exploring design concepts and the presentation of complex information as part of the design process. It will indicate how the theoretical dimension is used to direct the student in their areas of independent study. Projects illustrated will include design precedents that have involved students in the review and assessment of landmarks in the history of design. There will be evidence of how students integrate DTP in the presentation of site analysis, research of appropriate design precedents and presentation of their design solutions. CADET underlines the importance of considering design solutions within the context of both our European cultural context and of assessing the environmental impact of design options, for which CAD is eminently suited. As much as a critical method is essential to the development of the design process, a historical perspective and an appreciation of the sophistication of communicative media will inform the analysis of structural form and meaning in a modern urban context. Conscious of the dynamic of social and historical influences in design practice, the student is enabled "to take a critical stand against the dogmatism of the school" (Gadamer, 1988) that inevitably insinuates itself in learning institutions and professional practice.
keywords Design Studio, Communication, Integrated Teaching
series eCAADe
email
last changed 2022/06/07 07:56

_id ad51
authors Chastain, Th., Kalay, Y.E. and Peri, Ch.
year 1999
title Square Peg in a Round Hole or Horseless Carriage? Reflections on the Use of Computing in Architecture
source Media and Design Process [ACADIA ‘99 / ISBN 1-880250-08-X] Salt Lake City 29-31 October 1999, pp. 4-15
doi https://doi.org/10.52842/conf.acadia.1999.004
summary We start with two paradigms that have been used to describe the relationship of computation methods and tools to the production of architecture. The first is that of forcing a square peg into a round hole, implying that the use of a tool is misdirected, or at least poorly fits the processes that have traditionally been part of an architectural design practice. In doing so, the design practice suffers from the use of new technology. The other paradigm describes a state of transformation in relationship to new technology as a horseless carriage in which the process is described in obsolete and ‘backward’ terms. The implication is that there is a lack of appreciation for the emerging potentials of technology to change our relationship with the task. The paper demonstrates these two paradigms through the invention of drawings in the 14th century, which helped to define the profession of Architecture. It then goes on to argue that modern computational tools follow the same paradigms, and like drawings, stand to bring profound changes to the profession of architecture as we know it.
series ACADIA
email
last changed 2022/06/07 07:55

_id 1ea1
authors Cheng, Nancy Yen-wen
year 1999
title Digital Design at UO
source ACADIA Quarterly, vol. 18, no. 4, p. 18
doi https://doi.org/10.52842/conf.acadia.1999.x.l0k
summary University of Oregon Architecture Department has developed a spectrum of digital design from introductory methods courses to advanced design studios. With a computing curriculum that stresses a variety of tools, architectural issues such as form-making, communication, collaboration, theory-driven design, and presentation are explored. During the first year, all entering students are required to learn 3D modeling, rendering, image-processing and web-authoring in our Introduction to Architectural Computer Graphics course. Through the use of cross-platform software, the two hundred beginning students are able to choose to work in either MacOS or Windows. Students begin learning the software by ‘playing’ with geometric elements and further develop their control by describing assigned architectural monuments. In describing the monuments, they begin with 2D diagrams and work up to complete 3D compositions, refining their models with symbol libraries. By visualizing back and forth between the drafting and modeling modes, the students quickly connect orthogonal plans and sections with their spatial counterparts. Such connections are an essential foundation for further learning.
series ACADIA
email
last changed 2022/06/07 07:49

_id 5a10
authors Cheng, Nancy Yen-Wen
year 1999
title Playing with Digital Media: Enlivening Computer Graphics Teaching
source Media and Design Process [ACADIA ‘99 / ISBN 1-880250-08-X] Salt Lake City 29-31 October 1999, pp. 96-109
doi https://doi.org/10.52842/conf.acadia.1999.096
summary Are there better ways of getting a student to learn? Getting students to play at learning can encourage comprehension by engaging their attention. Rather than having students' fascination with video games and entertainment limited to competing against learning, we can direct this interest towards learning computer graphics. We hypothesize that topics having a recreational component increase the learning curve for digital media instruction. To test this, we have offered design media projects with a playful element as a counterpart to more step-by-step descriptive exercises. Four kinds of problems, increasing in difficulty, are discussed in the context of computer aided architectural design education: 1) geometry play, 2) kit of parts, 3) dreams from childhood and 4) transformations. The problems engage the students in different ways: through playing with form, by capturing their imagination and by encouraging interaction. Each type of problem exercises specific design skills while providing practice with geometric modeling and rendering. The problems are sequenced from most constrained to most free, providing achievable milestones with focused objectives. Compared to descriptive assignments and more serious architectural problems, these design-oriented exercises invite experimentation by lowering risk, and neutralize stylistic questions by taking design out of the traditional architectural context. Used in conjunction with the modeling of case studies, they engage a wide range of students by addressing different kinds of issues. From examining the results of the student work, we conclude that play as a theme encourages a greater degree of participation and comprehension.
series ACADIA
email
last changed 2022/06/07 07:55

_id ga9907
id ga9907
authors Ciao, Quinsan
year 1999
title Breeds of Artificial Design: Design Thinking in Computing Creation
source International Conference on Generative Art
summary There are many different paradigms or breeds of artificial design schemes. They each address artificial design from a different perspective. For instance, design by optimization emphasizes the iterative "trial-and-error" process of alternating generation and evaluation. Design by argumentation addresses the need for objectifying and communicating design thinking. Design by rules attempts to summarize design knowledge into recipes. Design by simulation and electronic media offers a forum for design trial evaluation. Case-based design emphasizes experience-based design thinking. Fuzzy reasoning systems provide a computing medium to model and execute design reasoning. Although different, all of these paradigms are related and complement each other. Unification or collaboration of these different paradigms may lie ahead for future research and practice of artificial design.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
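
One of the paradigms listed above, design by optimization as an iterative "trial-and-error" alternation of generation and evaluation, can be sketched minimally as follows (the design variables, objective and move rule are invented placeholders):

```python
# Toy "generate then evaluate" loop for design by optimization.
# Design variables, objective and step size are invented placeholders.
import random

def evaluate(design):
    """Score a candidate: prefer a room near 20 m2 with roughly a 1:1.6 plan ratio."""
    width, depth = design
    area_penalty = abs(width * depth - 20.0)
    ratio_penalty = abs(depth / width - 1.6)
    return -(area_penalty + ratio_penalty)          # higher is better

def generate(current, step=0.2):
    """Trial-and-error move: perturb the current design slightly."""
    return tuple(max(1.0, v + random.uniform(-step, step)) for v in current)

design = (3.0, 3.0)
for _ in range(2000):
    candidate = generate(design)
    if evaluate(candidate) > evaluate(design):      # keep improvements only
        design = candidate

print("width, depth:", tuple(round(v, 2) for v in design),
      "score:", round(evaluate(design), 3))
```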
