CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design, supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 700

_id 83cb
authors Telea, Alexandru C.
year 2000
title Visualisation and simulation with object-oriented networks
source Eindhoven University of Technology
summary Among the existing systems, visual programming environments best address these issues. However, producing interactive simulations and visualisations is still a difficult task. This defines the main research objective of this thesis: the development and implementation of concepts and techniques to combine visualisation, simulation, and application construction in an interactive, easy-to-use, generic environment. The aim is to produce an environment in which the above-mentioned activities can be learnt and carried out easily by a researcher. Working with such an environment should decrease the amount of time usually spent in redesigning existing software elements such as graphics interfaces, existing computational modules, and general infrastructure code. Writing new computational components or importing existing ones should be simple and automatic enough to make using the envisaged system an attractive option for a non-programmer expert. Besides this, all proven successful elements of an interactive simulation and visualisation environment should be provided, such as visual programming, graphics user interfaces, direct manipulation, and so on. Finally, a large palette of existing scientific computation, data processing, and visualisation components should be integrated in the proposed system. On one hand, this should prove our claims of openness and easy code integration. On the other hand, this should provide the concrete set of tools needed for building a range of scientific applications and visualisations. This thesis is structured as follows. Chapter 2 defines the context of our work. The scientific research environment is presented and partitioned into the three roles of end user, application designer, and component developer. The interactions between these roles and their specific requirements are described and lead to a more precise formulation of our problem statement. Chapter 3 presents the most used architectures for simulation and visualisation systems: the monolithic system, the application library, and the framework. The advantages and disadvantages of these architectural models are then discussed in relation to the requirements of our problem statement. The main conclusion drawn is that no single existing architectural model suffices, and that what is needed is a combination of the features present in all three models. Chapter 4 introduces the new architectural model we propose, based on the combination of object orientation in the form of the C++ language and dataflow modelling in the new MC++ language. Chapter 5 presents VISSION, an interactive simulation and visualisation environment constructed on the introduced new architectural model, and shows how the usual tasks of application construction, steering, and visualisation are addressed. In chapter 6, the implementation of VISSION’s architectural model is described in terms of its component parts. Chapter 7 presents the applications of VISSION to numerical simulation, while chapter 8 focuses on its visualisation and graphics applications. Finally, chapter 9 concludes the thesis and outlines possible directions for future research.
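The central idea described above, wiring object-oriented computational components into a dataflow network that is evaluated on demand, can be sketched briefly. The following is an illustrative Python sketch only; the thesis itself uses C++ components and the MC++ dataflow language, and the node and function names below are hypothetical.

    # Illustrative sketch: a minimal pull-based dataflow network of
    # object-oriented components (not the thesis's C++/MC++ implementation).
    class Node:
        """A computational component with named inputs, evaluated on demand."""
        def __init__(self, func, **inputs):
            self.func = func        # the wrapped computation
            self.inputs = inputs    # upstream Nodes or constant values

        def set_input(self, name, value):
            self.inputs[name] = value   # interactive steering: change a parameter

        def value(self):
            # pull-based evaluation: resolve upstream nodes, then run this node
            resolved = {k: (v.value() if isinstance(v, Node) else v)
                        for k, v in self.inputs.items()}
            return self.func(**resolved)

    # Wire a tiny pipeline: data source -> scaling filter -> "rendering" step.
    source = Node(lambda n: list(range(n)), n=10)
    scaled = Node(lambda data, factor: [x * factor for x in data],
                  data=source, factor=2.0)
    render = Node(lambda data: "plot of %d samples, max %s" % (len(data), max(data)),
                  data=scaled)

    print(render.value())            # evaluate the whole network on demand
    scaled.set_input("factor", 0.5)  # steer the running "simulation"
    print(render.value())            # re-evaluates with the new parameter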
keywords Computer Visualisation
series thesis:PhD
email
last changed 2003/02/12 22:37

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions; there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simple closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques Figure 3 Trellis interpreted with "graphic ivy" Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
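The shape-breeding experiment described above treats a closed polygon's vertex list as its genes. Below is a minimal Python sketch of one such crossover (resampling both parents to the same vertex count and blending coordinates); it illustrates the representation only and is not Ransen's actual operator, which the abstract notes gave unsatisfactory offspring.

    # Sketch of crossing two closed polygonal shapes whose "genes" are their
    # vertex lists. Simple coordinate blending, as noted in the abstract, tends
    # to produce amorphous children; this only illustrates the representation.
    import math

    def resample(polygon, n):
        """Resample a closed polygon (list of (x, y)) to n points by arc length."""
        pts = polygon + [polygon[0]]
        seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(polygon))]
        total = sum(seg)
        out, walked, i = [], 0.0, 0
        for k in range(n):
            target = total * k / n
            while walked + seg[i] < target:
                walked += seg[i]
                i += 1
            t = (target - walked) / seg[i]
            (x0, y0), (x1, y1) = pts[i], pts[i + 1]
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        return out

    def crossover(parent_a, parent_b, weight=0.5, n=100):
        """Blend corresponding vertices of two resampled parent shapes."""
        a, b = resample(parent_a, n), resample(parent_b, n)
        return [(ax + weight * (bx - ax), ay + weight * (by - ay))
                for (ax, ay), (bx, by) in zip(a, b)]

    # Example: cross a circle (regular 100-gon) with a square.
    circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
              for k in range(100)]
    square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
    child = crossover(circle, square, weight=0.5)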
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id b4c4
authors Carrara, G., Fioravanti, A. and Novembri, G.
year 2000
title A framework for an Architectural Collaborative Design
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 57-60
doi https://doi.org/10.52842/conf.ecaade.2000.057
summary The building industry involves a larger number of disciplines, operators and professionals than other industrial processes. Its peculiarity is that the products (building objects) have a number of parts (building elements) that does not differ much from the number of classes into which building objects can be conceptually subdivided. Another important characteristic is that the building industry produces unique products (de Vries and van Zutphen, 1992). This is not an isolated situation but indeed one that is spreading also in other industrial fields. For example, production niches have proved successful in the automotive and computer industries (Carrara, Fioravanti, & Novembri, 1989). Building design is a complex multi-disciplinary process, which demands a high degree of co-ordination and co-operation among separate teams, each having its own specific knowledge and its own set of specific design tools. Establishing an environment for design tool integration is a prerequisite for network-based distributed work. Attempts have been made to solve the problem of efficient, user-friendly, and fast information exchange among operators by treating it simply as an exchange of data. But the failure of IGES, CGM and PHIGS confirms that data have different meanings and importance in different contexts. The STandard for Exchange of Product data, ISO 10303 Part 106 BCCM, relating to the AEC field (Wix, 1997), seems to be too complex to be applied to professional studios. Moreover, its structure is too deep and the conceptual classifications based on it do not allow multiple inheritance (Ekholm, 1996). From now on we shall adopt the BCCM semantics, which defines the actor as "a functional participant in building construction"; and we shall define designer as "every member of the class formed by designers" (architects, engineers, town-planners, construction managers, etc.).
keywords Architectural Design Process, Collaborative Design, Knowledge Engineering, Dynamic Object Oriented Programming
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:55

_id avocaad_2001_22
id avocaad_2001_22
authors Jos van Leeuwen, Joran Jessurun
year 2001
title XML for Flexibility an Extensibility of Design Information Models
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary The VR-DIS research programme aims at the development of a Virtual Reality – Design Information System. This is a design and decision support system for collaborative design that provides a VR interface for the interaction with both the geometric representation of a design and the non-geometric information concerning the design throughout the design process. The major part of the research programme focuses on early stages of design. The programme is carried out by a large number of researchers from a variety of disciplines in the domain of construction and architecture, including architectural design, building physics, structural design, construction management, etc. Management of design information is at the core of this design and decision support system. Much effort in the development of the system has been and still is dedicated to the underlying theory for information management and its implementation in an Application Programming Interface (API) that the various modules of the system use. The theory is based on a so-called Feature-based modelling approach and is described in the PhD thesis by [first author, 1999] and in [first author et al., 2000a]. This information modelling approach provides three major capabilities: (1) it allows for extensibility of conceptual schemas, which is used to enable a designer to define new typologies to model with; (2) it supports sharing of conceptual schemas, called type-libraries; and (3) it provides a high level of flexibility that offers the designer the opportunity to easily reuse design information and to model information constructs that are not foreseen in any existing typologies. The latter aspect involves the capability to expand information entities in a model with relationships and properties that are not typologically defined but applicable to a particular design situation only; this helps the designer to represent the actual design concepts more accurately. The functional design of the information modelling system is based on a three-layered framework. In the bottom layer, the actual design data is stored in so-called Feature Instances. The middle layer defines the typologies of these instances in so-called Feature Types. The top layer is called the meta-layer because it provides the class definitions for both the Types layer and the Instances layer; both Feature Types and Feature Instances are objects of the classes defined in the top layer. This top layer ensures that types can be defined on the fly and that instances can be created from these types, as well as expanded with non-typological properties and relationships while still conforming to the information structures laid out in the meta-layer. The VR-DIS system consists of a growing number of modules for different kinds of functionality in relation to the design task. These modules access the design information through the API that implements the meta-layer of the framework. This API has previously been implemented using an Object-Oriented Database (OODB), but this implementation had a number of disadvantages. The dependency on the OODB, a commercial software library, was considered the most problematic. Not only are licenses of the OODB library rather expensive; the fact that this library is not common technology that can easily be shared among a wide range of applications, including existing applications, also reduces its suitability for a system with the aforementioned specifications.
In addition, the OODB approach required a relatively large effort to implement the desired functionality. It lacked adequate support to generate unique identifications for worldwide information sources that were understandable for human interpretation. This strongly limited the capabilities of the system to share conceptual schemas. The approach that is currently being implemented for the core of the VR-DIS system is based on eXtensible Markup Language (XML). Rather than implementing the meta-layer of the framework into classes of Feature Types and Feature Instances, this level of meta-definitions is provided in a document type definition (DTD). The DTD is complemented with a set of rules that are implemented into a parser API, based on the Document Object Model (DOM). The advantages of the XML approach for the modelling framework are immediate. Type-libraries distributed through the Internet are now supported through the mechanisms of namespaces and XLink. The implementation of the API is no longer dependent on a particular database system. This provides much more flexibility in the implementation of the various modules of the VR-DIS system. Being based on XML, which is expected to become a standard, the implementation is much more versatile in its future usage, specifically in a distributed, Internet-based environment. These immediate advantages of the XML approach opened the door to a wide range of applications that are and will be developed on top of the VR-DIS core. Examples of these are the VR-based 3D sketching module [VR-DIS ref., 2000]; the VR-based information-modelling tool that allows the management and manipulation of information models for design in a VR environment [VR-DIS ref., 2000]; and a design-knowledge capturing module that is now under development [first author et al., 2000a and 2000b]. The latter module aims to assist the designer in the recognition and utilisation of existing and new typologies in a design situation. The replacement of the OODB implementation of the API by the XML implementation enables these modules to use distributed Feature databases through the Internet, without many changes to their own code, and without the loss of the flexibility and extensibility of conceptual schemas that are implemented as part of the API. Research in the near future will result in Internet-based applications that support designers in the utilisation of distributed libraries of product-information, design-knowledge, case-bases, etc. The paper roughly follows the outline of the abstract, starting with an introduction to the VR-DIS project, its objectives, and the developed theory of the Feature-modelling framework that forms the core of it. It briefly discusses the necessity of schema evolution, flexibility and extensibility of conceptual schemas, and how these capabilities have been addressed in the framework. The major part of the paper describes how the previously mentioned aspects of the framework are implemented in the XML-based approach, providing details on the so-called meta-layer, its definition in the DTD, and the parser rules that complement it. The impact of the XML approach on the functionality of the VR-DIS modules and the system as a whole is demonstrated by a discussion of these modules and scenarios of their usage for design tasks. The paper is concluded with an overview of future work on the sharing of Internet-based design information and design knowledge.
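The meta-layer described above defines Feature Types and Feature Instances in a DTD and exposes them through a DOM-based parser API. The short Python sketch below reads a document of that general shape with the standard DOM; the element and attribute names are hypothetical, since the actual VR-DIS DTD is not reproduced in this abstract.

    # Sketch of reading a Feature Type / Feature Instance document with a DOM
    # parser, in the spirit of the XML-based meta-layer the abstract describes.
    # Element and attribute names are hypothetical, not the actual VR-DIS DTD.
    from xml.dom import minidom

    DOCUMENT = """<featureModel>
      <featureType name="Wall">
        <property name="height" datatype="length"/>
        <property name="material" datatype="string"/>
      </featureType>
      <featureInstance type="Wall" id="wall-01">
        <value property="height">2.7</value>
        <value property="material">brick</value>
        <!-- a non-typological property added 'on the fly' for this design only -->
        <value property="acousticRating">42</value>
      </featureInstance>
    </featureModel>"""

    dom = minidom.parseString(DOCUMENT)

    for ftype in dom.getElementsByTagName("featureType"):
        props = [p.getAttribute("name")
                 for p in ftype.getElementsByTagName("property")]
        print("type %s defines %s" % (ftype.getAttribute("name"), props))

    for inst in dom.getElementsByTagName("featureInstance"):
        values = {v.getAttribute("property"): v.firstChild.data.strip()
                  for v in inst.getElementsByTagName("value")}
        print("instance %s of type %s: %s"
              % (inst.getAttribute("id"), inst.getAttribute("type"), values))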
series AVOCAAD
email
last changed 2005/09/09 10:48

_id ec4d
authors Croser, J.
year 2001
title GDL Object
source The Architect’s Journal, 14 June 2001, pp. 49-50
summary It is all too common for technology companies to seek a new route to solving the same problem, but for the most part the solutions address the effect and not the cause. The good old-fashioned pencil is the perfect example, where inventors have sought to design out the effect of the inherent brittleness of lead. Traditionally, different methods of sharpening were suggested; more recently the propelling pencil has reigned king, the lead being supported by the dispensing sleeve, thus reducing the likelihood of breakage. Developers convinced by the Single Building Model approach to design development have each embarked on a difficult journey to create an easy-to-use, feature-packed application. Unfortunately it seems that the two are not mutually compatible if we are to believe what we see emanating from technology giant Autodesk in the guise of Architectural Desktop 3. The effect of their development is a feature-rich environment, but the cost, and in this case the cause, is a tool which is far from easy to use. However, this is only a small part of a much bigger problem: interoperability. You see, when one designer develops a model with one tool, the information is typically locked in that environment. Of course the geometry can be distributed and shared amongst the team for use with their tools, but the properties, or as they are often misquoted, the intelligence, are lost along the way. The effect is the technological version of rubble; the cause is the low quality of data translation available to us. Fortunately there is one company which is making rapid advances on the whole issue of collaboration and data sharing. An old timer (Graphisoft - famous for ArchiCAD) has just donned a smart new suit, set up a new company called GDL Technology and stepped into the ring to do battle, with a difference. The difference is that GDL Technology does not rely on conquering the competition, quite the opposite: in fact their success relies upon the continued success of all the major CAD platforms including AutoCAD, MicroStation and ArchiCAD (of course). GDL Technology have created a standard data format for manufacturers called GDL Objects. Product manufacturers such as Velux are now able to develop product libraries using GDL Objects, which can then be placed in a CAD model or drawing using almost any CAD tool. The product libraries can be stored on the web or on CD, giving easy download access to any building industry professional. These objects are created using scripts, which makes them tiny to download from the web. Each object contains 3 important types of information: parametric, scale-dependent 2D plan symbols; full 3D geometric data; and manufacturer's information such as material, colour and price. Whilst manufacturers are racing to GDL Technology's door to sign up, developers and clients are quick to see the benefit too. Porsche are using GDL Objects to manage their brand identity as they build over 300 new showrooms worldwide. Having defined the building style and interior, Porsche, in conjunction with the product suppliers, have produced a CD-ROM with all of the selected building components such as cladding, doors, furniture, and finishes. Designing and detailing the various schemes will therefore be as straightforward as using Lego. To ease the process of accessing, sizing and placing the product libraries, GDL Technology have developed a product called GDL Object Explorer, a free-standing application which can be placed on the CD with the product libraries.
Furthermore, whilst the Object Explorer gives access to the GDL Objects, it also enables the user to save the object in one of many file formats including DWG, DGN, DXF, 3DS and even the IAI's IFC. However, if you are an AutoCAD user there is another tool which has been designed especially for you: the Object Adapter, which works inside AutoCAD 14 and 2000. The Object Adapter will dynamically convert all GDL Objects to AutoCAD Blocks during placement, which means that they can be controlled with standard AutoCAD commands. Furthermore, each object can be linked to an online document from the manufacturer's web site, which is ideal for more extensive product information. Other tools which have been developed to make the most of the objects are the Web Plug-in and SalesCAD. The Plug-in enables objects to be dynamically modified and displayed on web pages, and SalesCAD is an easy-to-learn and easy-to-use design tool for sales teams to explore, develop and cost designs on a notebook PC whilst sitting in the architect's office. All sales quotations are directly extracted from the model and presented in HTML format as a mixture of product images, product descriptions and tables identifying quantities and costs. With full lifecycle information stored in each GDL Object it is no surprise that GDL Technology see their objects as the future for building design. Indeed they are not alone; the IAI have already said that they are going to explore the possibility of associating GDL Objects with their own data-sharing format, the IFC. So down to the dirty stuff: money, and how much it costs. Well, at the risk of sounding like a market trader in Petticoat Lane, "To you guv? Nuffin". That's right: as a user of this technology it will cost you nothing! Not a penny, it is gratis, free. The product manufacturer pays for the license to host their libraries on the web or on CD, and even then their costs are small, from as little as 50p for each CD filled with objects. GDL Technology has come up trumps with their GDL Objects. They have developed a new way to solve old problems. If CAD were a pencil then GDL Objects would be ballistic lead, which would never break or lose its point. A much better alternative to the strategy used by many of their competitors, who seek to avoid breaking the pencil by persuading the artist not to press down so hard. If you are still reading and you have not already dropped the magazine and run off to find out if your favorite product supplier has already signed up, then I suggest you check out the following web sites: www.gdlcentral.com and www.gdltechnology.com. If you do not see them there, pick up the phone and ask them why.
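As a rough illustration of the three kinds of information the article says each GDL Object carries (a parametric, scale-dependent 2D plan symbol, 3D geometry, and manufacturer data), here is a conceptual stand-in written in Python rather than actual GDL script syntax; the field names and product code are hypothetical.

    # Conceptual stand-in (Python, not GDL script syntax) for the three kinds
    # of information a GDL Object carries according to the article.
    from dataclasses import dataclass

    @dataclass
    class ProductObject:
        name: str
        parameters: dict      # e.g. width/depth/height the user can set
        manufacturer: dict    # material, colour, price, product URL, etc.

        def plan_symbol(self, scale):
            """Return a 2D outline appropriate to the drawing scale (simplified)."""
            w, d = self.parameters["width"], self.parameters["depth"]
            if scale > 1 / 100:                       # detailed symbol at large scales
                return [(0, 0), (w, 0), (w, d), (0, d), (0, 0)]
            return [(0, 0), (w, d)]                   # just a diagonal when tiny

        def solid(self):
            """Return a 3D box (w, d, h) as placeholder geometry."""
            return (self.parameters["width"], self.parameters["depth"],
                    self.parameters["height"])

    window = ProductObject(
        name="RoofWindow-M04",                        # hypothetical product code
        parameters={"width": 0.78, "depth": 0.14, "height": 1.18},
        manufacturer={"material": "pine/aluminium", "price": "on request"})
    print(window.plan_symbol(scale=1 / 50), window.solid())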
series journal paper
email
last changed 2003/04/23 15:14

_id 5d19
authors Gómez Arvelo, Susana Carolina
year 2001
title Simulador de proyecciones de sombras sobre modelos computarizados en 3d. Herramienta para evaluar la eficiencia de modelos de proteccion solar. [Shade simulator On 3D Computer Models. A Tool to Evaluate the Efficiency of Models for Solar Protection]
source 2da Conferencia Venezolana sobre Aplicación de Computadores en Arquitectura, Maracaibo (Venezuela) december 2001, pp. 156-165
summary A graphic computing program, developed in the AUTOLISP programming language and run in AutoCAD 2000, oriented toward research in bioclimatic architecture, is presented. Given a 3D model of a solar protection device, and the input data of geographical location and orientation, time and evaluation date (chosen by the user), the program calculates and projects the corresponding shadow contours on every plane that constitutes the 3D model (openings, walls, protection devices, floor...). In this research different areas of knowledge concur: plane and spherical trigonometry applied to the solar ray and to bioclimatic architecture, spatial geometry, planar graphical representation of three-dimensional objects, the concept of the viewing transformation for the three-dimensional representation of objects in computers, and modern programming techniques.
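The geometry underlying such a simulator can be sketched with standard solar-position formulas and a projection of each point along the sun ray onto a plane. The Python sketch below is illustrative only; it is not the AUTOLISP routine described in the paper.

    # Sketch of the geometry behind a shadow simulator: a standard solar-position
    # approximation (declination, hour angle) and projection of a 3D point along
    # the sun ray onto the ground plane. Illustrative Python, not the paper's code.
    import math

    def sun_direction(latitude_deg, day_of_year, solar_hour):
        """Unit vector pointing from the site toward the sun, in (east, north, up)."""
        decl = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
        hour = math.radians(15.0 * (solar_hour - 12.0))   # hour angle, 0 at solar noon
        lat = math.radians(latitude_deg)
        east = -math.cos(decl) * math.sin(hour)
        north = math.sin(decl) * math.cos(lat) - math.cos(decl) * math.cos(hour) * math.sin(lat)
        up = math.sin(decl) * math.sin(lat) + math.cos(decl) * math.cos(hour) * math.cos(lat)
        return (east, north, up)

    def shadow_on_ground(point, sun):
        """Project a 3D point along the sun ray onto the ground plane z = 0."""
        x, y, z = point
        ex, ny, uz = sun
        if uz <= 0:
            raise ValueError("sun below the horizon, no shadow cast")
        t = z / uz
        return (x - t * ex, y - t * ny, 0.0)

    # Example: shadow of the tip of a 2 m post at latitude 10.6 N,
    # day 355 of the year, 15:00 solar time.
    sun = sun_direction(10.6, 355, 15.0)
    print(shadow_on_ground((0.0, 0.0, 2.0), sun))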
keywords Bioclimatic Architecture; Heat Gaining Control; 3D Shades Simulator; Solar Protection
series other
email
last changed 2003/02/14 08:29

_id 2004
authors Hendricx, A.
year 2000
title A Core Object Model for Architectural Design
source Katholieke Universiteit Leuven
summary A core object model apt to describe architectural objects and their functionality is one of the keystones to an integrated digital design environment for architecture. The object model presented in this thesis is based on a conceptual framework for computer aided architectural design (CAAD) and aims to assist the architect designer right from the early stages in the design process. For its development the object-oriented analysis method MERODE (Model-based Existence-dependency Relationship Object-oriented Development) is used. After a survey on the role of computers in the architectural design process and on particular Product Modelling initiatives, the model is elaborated in two phases: the enterprise-modelling phase and the higher functionality-modelling phase. Actual design cases and test implementations help to establish the conceptual model and illustrate its concepts. The appendices provide a detailed description of both the object model and one of the case studies. The architect’s point of view and the specific nature of the architectural design process are the basic considerations, thus leading to a unique model that hopes to make a valuable contribution to the research area of integrated design environments.
series thesis:PhD
email
last changed 2003/02/12 22:37

_id ddssar0012
id ddssar0012
authors Hendricx, Ann and Neuckermans, Herman
year 2000
title Setting objects to work: adding functionality to an architectural object model
source Timmermans, Harry (Ed.), Fifth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Nijkerk, the Netherlands)
summary Several research initiatives in the field of product modelling have produced static descriptions of the architectural and geometrical objects capable of describing architectural design projects. Less attention is paid to the development phase in which these static models are transformed into workable architectural design environments. In the context of the IDEA+ research project (Integrated Design Environment for Architectural Design), we use the object-oriented analysis method MERODE to develop and describe both an enterprise (or product) model and a functionality model. On the one hand, the enterprise model defines the architectural and geometrical objects, their methods and their relations with other objects. On the other hand, the functionality model organizes the functionality objects – ranging from single-event objects to complex-workflow objects – in a layered and easily expandable system. The functionality model is created on top of the enterprise model and closes the gap between the static enterprise model and the dynamic design environment as a whole. After a short introduction of the envisaged design environment and its underlying enterprise model, the paper will concentrate on the presentation of the higher-level functionality model. Elaborated examples of functionality objects on the different levels will clarify its concepts and prove its feasibility.
series DDSS
last changed 2003/08/07 16:36

_id ga0004
id ga0004
authors Lund, Andreas
year 2000
title Evolving the Shape of Things to Come - A Comparison of Interactive Evolution and Direct Manipulation for Creative Tasks
source International Conference on Generative Art
summary This paper is concerned with differences between direct manipulation and interactive evolutionary design as two fundamentally different interaction styles for creative tasks. Its main contribution to the field of generative design is the treatment of interactive evolutionary design as a general interaction style that can be used to support users in creative tasks. Direct manipulation interfaces, a term coined by Ben Shneiderman in the mid-seventies, are the kind of interface that is characteristic of most modern personal computer application user interfaces. Typically, direct manipulation interfaces incorporate a model of a context (such as a desktop environment) supposedly familiar to users. Rather than giving textual commands (i.e. "remove file.txt", "copy file1.txt file2.txt") to an imagined intermediary between the user and the computer, the user acts directly on the objects of interest to complete a task. Undoubtedly, direct manipulation has played an important role in making computers accessible to non-computer experts. Less certain are the reasons why direct manipulation interfaces are so successful. It has been suggested that this kind of interaction style caters for a sense of directness, control and engagement in the interaction with the computer. Also, the possibilities of incremental action with continuous feedback are believed to be an important factor of the attractiveness of direct manipulation. However, direct manipulation is also associated with a number of problems that make it a less than ideal interaction style in some situations. Recently, new interaction paradigms have emerged that address the shortcomings of direct manipulation in various ways. One example is so-called software agents that, quite the contrary to direct manipulation, act on behalf of the user and alleviate the user from some of the attention and cognitive load traditionally involved in the interaction with large quantities of information. However, this relief comes at the cost of lost user control and requires the user to put trust into a pseudo-autonomous piece of software. Another emerging style of human-computer interaction of special interest for creative tasks is that of interactive evolutionary design (sometimes referred to as aesthetic selection). Interactive evolutionary design is inspired by notions from biological evolution and may be described as a way of exploring a large – potentially infinite – space of possible design configurations based on the judgement of the user. Rather than, as is the case with direct manipulation, directly influencing the features of an object, the user influences the design by means of expressing her judgement of design examples. Variations of interactive evolutionary design have been employed to support design and creation of a variety of objects. Examples of such objects include artistic images, web advertising banners and facial expressions. In order to make an empirical investigation possible, two functional prototypes have been designed and implemented. Both prototypes are targeted at typeface design. The first prototype allows a user to directly manipulate a set of predefined attributes that govern the design of a typeface. The second prototype allows a user to iteratively influence the design of a typeface by means of expressing her judgement of typeface examples. Initially, these examples are randomly generated but will, during the course of interaction, converge upon design configurations that reflect the user’s expressed subjective judgement. 
In the evaluation of the prototypes, I am specifically interested in users’ sense of control, convergence and surprise. Is it possible to maintain a sense of control and convergence without sacrificing the possibilities of the unexpected in a design process? The empirical findings seem to suggest that direct manipulation caters for a high degree of control and convergence, but with a small amount of surprise and sense of novelty. The interactive evolutionary design prototype supported a lower degree of experienced control, but seems to provide both a sense of surprise and convergence. One plausible interpretation of this is that, on the one hand, direct manipulation is a good interaction style for realizing the user’s intentions. On the other hand, interactive evolutionary design has a potential to actually change the user’s intentions and pre-conceptions of that which is being designed and, in doing so, adds an important factor to the creative process. Based on the empirical findings, the paper discusses situations when interactive evolutionary design may be a serious contender with direct manipulation as the principal interaction style and also how a combination of both styles can be applied.
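The interaction style under study can be sketched as a simple loop: the user rates generated examples and the next generation is bred from the highest-rated ones. The Python sketch below is illustrative; the parameter names are hypothetical stand-ins, not those of Lund's typeface prototype.

    # Sketch of interactive evolutionary design: the user does not set parameters
    # directly but rates generated examples; the next generation is bred from the
    # best-rated ones. Parameter names are hypothetical stand-ins.
    import random

    PARAMS = ["stroke_weight", "width", "slant", "contrast"]   # hypothetical genes

    def random_individual():
        return {p: random.random() for p in PARAMS}

    def mutate(parent, amount=0.15):
        return {p: min(1.0, max(0.0, v + random.uniform(-amount, amount)))
                for p, v in parent.items()}

    def next_generation(population, ratings, size=9):
        """Breed a new population from the user's ratings (higher is better)."""
        ranked = sorted(zip(ratings, population), key=lambda r: r[0], reverse=True)
        parents = [ind for _, ind in ranked[:3]]             # keep the best three
        children = [mutate(random.choice(parents)) for _ in range(size - len(parents))]
        return parents + children

    # One interactive cycle (ratings would normally come from the user's judgement
    # of rendered examples; here they are just illustrative numbers).
    population = [random_individual() for _ in range(9)]
    ratings = [random.random() for _ in population]
    population = next_generation(population, ratings)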
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id diss_sola
id diss_sola
authors Sola-Morales, Pau
year 2000
title Representation in Architecture: A Data Model for Computer-Aided Architectural Design
source DDes Thesis, Harvard Design School, Cambridge, MA
summary Traditional representation systems – including technical drawings, perspectives, models and photography – have historically been used by architects to communicate projectual ideas to other agents in the process, as well as to communicate ideas to themselves and record them for future reference. The increasing complexity of the projects, involving more agents in ever more distant locations; the need for a greater semantic richness to express all the subtleties of the technical, cost and styling details; and – most importantly – the introduction of computers in everyday practice, which enable powerful data generation and manipulation; all these factors together demand a new representation system adapted to the new digital medium. Yet, traditional CAAD software packages do not offer a solution to any of these problems, for their data models are too simplified to model complex projects and ideas, and are based on geometrical representations of the built environment. This dissertation addresses the issue of computer representation of architecture, and tries to refocus the discussion from a “geometric representation of objects” to a “representation of relationships among objects.” After studying the nature of design, it is observed that objects in the built environment can be represented as patterns of relationships. Based on the object-oriented data model (OODM), which can capture such relationships, the research proposes a new data model and a new set of abstractions of architectural elements that represent the patterns of relationships among them. The resulting representations are networks of design concepts and intentions, hypertext-like structures conveying all the semantic richness of the architectural project, containing qualitative as well as quantitative information. It is analogous to a “digital writing” or “encoding” of architecture. Being stored in an OO, centralized, concurrent database, these object models can be shared and exchanged among design professionals, adding up to a universal computer-readable design representation system.
series thesis:PhD
last changed 2005/09/09 12:58

_id 349e
authors Durmisevic, Sanja
year 2002
title Perception Aspects in Underground Spaces using Intelligent Knowledge Modeling
source Delft University of Technology
summary Intensification, combination and transformation are the main strategies for future spatial development of the Netherlands, as stated in the Fifth Bill regarding Spatial Planning. These strategies indicate that in the future, space should be utilized in a more compact and more efficient way requiring, at the same time, re-evaluation of the existing built environment and finding ways to improve it. In this context, the concept of multiple space usage is accentuated, which would focus on intensive 4-dimensional spatial exploration. The underground space is acknowledged as an important part of multiple space usage. In the document 'Spatial Exploration 2000', the underground space is recognized by policy makers as an important new 'frontier' that could provide a significant contribution to future spatial requirements. In a relatively short period, the underground space became an important research area. Although among specialists there is appreciation of what underground space could provide for densely populated urban areas, there are still reserved feelings by the public, which mostly relate to the poor quality of these spaces. Many realized underground projects, namely subways, resulted in poor user satisfaction. Today, there is still a significant knowledge gap related to perception of underground space. There is also a lack of detailed documentation on actual applications of the theories, followed by research results and applied techniques. This is the case in different areas of architectural design, but for underground spaces it is perhaps most evident due to their infancy role in general architectural practice. In order to create better designs, diverse aspects, which are very often of a qualitative nature, should be considered in perspective with the final goal to improve quality and image of underground space. In the architectural design process, one has to establish certain relations among design information in advance, to make the design backed by sound rationale. The main difficulty at this point is that such relationships may not be determined due to various reasons. One example may be the vagueness of the architectural design data due to linguistic qualities in them. Another may be vaguely defined design qualities. In this work, the problem was not only the initial fuzziness of the information but also the desired relevancy determination among all pieces of information given. Presently, to determine the existence of such relevancy is more or less a matter of subjective architectural judgement rather than systematic, non-subjective decision-making based on an existing design. This implies that the invocation of certain tools dealing with fuzzy information is essential for enhanced design decisions. Efficient methods and tools to deal with qualitative, soft data are scarce, especially in the architectural domain. Traditionally well-established methods, such as statistical analysis, have been used mainly for data analysis focused on types similar to the present research. These methods mainly fall into a category of pattern recognition. Statistical regression methods are the most common approaches towards this goal. One essential drawback of these methods is the inability to deal efficiently with non-linear data. With statistical analysis, the linear relationships are established by regression analysis where dealing with non-linearity is mostly evaded.
Concerning the presence of multi-dimensional data sets, it is evident that the assumption of linear relationships among all pieces of information would be a gross approximation, which one has no basis to assume. A starting point in this research was that there may be both linearity and non-linearity present in the data and that therefore the appropriate methods should be used in order to deal with that non-linearity. Therefore, some other commensurate methods were adopted for knowledge modeling. In that respect, soft computing techniques proved to match the quality of the multi-dimensional data-set subject to analysis, which is deemed to be 'soft'. There is yet another reason why soft-computing techniques were applied, which is related to the automation of knowledge modeling. In this respect, traditional models such as Decision Support Systems and Expert Systems have drawbacks. One important drawback is that the development of these systems is a time-consuming process. The programming part, in which various deliberations are required to form a consistent if-then rule knowledge-based system, is also a time-consuming activity. For these reasons, the methods and tools from other disciplines, which also deal with soft data, should be integrated into architectural design. With fuzzy logic, the imprecision of data can be dealt with in a similar way to how humans do it. Artificial neural networks are deemed to some extent to model the human brain, and simulate its functions in the form of parallel information processing. They are considered important components of Artificial Intelligence (AI). With neural networks, it is possible to learn from examples, or more precisely to learn from input-output data samples. The combination of the neural and fuzzy approaches proved to be powerful for dealing with qualitative data. The problem of automated knowledge modeling is efficiently solved by the employment of machine learning techniques. Here, the expertise of prof. dr. Ozer Ciftcioglu in the field of soft computing was crucial for tool development. By combining knowledge from two different disciplines, a unique tool could be developed that would enable intelligent modeling of soft data needed for support of the building design process. In this respect, this research is a starting point in that direction. It is multidisciplinary and on the cutting edge between the field of Architecture and the field of Artificial Intelligence. From the architectural viewpoint, the perception of space is considered through the relationship between a human being and the built environment. Techniques from the field of Artificial Intelligence are employed to model that relationship. Such an efficient combination of two disciplines makes it possible to extend our knowledge boundaries in the field of architecture and improve design quality. With additional techniques, meta-knowledge, or in other words "knowledge about knowledge", can be created. Such techniques involve sensitivity analysis, which determines the amount of dependency of the output of a model (comfort and public safety) on the information fed into the model (input). Another technique is functional relationship modeling between aspects, which is the derivation of the dependency of a design parameter as a function of users' perceptions. With this technique, it is possible to determine functional relationships between dependent and independent variables.
This thesis is a contribution to better understanding of users' perception of underground space, through the prism of public safety and comfort, which was achieved by means of intelligent knowledge modeling. In this respect, this thesis demonstrated an application of ICT (Information and Communication Technology) as a partner in the building design process by employing advanced modeling techniques. The method explained throughout this work is very generic and is possible to apply to not only different areas of architectural design, but also to other domains that involve qualitative data.
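The sensitivity analysis mentioned above, determining how strongly each input influences the model output, can be approximated generically by perturbing one input at a time and measuring the resulting change in the output. The Python sketch below illustrates this on a stand-in model; the thesis itself uses neuro-fuzzy models trained on survey data about perceived comfort and safety, which are not reproduced here.

    # Generic perturbation-based sensitivity analysis: how much does the output
    # of a trained model change when each input is varied on its own? The model
    # below is a stand-in with hypothetical input names, not the thesis's model.
    import numpy as np

    def sensitivity(model, samples, delta=0.05):
        """Mean absolute output change per unit input change, over input samples."""
        samples = np.asarray(samples, dtype=float)
        base = np.array([model(x) for x in samples])
        scores = []
        for i in range(samples.shape[1]):
            perturbed = samples.copy()
            perturbed[:, i] += delta
            shifted = np.array([model(x) for x in perturbed])
            scores.append(np.mean(np.abs(shifted - base)) / delta)
        return scores

    # Stand-in "perception model": comfort as a fixed nonlinear mix of three
    # inputs (e.g. lighting level, ceiling height, signage clarity).
    def comfort(x):
        lighting, height, signage = x
        return np.tanh(1.5 * lighting + 0.4 * height) + 0.2 * signage

    rng = np.random.default_rng(0)
    inputs = rng.uniform(0, 1, size=(200, 3))
    print(sensitivity(comfort, inputs))   # lighting should dominate in this toy case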
keywords Underground Space; Perception; Soft Computing
series thesis:PhD
email
last changed 2003/02/12 22:37

_id f7e2
authors Noriega, Farid Mokhtar
year 2000
title Activities Oriented Environments. A Conceptual Model for Building Advanced CAAD Systems
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 131-134
doi https://doi.org/10.52842/conf.ecaade.2000.131
summary Activities Oriented Design Environments is a collection of proposals that will introduce important changes in the interaction procedures and integration mechanisms in the design of CAAD software and the operating environments that support them. We will discuss how this environment uses architectural activities as a reference for its organizational scheme, and the structural rules that control its operations.
keywords CAAD, CAAD Design Paradigms, CAAD User Interfaces, Architectural Design Management
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:58

_id 6ae2
authors Sutphin, J.
year 1999
title AutoCAD 2000 VBA
source Wrox Press, Birmingham, UK
summary While AutoCAD is not directly associated with Office 2000, the AutoCAD object model is now a powerful tool with VBA behind it, allowing graphic designers to control their environment programmatically; the book is therefore released concurrently with the other Wrox Office 2000 Programmer References. Using VBA (Visual Basic for Applications), the user can write his or her own programs in what is essentially a subset of the Visual Basic programming language. This allows you to automate many of the graphical tasks performed daily, such as performing a hundred complex graphical manipulations.
series other
last changed 2003/04/23 15:14

_id a337
authors Testa, P., O’Reilly, U.-M. and Greenwold, S.
year 2000
title AGENCY GP: Genetic Programming for Architectural Design
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 227-231
doi https://doi.org/10.52842/conf.acadia.2000.227
summary AGENCY GP is a prototype for a system using genetic programming (GP) for architectural design exploration. Its software structure is noteworthy for its integration into a high-end three-dimensional modeling environment, its allowance for direct user interruption of evolution and reintegration of phenotypically modified individuals, and its agent-based evaluation of fitness.
series ACADIA
last changed 2022/06/07 07:58

_id ga0009
id ga0009
authors Lewis, Matthew
year 2000
title Aesthetic Evolutionary Design with Data Flow Networks
source International Conference on Generative Art
summary For a little over a decade, software has been created which allows for the design of visual content by aesthetic evolutionary design (AED) [3]. The great majority of these AED systems involve custom software intended for breeding entities within one fairly narrow problem domain, e.g., certain classes of buildings, cars, images, etc. [5]. Only a very few generic AED systems have been attempted, and extending them to a new design problem domain can require a significant amount of custom software development [6][8]. High end computer graphics software packages have in recent years become sufficiently robust to allow for flexible specification and construction of high level procedural models. These packages also provide extensibility, allowing for the creation of new software tools. One component of these systems which enables rapid development of new generative models and tools is the visual data flow network [1][2][7]. One of the first CG packages to employ this paradigm was Houdini. A system constructed within Houdini which allows for very fast generic specification of evolvable parametric prototypes is described [4]. The real-time nature of the software, when combined with the interlocking data networks, allows not only for vertical ancestor/child populations within the design space to be explored, but also allows for fast "horizontal" exploration of the potential population surface. Several example problem domains will be presented and discussed. References: [1] Alias | Wavefront. Maya. 2000, http://www.aliaswavefront.com [2] Avid. SOFTIMAGE. 2000, http://www.softimage.com [3] Bentley, Peter J. Evolutionary Design by Computers. Morgan Kaufmann, 1999. [4] Lewis, Matthew. "Metavolve Home Page". 2000, http://www.cgrg.ohio-state.edu/~mlewis/AED/Metavolve/ [5] Lewis, Matthew. "Visual Aesthetic Evolutionary Design Links". 2000, http://www.cgrg.ohio-state.edu/~mlewis/aed.html [6] Rowley, Timothy. "A Toolkit for Visual Genetic Programming". Technical Report GCG-74, The Geometry Center, University of Minnesota, 1994. [7] Side Effects Software. Houdini. 2000, http://www.sidefx.com [8] Todd, Stephen and William Latham. "The Mutation and Growth of Art by Computers" in Evolutionary Design by Computers, Peter Bentley ed., pp. 221-250, Chapter 9, Morgan Kaufmann, 1999.    
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
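The data-flow idea behind the summary above - a procedural model built as a network of nodes whose exposed parameters form an evolvable genotype - can be sketched outside Houdini as follows. This is a hypothetical Python illustration; Node, network_genotype and apply_genotype are invented names, not part of any package cited in the entry.

class Node:
    """One operator in a data-flow network, with named, exposed parameters."""
    def __init__(self, name, params, fn):
        self.name, self.params, self.fn = name, dict(params), fn

    def __call__(self, data):
        return self.fn(data, **self.params)

def network_genotype(nodes):
    # Concatenate every exposed parameter into a flat, evolvable genotype.
    return [(n.name, key, value) for n in nodes
            for key, value in sorted(n.params.items())]

def apply_genotype(nodes, genotype):
    # Write a (possibly mutated) genotype back into the network's parameters.
    lookup = {n.name: n for n in nodes}
    for name, key, value in genotype:
        lookup[name].params[key] = value

# A toy two-node network: generate a profile, then scale it.
profile = Node("profile", {"sides": 6}, lambda d, sides: list(range(int(sides))))
scale = Node("scale", {"factor": 2.0}, lambda d, factor: [x * factor for x in d])
nodes = [profile, scale]

result = scale(profile(None))        # evaluate the network in "real time"
genotype = network_genotype(nodes)   # expose its parameters for evolution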

_id ae0f
authors Ceccato, C., Simondetti, A. and Burry, M.C.
year 2000
title Mass-Customization in Design Using Evolutionary and Parametric Methods
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 239-244
doi https://doi.org/10.52842/conf.acadia.2000.239
summary This paper describes a project within the authors’ ongoing research in the field of Generative Design. The work is based on the premise that computer-aided design (CAD) should evolve beyond its current limitation of one-way interaction, and become a dynamic, intelligent, multi-user environment that encourages creativity and actively supports the evolution of individual, mass-customized designs which exhibit common features. The authors describe this idea by illustrating the implementation of a research project, which explores the notions of mass-customization in design by using evolutionary and parametric methods to generate families of simple objects, in this case a door handle. The project examines related approaches using both complex CAD/CAM packages (CADDS, CATIA) and a proprietary software tool for evolutionary design. The paper first gives a short historical and philosophical background to the work, then describes the technical and algorithmic requirements, and concludes with the implementations of the project.
series ACADIA
email
last changed 2022/06/07 07:55
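A minimal sketch of the mass-customization idea summarised above: a shared parametric prototype whose individuals vary within preset bounds, so that every customer receives a distinct but clearly related design. The Python below is purely illustrative - the door-handle parameters and bounds are invented and do not reflect the authors' CADDS/CATIA implementation.

import random

PROTOTYPE = {                        # parameter: (lower bound, upper bound), mm
    "grip_length": (90.0, 140.0),
    "grip_radius": (8.0, 14.0),
    "stem_offset": (30.0, 55.0),
}

def customise(seed):
    """Generate one mass-customised variant of the shared prototype."""
    rng = random.Random(seed)
    return {p: round(rng.uniform(lo, hi), 1) for p, (lo, hi) in PROTOTYPE.items()}

# Ten related but distinct handles from the same parametric family.
family = [customise(seed) for seed in range(10)]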

_id ga0019
id ga0019
authors Ceccato, Cristiano
year 2000
title On the Translation of Design Data into Design Form in Evolutionary Design
source International Conference on Generative Art
summary The marriage of advanced computational methods and new manufacturing technologies gives rise to new paradigms in design process and execution. Specifically, the research concerns itself with the application of Generative and Evolutionary computation to the production of mass-customized products and building components. The work is based on the premise that CAD-CAM should evolve into a dynamic, intelligent, multi-user environment that encourages creativity and actively supports the evolution of individual, mass-customized designs that exhibit common features. The concept of Parametric Design is well established, and chiefly concerns itself with generating design sets that exist within the boundaries of pre-set parametric values. Evolutionary Design extends the notion of parametric control by using rule-based generative algorithms to evolve common families of individual design solutions. These can be optimized according to particular criteria, or can form a wide variety of hierarchically related design solutions, while supporting design intuition. The integration of Evolutionary Design with CAD-CAM, in particular the areas of flexible manufacturing and mass-customization, creates a unique scenario which exploits the full power of both approaches to create a new design-process paradigm that can generate limitless possibilities in a non-deterministic manner within a variable search space of possible solutions. This paper concerns itself with the technical and philosophical aspects of the codification, generation and translation of data within the evolutionary-parametric design process. The efficiency and relevance of different methods for treating design data form the most fundamental aspect within the realm of CAD/CAM and are crucial to the successful implementation of Evolutionary Design mechanisms. This begins at the level of seeding and progresses through the entire evolutionary sequence, including the codification of evaluation criteria. Furthermore, the integration of digital design mechanisms with CAM and CNC technologies requires further translation of data into manufacturable formats. This paper examines the different methods available to system designers and discusses their effect on new paradigms of digital design methods.
keywords Evolutionary, Parametric, Generative, Data, Format, Objects, Codification
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
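The translation chain discussed in this abstract - coded design data decoded into form and then re-encoded into a manufacturable format - can be made concrete with a small, hypothetical sketch. The Python below is not the author's system: decode and to_toolpath are invented names, and the output is a deliberately simplified G-code-style toolpath.

import math

def decode(genes):
    """Genotype -> phenotype: interpret two genes as a closed polygon."""
    sides = 3 + int(genes[0] * 9)             # 3 to 12 sides
    radius = 10.0 + genes[1] * 40.0           # 10 to 50 units
    return [(radius * math.cos(2 * math.pi * i / sides),
             radius * math.sin(2 * math.pi * i / sides))
            for i in range(sides)]

def to_toolpath(points):
    """Phenotype -> manufacturing data: emit simplified G-code-style moves."""
    lines = ["G0 X%.2f Y%.2f" % points[0]]
    lines += ["G1 X%.2f Y%.2f" % p for p in points[1:] + [points[0]]]
    return "\n".join(lines)

print(to_toolpath(decode([0.4, 0.7])))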

_id 3936
authors Geroimenko, Vladimir
year 1999
title Online Photorealistic VR with Interactive Architectural Objects
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 414-417
doi https://doi.org/10.52842/conf.ecaade.1999.414
summary This paper describes how Virtual Reality (VR) technologies can be used for modelling photorealistic environments with interactive and changeable architectural content. This application of VR allows us to create photograph-based panoramic models of real places that include a variety of interactive architectural objects and details. The user is able not only to navigate through a virtual environment (look around, up and down, zoom, jump to another viewpoint or location) but also to change buildings or their architectural details by clicking, moving or rotating. The following types of interactive objects are completely integrated with a virtual environment: 2D image-based objects, 3D image-based objects, 3D VRML-based objects and onscreen world controls. The application can be used effectively for teaching, including distance Internet-based education, project presentations and rapid prototyping. A sample VR environment is presented and some of the key creative and technological issues are discussed.
keywords Virtual Reality Modelling, Architectural Design, Interactive Contents, Photorealistic Environments
series eCAADe
email
last changed 2022/06/07 07:51

_id 9f48
authors Hendricx, Ann and Neuckermans, Herman
year 2000
title Towards a Working Design Environment: From Enterprise to Functionality Model
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 197-199
summary Several product-modelling initiatives have produced static descriptions of the architectural and geometrical objects capable of describing architectural design projects. Less attention is paid to the development phase in which these static models are transformed into workable architectural design environments. In the context of the IDEA+ research project (Integrated Design Environment for Architectural Design) emphasis lies on the systematic development of both phases. The result is an analysis model that consists of two submodels. On the one hand, the enterprise model defines the architectural and geometrical objects, their methods and their relation with other objects. On the other hand, the functionality model organises the functionality objects - ranging from single-event objects to complex-workflow objects - in a layered and easily expandable system. As such, it closes the gap between the static enterprise model and the dynamic design environment as a whole.
series SIGRADI
email
last changed 2016/03/10 09:53
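The two submodels described above can be pictured with a schematic sketch: enterprise-model objects hold the static architectural and geometrical data, while functionality-model objects layer operations on them, from single-event objects up to composite workflows. The Python below is only an illustration of that layering - the class names are invented and do not come from the IDEA+ project.

class Wall:                                   # enterprise-model object
    def __init__(self, length, height):
        self.length, self.height = length, height

class StretchWall:                            # single-event functionality object
    def __init__(self, wall, delta):
        self.wall, self.delta = wall, delta
    def execute(self):
        self.wall.length += self.delta

class Workflow:                               # complex-workflow functionality object
    def __init__(self, steps):
        self.steps = list(steps)
    def execute(self):
        for step in self.steps:               # a workflow is just layered steps
            step.execute()

wall = Wall(length=4.0, height=2.7)
Workflow([StretchWall(wall, 0.5), StretchWall(wall, -0.2)]).execute()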

_id b352
authors Kilkelly, Michael
year 2000
title Off The Page: Object-Oriented Construction Drawings
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 147-151
doi https://doi.org/10.52842/conf.acadia.2000.147
summary This paper discusses methods by which inefficiencies in the construction documentation process can be addressed through the application of digital technology. These inefficiencies are directly related to the time-consuming nature of the construction documentation process, given that the majority of time is spent reformatting and redrawing previous details and specifications. The concepts of object-oriented programming are used as an organizational framework for construction documentation. Database structures are also used as a key component of information reuse in the documentation process. A prototype system is developed as an alternative to current Computer-Aided Drafting software. This prototype, the Drawing Assembler, functions as a graphic search engine for construction details. It links a building component database with a construction detail database through the intersection of dissimilar objects.
series ACADIA
last changed 2022/06/07 07:52
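The database linkage at the core of the Drawing Assembler - a building-component database joined to a construction-detail database and queried like a search engine - might look roughly like the sketch below. This is a hypothetical Python/SQLite illustration with invented table names and sample rows, not the prototype described in the paper.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE component (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE detail (id INTEGER PRIMARY KEY, component_id INTEGER,
                     title TEXT, drawing_file TEXT);
INSERT INTO component VALUES (1, 'curtain wall'), (2, 'roof parapet');
INSERT INTO detail VALUES
  (1, 1, 'curtain wall sill', 'cw_sill.dwg'),
  (2, 1, 'curtain wall head', 'cw_head.dwg'),
  (3, 2, 'parapet coping',    'parapet_01.dwg');
""")

def find_details(component_name):
    """Return the construction details linked to one building component."""
    return db.execute(
        "SELECT d.title, d.drawing_file FROM detail d "
        "JOIN component c ON d.component_id = c.id WHERE c.name = ?",
        (component_name,)).fetchall()

print(find_details("curtain wall"))   # reusable details for curtain walls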
