CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 743

_id 05db
authors Peri, Christopher
year 2000
title Exercising Collaborative Design in a Virtual Environment
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 63-71
doi https://doi.org/10.52842/conf.acadia.2000.063
summary In the last few years remote collaborative design has been attracting interest, and with good reason: almost everything we use today, whether it is the structure we inhabit, the vehicle we travel in, or the computer we work on, is the result of a number of participants’ contributions to a single design. At the same time, more and more design teams are working in locations remote from one another. In a distributed design situation with remote players, communication is key to successful and effective collaboration. Archville is a distributed, Web-based VR system that allows multiple users to interact with multiple models at the same time. We use it as a platform to exercise collaborative design by requiring students to build individual buildings as part of a city or village, while sharing some common formal conventions with their neighbors. The Archville exercise demonstrates to students how computing and the Internet can be used to design collaboratively. It also points out the need to have correct, up-to-date information when working on collaborative projects because of the dynamic nature of the design process. In addition to architectural design and computer modeling, the exercise immerses students in the political and social aspects of designing within a community, where many of the design constraints must be negotiated, and where group work is often required. The paper describes both the pedagogical and the technical attributes of the Archville project.
keywords Collaboration, Virtual Reality, Design Studio, Real-Time, VRML
series ACADIA
last changed 2022/06/07 08:00

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A Digital Procedure of Building Construction: A Practical Project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times, when computers had not yet been well developed, there was some research regarding representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). With the appearance of 2D drawings, these drawings could only express abstract visual thinking and a visually conceptualized vocabulary (Goldschmidt, 1999). Then, with the massive use of physical models in the Renaissance, the form and space of architecture were given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) pointed out that humans increasingly rely on other specialists, computational agents, and materials to augment their cognitive abilities. This discourse was verified by recent research on the conception of design and its expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architectural design has aroused a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), the Internet, and the restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most applications of computers to construction are restricted to simulations of the building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage of the building construction process (Madrazo, 2000; Dave, 2000) and to communication (Haymaker, 2000). In architectural design, concept design is achieved through drawings and models (Mitchell, 1997), while the working drawings and even shop drawings have been developed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend of 3D visualization (Johnson and Clayton, 1998) and the differences between designing in the physical environment and in the virtual environment (Maher et al. 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process in the near future (just as physical models played a critical role in the early design process in the Renaissance). This research is combined with two practical building projects, following the progress of construction and using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems. 
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings and some imagination. The purpose of this paper is to describe all the related elements with precision and correctness, to discuss every possibility of different thinking in the design of electrical-mechanical engineering, to receive feedback from the construction projects in the real world, and to compare the digital models with conventional drawings. Through the application of this research, the subtle relations between conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and standard process are proposed, using conventional drawings, digital models and physical buildings. By introducing digital media into the design process of working drawings and shop drawings, there is an opportune chance to use digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, considers realistic alternatives, and briefly outlines how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. 
I decided to represent shapes simply as closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem (a minimal sketch of one such polygon crossover appears after this record). And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques   Figure 3 Trellis interpreted with "graphic ivy"   Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. 
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. Ransen, Owen: "From Ramon Llull to Image Idea Generation". Proceedings of the 1998 Milan First International Conference on Generative Art. 2. Aleksander, Igor: "How To Build A Mind". Weidenfeld and Nicolson, 1999. 3. Ward, Adrian and Cox, Geof: "How I Drew One of My Pictures: or, The Authorship of Generative Art". Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
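
Editorial note: the abstract above describes "breeding" forms by treating the point list of a closed polygon as its genes, and asks what crossing a circle with another outline would yield. The sketch below is one minimal, hypothetical way to do such a crossover in Python (resample both outlines to the same number of points, then interpolate vertex by vertex); the function names, the blend rule, and the lack of any starting-vertex alignment are assumptions for illustration, not Ransen's actual method.

```python
# Illustrative sketch only: breeding two closed polygonal "forms" by resampling
# each outline to a common number of points and blending corresponding vertices.
# No attempt is made to align starting vertices or orientation, which is part of
# why such naive crossovers tend to produce "amorphous blobs".
import math

def resample_closed_polygon(points, n):
    """Return n points evenly spaced (by arc length) along a closed polygon."""
    pts = list(points) + [points[0]]                       # close the loop
    seg_len = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(seg_len)
    result, i, acc = [], 0, 0.0
    for k in range(n):
        target = total * k / n
        while i < len(seg_len) - 1 and acc + seg_len[i] < target:
            acc += seg_len[i]
            i += 1
        t = 0.0 if seg_len[i] == 0 else (target - acc) / seg_len[i]
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        result.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return result

def crossover_polygons(parent_a, parent_b, weight=0.5, n=100):
    """Blend two closed outlines vertex-by-vertex after resampling both to n points."""
    a = resample_closed_polygon(parent_a, n)
    b = resample_closed_polygon(parent_b, n)
    return [(ax + weight * (bx - ax), ay + weight * (by - ay))
            for (ax, ay), (bx, by) in zip(a, b)]

if __name__ == "__main__":
    # A circle approximated by a regular 100-gon, crossed with a square.
    circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
              for k in range(100)]
    square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
    child = crossover_polygons(circle, square, weight=0.5)
    print(len(child), "vertices; first:", child[0])
```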

_id fdb8
authors Montagu, A., Rodriguez Barros, D. and Chernobilsky, L.
year 2000
title The New Reality through Virtuality
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 225-229
doi https://doi.org/10.52842/conf.ecaade.2000.225
summary In this paper we want to develop some conceptual reflections on the processes of virtualization, with the aim of indicating a series of misfits and mutations that arise as byproducts of the “digital-graphic culture” (DGC) when we are dealing with the perception of “digital space”. Considering the present situation, a bit chaotic from a pedagogical point of view, we also want to propose a set of “virtual space parameters” in order to organize in a systemic way the teaching procedures of architectural design when using digital technology. Nowadays there is a great variety of computer graphics applications comprising practically all the fields of “science & technology”, “architecture, design & urbanism”, “video & film”, “sound” and the massive amount of information technology protocols. This fact obliges us to have an overall view of the meaning of “the new reality through virtuality”. The paper is divided into two sections and one appendix. In the first section we recognise the relationships among the sensory apparatus, the cognitive structures of perception and the cultural models involved in the process of understanding reality. In the second section we note that, as architects, we have always had “a global set of social and technical responsibilities” to organize the physical space, but now we must also be able to organize the “virtual space” obtained from a multidimensional set of computer simulations. There are certain features that can be used as “sensory parameters” when we are dealing with architectural design in the “virtual world”, taking into consideration the differences between “immersive virtual reality” and “non-immersive virtual reality”. In the appendix we present a summary of some conclusions based on a set of pedagogical applications analysing the positive and the negative consequences of working exclusively in a “virtual world”.
keywords Virtualisation Processes, Simulation, Philosophy, Space, Design, Cyberspace
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:58

_id 5477
authors Donath, D., Kruijff, E., Regenbrecht, H., Hirschberg, U., Johnson, B., Kolarevic, B. and Wojtowicz, J.
year 1999
title Virtual Design Studio 1998 - A Place2Wait
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 453-458
doi https://doi.org/10.52842/conf.ecaade.1999.453
summary This article reports on the recent, geographically and temporally distributed, intercollegiate Virtual Design Studio based on the 1998 implementation of the Phase(x) environment. Students participating in this workshop had to create a place to wait in the form of a folly. This design task was divided into five logical parts, called phases. Every phase had to be finished within a specific timeframe (one day), after which the results were stored in a common data repository: an online MSQL database environment which holds, besides the presentations (consisting of text, 3D models and rendered images), basic project information such as the descriptions of the phases, together with design process visualization tools. (A minimal sketch of such a phase repository follows this record.) This approach to collaborative work is better known as memetic engineering and has successfully been used in several educational programs and past Virtual Design Studios. During the workshop, students made use of a variety of tools, including modeling tools (specifically Sculptor), video-conferencing software and rendering programs. The project distinguishes itself from previous Virtual Design Studios in leaving the design task more open, thereby focusing on the design process itself. From this perspective, this paper represents both a continuation of existing reports about previous Virtual Design Studios and a specific extension through the focus it offers. Specific attention is given to how the different collaborating parties dealt with data flow and modification, the crux of a successful effort to cooperate on a common design task.
keywords Collaborative design, Design Process, New Media Usage, Global Networks
series eCAADe
email
last changed 2022/06/07 07:55
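
Editorial note: the abstract above mentions an online database holding the phase descriptions and the students' phase submissions (text, 3D models, rendered images). The sketch below is a minimal, hypothetical stand-in for such a phase repository, using SQLite in place of the online database; the table layout and column names are assumptions, not the Phase(x) schema.

```python
# Minimal sketch of a phase-based design repository. SQLite stands in for the
# online database; tables and columns are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE phases (
    phase_no     INTEGER PRIMARY KEY,
    description  TEXT,
    deadline     TEXT
);
CREATE TABLE submissions (
    id        INTEGER PRIMARY KEY AUTOINCREMENT,
    phase_no  INTEGER REFERENCES phases(phase_no),
    student   TEXT,
    text_note TEXT,
    model_url TEXT,   -- link to the 3D model file
    image_url TEXT    -- link to the rendered image
);
""")

conn.execute("INSERT INTO phases VALUES (1, 'Site and concept for the folly', '1998-05-11')")
conn.execute(
    "INSERT INTO submissions (phase_no, student, text_note, model_url, image_url) "
    "VALUES (1, 'student_a', 'A place to wait under a folded roof', "
    "'phase1/a.wrl', 'phase1/a.jpg')")

# Everyone can query the shared repository to see what a phase has produced so far.
for row in conn.execute("SELECT student, text_note FROM submissions WHERE phase_no = 1"):
    print(row)
```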

_id avocaad_2001_22
id avocaad_2001_22
authors Jos van Leeuwen, Joran Jessurun
year 2001
title XML for Flexibility and Extensibility of Design Information Models
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary The VR-DIS research programme aims at the development of a Virtual Reality – Design Information System. This is a design and decision support system for collaborative design that provides a VR interface for the interaction with both the geometric representation of a design and the non-geometric information concerning the design throughout the design process. The major part of the research programme focuses on early stages of design. The programme is carried out by a large number of researchers from a variety of disciplines in the domain of construction and architecture, including architectural design, building physics, structural design, construction management, etc. Management of design information is at the core of this design and decision support system. Much effort in the development of the system has been and still is dedicated to the underlying theory for information management and its implementation in an Application Programming Interface (API) that the various modules of the system use. The theory is based on a so-called Feature-based modelling approach and is described in the PhD thesis by [first author, 1999] and in [first author et al., 2000a]. This information modelling approach provides three major capabilities: (1) it allows for extensibility of conceptual schemas, which is used to enable a designer to define new typologies to model with; (2) it supports sharing of conceptual schemas, called type-libraries; and (3) it provides a high level of flexibility that offers the designer the opportunity to easily reuse design information and to model information constructs that are not foreseen in any existing typologies. The latter aspect involves the capability to expand information entities in a model with relationships and properties that are not typologically defined but applicable to a particular design situation only; this helps the designer to represent the actual design concepts more accurately. The functional design of the information modelling system is based on a three-layered framework. In the bottom layer, the actual design data is stored in so-called Feature Instances. The middle layer defines the typologies of these instances in so-called Feature Types. The top layer is called the meta-layer because it provides the class definitions for both the Types layer and the Instances layer; both Feature Types and Feature Instances are objects of the classes defined in the top layer. This top layer ensures that types can be defined on the fly and that instances can be created from these types, as well as expanded with non-typological properties and relationships while still conforming to the information structures laid out in the meta-layer. The VR-DIS system consists of a growing number of modules for different kinds of functionality in relation to the design task. These modules access the design information through the API that implements the meta-layer of the framework. This API has previously been implemented using an Object-Oriented Database (OODB), but this implementation had a number of disadvantages. The dependency on the OODB, a commercial software library, was considered the most problematic. Not only are licenses for the OODB library rather expensive; the fact that this library is not common technology that can easily be shared among a wide range of applications, including existing applications, also reduces its suitability for a system with the aforementioned specifications. 
In addition, the OODB approach required a relatively large effort to implement the desired functionality. It lacked adequate support to generate unique identifications for worldwide information sources that were understandable for human interpretation. This strongly limited the capabilities of the system to share conceptual schemas. The approach that is currently being implemented for the core of the VR-DIS system is based on eXtensible Markup Language (XML). Rather than implementing the meta-layer of the framework into classes of Feature Types and Feature Instances, this level of meta-definitions is provided in a document type definition (DTD). The DTD is complemented with a set of rules that are implemented into a parser API, based on the Document Object Model (DOM). (A minimal sketch of such an XML-encoded feature model follows this record.) The advantages of the XML approach for the modelling framework are immediate. Type-libraries distributed through the Internet are now supported through the mechanisms of namespaces and XLink. The implementation of the API is no longer dependent on a particular database system. This provides much more flexibility in the implementation of the various modules of the VR-DIS system. Being based on the emerging XML standard, the implementation is much more versatile in its future usage, specifically in a distributed, Internet-based environment. These immediate advantages of the XML approach opened the door to a wide range of applications that are and will be developed on top of the VR-DIS core. Examples of these are the VR-based 3D sketching module [VR-DIS ref., 2000]; the VR-based information-modelling tool that allows the management and manipulation of information models for design in a VR environment [VR-DIS ref., 2000]; and a design-knowledge capturing module that is now under development [first author et al., 2000a and 2000b]. The latter module aims to assist the designer in the recognition and utilisation of existing and new typologies in a design situation. The replacement of the OODB implementation of the API by the XML implementation enables these modules to use distributed Feature databases through the Internet, without many changes to their own code, and without the loss of the flexibility and extensibility of conceptual schemas that are implemented as part of the API. Research in the near future will result in Internet-based applications that support designers in the utilisation of distributed libraries of product-information, design-knowledge, case-bases, etc. The paper roughly follows the outline of the abstract, starting with an introduction to the VR-DIS project, its objectives, and the developed theory of the Feature-modelling framework that forms the core of it. It briefly discusses the necessity of schema evolution, flexibility and extensibility of conceptual schemas, and how these capabilities have been addressed in the framework. The major part of the paper describes how the previously mentioned aspects of the framework are implemented in the XML-based approach, providing details on the so-called meta-layer, its definition in the DTD, and the parser rules that complement it. The impact of the XML approach on the functionality of the VR-DIS modules and the system as a whole is demonstrated by a discussion of these modules and scenarios of their usage for design tasks. The paper is concluded with an overview of future work on the sharing of Internet-based design information and design knowledge.
series AVOCAAD
email
last changed 2005/09/09 10:48
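
Editorial note: the abstract above describes a meta-layer in which Feature Types define typologies and Feature Instances hold design data, encoded in XML and read through a DOM-based parser. The sketch below shows, in Python, how such a document might look and be traversed; the element and attribute names (featureType, featureInstance, property) are invented for illustration and are not the actual VR-DIS DTD.

```python
# Illustrative sketch only: a tiny XML document in the spirit of the
# Feature-based approach, read through the DOM (xml.dom.minidom).
from xml.dom import minidom

DOCUMENT = """<?xml version="1.0"?>
<featureModel>
  <featureType name="Wall">
    <propertyDef name="height" datatype="float"/>
    <propertyDef name="material" datatype="string"/>
  </featureType>
  <featureInstance type="Wall" id="wall-01">
    <property name="height">2.7</property>
    <property name="material">brick</property>
    <!-- A non-typological property, added for this design situation only -->
    <property name="acousticRating">42</property>
  </featureInstance>
</featureModel>
"""

dom = minidom.parseString(DOCUMENT)

for inst in dom.getElementsByTagName("featureInstance"):
    print("Instance", inst.getAttribute("id"), "of type", inst.getAttribute("type"))
    for prop in inst.getElementsByTagName("property"):
        value = prop.firstChild.data.strip() if prop.firstChild else ""
        print("  ", prop.getAttribute("name"), "=", value)
```

The last property in the instance is not declared in the type, which mirrors the flexibility the abstract describes: instances may carry situation-specific properties beyond their typology.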

_id 2683
authors Kos, Jose Ripper
year 2000
title Architectural Hypermedia Based on 3D Models
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 221-224
doi https://doi.org/10.52842/conf.ecaade.2000.221
summary The World Wide Web gave a new dimension to the terms hypermedia and hypertext. The distinction between them is not very clear, and in this paper we will use both with the same meaning. They are usually defined in a very generic way as a revolutionary form of writing. The generalization and glorification of hypertext, however, obscures a clearer view of its real possibilities. Architects will benefit from carefully investigating its resources - and how it can be a powerful tool for the profession, particularly when associated with 3D models.
keywords Hypermedia, 3D Model, Hypertext, Latin-American Cities, Architecture
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:51

_id 8892
authors Maver, T.W.
year 1983
title CAAD in Onderwijs en Onderzoek [CAAD in Teaching and Research]
source Proceedings of THE-CAAD3 Symposium, Eindhoven
summary Students currently in schools of architecture will be at the peak of their careers around the year 2000. The pressure on the schools to provide an education and training which will stand the student in good stead between now and then is considerable. In an increasing number of departments of architecture and building science, importance is being placed on the concept of modelling: i.e. the development and use of models of the operational behaviour and aesthetic character of design proposals which will allow appraisal of how real buildings will perform in the real world.
series other
email
last changed 2003/06/08 23:01

_id 3888
authors Reffat, Rabee M.
year 2000
title Computational Situated Learning in Designing - Application to Architectural Shape Semantics
source The University of Sydney, Faculty of Architecture
summary Learning the situatedness (applicability conditions) of design knowledge recognised from design compositions is the central tenet of the research presented in this thesis. This thesis develops and implements a computational system of situated learning and investigates its utility in designing. Situated learning is based on the concept that "knowledge is contextually situated and is fundamentally influenced by its situation". In this sense learning is tuned to the situations within which "what you do when you do matters". Designing cannot be predicted and the results of designing are not based on actions independent of what is being designed or independent of when, where and how it was designed. Designers' actions are situation dependent (situated), such that designers work actively with the design environment within the specific conditions of the situation, where neither the goal state nor the solution space is completely predetermined. In designing, design solutions are fluid and emergent entities generated by dynamic and situated activities instead of fixed design plans. Since it is not possible to know in advance what knowledge to use in relation to any situation, we need to learn knowledge in relation to its situation, i.e. learn the applicability conditions of knowledge. This leads towards the notion of the situation as having the potential role of guiding the use of knowledge.

Situated Learning in Designing (SLiDe) is developed and implemented within the domain of architectural shape composition (in the form of floor plans) to construct the situatedness of shape semantics. An architectural shape semantic is a set of characteristics with a semantic meaning based on a particular view of a shape, such as reflection symmetry, adjacency, rotation and linearity. Each shape semantic has preconditions without which it cannot be recognised. Such preconditions indicate nothing about the situation within which this shape semantic was recognised. The situatedness or applicability conditions of a shape semantic is viewed as the interdependent relationships between this shape semantic, as the design knowledge in focus, and other shape semantics across the observations of a design composition. While designing, various shape semantics and relationships among them emerge in different representations of a design composition. Multiple representations of a design composition by re-interpretation have been proposed to serve as a platform for SLiDe. Multiple representations provide the opportunity for different shape semantics and relationships among them to be found from a single design composition. This is important if these relationships are to be used later, because it is not known in advance which of the possible relationships that could be constructed are likely to be useful. Hence, multiple representations provide a platform for different situations to be encountered. A symbolic representation of shape and shape semantics is used in which the infinite maximal lines form the representative primitives of the shape.

SLiDe is concerned with learning the applicability conditions (situatedness) of shape semantics, locating them in relation to the situations within which they were recognised (situation dependent), and updating the situatedness of shape semantics in response to new observations of the design composition. SLiDe consists of three primary modules: Generator, Recogniser and Incremental Situator. The Generator is used by the designer to develop a set of multiple representations of a design composition. This set of representations forms the initial design environment of SLiDe. The Recogniser detects shape semantics in each representation and produces a set of observations, each of which comprises a group of shape semantics recognised in the corresponding representation. The Incremental Situator module consists of two sub-modules, Situator and Restructuring Situator, and utilises an unsupervised incremental clustering mechanism not affected by concept drift. The Situator module locates recognised shape semantics in relation to their situations by finding regularities of relationships among them across observations of a design composition and clustering them into situational categories organised in a hierarchical tree structure. Such relationships change over time due to the changes taking place in the design environment whenever further representations are developed using the Generator module and new observations are constructed by the Recogniser module. The Restructuring Situator module updates previously learned situational categories and restructures the hierarchical tree accordingly in response to new observations. (A minimal sketch of this Recogniser/Situator pipeline follows this record.)

Learning the situatedness of shape semantics may play a crucial role in designing if designers pursue some of these shape semantics further. This thesis illustrates an approach in which SLiDe can be utilised in designing to explore the shapes in a design composition in various ways; to bring designers' attention to potentially hidden features and shape semantics of their designs; and to maintain the integrity of the design composition by using the situatedness of shape semantics. The thesis concludes by outlining future directions for this research: learning and updating the situatedness of design knowledge within the context of use; considering the role of functional knowledge while learning the situatedness of design knowledge; and developing an autonomous situated agent-based designing system.

series thesis:PhD
email
last changed 2003/05/06 11:34
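
Editorial note: the abstract above describes a pipeline in which a Recogniser detects shape semantics in each representation of a composition and a Situator clusters semantics that co-occur across observations. The sketch below is a toy Python stand-in for that idea; the two recognisers (reflection and point symmetry of a polygon's vertex set) and the simple co-occurrence bookkeeping are simplifications for illustration, not the thesis implementation.

```python
# Toy sketch of the Recogniser/Situator idea: recognise a few shape semantics in
# each representation, then track which semantics co-occur across observations.
from collections import defaultdict
from itertools import combinations

def _centroid(pts):
    return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

def _as_set(pts, nd=6):
    return {(round(x, nd), round(y, nd)) for x, y in pts}

def recognise(polygon):
    """Return the set of shape semantics found in one representation."""
    cx, cy = _centroid(polygon)
    pts = _as_set(polygon)
    semantics = set()
    # Reflection symmetry about the vertical axis through the centroid.
    if _as_set((2 * cx - x, y) for x, y in polygon) == pts:
        semantics.add("reflection_symmetry")
    # Point (180-degree rotational) symmetry about the centroid.
    if _as_set((2 * cx - x, 2 * cy - y) for x, y in polygon) == pts:
        semantics.add("rotational_symmetry_2")
    return semantics

class Situator:
    """Toy stand-in for the Incremental Situator: counts co-occurring semantics."""
    def __init__(self):
        self.cooccur = defaultdict(int)

    def observe(self, semantics):
        for pair in combinations(sorted(semantics), 2):
            self.cooccur[pair] += 1

# Three representations (re-interpretations) of one composition.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
offset_square = [(x + 5, y) for x, y in square]
triangle = [(0, 0), (2, 0), (1, 1.5)]   # reflection-symmetric only

situator = Situator()
for representation in (square, offset_square, triangle):
    obs = recognise(representation)
    situator.observe(obs)
    print(sorted(obs))
print(dict(situator.cooccur))
```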

_id 83cb
authors Telea, Alexandru C.
year 2000
title Visualisation and simulation with object-oriented networks
source Eindhoven University of Technology
summary Among the existing systems, visual programming environments address these issues best. However, producing interactive simulations and visualisations is still a difficult task. This defines the main research objective of this thesis: the development and implementation of concepts and techniques to combine visualisation, simulation, and application construction in an interactive, easy-to-use, generic environment. The aim is to produce an environment in which the above-mentioned activities can be learnt and carried out easily by a researcher. Working with such an environment should decrease the amount of time usually spent in redesigning existing software elements such as graphics interfaces, existing computational modules, and general infrastructure code. Writing new computational components or importing existing ones should be simple and automatic enough to make using the envisaged system an attractive option for a non-programmer expert. Besides this, all elements proven successful in interactive simulation and visualisation environments should be provided, such as visual programming, graphics user interfaces, direct manipulation, and so on. Finally, a large palette of existing scientific computation, data processing, and visualisation components should be integrated in the proposed system. On one hand, this should prove our claims of openness and easy code integration. On the other hand, this should provide the concrete set of tools needed for building a range of scientific applications and visualisations. This thesis is structured as follows. Chapter 2 defines the context of our work. The scientific research environment is presented and partitioned into the three roles of end user, application designer, and component developer. The interactions between these roles and their specific requirements are described and lead to a more precise formulation of our problem statement. Chapter 3 presents the most widely used architectures for simulation and visualisation systems: the monolithic system, the application library, and the framework. The advantages and disadvantages of these architectural models are then discussed in relation to our problem statement requirements. The main conclusion drawn is that no single existing architectural model suffices, and that what is needed is a combination of the features present in all three models. Chapter 4 introduces the new architectural model we propose, based on the combination of object-orientation in the form of the C++ language and dataflow modelling in the new MC++ language (a minimal sketch of such a dataflow network of object components follows this record). Chapter 5 presents VISSION, an interactive simulation and visualisation environment constructed on the introduced new architectural model, and shows how the usual tasks of application construction, steering, and visualisation are addressed. In chapter 6, the implementation of VISSION’s architectural model is described in terms of its component parts. Chapter 7 presents the applications of VISSION to numerical simulation, while chapter 8 focuses on its visualisation and graphics applications. Finally, chapter 9 concludes the thesis and outlines possible directions for future research.
keywords Computer Visualisation
series thesis:PhD
email
last changed 2003/02/12 22:37
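
Editorial note: the thesis abstract above combines object orientation with dataflow modelling. The sketch below is a minimal, hypothetical dataflow network of object components in Python; the class names and the push-style update rule are assumptions for illustration and do not reflect VISSION or the MC++ language.

```python
# Minimal sketch of a dataflow network of object-oriented components: nodes hold
# input ports, and values pushed into a node propagate to its downstream listeners.
class Node:
    def __init__(self, name):
        self.name = name
        self.inputs = {}          # port name -> latest value
        self.listeners = []       # (downstream node, downstream port)

    def connect(self, downstream, port):
        self.listeners.append((downstream, port))

    def emit(self, value):
        for node, port in self.listeners:
            node.receive(port, value)

    def receive(self, port, value):
        self.inputs[port] = value
        self.update()

    def update(self):             # overridden by concrete components
        pass

class Source(Node):
    def set(self, value):
        self.emit(value)

class Scale(Node):
    """A tiny 'simulation' component: multiplies its input by a factor."""
    def __init__(self, name, factor):
        super().__init__(name)
        self.factor = factor

    def update(self):
        if "in" in self.inputs:
            self.emit(self.inputs["in"] * self.factor)

class Printer(Node):
    """A tiny 'visualisation' component: prints whatever reaches it."""
    def update(self):
        print(f"{self.name}: {self.inputs.get('in')}")

# Wire up source -> scale -> printer and push a value through the network.
src, scale, view = Source("src"), Scale("scale", 2.5), Printer("view")
src.connect(scale, "in")
scale.connect(view, "in")
src.set(4.0)          # prints "view: 10.0"
```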

_id 10ba
authors Tournay, Bruno
year 1999
title The Software Beats the Hardware
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 74-79
doi https://doi.org/10.52842/conf.ecaade.1999.074
summary The paper is based on ongoing reflections concerning the importance of information technology in architecture. Such reflections are necessary to develop research concerning the use of information technology in architectural design, so as to shift the focus from purely technological development to an actual field of research. The result of these reflections to date suggests that research into the significance of information technology in architecture must go via sociological research on the subject, since information technology has become a social factor. The central element in such research will be to identify and specify how the virtual world which is developing can be articulated in relation to the physical world. One of the ways of doing this is to use metaphors.
keywords 3D City modeling
series eCAADe
email
last changed 2022/06/07 07:57

_id avocaad_2001_17
id avocaad_2001_17
authors Ying-Hsiu Huang, Yu-Tung Liu, Cheng-Yuan Lin, Yi-Ting Cheng, Yu-Chen Chiu
year 2001
title The comparison of animation, virtual reality, and scenario scripting in design process
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary Design media are fundamental tools, which can incubate concrete ideas from ambiguous concepts. Having evolved from freehand sketches and physical models to computerized drafting, modeling (Dave, 2000), animations (Woo, et al., 1999), and virtual reality (Chiu, 1999; Klercker, 1999; Emdanat, 1999), different media are used to communicate with designers or users at different conceptual levels during the design process. Extensively employed in the design process, physical models help designers in managing forms and spaces more precisely and more freely (Millon, 1994; Liu, 1996). Computerized drafting, models, animations, and VR have gradually replaced conventional media, freehand sketches and physical models. Diversely used in the design process, computerized media allow designers to handle more divergent levels of space than conventional media do. The rapid emergence of computers in the design process has ushered in efforts to examine the visual impact of these media in particular (Rahman, 1992); Rahman also emphasized the use of computerized media: modeling and animations. Moreover, based on Rahman's study, Bai and Liu (1998) applied a new design medium, virtual reality, to the design process. In doing so, they proposed an evaluation process to examine the visual impact of this new medium in the design process. That same investigation pointed towards the facilitative role of the computerized media in enhancing topical comprehension, concept realization, and development of ideas. Computer technology fosters the growth of emerging media. A new computerized medium, scenario scripting (Sasada, 2000; Jozen, 2000), markedly enhances computer animations and, in doing so, positively impacts design processes. For the three latest media, i.e., computerized animation, virtual reality, and scenario scripting, the following questions arise: What role does visual impact play in the different design phases of these media? Moreover, what is the origin of such an impact? Furthermore, what are the similarities and variances of computing techniques, principles of interaction, and practical applications among these computerized media? This study investigates the similarities and variances among computing techniques, interacting principles, and their applications in the above three media. Different computerized media in the design process are also adopted to explore related phenomena by using these three media in two projects. First, a renewal planning project of the old district of Hsinchu City is inspected, in which animations and scenario scripting are used. Second, the renewal project is compared with a progressive design project for the Hsinchu Digital Museum, as designed by Peter Eisenman. Finally, similarities and variances among these computerized media are discussed. This study also examines the visual impact of these three computerized media in the design process. In computerized animation, although other designers can realize the spatial concept in a design, users cannot fully comprehend the concept. On the other hand, other media such as virtual reality and scenario scripting enable users to more directly comprehend what the designer presents. Future studies should more closely examine how these three media impact the design process. This study not only provides further insight into the fundamental characteristics of the three computerized media discussed herein, but also enables designers to adopt different media in the design stages. Both designers and users can thus more fully understand design-related concepts.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id avocaad_2001_09
id avocaad_2001_09
authors Yu-Tung Liu, Yung-Ching Yeh, Sheng-Cheng Shih
year 2001
title Digital Architecture in CAD studio and Internet-based competition
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary Architectural design has been changing because of the vast and creative use of computers in different ways. From the viewpoint of designing itself, the computer has been used as a drawing tool in the later phase of design (Mitchell 1977; Coyne et al. 1990), as a presentation and simulation tool in the middle phase (Liu and Bai 2000), and even as a critical medium which triggers creative thinking in the very early phase (Maher et al. 2000; Liu 1999; Won 1999). All the various roles that the computer can play have been adopted in a number of professional design corporations and so-called computer-aided design (CAD) studios in schools worldwide (Kvan 1997, 2000; Cheng 1998). The processes and outcomes of design have been continuously developing to capture the movement of the computer age. However, from the viewpoint of social-cultural theories of architecture, the evolvement of design cannot be achieved solely by designers or design processes. Any new idea of design can be accepted socially, culturally and historically only under one condition: the design outcomes can be reviewed and appreciated by critics in the field at the time of their production (Csikszentmihalyi 1986, 1988; Schon and Wiggins 1992; Liu 2000). In other words, aspects of design production (by designers in different design processes) are as critical as those of design appreciation (by critics in different review processes) in the observation of the future trends of architecture. Nevertheless, in the field of architectural design with computers and the Internet, that is, so-called computer-aided design, computer-mediated design, or Internet-based design, most existing studies pay more attention to producing designs in design processes, as mentioned above. Relatively few studies focus on how critics act and how they interact with designers in the review processes. Therefore, this study intends to investigate some evolving phenomena of the interaction between design production and appreciation in the environment of computers and the Internet. This paper takes a CAD studio and an Internet-based competition as examples. The CAD studio includes 7 master's students and 2 critics, all from the same country. The Internet-based competition, held in the year 2000, includes 206 designers from 43 countries and 26 critics from 11 countries. 3 students and the 2 critics in the CAD studio are among the competition's participating designers and critics respectively. The methodological steps are as follows: 1. A qualitative analysis: observation and interviews of the 3 participants and 2 reviewers who joined both the CAD studio and the competition. The 4 analytical criteria are the kinds of presenting media, the kinds of supportive media (such as verbal and gesture/facial data), the stages of the review processes, and the interaction between the designer and critics. The behavioral data are acquired by recording the design presentations and dialogue over 3 months. 2. A quantitative analysis: statistical analysis of the detailed reviewing data in the CAD studio and the competition. The 4 analytical factors are the reviewing time, the number of reviews of the same project, the comparison between different projects, and grades/comments. 3. 
Both the qualitative and quantitative data are cross-analyzed and discussed, based on the theories of design thinking, design production/appreciation, and the appreciative system (Goodman 1978, 1984). The result of this study indicates that the interaction between design production and appreciation during the review processes can differ significantly. The review processes can be either linear or cyclic due to influences from the kinds of media, the environmental discrepancies between the studio and the Internet, as well as cognitive thinking/memory capacity. Design production and appreciation seem to be more linear in the CAD studio and more cyclic in the Internet environment. This distinction coincides with the complementary observations of designing as a linear process (Jones 1970; Simon 1981) or a cyclic movement (Schon and Wiggins 1992). Some phenomena during the two processes are also illustrated in detail in this paper. This study is merely a starting point for research in design production and appreciation in the computer and network age. The future direction of investigation is to establish a theoretical model for the interaction between design production and appreciation based on current findings. The model is expected to be constructed using revised protocol analysis and interviews. Another line of future research is to explore how creativity in design computing emerges from the processes of producing and appreciating.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 7da6
authors Campbell, Dace A.
year 2000
title Architectural construction documents on the web: VRML as a case study
source Automation in Construction 9 (1) (2000) pp. 129-138
summary The Virtual Reality Modeling Language (VRML) and the World Wide Web (WWW) offer new opportunities to communicate an architect's design intent throughout the design process. We have investigated the use of VRML in the production and communication of construction documents, the final phase of architectural building design. A prototype, experimental Web site was set up and used to disseminate design data as VRML models and HTML text to the design client, contractor, and fabricators. In this paper, we discuss the way our construction documents were developed in VRML, the issues we faced implementing it, and critical feedback from the users of the Web space/site. We analyze the usefulness of VRML as a communication tool for the design and construction industries. Finally, we discuss technical, social, and legal issues the AEC industry faces as it shifts to embrace widespread use of a "paperless" Web-based communications infrastructure for design documentation. (A minimal sketch of generating a small VRML fragment of this kind follows this record.)
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
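
Editorial note: the paper above is about publishing construction documents as VRML models on the Web. The sketch below only shows how a trivially small VRML 2.0 fragment for one building element might be generated and written to a file; the element, its dimensions, colour and file name are invented for illustration and are not the paper's actual document set.

```python
# Illustrative sketch: generate a minimal VRML 2.0 file for one building element.
def wall_panel_vrml(width, height, thickness, rgb=(0.8, 0.7, 0.6)):
    r, g, b = rgb
    return f"""#VRML V2.0 utf8
# Wall panel, dimensions in metres
Shape {{
  appearance Appearance {{
    material Material {{ diffuseColor {r} {g} {b} }}
  }}
  geometry Box {{ size {width} {height} {thickness} }}
}}
"""

if __name__ == "__main__":
    with open("wall_panel.wrl", "w") as f:
        f.write(wall_panel_vrml(4.0, 2.7, 0.2))
    print("wrote wall_panel.wrl")
```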

_id 8802
authors Burry, Mark, Dawson, Tony and Woodbury, Robert
year 1999
title Learning about Architecture with the Computer, and Learning about the Computer in Architecture
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 374-382
doi https://doi.org/10.52842/conf.ecaade.1999.374
summary Most students commencing their university studies in architecture must confront and master two new modes of thought. The first, widely known as reflection-in-action, is a continuous cycle of self-criticism and creation that produces both learning and improved work. The second, which we call here design making, is a process which considers building construction as an integral part of architectural designing. Beginning students in Australia tend to do neither very well; their largely analytic secondary education leaves the majority ill-prepared for these new forms of learning and working. Computers have both complicated this situation and offered opportunities to improve it. An increasing number of entering students have significant computing skill, yet university architecture programs do little to develop such skill into sound and extensible knowledge. Computing offers new ways to engage both reflection-in-action and design making. The collaboration between two Schools in Australia described in detail here pools computer-based learning resources to provide a wider scope for the education in each institution, which we capture in the phrase: Learn to use computers in architecture (not use computers to learn architecture). The two shared learning resources are Form Making Games (Adelaide University), aimed at reflection-in-action, and The Construction Primer (Deakin University and Victoria University of Wellington), aimed at design making. Through contributing to and customising the resources themselves, students learn how designing and computing relate. This paper outlines the collaborative project in detail and locates the initiative at a time when the computer seems to have become less self-consciously assimilated within the wider architectural program.
keywords Reflection-In-Action, Design Making, Customising Computers
series eCAADe
email
last changed 2022/06/07 07:54

_id 85ab
authors Corrao, Rossella and Fulantelli, Giovanni
year 1999
title Architects in the Information Society: The Role of New Technologies
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 665-671
doi https://doi.org/10.52842/conf.ecaade.1999.665
summary New Technologies (NTs) offer us tools with which to deal with the new challenges that a changing society or workplace presents. In particular, new design strategies and approaches are required by the emerging Information Society, and NTs offer designers effective solutions at different stages of their professional life and in different working situations. In this paper some meaningful scenarios for the use of NTs in Architecture and Urban Design are introduced; the scenarios have been selected in order to understand how the role of architects in the Information Society is changing and what new opportunities NTs offer them. It is underlined how telematic networks play an essential role in the activation of virtual studios able to compete in an increasingly global market; examples are given of the use of the Web to support activities related to Urban Planning and Management; and it is shown how the Internet may be used to access strategic resources for education and training and to sustain lifelong learning. These considerations derive from a Web-Based Instruction system we have developed to support university students in the definition of projects concerning either single buildings or whole parts of a city. The system can easily be adopted in the other scenarios introduced.
keywords Architecture, Urban Planning, New Technologies, World Wide Web, Education
series eCAADe
email
last changed 2022/06/07 07:56

_id ec4d
authors Croser, J.
year 2001
title GDL Object
source The Architect’s Journal, 14 June 2001, pp. 49-50
summary It is all too common for technology companies to seek a new route to solving the same problem, but for the most part the solutions address the effect and not the cause. The good old-fashioned pencil is the perfect example, where inventors have sought to design out the effect of the inherent brittleness of lead. Traditionally, different methods of sharpening were suggested; more recently the propelling pencil has reigned king, the lead being supported by the dispensing sleeve, thus reducing the likelihood of breakage. Developers convinced by the Single Building Model approach to design development have each embarked on a difficult journey to create an easy-to-use, feature-packed application. Unfortunately it seems that the two are not mutually compatible, if we are to believe what we see emanating from technology giant Autodesk in the guise of Architectural Desktop 3. The effect of their development is a feature-rich environment, but the cost, and in this case the cause, is a tool which is far from easy to use. However, this is only a small part of a much bigger problem: interoperability. When one designer develops a model with one tool, the information is typically locked in that environment. Of course the geometry can be distributed and shared amongst the team for use with their tools, but the properties, or as often misquoted the "intelligence", are lost along the way. The effect is the technological version of rubble; the cause is the low quality of data translation available to us. Fortunately there is one company making rapid advances on the whole issue of collaboration and data sharing. An old timer (Graphisoft - famous for ArchiCAD) has just donned a smart new suit, set up a new company called GDL Technology and stepped into the ring to do battle, with a difference. The difference is that GDL Technology does not rely on conquering the competition; quite the opposite, its success relies upon the continued success of all the major CAD platforms, including AutoCAD, MicroStation and ArchiCAD (of course). GDL Technology has created a standard data format for manufacturers called GDL Objects. Product manufacturers such as Velux are now able to develop product libraries using GDL Objects, which can then be placed in a CAD model or drawing using almost any CAD tool. The product libraries can be stored on the web or on CD, giving easy download access to any building industry professional. These objects are created using scripts, which makes them tiny to download from the web. Each object contains three important types of information: parametric, scale-dependent 2D plan symbols; full 3D geometric data; and manufacturer information such as material, colour and price. Whilst manufacturers are racing to GDL Technology's door to sign up, developers and clients are quick to see the benefit too. Porsche are using GDL Objects to manage their brand identity as they build over 300 new showrooms worldwide. Having defined the building style and interior, Porsche, in conjunction with the product suppliers, have produced a CD-ROM with all of the selected building components such as cladding, doors, furniture and finishes. Designing and detailing the various schemes will therefore be as straightforward as using Lego. To ease the process of accessing, sizing and placing the product libraries, GDL Technology has developed a product called GDL Object Explorer, a free-standing application which can be placed on the CD with the product libraries.
Furthermore, whilst the Object Explorer gives access to the GDL Objects, it also enables the user to save the object in one of many file formats, including DWG, DGN, DXF, 3DS and even the IAI's IFC. However, if you are an AutoCAD user there is another tool designed especially for you: the Object Adapter, which works inside AutoCAD 14 and 2000. The Object Adapter dynamically converts all GDL Objects to AutoCAD blocks during placement, which means they can be controlled with standard AutoCAD commands. Furthermore, each object can be linked to an online document on the manufacturer's web site, which is ideal for more extensive product information. Other tools developed to make the most of the objects are the Web Plug-in and SalesCAD. The Plug-in enables objects to be dynamically modified and displayed on web pages, and SalesCAD is an easy-to-learn design tool for sales teams to explore, develop and cost designs on a notebook PC whilst sitting in the architect's office. All sales quotations are extracted directly from the model and presented in HTML format as a mixture of product images, product descriptions and tables identifying quantities and costs. With full lifecycle information stored in each GDL Object, it is no surprise that GDL Technology sees its objects as the future for building design. Indeed, they are not alone: the IAI has already said it will explore the possibility of associating GDL Objects with its own data-sharing format, the IFC. So, down to the dirty stuff: money, and how much it costs. Well, at the risk of sounding like a market trader in Petticoat Lane, "To you guv? Nuffin." That's right: as a user of this technology it will cost you nothing. Not a penny; it is gratis, free. The product manufacturer pays for the licence to host their libraries on the web or on CD, and even then the costs are small, from as little as 50p for each CD filled with objects. GDL Technology has come up trumps with its GDL Objects. It has developed a new way to solve old problems. If CAD were a pencil, then GDL Objects would be ballistic lead, which would never break or lose its point, a much better alternative to the strategy used by many of its competitors, who seek to avoid breaking the pencil by persuading the artist not to press down so hard. If you are still reading and have not already dropped the magazine and run off to find out whether your favorite product supplier has already signed up, then I suggest you check out the following web sites: www.gdlcentral.com and www.gdltechnology.com. If you do not see them there, pick up the phone and ask them why. (A short, hypothetical code sketch of the parametric-object idea described here follows this entry.)
series journal paper
email
last changed 2003/04/23 15:14
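The review above describes GDL Objects as small scripts that bundle a scale-dependent 2D plan symbol, full 3D geometry and manufacturer data, which converters can then map to formats such as DWG or IFC. The Python class below is not GDL and not GDL Technology's API; it is only a hypothetical sketch of that idea: a parametric window object that carries all three kinds of information and exposes them as a neutral record.

```python
# Hypothetical illustration (not real GDL): a parametric product object that,
# like the GDL Objects described above, bundles a scale-dependent 2D symbol,
# 3D geometry and manufacturer data in one small, scriptable definition.
from dataclasses import dataclass

@dataclass
class ParametricWindow:
    width: float          # metres
    height: float         # metres
    manufacturer: str
    price: float          # list price; currency is an assumption

    def plan_symbol(self, scale: float) -> list:
        """Return a 2D plan symbol as line segments; simplified at small scales."""
        w = self.width
        if scale < 1 / 100:                      # coarse symbol below 1:100
            return [((0, 0), (w, 0))]
        return [((0, 0), (w, 0)), ((0, 0.05), (w, 0.05))]

    def solid(self) -> dict:
        """Return a box-shaped 3D placeholder for the window."""
        return {"type": "box", "size": (self.width, self.height, 0.1)}

    def record(self) -> dict:
        """Neutral data record, the kind a converter could map to CAD blocks or IFC."""
        return {"geometry": self.solid(),
                "manufacturer": self.manufacturer,
                "price": self.price}

if __name__ == "__main__":
    win = ParametricWindow(width=1.2, height=1.4, manufacturer="ExampleCo", price=250.0)
    print(win.record())
```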

_id d244
authors De Mesa, A., Quilez, J. and Regot, J.
year 2000
title Análisis Geométrico de Formas Arquitectónicas Complejas (Geometrical Analysis of Complex Architectural Forms)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 295-297
summary Current computer graphics systems allow high-level shape problems to be defined with great freedom. In free-form surface modeling, this is a good reason to develop an example showing the best way to create, modify and control complex free-form shapes in three-dimensional architectural virtual modeling. The parameters of Bezier curves are not simple, but the use of spline curves allows friendly management of free-form curves with a high level of designer performance. Unfortunately, the standard computer graphics tools for controlling these entities vary considerably and normally present an unclear and confusing interface to general users without extensive knowledge of mathematics and geometry. With the help of an example, this paper presents the use of computer graphics to build models of architectural buildings with complex shapes containing free-form surfaces. At the same time, it evaluates how standard CAD software handles this problem. (A minimal sketch of Bezier-curve evaluation follows this entry.)
series SIGRADI
last changed 2016/03/10 09:50
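The abstract above contrasts Bezier and spline curves as the basis for controlling free-form shapes. As a minimal, generic sketch (not code from the paper), the following Python function evaluates a point on a Bezier curve with the de Casteljau algorithm, the standard repeated-interpolation construction behind such curves; the control points in the example are arbitrary.

```python
# Minimal sketch of the de Casteljau algorithm: evaluates a Bezier curve at
# parameter t by repeated linear interpolation of its control points.
# Control points below are arbitrary example values, not data from the paper.

def de_casteljau(control_points, t):
    """Return the point on the Bezier curve defined by control_points at t in [0, 1]."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts[:-1], pts[1:])]
    return pts[0]

if __name__ == "__main__":
    cubic = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
    curve = [de_casteljau(cubic, i / 20) for i in range(21)]
    print(curve[0], curve[10], curve[-1])   # start, midpoint, end of the sampled curve
```

Sampling the curve at many parameter values, as in the example, is one simple way modeling tools can turn such control polygons into the polylines they display.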

_id 349e
authors Durmisevic, Sanja
year 2002
title Perception Aspects in Underground Spaces using Intelligent Knowledge Modeling
source Delft University of Technology
summary Intensification, combination and transformation are the main strategies for the future spatial development of the Netherlands, as stated in the Fifth Bill regarding Spatial Planning. These strategies indicate that in the future space should be utilized in a more compact and more efficient way, requiring at the same time a re-evaluation of the existing built environment and ways to improve it. In this context the concept of multiple space usage is accentuated, focusing on intensive 4-dimensional spatial exploration. Underground space is acknowledged as an important part of multiple space usage. In the document 'Spatial Exploration 2000', underground space is recognized by policy makers as an important new 'frontier' that could make a significant contribution to future spatial requirements. In a relatively short period, underground space has become an important research area. Although among specialists there is appreciation of what underground space could provide for densely populated urban areas, there are still reservations among the public, mostly related to the poor quality of these spaces. Many realized underground projects, notably subways, have resulted in poor user satisfaction. Today there is still a significant knowledge gap related to the perception of underground space, as well as a lack of detailed documentation on actual applications of the theories, the research results and the applied techniques. This is the case in different areas of architectural design, but it is perhaps most evident for underground spaces because of their infancy in general architectural practice. In order to create better designs, diverse aspects, which are very often of a qualitative nature, should be considered with the final goal of improving the quality and image of underground space. In the architectural design process, one has to establish certain relations among design information in advance in order to back the design with a sound rationale. The main difficulty is that such relationships may not be determinable, for various reasons. One is the vagueness of architectural design data due to their linguistic qualities; another is vaguely defined design qualities. In this work the problem was not only the initial fuzziness of the information but also the determination of relevancy among all the pieces of information given. At present, determining the existence of such relevancy is more or less a matter of subjective architectural judgement rather than systematic, non-subjective decision-making based on an existing design. This implies that tools for dealing with fuzzy information are essential for enhanced design decisions. Efficient methods and tools for dealing with qualitative, soft data are scarce, especially in the architectural domain. Traditionally well-established methods, such as statistical analysis, have been used mainly for the analysis of data of a similar type to those in the present research. These methods fall mainly into the category of pattern recognition, and statistical regression is the most common approach. One essential drawback of this method is its inability to deal efficiently with non-linear data: with statistical analysis, linear relationships are established by regression, while non-linearity is mostly evaded.
Concerning multi-dimensional data sets, it is evident that assuming linear relationships among all pieces of information would be a gross approximation for which there is no basis. A starting point in this research was that there may be both linearity and non-linearity present in the data, and therefore appropriate methods should be used to deal with that non-linearity. Accordingly, commensurate methods were adopted for knowledge modeling. In that respect, soft computing techniques proved to match the quality of the multi-dimensional data set subject to analysis, which is deemed to be 'soft'. There is yet another reason why soft computing techniques were applied, related to the automation of knowledge modeling. Traditional models such as Decision Support Systems and Expert Systems have drawbacks here: their development is a time-consuming process, and the programming part, in which various deliberations are required to form a consistent if-then rule knowledge base, is also time-consuming. For these reasons, methods and tools from other disciplines that also deal with soft data should be integrated into architectural design. With fuzzy logic, the imprecision of data can be dealt with in a way similar to how humans do it. Artificial neural networks are deemed to some extent to model the human brain and to simulate its functions in the form of parallel information processing; they are considered important components of Artificial Intelligence (AI). With neural networks it is possible to learn from examples, or more precisely from input-output data samples. The combination of the neural and fuzzy approaches proved powerful for dealing with qualitative data, and the problem of automated knowledge modeling is efficiently solved by the employment of machine learning techniques. Here, the expertise of prof. dr. Ozer Ciftcioglu in the field of soft computing was crucial for tool development. By combining knowledge from two different disciplines, a unique tool could be developed that enables intelligent modeling of the soft data needed to support the building design process. In this respect, this research is a starting point in that direction: it is multidisciplinary and on the cutting edge between the field of Architecture and the field of Artificial Intelligence. From the architectural viewpoint, the perception of space is considered through the relationship between a human being and the built environment; techniques from the field of Artificial Intelligence are employed to model that relationship. Such a combination of two disciplines makes it possible to extend our knowledge boundaries in the field of architecture and to improve design quality. With additional techniques, meta-knowledge, or in other words "knowledge about knowledge", can be created. Such techniques include sensitivity analysis, which determines how strongly the output of a model (comfort and public safety) depends on the information fed into the model (the input). Another technique is functional relationship modeling between aspects, that is, deriving the dependency of a design parameter as a function of users' perceptions; with this technique it is possible to determine functional relationships between dependent and independent variables.
This thesis contributes to a better understanding of users' perception of underground space, viewed through the prism of public safety and comfort, achieved by means of intelligent knowledge modeling. In this respect, the thesis demonstrates an application of ICT (Information and Communication Technology) as a partner in the building design process, employing advanced modeling techniques. The method explained throughout this work is very generic and can be applied not only to different areas of architectural design but also to other domains that involve qualitative data. (A schematic sketch of the sensitivity-analysis idea follows this entry.)
keywords Underground Space; Perception; Soft Computing
series thesis:PhD
email
last changed 2003/02/12 22:37
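The thesis summarized above combines fuzzy logic and neural networks and uses sensitivity analysis to measure how strongly a model output such as comfort or public safety depends on each design input. The snippet below is only a schematic illustration of that last idea, not the thesis's actual model: it estimates input sensitivities of an invented toy "comfort" model by central finite differences.

```python
# Schematic illustration only (not the thesis model): estimate how sensitive a
# model output is to each input by central finite differences. The toy "comfort"
# model and its inputs (lighting level, ceiling height, noise) are invented examples.
import math

def comfort_model(lighting, ceiling_height, noise):
    """Toy nonlinear mapping from design parameters to a comfort score in (0, 1)."""
    z = 1.5 * lighting + 0.8 * ceiling_height - 2.0 * noise
    return 1.0 / (1.0 + math.exp(-z))

def sensitivities(model, inputs, eps=1e-4):
    """Central-difference estimate of d(output)/d(input_i) at a given design point."""
    grads = []
    for i in range(len(inputs)):
        lo, hi = list(inputs), list(inputs)
        lo[i] -= eps
        hi[i] += eps
        grads.append((model(*hi) - model(*lo)) / (2 * eps))
    return grads

if __name__ == "__main__":
    design_point = (0.6, 0.4, 0.3)  # normalised lighting, ceiling height, noise
    print(sensitivities(comfort_model, design_point))
```

In the thesis the model itself is learned from survey data with neuro-fuzzy techniques; here a hand-written function stands in for it purely to show what a sensitivity figure per input looks like.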

_id aabc
authors Ha, Q.P., Nguyen, Q.H., Rye, D.C. and Durrant-Whyte, H.F.
year 2000
title Impedance control of a hydraulically actuated robotic excavator
source Automation in Construction 9 (5-6) (2000) pp. 421-435
summary In robotic excavation, hybrid position/force control has been proposed for bucket digging trajectory following. In hybrid position/force control, the control mode must switch between position control and force control depending on whether the bucket is in free space or in contact with the soil. Alternatively, impedance control can be applied so that one control mode is employed in both free and constrained motion. This paper presents a robust sliding controller that implements impedance control for a backhoe excavator. The control law consists of three components: an equivalent control, a switching control and a tuning control. Given an excavation task in world space, inverse kinematic and dynamic models are used to convert the task into a desired digging trajectory in joint space. The proposed controller is applied to provide good tracking performance with attenuated vibration at bucket-soil contact points. From the control signals and the joint angles of the excavator, the piston position and ram force of each hydraulic cylinder for the axis control of the boom, arm and bucket can be determined. The problem is then to find the control voltage applied to each servovalve to achieve force and position tracking of each electrohydraulic system for the axis motion of the boom, arm and bucket. With observer-based compensation for the disturbance force, including hydraulic friction, tracking of the piston ram force and position is guaranteed using robust sliding control. High performance and strong robustness are obtained, as demonstrated by simulation and by experiments performed on a hydraulically actuated robotic excavator. The results suggest that the proposed control technique can provide robust performance when employed in autonomous excavation with soil contact considerations. (A generic impedance-control sketch follows this entry.)
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
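The paper above uses impedance control so that a single control law covers both free motion and bucket-soil contact. The following Python loop is a generic, hedged sketch rather than the authors' sliding-mode controller: it simulates a one-degree-of-freedom tool whose commanded acceleration enforces a target mass-spring-damper relation between the tracking error and the external contact force; all numerical values are arbitrary assumptions.

```python
# Generic 1-DOF impedance-control sketch (not the paper's sliding controller):
# the tool is driven so that M*e'' + B*e' + K*e = F_ext, where e = x - x_ref,
# i.e. it behaves like a mass-spring-damper about the reference trajectory.
# Assumes the low-level actuation realises the commanded acceleration exactly;
# all numerical values are arbitrary illustration choices.

M, B, K = 50.0, 400.0, 2000.0      # target inertia, damping, stiffness
dt, steps = 0.001, 3000

def reference(t):
    return 0.1 * t                  # slowly advancing digging trajectory [m]

def contact_force(x):
    soil_face = 0.15                # soil starts 0.15 m along the path
    return -8000.0 * (x - soil_face) if x > soil_face else 0.0

x, v = 0.0, 0.0
for i in range(steps):
    t = i * dt
    e, e_dot = x - reference(t), v - 0.1   # reference speed is 0.1 m/s
    f_ext = contact_force(x)
    # impedance law: solve M*e'' + B*e' + K*e = f_ext for the commanded e''
    e_ddot = (f_ext - B * e_dot - K * e) / M
    v += e_ddot * dt                # reference acceleration is zero (constant speed)
    x += v * dt

print(f"final position {x:.3f} m, reference {reference(steps * dt):.3f} m")
```

In free space the contact force is zero and the tool converges to the reference; once it meets the simulated soil it yields according to the chosen stiffness, which is the behaviour that lets one control mode serve both regimes.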
