CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 748

_id diss_walker
id diss_walker
authors Walker, Bruce N.
year 2000
title Magnitude Estimation of Conceptual Data Dimensions for Use in Sonification
source Rice University
summary Most data exploration tools are exclusively visual, failing to exploit the advantages of the human auditory system, and excluding students and researchers with visual disabilities. Sonification uses non-speech audio to create auditory graphs, which may address some limitations of visual graphs. However, almost no research has addressed how to create optimal sonifications. Three key research questions are: (1) What is the best sound parameter to use to represent a given data type? (2) Should an increase in the sound dimension (e.g., rising frequency) represent an increase or a decrease in the data dimension? (3) How much change in the sound dimension will represent a given change in the data dimension? Experiment 1 simply asked listeners which of two sounds represented something that was hotter, faster, etc. However, participants seemed not to make cognitive assessments of the sounds. I therefore proposed magnitude estimation (ME) as an alternative, less transparent, paradigm. Experiment 2 used ME with visual stimuli (lines and filled circles), replicating previous findings for perceptual judgments (length of lines, size of circles). However, judgments of conceptual data dimensions (i.e., the temperature, pressure, or velocity a given stimulus would represent) yielded slopes different from the perceptual judgments, indicating that the type of data being represented influences value estimation. Experiment 3 found similar results with auditory stimuli differing in frequency or tempo. Estimations of what temperature, pressure, velocity, size, or number of dollars a sound represented differed, indicating that both visual and auditory displays should be scaled according to the type of data being displayed. Experiment 4 presented auditory graphs and asked which of two data descriptions the sounds represented. Data sets based on the equations determined in Experiment 3 were preferred, providing validation of those slope values. Results also supported the use of the unanimity of mapping polarities as a measure of a mapping's effectiveness. Replication with different users and sounds is required to assess the reliability of the slopes. However, ME provides an excellent way to obtain a function relating conceptual data dimensions to display dimensions, which can be used to create more effective, appropriately scaled sonifications.
series thesis:PhD
email
more http://sonify.psych.gatech.edu/~walkerb/research/phd/
last changed 2003/11/28 07:37
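Walker's approach rests on fitting a scaling function (a Stevens-type power law, estimated as a slope in log-log space) that relates a conceptual data dimension such as temperature to a display dimension such as frequency. The Python sketch below illustrates only that general mechanics; the magnitude-estimation data, the fitted exponent and the frequency range are invented for illustration and are not values from the thesis.

# Minimal sketch (values invented, not from the thesis): fit a magnitude-estimation
# slope in log-log coordinates, then use it in the scaling function that maps data
# values onto frequency for an auditory graph.
import math

def fit_loglog_slope(stimuli, estimates):
    """Least-squares slope of log(estimate) versus log(stimulus)."""
    xs = [math.log(s) for s in stimuli]
    ys = [math.log(e) for e in estimates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def data_to_frequency(value, v_min, v_max, f_min=220.0, f_max=880.0, exponent=1.0):
    """Map a data value onto a frequency range, applying the given exponent."""
    t = (value - v_min) / (v_max - v_min)        # normalise the data value to 0..1
    return f_min + (f_max - f_min) * (t ** exponent)

# Hypothetical judgments of "temperature" for four presented frequencies.
frequencies = [200.0, 400.0, 800.0, 1600.0]
judgments   = [10.0, 23.0, 47.0, 95.0]
slope = fit_loglog_slope(frequencies, judgments)
print("fitted exponent:", round(slope, 2))
# Invert the fitted exponent: it was fitted as judged value per frequency,
# while here we go from a data value back to a frequency.
print("35 degrees ->", round(data_to_frequency(35.0, 0.0, 100.0, exponent=1.0 / slope), 1), "Hz")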

_id 635f
authors Lee, Alpha W.K. and Iki, Kazuhisa
year 2000
title Use of DHTML for Interactive Assessment of Common Value for Townscape Conceptualization and Realization. Colour Assessment, Case Study of large-Scale Resort Facility in Aso Region, Kumamoto Prefecture, Japan
doi https://doi.org/10.52842/conf.caadria.2000.089
source CAADRIA 2000 [Proceedings of the Fifth Conference on Computer Aided Architectural Design Research in Asia / ISBN 981-04-2491-4] Singapore 18-19 May 2000, pp. 89-96
summary With the public's heightened consciousness of townscape, a new form of Color Planning incorporating Citizen Participation is necessary. This paper proposes the use of Dynamic Hypertext Mark-up Language (DHTML) in a Web-oriented Interactive Townscape Assessment System. This system consists of two parts: the first includes tools for the Analytic Hierarchy Process (AHP), Magnitude Estimation, Semantic Differential (SD) and Color Semantic Differential (Color SD) methods, and the second includes tools for an Interactive Color Planning System (ICPS). Interactive assessment is made possible by the inclusion of JavaScript and Cascading Style Sheets (CSS). Efficiency is improved by client-side operations, data collection using the Common Gateway Interface (CGI) and presentation using Tabular Data Control (TDC). A case study of a large-scale resort facility in the Aso Region, Kumamoto Prefecture, Japan is undertaken. The results show the efficiency of the system.
series CAADRIA
email
last changed 2022/06/07 07:52
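Among the assessment tools named in the abstract, the Analytic Hierarchy Process derives priority weights from a pairwise comparison matrix, commonly approximated by the row geometric means. The sketch below shows only that generic calculation; it is not the paper's DHTML/JavaScript implementation, and the comparison values are invented.

# Generic AHP sketch (not the paper's web tool): priority weights for, say,
# candidate colour schemes, from a pairwise comparison matrix using the
# geometric-mean approximation. The comparison values below are invented.
import math

def ahp_weights(matrix):
    """Row geometric means of a pairwise comparison matrix, normalised to sum to 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Three alternatives compared pairwise (1 = equally preferred, 3/5 = moderately/strongly preferred).
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
print([round(w, 3) for w in ahp_weights(comparisons)])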

_id 53c6
authors Mardaljevic, John
year 2000
title Daylight Simulation: Validation, Sky Models and Daylight Coefficients
source De Montfort University, Leicester, UK
summary The application of lighting simulation techniques for daylight illuminance modelling in architectural spaces is described in this thesis. The prediction tool used for all the work described here is the Radiance lighting simulation system. An overview of the features and capabilities of the Radiance system is presented. Daylight simulation using the Radiance system is described in some detail. The relation between physical quantities and the lighting simulation parameters is made clear in a series of progressively more complex examples. Effective use of the inter-reflection calculation is described. The illuminance calculation is validated under real sky conditions for a full-size office space. The simulation model used sky luminance patterns that were based directly on measurements. Internal illuminance predictions are compared with measurements for 754 skies that cover a wide range of naturally occurring conditions. The processing of the sky luminance measurements for the lighting simulation is described. The accuracy of the illuminance predictions is shown to be, in the main, comparable with the accuracy of the model input data. There were a number of predictions with low accuracy. Evidence is presented to show that these result from imprecision in the model specification - such as, uncertainty of the circumsolar luminance - rather than the prediction algorithms themselves. Procedures to visualise and reduce illuminance and lighting-related data are presented. The ability of sky models to reproduce measured sky luminance patterns for the purpose of predicting internal illuminance is investigated. Four sky models and two sky models blends are assessed. Predictions of internal illuminance using sky models/blends are compared against those using measured sky luminance patterns. The sky model blends and the Perez All-weather model are shown to perform comparably well. Illuminance predictions using measured skies however were invariably better than those using sky models/blends. Several formulations of the daylight coefficient approach for predicting time varying illuminances are presented. Radiance is used to predict the daylight coefficients from which internal illuminances are derived. The form and magnitude of the daylight coefficients are related to the scene geometry and the discretisation scheme. Internal illuminances are derived for four daylight coefficient formulations based on the measured luminance patterns for the 754 skies. For the best of the formulations, the accuracy of the daylight coefficient derived illuminances is shown to be comparable to that using the standard Radiance calculation method. The use of the daylight coefficient approach to both accurately and efficiently predict hourly internal daylight illuminance levels for an entire year is described. Daylight coefficients are invariant to building orientation for a fixed building configuration. This property of daylight coefficients is exploited to yield hourly internal illuminances for a full year as a function of building orientation. Visual data analysis techniques are used to display and process the massive number of derived illuminances.
series thesis:PhD
email
more http://www.iesd.dmu.ac.uk/~jm/thesis/
last changed 2003/02/12 22:37
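The daylight coefficient approach discussed in the thesis expresses internal illuminance at a sensor point as a weighted sum over sky patches, E = sum_i D_i * L_i * dOmega_i, where the coefficients D_i depend on the scene geometry but not on the sky. The sketch below shows only that general formulation with invented numbers; it is not Radiance's discretisation scheme or the author's code.

# Minimal sketch of the general daylight-coefficient idea: once the coefficients
# D_i are known for a sensor point, the internal illuminance for any sky is a
# weighted sum over sky-patch luminances. All numbers below are invented.

def illuminance(daylight_coeffs, sky_luminances, solid_angles):
    """Internal illuminance (lux) from per-patch daylight coefficients,
    sky luminances (cd/m^2) and patch solid angles (sr)."""
    return sum(d * L * w for d, L, w in zip(daylight_coeffs, sky_luminances, solid_angles))

coeffs = [0.012, 0.034, 0.021, 0.008]   # fixed for a given room, sensor and orientation
omegas = [0.15, 0.15, 0.15, 0.15]       # sr, one per sky patch

overcast = [4000, 6000, 6000, 4000]     # cd/m^2, per patch
clear    = [1500, 3000, 9000, 2500]

# The same coefficients are reused for every sky, which is what makes
# whole-year, hourly illuminance series cheap to derive.
print("overcast:", round(illuminance(coeffs, overcast, omegas), 1), "lux")
print("clear   :", round(illuminance(coeffs, clear, omegas), 1), "lux")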

_id ddssar0213
id ddssar0213
authors De Groot, Ellie and Paule, Bernard
year 2002
title DIAL-Europe: New Functionalities for an Integrated Daylighting Design Tool
source Timmermans, Harry (Ed.), Sixth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Avegoor, The Netherlands), 2002
summary The European project DIAL-Europe started in April 2000 and intends to enhance and to enlarge the capabilities of the LesoDIAL software. The aim of this “Swiss” tool was to give architects relevant information regarding the use of daylight, at the very first stage of the design process. DIAL-Europe focuses on European standards and climatic data. Further, a Heating & Cooling evaluation module and an Artificial Lighting module will be added. The objective of the Heating & Cooling module is to indicate the implications of the user’s design on heating and cooling energy and on thermal comfort. The objective of the Artificial Lighting module is to develop a tool that will give an estimation of illuminance values on the work plane and provide guidance on qualitative aspects and visual comfort as well as on switching control and integration with daylight, based on generic light sources and luminaires. Furthermore, the scope of the examples of simulated rooms will be increased in order to allow the user to compare their design with more similar cases. This paper will present the state of achievement and give an overview of the first version of the DIAL-Europe software, which will be available at the beginning of 2002.
series DDSS
last changed 2003/08/07 16:36

_id 60e7
authors Bailey, Rohan
year 2000
title The Intelligent Sketch: Developing a Conceptual Model for a Digital Design Assistant
doi https://doi.org/10.52842/conf.acadia.2000.137
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 137-145
summary The computer is a relatively new tool in the practice of Architecture. Since its introduction, there has been a desire amongst designers to use this new tool quite early in the design process. However, contrary to this desire, most Architects today use pen and paper in the very early stages of design to sketch. Architects solve problems by thinking visually. One of the most important tools that the Architect has at his disposal in the design process is the hand sketch. This iterative way of testing ideas and informing the design process with images fundamentally directs and aids the architect’s decision making. It has been said (Schön and Wiggins 1992) that sketching is about the reflective conversation designers have with images and ideas conveyed by the act of drawing. It is highly dependent on feedback. This “conversation” is an area worthy of investigation. Understanding this “conversation” is significant to understanding how we might apply the computer to enhance the designer’s ability to capture, manipulate and reflect on ideas during conceptual design. This paper discusses sketching and its relation to design thinking. It explores the conversations that designers engage in with the media they use. This is done through the explanation of a protocol analysis method. Protocol analysis used in the field of psychology, has been used extensively by Eastman et al (starting in the early 70s) as a method to elicit information about design thinking. In the pilot experiment described in this paper, two persons are used. One plays the role of the “hand” while the other is the “mind”- the two elements that are involved in the design “conversation”. This variation on classical protocol analysis sets out to discover how “intelligent” the hand should be to enhance design by reflection. The paper describes the procedures entailed in the pilot experiment and the resulting data. The paper then concludes by discussing future intentions for research and the far reaching possibilities for use of the computer in architectural studio teaching (as teaching aids) as well as a digital design assistant in conceptual design.
keywords CAAD, Sketching, Protocol Analysis, Design Thinking, Design Education
series ACADIA
last changed 2022/06/07 07:54

_id db00
authors Espina, Jane J.B.
year 2002
title Base de datos de la arquitectura moderna de la ciudad de Maracaibo 1920-1990 [Database of the Modern Architecture of the City of Maracaibo 1920-1990]
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 133-139
summary The purpose of this report is to present the results achieved in applying information and communication technologies in architecture, through the construction of a database that registers, in digital format, information on the modern architecture of the city of Maracaibo from 1920 to 1990, covering the buildings located in the 5 de Julio sector and the most outstanding designers and their work. The objective of the investigation was to build a database for recording information on Maracaibo's modern architecture in the period 1920-1990, by designing an automated tool to organize the data related to the buildings, parcels and designers of the city. The investigation was carried out in three methodological stages: a) gathering and classification of the information on the buildings and designers of modern architecture in order to build the databases; b) design of the databases for organizing the information; and c) design of the queries, information views, reports and the start-up menu. The data were processed with programs such as AutoCAD R14 and 2000, Microsoft Word, Microsoft PowerPoint, Microsoft Access 2000, CorelDRAW V9.0 and Corel PHOTO-PAINT V9.0. The investigation is related to the work developed since 1999 in the Graphic Calculation II course of the Department of Communication of the School of Architecture, Faculty of Architecture and Design of the University of Zulia (FADLUZ), using part of the information obtained from student work generated with CAD systems: three-dimensional representations of buildings of historical relevance in the modern architecture of Maracaibo, classified in the work "The Other City", from which different types of isometric views, perspectives, photorealistic renderings, plans and facades, among others, were generated. No previous antecedents of this kind are known in our environment, and this is the first time that digital graphics have been applied to the work of the architects documented in "The Other City, the genesis of the oil city of Maracaibo" (1994); hence the value of this research for both architecture and computer science. It should be noted that databases already exist in the fields of architecture and design, as do web sites with information on architects and architectural works (Montagu, 1999). In the Faculty of Architecture and Design of the University of Zulia, two works related to this theme were carried out, in 1995 and 1996: in the first, a system was designed to visualize, classify and analyse, from an architectural point of view, some historical buildings of Maracaibo; in the second, an automated documentary information system was produced on the built properties within the urban area of Maracaibo. Internationally, the first such database developed in Argentina stands out: the database of Modern and Contemporary Architecture "Datarq 2000", elaborated by Prof. Arturo Montagú of the University of Buenos Aires, whose general objective was the use of new technologies for data processing in architecture and design (Montagú, op. cit.).
With the database, the intention is to incorporate a complementary and alternative methodology for using the information habitually employed in the teaching of architecture. On concluding the investigation, the following was achieved: 1) analysis of projects of modern architecture, some of which form part of the historical patrimony of Maracaibo; 2) organized records of textual data (historical, formal, spatial and technical) and graphic data (plans, facades, perspectives, photographs, among others) on the Moments of the Architecture of Modernity in the city, general data and the most relevant characteristics of the buildings, general data on the designers and their most important works, as well as information on the parcels where the buildings are located; and 3) construction in digital format and development of photorealistic representations of architectural projects already built. It is relevant to highlight the importance of information and communication technologies in this investigation, since they make it possible to bring into digital media part of the information on the modern buildings that characterized the city of Maracaibo at the end of the 20th century and that in recent decades have undergone changes; some have disappeared, destroying part of the modern historical patrimony of the city, hence the need to register and systematize the graphic information on those buildings in digital format. The work also demonstrates the importance of the computer and of computer science in the representation and comprehension of the buildings of modern architecture, through texts, images, mapping, 3D models and information organized in databases, and its relevance from the pedagogical point of view, since it can be used in the teaching of computing and history at university level, supporting learning through new forms of knowledge transmission based on visual information, with students elaborating three-dimensional models or electronic scale models of modern architecture, and in the future serving as support material for virtual recovery of buildings that no longer exist or are almost destroyed. In synthesis, the investigation will make it possible to know and register the architecture of Maracaibo of this period, which arose under the parameters of modernity, and, through its organization and visualization in digital format, will allow students, professors and other interested parties to access it in a quicker and more efficient way, constituting a contribution to teaching in the areas of history and calculation. It can also be very useful for the development of future research projects related to this theme and to the restoration of buildings of the modernity in Maracaibo.
keywords database, digital format, modern architecture, model, mapping
series SIGRADI
email
last changed 2016/03/10 09:51
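The abstract describes records that link buildings to their parcels, their designers and their graphic documents. A minimal sketch of how such records might be structured is given below; the class and field names are illustrative assumptions of the editor, not the structure of the Access database described in the paper.

# Illustrative sketch only: names and fields are invented, not the authors' schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Designer:
    name: str
    notable_works: List[str] = field(default_factory=list)

@dataclass
class Parcel:
    sector: str            # e.g. "5 de Julio"
    address: str

@dataclass
class Building:
    name: str
    year: int              # within the 1920-1990 period covered by the database
    parcel: Parcel
    designers: List[Designer]
    drawings: List[str] = field(default_factory=list)   # paths to plans, facades, renderings

example = Building(
    name="(hypothetical building)",
    year=1955,
    parcel=Parcel(sector="5 de Julio", address="(hypothetical address)"),
    designers=[Designer(name="(hypothetical designer)")],
    drawings=["plans/example_plan.dwg"],
)
print(example.name, example.year, example.parcel.sector)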

_id 9554
authors Jagbeck, A.
year 2000
title Field test of a product-model-based construction planning tool
source CIDAC, Volume 2 Issue 2 May 2000, pp. 80-91
summary Over the past decade, more than a dozen papers describing proposals for product-model-based planning models have been published, but only a few of these proposals have been implemented in prototypes that have been tested in full-scale tests. PreFacto is a research-based software for production planning based on product model data, which has been developed and tested in close cooperation with a construction company. It is operational but still under development. Assessing the degree of functionality achieved so far is a natural part of a modern cyclical software development process. This paper describes a 6-month full-scale field trial of the PreFacto system undertaken by the site management in cooperation with the author. It was carried out as a parallel planning activity on a real ongoing project. The trial was documented and the system's usability for the construction planning process was analysed and evaluated using mainly qualitative methods. The evaluated planning activities include importing product model data and performing a range of planning activities. The evaluation addressed such usability aspects as system capacity, ease of use of the interface, and conceptual compliance with the use context and the various planning tasks. The test method was useful for checking the conceptual model from the user's point of view. At the same time, the field trial worked equally as a case study for developers, a study of a degree of reality that would not have been possible in a laboratory situation. Apart from the evaluation of the features of the software itself, there are some results of general interest. the main result was that all the advantages of the system derive from the connection between design and planning, i.e. the use of a product model as a basis for defining the result of production tasks. Allowing production managers to freely structure tasks and to apply resource recipes were the most relevant functions.
keywords Integration, Information, Construction, Planning, Field Trial, Product Model
series journal paper
last changed 2003/05/15 21:23

_id avocaad_2001_22
id avocaad_2001_22
authors Jos van Leeuwen, Joran Jessurun
year 2001
title XML for Flexibility and Extensibility of Design Information Models
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary The VR-DIS research programme aims at the development of a Virtual Reality – Design Information System. This is a design and decision support system for collaborative design that provides a VR interface for the interaction with both the geometric representation of a design and the non-geometric information concerning the design throughout the design process. The major part of the research programme focuses on early stages of design. The programme is carried out by a large number of researchers from a variety of disciplines in the domain of construction and architecture, including architectural design, building physics, structural design, construction management, etc. Management of design information is at the core of this design and decision support system. Much effort in the development of the system has been and still is dedicated to the underlying theory for information management and its implementation in an Application Programming Interface (API) that the various modules of the system use. The theory is based on a so-called Feature-based modelling approach and is described in the PhD thesis by [first author, 1999] and in [first author et al., 2000a]. This information modelling approach provides three major capabilities: (1) it allows for extensibility of conceptual schemas, which is used to enable a designer to define new typologies to model with; (2) it supports sharing of conceptual schemas, called type-libraries; and (3) it provides a high level of flexibility that offers the designer the opportunity to easily reuse design information and to model information constructs that are not foreseen in any existing typologies. The latter aspect involves the capability to expand information entities in a model with relationships and properties that are not typologically defined but applicable to a particular design situation only; this helps the designer to represent the actual design concepts more accurately. The functional design of the information modelling system is based on a three-layered framework. In the bottom layer, the actual design data is stored in so-called Feature Instances. The middle layer defines the typologies of these instances in so-called Feature Types. The top layer is called the meta-layer because it provides the class definitions for both the Types layer and the Instances layer; both Feature Types and Feature Instances are objects of the classes defined in the top layer. This top layer ensures that types can be defined on the fly and that instances can be created from these types, as well as expanded with non-typological properties and relationships while still conforming to the information structures laid out in the meta-layer. The VR-DIS system consists of a growing number of modules for different kinds of functionality in relation with the design task. These modules access the design information through the API that implements the meta-layer of the framework. This API has previously been implemented using an Object-Oriented Database (OODB), but this implementation had a number of disadvantages. The dependency on the OODB, a commercial software library, was considered the most problematic. Not only are licenses for the OODB library rather expensive; the fact that this library is not common technology that can easily be shared among a wide range of applications, including existing ones, also reduces its suitability for a system with the aforementioned specifications.
In addition, the OODB approach required a relatively large effort to implement the desired functionality. It lacked adequate support to generate unique identifications for worldwide information sources that were understandable for human interpretation. This strongly limited the capabilities of the system to share conceptual schemas. The approach that is currently being implemented for the core of the VR-DIS system is based on eXtensible Markup Language (XML). Rather than implementing the meta-layer of the framework into classes of Feature Types and Feature Instances, this level of meta-definitions is provided in a document type definition (DTD). The DTD is complemented with a set of rules that are implemented into a parser API, based on the Document Object Model (DOM). The advantages of the XML approach for the modelling framework are immediate. Type-libraries distributed through the Internet are now supported through the mechanisms of namespaces and XLink. The implementation of the API is no longer dependent on a particular database system. This provides much more flexibility in the implementation of the various modules of the VR-DIS system. Being based on the (supposed-to-become) standard of XML, the implementation is much more versatile in its future usage, specifically in a distributed, Internet-based environment. These immediate advantages of the XML approach opened the door to a wide range of applications that are and will be developed on top of the VR-DIS core. Examples of these are the VR-based 3D sketching module [VR-DIS ref., 2000]; the VR-based information-modelling tool that allows the management and manipulation of information models for design in a VR environment [VR-DIS ref., 2000]; and a design-knowledge capturing module that is now under development [first author et al., 2000a and 2000b]. The latter module aims to assist the designer in the recognition and utilisation of existing and new typologies in a design situation. The replacement of the OODB implementation of the API by the XML implementation enables these modules to use distributed Feature databases through the Internet, without many changes to their own code, and without the loss of the flexibility and extensibility of conceptual schemas that are implemented as part of the API. Research in the near future will result in Internet-based applications that support designers in the utilisation of distributed libraries of product-information, design-knowledge, case-bases, etc. The paper roughly follows the outline of the abstract, starting with an introduction to the VR-DIS project, its objectives, and the developed theory of the Feature-modelling framework that forms the core of it. It briefly discusses the necessity of schema evolution, flexibility and extensibility of conceptual schemas, and how these capabilities have been addressed in the framework. The major part of the paper describes how the previously mentioned aspects of the framework are implemented in the XML-based approach, providing details on the so-called meta-layer, its definition in the DTD, and the parser rules that complement it. The impact of the XML approach on the functionality of the VR-DIS modules and the system as a whole is demonstrated by a discussion of these modules and scenarios of their usage for design tasks. The paper is concluded with an overview of future work on the sharing of Internet-based design information and design knowledge.
series AVOCAAD
email
last changed 2005/09/09 10:48
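The paper describes Feature Types and Feature Instances expressed in XML, with the meta-level rules held in a DTD and enforced through a DOM-based parser API. The fragment below is a loose illustration of that idea only; the element names are invented and do not reproduce the actual VR-DIS DTD.

# Illustrative sketch only: element names are invented, not the VR-DIS DTD.
# It shows the general idea of feature types and feature instances expressed
# in XML and read back through a DOM-style parser.
import xml.dom.minidom as minidom

doc_text = """<?xml version="1.0"?>
<featureModel>
  <featureType name="Wall">
    <property name="height" datatype="float"/>
    <property name="material" datatype="string"/>
  </featureType>
  <featureInstance type="Wall" id="wall-01">
    <value property="height">2.7</value>
    <value property="material">brick</value>
    <!-- a non-typological property, added for this design situation only -->
    <value property="acousticRating">42</value>
  </featureInstance>
</featureModel>"""

dom = minidom.parseString(doc_text)
for inst in dom.getElementsByTagName("featureInstance"):
    print(inst.getAttribute("id"), "of type", inst.getAttribute("type"))
    for v in inst.getElementsByTagName("value"):
        print("  ", v.getAttribute("property"), "=", v.firstChild.data)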

_id sigradi2004_071
id sigradi2004_071
authors Marcelo Payssé; Magela Bielli; Juan Pablo Portillo; Fernando Rischewski
year 2004
title Proyecto de automatización de cálculos estructurales para programas CAD, uso de herramientas informáticas en la enseñanza del cálculo estructural en la facultad de arquitectura [Automation Project of Structural Calculations for CAD Programs - Use of Digital Tools for Structural Calculations in the School of Architecture]
source SIGraDi 2004 - [Proceedings of the 8th Iberoamerican Congress of Digital Graphics] Porto Alegre - Brasil 10-12 november 2004
summary This paper describes the implementation of Automated Structural Calculations for CAD Programs. We aim to develop newly conceived software that prioritizes analysis and structural design in their conceptual aspect, linking the calculation with the usual graphic procedures by means of a specific application for the local education methodology, which will be the intellectual property of our University. The paper describes the methodology applied in the implementation of the program and the pedagogical aspects we considered. The software is developed as a macro programmed in open source code (Visual Basic for Applications) with data input and data output generated in AutoCAD 2000. The specific objectives are: to obtain significant improvements in the habitual resolution standards of complex exercises, to obtain suitable software with free distribution for academic purposes at minimum cost, and to develop an instrument adequate to the specific work modality of the architects in our faculty.
keywords Academic experiences, structural calculation, structural representation
series SIGRADI
email
last changed 2016/03/10 09:55
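The macro described above automates structural calculations with geometry taken from AutoCAD. As a language-neutral illustration of the kind of check such a tool performs, the sketch below evaluates a simply supported beam under uniform load with the standard formulas M = qL^2/8, V = qL/2 and delta = 5qL^4/(384EI). It is not the authors' VBA code, and the numbers are invented.

# Standard simply-supported-beam check; not the authors' macro.
def beam_checks(q_kn_m, span_m, e_gpa, i_cm4):
    e = e_gpa * 1e6          # GPa -> kN/m^2
    i = i_cm4 * 1e-8         # cm^4 -> m^4
    m_max = q_kn_m * span_m ** 2 / 8.0                   # kN*m
    v_max = q_kn_m * span_m / 2.0                        # kN
    defl = 5 * q_kn_m * span_m ** 4 / (384 * e * i)      # m
    return m_max, v_max, defl

m, v, d = beam_checks(q_kn_m=12.0, span_m=5.0, e_gpa=210.0, i_cm4=8360.0)
print(f"M_max = {m:.1f} kNm, V_max = {v:.1f} kN, deflection = {d * 1000:.1f} mm")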

_id 7321
authors Potier, S., Maltret, J.L. and Zoller, J.
year 2000
title Computer graphics: assistance for archaeological hypotheses
source Automation in Construction 9 (1) (2000) pp. 117-128
summary This paper is a contribution to the domain of computer tools for architectural and archeological restitution of ancient buildings. We describe an application of these tools to the modeling of the 4th century AD Thermae of Constantin in Arles, in the south of France. It was a diploma project in the School of Architecture of Marseille-Luminy, and took place in a context defined in the European ARELATE project. The general objective of this project is to emphasize the archeological and architectural heritage of the city of Arles; it aims, in particular, to equip the museum of ancient Arles with a computer tool enabling the storage and consultation of archaeological archives, the communication of information and exchange by specialized networks, and the creation of a virtual museum allowing a redescription of the monuments and a "virtual" visit of ancient Arles. Our approach is multidisciplinary, calling on architecture, archeology and computer science. The archeologist's work is to collect information and interpret it; this is the starting point for the work of the architect who, using these elements, suggests an architectural reconstruction. This synthesis contains the functioning analysis of the structure and building. The potential provided by the computer as a tool (in this case, the POV-Ray software) with access to several three-dimensional visualizations, according to hypotheses formulated by the architect and archaeologists, necessitates the use of evolutive models which, thanks to the parametrization of dimensions of a building and its elements, can be adapted to all the changes desired by the architect. The specific contribution of POV-Ray in architectural reconstruction of thermae finds its expression in four forms of this modeling program, which correspond to the objectives set by the architect in agreement with archeologists: (a) The parametrization of dimensions, which contributes significantly to simplifying the reintervention process of the architectural data base; (b) Hierarchy and links between variables, allowing "grouped" modifications of modelized elements in order to preserve the consistency of the architectural building's morphology; (c) The levels of modeling (with or without facing, for example), which admit of the exploration of all structural and architectural trails (relationship form/function); and, (d) The "model-type", facilitating the setting up of hypotheses by simple scaling and transformation of these models (e.g., roofing models) on an already modelled structure. The methodological validation of this modeling software's particular use in architectural formulation of hypotheses shows that the software is the principal graphical medium of discussion between architect and archaeologist, thus confirming the hypotheses formulated at the beginning of this project.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:23
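Points (a) and (b) of the abstract, parametrised dimensions with hierarchical links between variables, can be illustrated with a small example in which dependent dimensions are derived from a few driving parameters, so that a single change propagates consistently through the model. The sketch below is in Python rather than the POV-Ray scene language used in the project, and all values are invented.

# Sketch of "parametrised dimensions with linked variables"; not the authors' models.
from dataclasses import dataclass

@dataclass
class ThermaeBay:
    column_spacing: float                   # driving parameter (m)
    n_bays: int                             # driving parameter
    column_diameter_ratio: float = 0.12     # linked: diameter as a fraction of spacing

    @property
    def column_diameter(self):
        return self.column_spacing * self.column_diameter_ratio

    @property
    def total_length(self):
        return self.column_spacing * self.n_bays

    def column_positions(self):
        return [i * self.column_spacing for i in range(self.n_bays + 1)]

bay = ThermaeBay(column_spacing=3.6, n_bays=5)
print(bay.total_length, round(bay.column_diameter, 2), bay.column_positions())
# Changing one driving parameter updates every dependent dimension coherently:
bay.column_spacing = 4.2
print(bay.total_length, round(bay.column_diameter, 2))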

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred-forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 Kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And many of the methods I used, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques Figure 3 Trellis interpreted with "graphic ivy" Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
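The abstract defines a form's "genes" as the point list of a closed polygon and asks what crossing a circle with another outline should produce. The sketch below shows two simple crossover operators over equal-length point lists, a per-point blend and a splice; these are illustrative possibilities, not necessarily the operators the author actually tried.

# Two simple ways to "cross" closed polygonal forms whose genes are their point
# lists: resample both outlines to the same number of points, then blend or
# splice the coordinates. Illustration only, not the Gliftic implementation.
import math

def circle(n=100, r=1.0):
    return [(r * math.cos(2 * math.pi * i / n), r * math.sin(2 * math.pi * i / n)) for i in range(n)]

def blend_crossover(parent_a, parent_b, weight=0.5):
    """Child point list: per-point weighted average of two equal-length parents."""
    assert len(parent_a) == len(parent_b)
    return [(weight * ax + (1 - weight) * bx, weight * ay + (1 - weight) * by)
            for (ax, ay), (bx, by) in zip(parent_a, parent_b)]

def splice_crossover(parent_a, parent_b):
    """Child takes the first half of one parent's points and the second half of the other's."""
    cut = len(parent_a) // 2
    return parent_a[:cut] + parent_b[cut:]

# Second parent: the same circle stretched into an ellipse, so both have 100 "genes".
ellipse = [(1.5 * x, 0.6 * y) for x, y in circle()]
blended = blend_crossover(circle(), ellipse)
spliced = splice_crossover(circle(), ellipse)
print(len(blended), blended[0], len(spliced), spliced[0])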

_id fe54
authors Regli, W.C. and Cicirello, V.A.
year 2000
title Managing digital libraries for computer-aided design
source Computer-Aided Design, Vol. 32 (2) (2000) pp. 119-132
summary This paper describes our initial efforts to deploy a digital library to support computer-aided collaborative design. At present, this experimental testbed, The Engineering Design Knowledge Repository, is an effort to collect and archive public domain engineering data for use by researchers and engineering professionals. We envision this effort expanding to facilitate collaboration and process archival for distributed design and manufacturing teams. CAD knowledge-bases are vital to engineers, who search through vast amounts of corporate legacy data and navigate on-line catalogs to retrieve precisely the right components for assembly into new products. This research attempts to begin addressing the critical need for improved computational methods for reasoning about complex geometric and engineering information. In particular, we focus on archival and reuse of design and manufacturing data for mechatronic systems. This paper presents a description of the research problems, an overview of the initial architecture of the testbed and a description of some of our preliminary results on conceptual design and design retrieval.
keywords Computer-Aided Design, Computer-Aided Engineering, Engineering Knowledge-Bases, Product Data Management, World Wide Web, Network-Enabled, CAD, CAE
series journal paper
email
last changed 2003/05/15 21:33

_id avocaad_2001_20
id avocaad_2001_20
authors Shen-Kai Tang
year 2001
title Toward a procedure of computer simulation in the restoration of historical architecture
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In the field of architectural design, "visualization" generally refers to media that communicate and represent the ideas of designers, such as ordinary drafts, maps, perspectives, photos and physical models (Rahman, 1992; Susan, 2000). The main reason why we adopt visualization is that it enables us to understand clearly and to control complicated procedures (Gombrich, 1990). Secondly, we get design knowledge more from published visualized images and less from personal experience (Evans, 1989). Hence the importance of the representation of visualization. Due to the developments of computer technology in recent years, various computer-aided design systems have been invented and are widely used, such as image processing, computer graphics, computer modeling/rendering, animation, multimedia, virtual reality and collaboration tools (Lawson, 1995; Liu, 1996). Conventional media have largely been replaced by computer media, and visualization has been brought into the computerized stage. The procedure of visual impact analysis and assessment (VIAA), addressed by Rahman (1992), has been renewed and amended for the intervention of the computer (Liu, 2000). Based on the procedures above, a great amount of applied research has been carried out. It is therefore evident that computer visualization is helpful to the discussion and evaluation during the design process (Hall, 1988, 1990, 1992, 1995, 1996, 1997, 1998; Liu, 1997; Sasada, 1986, 1988, 1990, 1993, 1997, 1998). In addition to the process of architectural design, computer visualization is also applied to construction, which is repeatedly amended and corrected using the images of computer simulation (Liu, 2000). Potier (2000) probes into the contextual research and restoration of historical architecture by means of computer simulation before the practical restoration is constructed; in this way he established a communicative mode among archaeologists and architects via computer media. In the research on restoration and preservation of historical architecture in Taiwan, many scholars have devoted themselves to studies of historical contextual criticism (Shi, 1988, 1990, 1991, 1992, 1995; Fu, 1995, 1997; Chiu, 2000). Clues that accompany the historical contextual criticism (such as oral information, writings, photographs, pictures, etc.) help to explore the construction and the procedure of restoration (Hung, 1995), and serve as an aid to studies of the usage and durability of materials in the restoration of historical architecture (Dasser, 1990; Wang, 1998). Many clues are lost, because historical architecture is often age-old (Hung, 1995). Under these circumstances, restoration of historical architecture can only proceed from limited pictures, written data and oral information (Shi, 1989). Therefore, computer simulation is employed by scholars to simulate the condition of historical architecture after restoration from such limited information (Potier, 2000). Yet this is only the early stage of computer-aided restoration.
The paper explores whether computer visual simulation can help to investigate the practice of restoration and the estimation and evaluation after restoration. By exploring the restoration of historical architecture (taking the Gigi Train Station, destroyed by the earthquake last September, as the operating example), this study aims to establish a complete procedure of computer visualization, including the concept of restoration, the practice of restoration, and the estimation and evaluation of restoration. This research simulates the process of restoration by computer, based on visualized media (limited pictures, limited written data and limited oral information) and the specialized experience of historical architects (Potier, 2000). During the process, we communicate with craftsmen repeatedly using simulated alternatives, and use the results as the foundation for evaluating and adjusting the simulation process and outcome. In this way we arrive at a suitable and complete process of computer visualization for historical architecture. The significance of this paper is that we are able to control every detail more exactly, and thus prevent possible problems during the restoration of historical architecture.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 01c0
authors Af Klercker, Jonas
year 2000
title Modelling for Virtual Reality in Architecture
doi https://doi.org/10.52842/conf.ecaade.2000.209
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 209-213
summary CAAD systems are using object modelling methods for building databases to make information available. Object data must then be made useful for many different purposes in the design process. Even if the capacity of the computer will allow an almost unlimited amount of information to be transformed, the eye does not make the transformations in the same “simple” mathematical way. Trained architects have to involve in an inventive process of finding ways to “harmonize” this new medium with the human eye and the architect’s professional experience. This paper will be an interimistic report from a surveying course. During the spring semester 2000 the CAAD division of TU-Lund is giving a course “Modelling for VR in Architecture”. The students are practising architects with experience from using object modelling CAAD. The aims are to survey different ways to use available hard- and software to create VR-models of pieces of architecture and evaluate them in desktop and CAVE environments. The architect is to do as much preparation work as possible with his CAAD program and only the final adjustments with the special VR tool.
keywords CAAD, VR, Modelling, Spatial Experience
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:54

_id 449f
authors Aish, Robert
year 2000
title Collaborative Design using Long Transactions and "Change Merge"
doi https://doi.org/10.52842/conf.ecaade.2000.107
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 107-111
summary If our goal is to implement collaborative engineering across temporal, spatial and discipline dimensions, then it is suggested that we first have to address the necessary pre-requisites, which include both the deployment of "enterprise computing" and an understanding of the computing concepts on which such enterprise systems are based. This paper will consider the following computing concepts and the related concepts in the world of design computing, and discuss how these concepts have been realised in Bentley Systems' ProjectBank collaborative engineering data repository: Normalisation corresponds to Model v. Report (or Drawing); Transaction to Consistency of Design; Long Transaction to Parallelisation of Design; Change Merge to Coordination (synchronisation); and Revisions to Coordination (synchronisation). While we are most probably familiar with the applications of existing database concepts (such as Normalisation and Transaction Management) to the design process, the intent of this paper is to focus
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:54
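The change-merge concept discussed above can be illustrated with a minimal three-way merge over element attributes: two designers branch from a common base (long transactions), work in parallel, and their edits are merged, with a conflict reported only when both changed the same attribute differently. This is a generic sketch, not ProjectBank's actual algorithm, and the element names are invented.

# Generic three-way "change merge" sketch; not Bentley ProjectBank's algorithm.
def three_way_merge(base, ours, theirs):
    merged, conflicts = {}, []
    for key in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(key), ours.get(key), theirs.get(key)
        if o == t:
            merged[key] = o            # same on both sides (possibly both unchanged)
        elif o == b:
            merged[key] = t            # only "theirs" changed it
        elif t == b:
            merged[key] = o            # only "ours" changed it
        else:
            conflicts.append(key)      # both changed it differently
            merged[key] = o            # keep ours, report the conflict for coordination
    return merged, conflicts

base   = {"wall-01.height": 2.7, "wall-01.material": "brick", "wall-01.fire_rating": 30}
ours   = {"wall-01.height": 3.0, "wall-01.material": "brick", "wall-01.fire_rating": 60}
theirs = {"wall-01.height": 2.7, "wall-01.material": "block", "wall-01.fire_rating": 90}
print(three_way_merge(base, ours, theirs))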

_id 1ee6
authors Argumedo, C., Guerri, C., Rainero, C., Carmena, S., Lomónaco, H., María Gilli, C. and Del Rio, A.
year 2000
title Restitución perspectiva mediante el uso de herramientas digitales para la confección de una base de datos de obras arquitectónicas - (Perspective Restitution by means of the use of Digital Tools for the Preparation of a Data Base of Architectonic Works)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 188-190
summary The work is developed by applying the perspective restitution method, based on a photographic survey of buildings. It investigates the selection of instruments, which should be accurate, effective, easy to manage and low cost, and should allow fast results, so that a digital graphic database of the chosen works can be compiled. The aim of the project is to elaborate graphic documents not only of the paradigmatic works but also of domestic architecture, which is so important in the consolidation of the city. The idea is to include new concepts about the use of digital devices and instruments. The emphasis is on the production of low-cost graphic documents, obtained with the standard hardware and software used at our University. The architectural works selected belong to Rosario's Rationalist heritage, whose records have to be completed at the Municipal Archive.
series SIGRADI
email
last changed 2016/03/10 09:47

_id a35a
authors Arponen, Matti
year 2002
title From 2D Base Map To 3D City Model
source UMDS '02 Proceedings, Prague (Czech Republic) 2-4 October 2002, I.17-I.28
summary Since 1997 the Helsinki City Survey Division has been experimenting with and developing methods for converting and supplementing the current digital 2D base maps at scale 1:500 into a 3D city model. In fact, since 1986 project areas have been produced in 3D for city planning and construction projects, but work with the whole map database started in 1997 because of customer demand and competing 3D projects. A 3D map database needs new data modelling and structures, map update processes need new working procedures, and the draftsmen need to learn a new profession: the 3D modeller. Laser scanning and digital photogrammetry have been used to collect 3D information on the map objects. During 1999-2000 laser-scanning experiments covering 45 km2 were carried out using the Swedish TopEye system. Simultaneous digital photography produces material for ortho photo mosaics. These have been applied in mapping outdated map features and in vectorizing 3D buildings manually, semi-automatically and automatically. In modelling we use the TerraScan, TerraPhoto and TerraModeler software, which are developed in Finland. The 3D city model project is at the same time partially a software development project. An accuracy and feasibility study was also completed and is briefly presented. The three scales of 3D models are also presented in this paper. Some new 3D products and some usage of 3D city models in practice will be demonstrated in the actual presentation.
keywords 3D City modeling
series other
email
more www.udms.net
last changed 2003/11/21 15:16

_id 7da7
authors Benedetti, Cristina and Salvioni, Giulio
year 1999
title The Use of Renewable Resource in Architecture: New Teaching Methodologies
doi https://doi.org/10.52842/conf.ecaade.1999.751
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 751-756
summary The program is organized into four parts. The parts are closely connected, both logically and methodologically, so that the unit as a whole offers content and a method of access that are not divided up. The method does not follow a chronological order that simply goes in one direction; rather, it allows the user to "refer back", in real time and in different directions. For the simple purpose of explanation, the sections of the program are listed as follows: (-) "Basic information" concerns the basics of bioclimatic and timber architecture; without this knowledge, the other two sections would be difficult to understand; (-) "Actual buildings throughout the world" gives examples of architectural quality and concretizes the basics of bioclimatic and timber architecture; (-) "Students' Masters Theses", which follow on from the basic information and the learning experience "in the field" and are guided by the lecturer, take a critical approach to actual buildings throughout the world; (-) a multimedia data-sheet organized to ensure a clear and straightforward presentation of information about the construction products; it relies on a tab-based navigation interface that gives users access to eight different stacked windows.
keywords Architecture, Multimedia, Timber, Bioclimatic, Classification
series eCAADe
email
last changed 2022/06/07 07:54

_id f288
authors Bille, Pia
year 1999
title Integrating GIS and Electronic Networks In Urban Design and Planning
doi https://doi.org/10.52842/conf.ecaade.1999.722
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 722-728
summary In 1998 I undertook an inquiry into the use of information technology in urban design and planning in Danish municipalities and among planning consultants. The aim was to find out who was working with IT and for what purposes it was used. In education there seem to be barriers to a full integration of the new media, and I wanted to find out whether that was also the case in the practice of architects and planners. Surprisingly, I discovered that there was a computer on almost every desk, but there were big differences in the use of the technology. The investigation described here is based on interviews with planners in selected municipalities and with urban planning consultants, and the results have been summarised in a publication.
keywords Urban Planning, Electronic Collaboration, GIS, Data Bases
series eCAADe
email
last changed 2022/06/07 07:54

_id a136
authors Blaise, J.Y., Dudek, I. and Drap, P.
year 1998
title Java collaborative interface for architectural simulations A case study on wooden ceilings of Krakow
source International Conference On Conservation - Krakow 2000, 23-24 November 1998, Krakow, Poland
summary Concern for architectural and urban preservation problems has increased considerably in the past decades, and with it the need to investigate the consequences and opportunities that computer-based systems open up for the conservation discipline. Architectural interventions on historical edifices or in preserved urban fabric confront conservationists and architects with specific problems related to the handling and exchange of a variety of historical documents and representations. The recent development of information technologies offers opportunities to favour better access to such data, as well as means to represent architectural hypotheses or designs. Developing applications for the Internet also introduces a greater capacity to exchange experiences or ideas and to invest in low-cost collaborative working platforms. In the field of the architectural heritage, our research addresses two problems: historical data and documentation of the edifice, and methods of representation (knowledge modelling and visualisation) of the edifice. This research is connected with the ARKIW POLONIUM co-operation programme that links the MAP-GAMSAU CNRS laboratory (Marseilles, France) and the Institute HAiKZ of Kraków's Faculty of Architecture. The ARKIW programme deals with questions related to the use of information technologies in the recording, protection and study of the architectural heritage. Case studies are chosen in order to test and validate a technical platform dedicated to the formalisation and exchange of knowledge related to the architectural heritage (architectural data management, representation and simulation tools, survey methods, ...). A special focus is put on the evolution of the urban fabric and on the simulation of reconstruction hypotheses. Our contribution introduces current ARKIW Internet applications and experiences: (-) the ARPENTEUR architectural survey experiment on the Wieża Ratuszowa (a photogrammetric survey based on an architectural model); (-) a Gothic and Renaissance reconstruction of the Ratusz Krakowski using commercial modelling and animation software (MAYA); (-) the SOL on-line documentation interface for Kraków's Rynek Główny; (-) an Internet analytical approach to the presentation of morphological information about Kraków's Kramy Bogate Rynku Krakowskiego; (-) an object-oriented approach to modelling the architectural corpus; (-) the VALIDEUR and HUBLOT Virtual Reality modellers for the simulation and representation of reconstruction hypotheses and corpus analysis.
series other
last changed 2003/04/23 15:14
