CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 521

_id 75a8
authors Achten, Henri H.
year 1997
title Generic representations: an approach for modelling procedural and declarative knowledge of building types in architectural design
source Eindhoven University of Technology
summary The building type is a knowledge structure that is recognised as an important element in the architectural design process. For an architect, the type provides information about norms, layout, appearance, etc. of the kind of building that is being designed. Questions that seem unresolved about (computational) approaches to building types are the relationship between the many kinds of instances that are generally recognised as belonging to a particular building type, the way a type can deal with varying briefs (or with mixed use), and how a type can accommodate different sites. Approaches that aim to model building types as data structures of interrelated variables (so-called ‘prototypes’) face problems clarifying these questions. The research work at hand proposes to investigate the role of knowledge associated with building types in the design process. Knowledge of the building type must be represented during the design process. Therefore, it is necessary to find a representation which supports design decisions, supports the changes and transformations of the design during the design process, encompasses knowledge of the design task, and which relates to the way architects design. It is proposed in the research work that graphic representations can be used as a medium to encode knowledge of the building type. This is possible if they consistently encode the things they represent; if their knowledge content can be derived, and if they are versatile enough to support a design process of a building belonging to a type. A graphic representation consists of graphic entities such as vertices, lines, planes, shapes, symbols, etc. Establishing a graphic representation implies making design decisions with respect to these entities. Therefore it is necessary to identify the elements of the graphic representation that play a role in decision making. An approach based on the concept of ‘graphic units’ is developed. A graphic unit is a particular set of graphic entities that has some constant meaning. Examples are: zone, circulation scheme, axial system, and contour. Each graphic unit implies a particular kind of design decision (e.g. functional areas, system of circulation, spatial organisation, and layout of the building). By differentiating between appearance and meaning, it is possible to define the graphic unit relatively shape-independent. If a number of graphic representations have the same graphic units, they deal with the same kind of design decisions. Graphic representations that have such a specifically defined knowledge content are called ‘generic representations.’ An analysis of over 220 graphic representations in the literature on architecture results in 24 graphic units and 50 generic representations. For each generic representation the design decisions are identified. These decisions are informed by the nature of the design task at hand. If the design task is a building belonging to a building type, then knowledge of the building type is required. In a single generic representation knowledge of norms, rules, and principles associated with the building type are used. Therefore, a single generic representation encodes declarative knowledge of the building type. A sequence of generic representations encodes a series of design decisions which are informed by the design task. If the design task is a building type, then procedural knowledge of the building type is used. 
By means of the graphic unit and generic representation, it is possible to identify a number of relations that determine sequences of generic representations. These relations are: additional graphic units, themes of generic representations, and successive graphic units. Additional graphic units defines subsequent generic representations by adding a new graphic unit. Themes of generic representations defines groups of generic representations that deal with the same kind of design decisions. Successive graphic units defines preconditions for subsequent or previous generic representations. On the basis of themes it is possible to define six general sequences of generic representations. On the basis of additional and successive graphic units it is possible to define sequences of generic representations in themes. On the basis of these sequences, one particular sequence of 23 generic representations is defined. The particular sequence of generic representations structures the decision process of a building type. In order to test this assertion, the particular sequence is applied to the office building type. For each generic representation, it is possible to establish a graphic representation that follows the definition of the graphic units and to apply the required statements from the office building knowledge base. The application results in a sequence of graphic representations that particularises an office building design. Implementation of seven generic representations in a computer aided design system demonstrates the use of generic representations for design support. The set is large enough to provide additional weight to the conclusion that generic representations map declarative and procedural knowledge of the building type.
series thesis:PhD
email
more http://alexandria.tue.nl/extra2/9703788.pdf
last changed 2003/11/21 15:15
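
The abstract above treats graphic units as shape-independent carriers of design decisions and generic representations as defined sets of such units. As a minimal illustration of that structure (all class and attribute names below are hypothetical, not taken from the thesis), a Python sketch might look like this:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class GraphicUnit:
        """A graphic unit: a set of graphic entities with a constant meaning."""
        name: str                 # e.g. "zone", "circulation scheme", "contour"
        design_decision: str      # the kind of design decision the unit implies

    @dataclass
    class GenericRepresentation:
        """A graphic representation with a specifically defined knowledge
        content, i.e. a particular set of graphic units."""
        name: str
        units: set = field(default_factory=set)

        def decisions(self):
            # Declarative knowledge content: the design decisions covered.
            return {u.design_decision for u in self.units}

        def shares_theme_with(self, other):
            # Two generic representations share a theme if they address
            # the same kind of design decisions.
            return self.decisions() == other.decisions()

    zone = GraphicUnit("zone", "functional areas")
    circulation = GraphicUnit("circulation scheme", "system of circulation")
    layout = GenericRepresentation("functional layout", {zone, circulation})
    print(layout.decisions())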

_id 823f
authors Bignon, J.C., Halin, G. and Humbert, P.
year 1997
title Hypermedia Structuring of the Technical Documentation for the Architectural Aided Design
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 843-848
summary The definition of an universal structuring model of the technical documentation is arduous, indeed utopian considering the great number of products and the diversity of relative information. To answer this situation we are trying to develop a general approach of the documentation. The document is the base entity of documentation structuring and it represents a coherent informative unit. We propose a model of document hypermedia structuring. This model allows the definition, the presentation, the navigation and the retrieval of general information on building products by a document manipulation. It is associated with a hypermedia design method adapted to document management. This method proposes, after the identification of the user, three phases of hypermedia definition : data definition, navigation definition and user interface definition. The model of a hypermedia structuring of the technical documentation proposed in this article is at once independent of available information on products, open, and makes easier the addition of new navigational functions.
series CAAD Futures
email
last changed 2003/11/21 15:16

_id cabb
authors Broughton, T., Tan, A. and Coates, P.S.
year 1997
title The Use of Genetic Programming In Exploring 3D Design Worlds - A Report of Two Projects by MSc Students at CECA UEL
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 885-915
summary Genetic algorithms are used to evolve rule systems for a generative process: in one case a shape grammar, which uses the "Dawkins Biomorph" paradigm of user-driven choices to perform artificial selection; in the other a CA/Lindenmayer system, using the Hausdorff dimension of the resultant configuration to drive natural selection. (1) Using genetic programming in an interactive 3D shape grammar. A report of a generative system combining genetic programming (GP) and 3D shape grammars. The reasoning behind this work rests on the interpretation of design as search. In this system, a 3D form is a computer program made up of functions (transformations) and terminals (building blocks). Each program evaluates into a structure; hence, in this instance a program is synonymous with form. The building blocks of form are platonic solids (box, cylinder, etc.). A variety of combinations of the simple affine transformations of translation, scaling and rotation, together with the Boolean operations of union, subtraction and intersection performed on the building blocks, generate different configurations of 3D form. Following the methodology of genetic programming, an initial population of such programs is randomly generated and subjected to a test for fitness (the eyeball test). Individual programs that pass the test are selected to be parents for reproducing the next generation of programs via the process of recombination. (2) Using a GA to evolve rule sets to achieve a goal configuration. The aim of these experiments was to build a framework in which a structure's form could be defined by a set of instructions encoded into its genetic make-up. This was achieved by combining a generative rule system commonly used to model biological growth with a genetic algorithm simulating the evolutionary process of selection, to evolve an adaptive rule system capable of replicating any preselected 3D shape. The generative modelling technique used is a string-rewriting Lindenmayer system; the genes of the emergent structures are the production rules of the L-system, and the spatial representation of the structures uses the geometry of iso-spatial dense-packed spheres.
series CAAD Futures
email
last changed 2003/11/21 15:16
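
The second project in the abstract above evolves the production rules of a string-rewriting Lindenmayer system. A minimal sketch of such rewriting in Python is given below; the example rules are hypothetical and are not the rule sets evolved in the reported work, where the rules themselves are the genes operated on by the genetic algorithm.

    def rewrite(axiom, rules, generations):
        """Apply L-system production rules to an axiom for a number of generations."""
        s = axiom
        for _ in range(generations):
            # Rewrite every symbol in parallel; symbols without a rule are copied.
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    # Hypothetical example rules (not taken from the paper).
    rules = {"A": "AB", "B": "A"}
    for g in range(5):
        print(g, rewrite("A", rules, g))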

_id d60a
authors Casti, J.L.
year 1997
title Would-Be Worlds: How Simulation Is Changing the Frontiers of Science
source John Wiley & Sons, Inc., New York.
summary In the ever-changing world of science, new instruments often lead to momentous discoveries that dramatically transform our understanding. Today, with the aid of a bold new instrument, scientists are embarking on a scientific revolution as profound as that inspired by Galileo's telescope. Out of the bits and bytes of computer memory, researchers are fashioning silicon surrogates of the real world - elaborate "artificial worlds" - that allow them to perform experiments that are too impractical, too costly, or, in some cases, too dangerous to do "in the flesh." From simulated tests of new drugs to models of the birth of planetary systems and galaxies to computerized petri dishes growing digital life forms, these laboratories of the future are the essential tools of a controversial new scientific method. This new method is founded not on direct observation and experiment but on the mapping of the universe from real space into cyberspace. There is a whole new science happening here - the science of simulation. The most exciting territory being mapped by artificial worlds is the exotic new frontier of "complex, adaptive systems." These systems involve living "agents" that continuously change their behavior in ways that make prediction and measurement by the old rules of science impossible - from environmental ecosystems to the system of a marketplace economy. Their exploration represents the horizon for discovery in the twenty-first century, and simulated worlds are charting the course. In Would-Be Worlds, acclaimed author John Casti takes readers on a fascinating excursion through a number of remarkable silicon microworlds and shows us how they are being used to formulate important new theories and to solve a host of practical problems. We visit Tierra, a "computerized terrarium" in which artificial life forms known as biomorphs grow and mutate, revealing new insights into natural selection and evolution. We play a game of Balance of Power, a simulation of the complex forces shaping geopolitics. And we take a drive through TRANSIMS, a model of the city of Albuquerque, New Mexico, to discover the root causes of events like traffic jams and accidents.
Along the way, Casti probes the answers to a host of profound questions these "would-be worlds" raise about the new science of simulation. If we can create worlds inside our computers at will, how real can we say they are? Will they unlock the most intractable secrets of our universe? Or will they reveal instead only the laws of an alternate reality? How "real" do these models need to be? And how real can they be? The answers to these questions are likely to change the face of scientific research forever.
series other
last changed 2003/04/23 15:14

_id 123c
authors Coomans, M.K.D. and Timmermans, H.J.P.
year 1997
title Towards a Taxonomy of Virtual Reality User Interfaces
source Proceedings of the International Conference on Information Visualisation (IV97), pp. 17-29
summary Virtual reality based user interfaces (VRUIs) are expected to bring about a revolution in computing. VR can potentially communicate large amounts of data in an easily understandable format. VR looks very promising, but it is still a very new interface technology for which very little application oriented knowledge is available. As a basis for such a future VRUI design theory, a taxonomy of VRUIs is required. A general model of human computer communication is formulated. This model constitutes a frame for the integration of partial taxonomies of human computer interaction that are found in the literature. The whole model constitutes a general user interface taxonomy. The field of VRUIs is described and delimited with respect to this taxonomy.
series other
last changed 2003/04/23 15:50

_id 2006_506
id 2006_506
authors Fioravanti, Antonio and Rinaldo Rustico
year 2006
title x-House game - A Space for simulating a Collaborative Working Environment in Architecture
source Communicating Space(s) [24th eCAADe Conference Proceedings / ISBN 0-9541183-5-9] Volos (Greece) 6-9 September 2006, pp. 506-511
doi https://doi.org/10.52842/conf.ecaade.2006.506
summary The research consists of the set-up of a game simulating a Collaborative Working Environment (CWE) in Architectural Design. The use of a game is particularly useful as it makes it possible to simplify the complex terms of the problem and, through the game itself, makes it easier to study knowledge engineering tools, communication protocols and the areas of an ICT implementation of a general model of collaborative design. In the following, several characteristics of the game are given (also with reference to other games), such as: the participating actors (Wix 1997), the "pieces" (construction components) used, the modular space employed, the PDWs/SDW dialectics, the screenshot of the interface prototype, and the score.
keywords Architectural Design; CWE; Game; Representation Model; KBs
series eCAADe
email
last changed 2022/06/07 07:50

_id 742e
authors Hsieh, T.
year 1997
title The economic implications of subcontracting practice on building prefabrication
source Automation in Construction 6 (3) (1997) pp. 163-174
summary Pressured by labor shortages, quality requirements and tight construction schedules, building constructors are seeking innovative technology to tackle these unfriendly conditions while achieving the targeted profit. Over the past decades, technologies related to building prefabrication have been incorporated into conventional construction methods to produce more favorable results. When the building prefabrication method is adopted, the organization of the construction team, particularly as it relates to subcontracting practice, is also subject to change. This paper examines the impact of such changes due to the adoption of building prefabrication technology. The paper first reviews subcontracting practices in the construction industry. Then, a conceptual economic model of the contractor-subcontractor relationship is developed and used to explore the economic implications of prefabrication for subcontracting practice. In the section that follows, a summary discussion of the cost structure and risk-sharing nature of building prefabrication is provided. The major conclusion of this research is that the general contractor cannot maximize its benefit through conventional subcontracting practice, since the basic elements of the contractor-subcontractor relationship are changed. Based on the analysis and the case study in this paper, it is inferred that in order to achieve maximum benefits through prefabrication, vertically integrating or internalizing the prefabrication subcontractor into the general contractor's organization is preferable.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id e82c
authors Mahdavi, A., Mathew, P. and Wong, N.H.
year 1997
title A Homology-Based Mapping Approach to Concurrent Multi-Domain Performance Evaluation
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 237-246
doi https://doi.org/10.52842/conf.caadria.1997.237
summary Over the past several years there have been a number of research efforts to develop integrated computational tools which seek to effectively support concurrent design and performance evaluation. In prior research, we have argued that elegant and effective solutions for concurrent, integrated design and simulation support systems can be found if the potentially existing structural homologies in general (configurational) and domain-specific (technical) building representations are creatively exploited. We present the use of such structural homologies to facilitate seamless and dynamic communication between a general building representation and multiple performance simulation modules – specifically, a thermal analysis and an air-flow simulation module. As a proof of concept, we demonstrate a computational design environment (SEMPER) that dynamically (and autonomously) links an object-oriented space-based design model, with structurally homologous object models of various simulation routines.
series CAADRIA
email
last changed 2022/06/07 07:59

_id 4cce
authors Monedero, Javier
year 1997
title Parametric Design. A Review and Some Experiences
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.q8p
summary During the last few years there has been an extraordinary development of computer aided tools intended to present or communicate the results of architectural projects. But there has not been comparable progress in the development of tools intended to assist design, to generate architectural forms in an easy and interactive way. Even worse, architects who use the powerful means provided by computers as a direct tool to create architectural forms are still an exception. Architecture continues to be produced by traditional means, using the computer as little more than a drafting tool.

The main reasons that may explain this situation can be identified rather easily, although there will be significant differences of opinion. Mine is that it is a mistake to try to advance too rapidly and, for instance, to propose integrated design methods using expert systems and artificial intelligence resources when we still do not have an adequate tool to generate and modify simple 3D models.

The modelling tools we have at the present moment are clearly unsatisfactory. Their principal limitation is the lack of appropriate instruments to modify the model interactively once it has been created. This is a fundamental aspect of any design activity, where the designer is constantly going forwards and backwards, re-elaborating again and again some particular aspect of the model, or its general layout, or even coming back to a previous solution that had been temporarily abandoned.

keywords Parametric Design
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/moneder/moneder.htm
last changed 2022/06/07 07:50

_id 0ec6
authors Shih, Naai Jung
year 1997
title Image Morphing for Architectural Visual Studies
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 397-406
doi https://doi.org/10.52842/conf.caadria.1997.397
summary The purpose of this paper is to suggest and demonstrate how image interpolation, as a tool, can facilitate architectural illustration of design content and process. This study emphasizes a design-oriented image transition process that is distinguished by two types of morphing: process and source. A morphing model is presented with components of input, function, output and constraints. Based on the model's definition, a matrix is used to illustrate the relationship between the two source images by referring to origin, reference plan, configuration, time, etc. Morphing content emphasizes changes of pixels, outlines (2D or 3D), and order. Possible applications in architectural visual studies include morphology studies, comparison of a building before and after renovation, dynamic adjustment, quantitative measurement, dynamic image simulation, and model and image combination.
series CAADRIA
last changed 2022/06/07 07:56
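
The simplest pixel-level form of the image interpolation discussed in the abstract above is a linear cross-dissolve between two source images. The sketch below (using NumPy; a simplified stand-in that omits the geometric warping a full morph would add) shows that blending step only.

    import numpy as np

    def cross_dissolve(img_a, img_b, t):
        """Blend two equally sized images; t=0 returns img_a, t=1 returns img_b."""
        a = img_a.astype(np.float32)
        b = img_b.astype(np.float32)
        return ((1.0 - t) * a + t * b).astype(img_a.dtype)

    # Example with tiny synthetic grayscale "images".
    img_a = np.zeros((2, 2), dtype=np.uint8)
    img_b = np.full((2, 2), 255, dtype=np.uint8)
    print(cross_dissolve(img_a, img_b, 0.5))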

_id c6e1
authors Smulevich, Gerard
year 1997
title Berlin-Crane City: Cardboard, Bits, and the Post-industrial Design Process
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 139-153
doi https://doi.org/10.52842/conf.acadia.1997.139
summary This paper explores the impact of information technology on the architectural design process as seen through different design studios from three schools of architecture in Southern California over a two year period.

All three studios tested notions of representation, simulation and the design process in relation to a post-industrial world and its impact on how we design for it. The sites for two of these studios were in the city of Berlin, where the spearhead of the information age and a leftover of the industrial revolution overlap in an urban condition that is representative of our world after the cold war. The three studios describe a progressive shift in the use of information technology in the design process, from nearly pure image-driven simulation to more low-tech, highly creative uses of everyday computing tools. Combined, all three cases describe an array of scenarios for content-supportive uses of digital media in a design studio. The first studio described here, from USC, utilized computer modeling and visualization to design a building for a site located within the former no-man's-land of the Berlin Wall. The second studio, from SCI-Arc, produced an urban design proposal for an area along the former Berlin Wall and included a pan-geographic design collaboration via the Internet between SCI-Arc/Los Angeles and SCI-Arc/Switzerland. The third and last studio, from Woodbury University, participated in the 1997 ACSA/Dupont Laminated Glass Competition, designing a consulate general for Germany and one for Hong Kong. They employed a hybrid digital/non-digital process, extracting experiential representations from simple chipboard study models and then using that information to explore an "enhanced model" through digital imaging processes.

The end of the cold war coincided with the explosive popularization of information technology as a consumer product, which is poised to have a huge impact on how and what we design for our cities. Few places in the world express this potential as well as the city of Berlin. These three undergraduate design studios employed consumer-grade technology in an attempt to make a difference in how we design, incorporating discussions of historical change, ideological premise and what it means to be an architect in a world where image and content can become easily disconnected from one another.

series ACADIA
email
last changed 2022/06/07 07:56

_id acadia03_022
id acadia03_022
authors Anders, Peter
year 2003
title Towards Comprehensive Space: A context for the programming/design of cybrids
source Connecting >> Crossroads of Digital Discourse [Proceedings of the 2003 Annual Conference of the Association for Computer Aided Design In Architecture / ISBN 1-880250-12-8] Indianapolis (Indiana) 24-27 October 2003, pp. 161-171
doi https://doi.org/10.52842/conf.acadia.2003.161
summary Cybrids have been presented as mixed realities: spatial, architectural compositions comprised of physical and cyberspaces (Anders 1997). In order to create a rigorous approach to the design of architectural cybrids, this paper offers a model for programming their spaces. Other than accepting cyberspaces as part of architecture’s domain, this approach is not radical. Indeed, many parts of program development resemble those of conventional practice. However, the proposition that cyberspaces should be integrated with material structures requires that their relationship be developed from the outset of a project. Hence, this paper provides a method for their integration from the project’s earliest stages, the establishment of its program. This study for an actual project, the Planetary Collegium, describes a distributed campus comprising buildings and cyberspaces in various locales across the globe. The programming for these cybrids merges them within a comprehensive space consisting not only of the physical and cyberspaces, but also of the cognitive spaces of its designers and users.
series ACADIA
email
last changed 2022/06/07 07:54

_id a93b
authors Anders, Peter
year 1997
title Cybrids: Integrating Cognitive and Physical Space in Architecture
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 17-34
doi https://doi.org/10.52842/conf.acadia.1997.017
summary People regularly use non-physical, cognitive spaces to navigate and think. These spaces are important to architects in the design and planning of physical buildings. Cognitive spaces inform design - often underlying principles of architectural composition. They include zones of privacy, territory and the space of memory and visual thought. They let us map our environment, model or plan projects, even imagine places like Heaven or Hell.

Cyberspace is an electronic extension of this cognitive space. Designers of virtual environments already know the power these spaces have on the imagination. Computers are no longer just tools for projecting buildings. They change the very substance of design. Cyberspace is itself a subject for design. With computers architects can design space both for physical and non-physical media. A conscious integration of cognitive and physical space in architecture can affect construction and maintenance costs, and the impact on natural and urban environments.

This paper is about the convergence of physical and electronic space and its potential effects on architecture. The first part of the paper will define cognitive space and its relationship to cyberspace. The second part will relate cyberspace to the production of architecture. Finally, a recent project done at the University of Michigan Graduate School of Architecture will illustrate the integration of physical and cyberspaces.

series ACADIA
email
last changed 2022/06/07 07:54

_id a35a
authors Arponen, Matti
year 2002
title From 2D Base Map To 3D City Model
source UMDS '02 Proceedings, Prague (Czech Republic) 2-4 October 2002, I.17-I.28
summary Since 1997 the Helsinki City Survey Division has been experimenting with and developing methods for converting and supplementing the current digital 2D base maps at the scale 1:500 into a 3D city model. In fact, since 1986 project areas have been produced in 3D for city planning and construction projects, but work on the whole map database started in 1997 because of customer demands and competing 3D projects. A 3D map database needs new data modelling and structures, map update processes need new working orders, and the draftsmen need to learn a new profession: the 3D modeller. Laser scanning and digital photogrammetry have been used to collect 3D information on the map objects. During the years 1999-2000, laser-scanning experiments covering 45 km2 were carried out using the Swedish TopEye system. Simultaneous digital photography produces material for orthophoto mosaics. These have been applied in mapping outdated map features and in vectorizing 3D buildings manually, semi-automatically and automatically. In modelling we use the TerraScan, TerraPhoto and TerraModeler software, which are developed in Finland. The 3D city model project is thus also partially a software development project. An accuracy and feasibility study was also completed and will be presented briefly. The three scales of 3D models are also presented in this paper. Some new 3D products and some uses of 3D city models in practice will be demonstrated in the actual presentation.
keywords 3D City modeling
series other
email
more www.udms.net
last changed 2003/11/21 15:16

_id eb53
authors Asanowicz, K. and Bartnicka, M.
year 1997
title Computer analysis of visual perception - endoscopy without endoscope
source Architectural and Urban Simulation Techniques in Research and Education [Proceedings of the 3rd European Architectural Endoscopy Association Conference / ISBN 90-407-1669-2]
summary This paper presents a method of using computer animation techniques in order to solve problems of visual pollution of the city environment. It is our observation that human-induced degradation of the city environment results from well-intentioned but inappropriate preservation actions by uninformed designers and local administrations. Very often, a local municipal administration gives permission to build houses that fit badly with their surroundings. This is usually connected with a lack of visual information about the housing areas of a city, their features and characteristics. The CAMUS system (Computer Aided Management of Urban Structure) is being created at the Faculty of Architecture of Bialystok Technical University. One of its integral parts is VIA - Visual Impact of Architecture. The basic element of this system is a geometrical model of the housing areas of Bialystok. This model can be enhanced using rendering packages, as they create the basis to check our perception of a given area. An inspiration for this approach was the digital endoscopy presented by J. Breen and M. Stellingwerff at the 2nd EAEA Conference in Vienna. We present the possibilities of using simple computer programs for the analysis of a spatial model. This contribution presents those factors of computer presentation which demonstrate that computers can achieve the same effects as an endoscope, and that their use can often be much more efficient and effective.
keywords Architectural Endoscopy, Endoscopy, Simulation, Visualisation, Visualization, Real Environments
series EAEA
email
more http://www.bk.tudelft.nl/media/eaea/eaea97.html
last changed 2005/09/09 10:43

_id 58f4
authors Barequet, G. and Kumar, S.
year 1997
title Repairing CAD models
source Proceedings of IEEE Visualization '97, pp. 363-370
summary We describe an algorithm for repairing polyhedral CAD models that have errors in their B-REP. Errors like cracks, degeneracies, duplication, holes and overlaps are usually introduced in solid models due to imprecise arithmetic, model transformations, designer's fault, programming bugs, etc. Such errors often hamper further processing like finite element analysis, radiosity computation and rapid prototyping. Our fault-repair algorithm converts an unordered collection of polygons to a shared-vertex representation to help eliminate errors. This is done by choosing, for each polygon edge, the most appropriate edge to unify it with. The two edges are then geometrically merged into one, by moving vertices. At the end of this process, each polygon edge is either coincident with another or is a boundary edge for a polygonal hole or a dangling wall and may be appropriately repaired. Finally, in order to allow user inspection of the automatic corrections, we produce a visualization of the repair and let the user mark the corrections that conflict with the original design intent. A second iteration of the correction algorithm then produces a repair that is commensurate with the intent. Thus, by involving the users in a feedback loop, we are able to refine the correction to their satisfaction.
series other
email
last changed 2003/04/23 15:14
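
The first step described in the abstract above is converting an unordered collection of polygons into a shared-vertex representation. A common simplified version of this step is to weld vertices that lie within a small tolerance of each other; the Python sketch below illustrates that idea only and is not the authors' algorithm, which matches and merges edges rather than isolated vertices.

    def weld_vertices(polygons, tol=1e-6):
        """Convert polygons given as lists of (x, y, z) tuples into a shared-vertex
        mesh: a vertex list plus index-based faces, merging nearly coincident points."""
        vertices = []   # unique vertex positions
        faces = []      # each face is a list of indices into `vertices`
        for poly in polygons:
            face = []
            for p in poly:
                # Linear search is enough for a sketch; a spatial grid scales better.
                for i, v in enumerate(vertices):
                    if all(abs(a - b) <= tol for a, b in zip(p, v)):
                        face.append(i)
                        break
                else:
                    vertices.append(p)
                    face.append(len(vertices) - 1)
            faces.append(face)
        return vertices, faces

    tri1 = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
    tri2 = [(1, 0, 0), (1, 1, 0), (0, 1, 0)]   # shares an edge with tri1
    print(weld_vertices([tri1, tri2]))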

_id 536e
authors Bouman, Ole
year 1997
title RealSpace in QuickTimes: architecture and digitization
source Rotterdam: Nai Publishers
summary Time and space, drastically compressed by the computer, have become interchangeable. Time is compressed in that once everything has been reduced to 'bits' of information, it becomes simultaneously accessible. Space is compressed in that once everything has been reduced to 'bits' of information, it can be conveyed from A to B with the speed of light. As a result of digitization, everything is in the here and now. Before very long, the whole world will be on disk. Salvation is but a modem away. The digitization process is often seen in terms of (information) technology. That is to say, one hears a lot of talk about the digital media, about computer hardware, about the modem, mobile phone, dictaphone, remote control, buzzer, data glove and the cable or satellite links in between. Besides, our heads are spinning from the progress made in the field of software, in which multimedia applications, with their integration of text, image and sound, especially attract our attention. But digitization is not just a question of technology, it also involves a cultural reorganization. The question is not just what the cultural implications of digitization will be, but also why our culture should give rise to digitization in the first place. Culture is not simply a function of technology; the reverse is surely also true. Anyone who thinks about cultural implications, is interested in the effects of the computer. And indeed, those effects are overwhelming, providing enough material for endless speculation. The digital paradigm will entail a new image of humankind and a further dilution of the notion of social perfectibility; it will create new notions of time and space, a new concept of cause and effect and of hierarchy, a different sort of public sphere, a new view of matter, and so on. In the process it will indubitably alter our environment. Offices, shopping centres, dockyards, schools, hospitals, prisons, cultural institutions, even the private domain of the home: all the familiar design types will be up for review. Fascinated, we watch how the new wave accelerates the process of social change. The most popular sport nowadays is 'surfing' - because everyone is keen to display their grasp of dirty realism. But there is another way of looking at it: under what sort of circumstances is the process of digitization actually taking place? What conditions do we provide that enable technology to exert the influence it does? This is a perspective that leaves room for individual and collective responsibility. Technology is not some inevitable process sweeping history along in a dynamics of its own. Rather, it is the result of choices we ourselves make and these choices can be debated in a way that is rarely done at present: digitization thanks to or in spite of human culture, that is the question. In addition to the distinction between culture as the cause or the effect of digitization, there are a number of other distinctions that are accentuated by the computer. The best known and most widely reported is the generation gap. It is certainly stretching things a bit to write off everybody over the age of 35, as sometimes happens, but there is no getting around the fact that for a large group of people digitization simply does not exist. Anyone who has been in the bit business for a few years can't help noticing that mum and dad are living in a different place altogether. (But they, at least, still have a sense of place!) 
In addition to this, it is gradually becoming clear that the age-old distinction between market and individual interests are still relevant in the digital era. On the one hand, the advance of cybernetics is determined by the laws of the marketplace which this capital-intensive industry must satisfy. Increased efficiency, labour productivity and cost-effectiveness play a leading role. The consumer market is chiefly interested in what is 'marketable': info- and edutainment. On the other hand, an increasing number of people are not prepared to wait for what the market has to offer them. They set to work on their own, appropriate networks and software programs, create their own domains in cyberspace, domains that are free from the principle whereby the computer simply reproduces the old world, only faster and better. Here it is possible to create a different world, one that has never existed before. One, in which the Other finds a place. The computer works out a new paradigm for these creative spirits. In all these distinctions, architecture plays a key role. Owing to its many-sidedness, it excludes nothing and no one in advance. It is faced with the prospect of historic changes yet it has also created the preconditions for a digital culture. It is geared to the future, but has had plenty of experience with eternity. Owing to its status as the most expensive of arts, it is bound hand and foot to the laws of the marketplace. Yet it retains its capacity to provide scope for creativity and innovation, a margin of action that is free from standardization and regulation. The aim of RealSpace in QuickTimes is to show that the discipline of designing buildings, cities and landscapes is not only a exemplary illustration of the digital era but that it also provides scope for both collective and individual activity. It is not just architecture's charter that has been changed by the computer, but also its mandate. RealSpace in QuickTimes consists of an exhibition and an essay.
series other
email
last changed 2003/04/23 15:14

_id ce11
authors Bradford, J., Wong, W.S. and Tang, H.F.
year 1997
title Bridging Virtual Reality to Internet for Architecture
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.m9r
summary This paper presents a virtual reality interface tool which allows a user to perform the following actions:

1. Import designs from other CAD tools.

2. Assemble an architectural structure from a library of pre-built blocks and geometry primitives dynamically created by the user.

3. Export the design interactively in VRML format back to the library for Internet browsing.

The geometry primitives include polygon, sphere, cone, cylinder and cube. The pre-built blocks consist of fundamental architecture models which have been categorized by architectural style, physical properties and environmental attributes. Upon a user’s request, the tool, or composer, can communicate with the library, which is in fact a back-end distributed client-server database engine. The user may specify any combination of properties and attributes in the composer, which will instantly bring up all matching 3-dimensional objects through the database engine. The database is designed on a relational model and comes from the work of another research group.

keywords Virtual Reality, Architecture Models, Relational Database, Client-Server
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/bradford/bradford.htm
last changed 2022/06/07 07:50
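
The composer described above retrieves pre-built blocks whose categorized properties match a user's request. A minimal sketch of such attribute matching is shown below; the block records, field names and values are hypothetical, and the paper's actual back-end is a distributed client-server database engine rather than an in-memory list.

    # Hypothetical block records; field names are illustrative, not from the paper.
    library = [
        {"name": "doric column", "style": "classical", "material": "stone"},
        {"name": "curtain wall panel", "style": "modern", "material": "glass"},
        {"name": "timber truss", "style": "vernacular", "material": "wood"},
    ]

    def match_blocks(library, **criteria):
        """Return every pre-built block whose attributes match all given criteria."""
        return [block for block in library
                if all(block.get(key) == value for key, value in criteria.items())]

    print(match_blocks(library, style="modern"))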

_id 2b38
authors Bradford, J., Wong, R. and Yeung, C.S.K.
year 1997
title Hierarchical Decomposition of Architectural Computer Models
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 197-203
doi https://doi.org/10.52842/conf.caadria.1997.197
summary Architectural models can be represented in a hierarchy of complexity. Higher-level or more complex architectural structures are then designed by repeatedly instantiating libraries of building blocks. The advantages are that objects can be built in a modular fashion and that any modification to the definition of a building block can easily be propagated to all higher-level objects using that block. Unfortunately, many existing representations of architectural models are monolithic instead of hierarchical and modular, thus making the reuse of models very difficult and inefficient. This paper describes a research project on developing a tool to decompose a monolithic architectural model into elementary building blocks and then create a hierarchy in the model representation. The tool provides a graphical interface for the visualization of a model and a cutting plane. An associated algorithm then automatically detaches parts of the model into building blocks depending on where the user applies the cutting plane. Studies will also be made on dividing more complex models employing spherical and NURBS surfaces.
series CAADRIA
email
last changed 2022/06/07 07:54
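
The decomposition described above detaches parts of a model according to a user-positioned cutting plane. A minimal sketch of that classification step is given below, under the simplifying (and hypothetical) assumption that each candidate block is reduced to a representative centroid point.

    def split_by_plane(blocks, point, normal):
        """Partition blocks into those on the positive and negative side of a plane
        defined by a point and a normal; each block is given by a centroid (x, y, z)."""
        def side(c):
            # Signed distance (up to scale) of the centroid from the cutting plane.
            return sum((ci - pi) * ni for ci, pi, ni in zip(c, point, normal))
        positive = [b for b in blocks if side(b["centroid"]) >= 0]
        negative = [b for b in blocks if side(b["centroid"]) < 0]
        return positive, negative

    blocks = [{"name": "wing A", "centroid": (2, 0, 0)},
              {"name": "wing B", "centroid": (-3, 0, 0)}]
    print(split_by_plane(blocks, point=(0, 0, 0), normal=(1, 0, 0)))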

_id 20
authors Cabezas, M., Mariano, C., Mítolo, S., Muñoz, P., Oliva, S. and Ortiz, M.
year 1998
title Aportes a la Enseñanza de la Comunicación Visual (Contributions to the Teaching of Visual Communication)
source II Seminario Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings / ISBN 978-97190-0-X] Mar del Plata (Argentina) 9-11 September 1998, pp. 168-173
summary Returning to the proposal for incorporating multimedia oriented towards the study of visual communication in the first year of Architecture and Industrial Design, which was presented at the 1st Seminar on Digital Graphics held in 1997 at the FAU of UBA, an educational programme of a hypermedial character is being developed. It concerns the development of the Monge system and is intended for students, so that they can consult it and have a first contact with theoretical concepts through direct experience. Starting from the student's pre-existing knowledge, it moves away from a linear path from the general to the specific, proposing a transversal perspective for approaching conceptual contents in depth, and complementing the traditional view of drawing by enlarging the iconicity and comprehension of a complex topic like the geometry of space.
series SIGRADI
email
last changed 2016/03/10 09:47
