CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 9 of 9

_id sigradi2004_027
id sigradi2004_027
authors Alfredo Stipech
year 2004
title Enseñanza de la representación manual y digital, para arquitectos y diseñadores [Teaching Hand and Digital Representation to Architects and Designers]
source SIGraDi 2004 - [Proceedings of the 8th Iberoamerican Congress of Digital Graphics] Porto Alegre - Brasil 10-12 November 2004
summary The supremacy of digital means of representation and communication, and the resulting displacement of manual means in the fields of design and architecture, have engendered multiple opinions and a substantial literature. These focus on and analyze the virtues and risks, the losses and substitutions, and the different expressive, productive and conceptual results of their leading role in the creative process. Furthermore, if we consider the two as apparently opposed extremes, a broad panorama of combinations and additions is produced by the emerging group of hybrid practices. This motivated the development of a research project at the Universidad Nacional del Litoral de Santa Fe, Argentina, under the Program CAI+D 2000, dealing with Design and the Analog-Digital Means. From this project emerged a collection of conceptual speculations and experimentations in the extended field of representation, extended by the incorporation of new means and hybridizations, in search of new parameters and methods for professional training and practice. Key words: analog, digital, graphics, means, representation.
series SIGRADI
last changed 2016/03/10 08:47

_id ecaade2014_162
id ecaade2014_162
authors Andrzej Zarzycki
year 2014
title Teaching and Designing for Augmented Reality
source Thompson, Emine Mine (ed.), Fusion - Proceedings of the 32nd eCAADe Conference - Volume 1, Department of Architecture and Built Environment, Faculty of Engineering and Environment, Newcastle upon Tyne, England, UK, 10-12 September 2014, pp. 357-364
summary This paper discusses ways in which emerging interactive technologies are adopted by designers and extended into areas of design, education, entertainment, and commerce. It looks in detail at the various project development stages and methodologies used to engage design-focused students with often complex technological issues. The discussion is contextualized through a number of case studies of mobile and marker-based augmented reality (AR) applications developed by students. These applications include an app for a fashion-based social event that allows participants to preview recent collection additions, an info-navigational app for the High Line elevated urban park in New York City, a marker-based maze game, and an interior decorating interface to visualize various furnishing scenarios. While a number of case studies will be discussed from a developer perspective, the primary focus is on the concept and content development, interface design, and user participation.
wos WOS:000361384700035
keywords Augmented reality; ar; gamification; mobile culture
series eCAADe
last changed 2016/05/16 09:08

_id 4361
authors Bishop, G. and Weimer, D.M.
year 1986
title Fast Phong Shading
source Computer Graphics (20) 4 pp. 103-106
summary Computer image generation systems often represent curved surfaces as a mesh of polygons that are shaded to restore a smooth appearance. Phong shading is a well-known algorithm for producing realistic shading, but it has not been used by real-time systems because of the 3 additions, 1 division and 1 square root required per pixel for its evaluation. We describe a new formulation for Phong shading that reduces the amount of computation per pixel to only 2 additions for simple Lambertian reflection and 5 additions and 1 memory reference for Phong's complete reflection model. We also show how to extend our method to compute the specular component with the eye at a finite distance from the scene, rather than at infinity as is usually assumed. The method can be implemented in hardware for real-time applications or in software to speed image generation for almost any system.
series journal paper
last changed 2003/11/21 14:16
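The 2-additions-per-pixel figure comes from evaluating a polynomial approximation of the shading function incrementally along a scanline. A minimal Python sketch of that forward-differencing idea (illustrative only, not the paper's actual derivation; the function name and the choice of a quadratic are assumptions):

```python
def forward_difference_eval(a, b, c, n):
    """Evaluate f(x) = a*x^2 + b*x + c at x = 0..n-1 using only
    two additions per step, via second-order forward differences."""
    f = c            # f(0)
    d1 = a + b       # first difference: f(1) - f(0)
    d2 = 2 * a       # second difference is constant for a quadratic
    values = []
    for _ in range(n):
        values.append(f)
        f += d1      # addition 1: advance the function value
        d1 += d2     # addition 2: advance the first difference
    return values
```

A per-pixel loop built this way replaces the per-pixel division and square root of exact Phong evaluation with the two additions the abstract cites.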

_id acadia09_267
id acadia09_267
authors Christenson, Mike
year 2009
title On the Use of Occlusion Maps to Examine Additions to Existing Buildings
source ACADIA 09: reForm( ) - Building a Better Tomorrow [Proceedings of the 29th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA) ISBN 978-0-9842705-0-7] Chicago (Illinois) 22-25 October 2009, pp. 267-269
summary This paper discusses occlusion maps, or diagrams of isovists deployed in a plan field, which graphically describe an inhabitant’s position-dependent perception of a building’s visual permeability. Occlusion maps are shown here to be an important tool for analyzing the effect that additions to existing buildings have on this perception. The question is critical because additions invariably affect the visual permeability of their host buildings.
series ACADIA
type Short paper
last changed 2009/11/26 16:44

_id 4edc
authors Eastman, C., Jeng, T.S., Chowdhury, R. and Jacobsen, K.
year 1997
title Integration of Design Applications with Building Models
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 45-59
summary This paper reviews various issues in the integration of applications with a building model. First, we present three different architectures for interfacing applications to a building model, with three different structures for applying maps between datasets. The limitations and advantages of these alternatives are reviewed. Then we review the mechanisms for interfacing an application to a building data model, allowing iterative execution and the recognition of instance additions, modifications and deletions.
series CAAD Futures
last changed 1999/04/06 07:19
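Recognition of instance additions, modifications and deletions can be illustrated by diffing two snapshots of a building model. A hypothetical Python sketch, assuming a model snapshot is simply a dict mapping instance ids to attribute dicts (this is not the paper's architecture, only an illustration of the classification):

```python
def diff_models(old, new):
    """Classify instance-level changes between two model snapshots."""
    added    = [k for k in new if k not in old]
    deleted  = [k for k in old if k not in new]
    modified = [k for k in new if k in old and new[k] != old[k]]
    return added, modified, deleted
```

An application interfaced to the building model would receive these three lists after each iteration and update only the affected instances.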

_id 78ca
authors Friedland, P. (Ed.)
year 1985
title Special Section on Architectures for Knowledge-Based Systems
source CACM (28), 9, September
summary A fundamental shift in the preferred approach to building applied artificial intelligence (AI) systems has taken place since the late 1960s. Previous work focused on the construction of general-purpose intelligent systems; the emphasis was on powerful inference methods that could function efficiently even when the available domain-specific knowledge was relatively meager. Today the emphasis is on the role of specific and detailed knowledge, rather than on reasoning methods. The first successful application of this method, which goes by the name of knowledge-based or expert-system research, was the DENDRAL program at Stanford, a long-term collaboration between chemists and computer scientists for automating the determination of molecular structure from empirical formulas and mass spectral data. The key idea is that knowledge is power, for experts, be they human or machine, are often those who know more facts and heuristics about a domain than lesser problem solvers. The task of building an expert system, therefore, is predominantly one of "teaching" a system enough of these facts and heuristics to enable it to perform competently in a particular problem-solving context. Such a collection of facts and heuristics is commonly called a knowledge base. Knowledge-based systems are still dependent on inference methods that perform reasoning on the knowledge base, but experience has shown that simple inference methods like generate-and-test, backward chaining, and forward chaining are very effective in a wide variety of problem domains when they are coupled with powerful knowledge bases. If this methodology remains preeminent, then the task of constructing knowledge bases becomes the rate-limiting factor in expert-system development. Indeed, a major portion of the applied AI research in the last decade has been directed at developing techniques and tools for knowledge representation. We are now in the third generation of such efforts.
The first generation was marked by the development of enhanced AI languages like Interlisp and PROLOG. The second generation saw the development of knowledge representation tools at AI research institutions; Stanford, for instance, produced EMYCIN, The Unit System, and MRS. The third generation is now producing fully supported commercial tools like KEE and S.1. Each generation has seen a substantial decrease in the amount of time needed to build significant expert systems. Ten years ago prototype systems commonly took on the order of two years to show proof of concept; today such systems are routinely built in a few months. Three basic methodologies (frames, rules, and logic) have emerged to support the complex task of storing human knowledge in an expert system. Each of the articles in this Special Section describes and illustrates one of these methodologies. "The Role of Frame-Based Representation in Reasoning," by Richard Fikes and Tom Kehler, describes an object-centered view of knowledge representation, whereby all knowledge is partitioned into discrete structures (frames) having individual properties (slots). Frames can be used to represent broad concepts, classes of objects, or individual instances or components of objects. They are joined together in an inheritance hierarchy that provides for the transmission of common properties among the frames without multiple specification of those properties. The authors use the KEE knowledge representation and manipulation tool to illustrate the characteristics of frame-based representation for a variety of domain examples. They also show how frame-based systems can be used to incorporate a range of inference methods common to both logic and rule-based systems. "Rule-Based Systems," by Frederick Hayes-Roth, chronicles the history and describes the implementation of production rules as a framework for knowledge representation.
In essence, production rules use IF conditions THEN conclusions and IF conditions THEN actions structures to construct a knowledge base. The author catalogs a wide range of applications for which this methodology has proved natural and (at least partially) successful for replicating intelligent behavior. The article also surveys some already-available computational tools for facilitating the construction of rule-based knowledge bases and discusses the inference methods (particularly backward and forward chaining) that are provided as part of these tools. The article concludes with a consideration of the future improvement and expansion of such tools. The third article, "Logic Programming," by Michael Genesereth and Matthew Ginsberg, provides a tutorial introduction to the formal method of programming by description in the predicate calculus. Unlike traditional programming, which emphasizes how computations are to be performed, logic programming focuses on the what of objects and their behavior. The article illustrates the ease with which incremental additions can be made to a logic-oriented knowledge base, as well as the automatic facilities for inference (through theorem proving) and explanation that result from such formal descriptions. A practical example of diagnosis of digital device malfunctions is used to show how significant and complex problems can be represented in the formalism. A note to the reader who may infer that the AI community is being split into competing camps by these three methodologies: although each provides advantages in certain specific domains (logic where the domain can be readily axiomatized and where complete causal models are available, rules where most of the knowledge can be conveniently expressed as experiential heuristics, and frames where complex structural descriptions are necessary to adequately describe the domain), the current view is one of synthesis rather than exclusivity.
Both logic and rule-based systems commonly incorporate frame-like structures to facilitate the representation of large amounts of factual information, and frame-based systems like KEE allow both production rules and predicate calculus statements to be stored within and activated from frames to do inference. The next generation of knowledge representation tools may even help users to select appropriate methodologies for each particular class of knowledge, and then automatically integrate the various methodologies so selected into a consistent framework for knowledge.
series journal paper
last changed 2003/04/23 13:14
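The forward chaining described in the abstract, repeatedly firing IF conditions THEN conclusions rules until no new facts emerge, fits in a few lines. A minimal, illustrative Python engine (not from the article; the rule format, a list of (conditions, conclusion) pairs, is an assumption):

```python
def forward_chain(facts, rules):
    """Forward chaining: fire every rule whose conditions all hold,
    adding its conclusion, until the fact set stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts
```

Backward chaining would run the same rules in the opposite direction, starting from a goal and seeking rules whose conclusion matches it.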

_id acadia03_031
id acadia03_031
authors Paolo Fiamma
year 2003
title Architectural Design and Digital Paradigm: from Renaissance Models to Digital Architecture
source Connecting >> Crossroads of Digital Discourse [Proceedings of the 2003 Annual Conference of the Association for Computer Aided Design In Architecture / ISBN 1-880250-12-8] Indianapolis (Indiana) 24-27 October 2003, pp. 247-253
summary Means of expression have always affected our ways of thinking. Designers, who have to interpret signs, languages, and evolution in order to translate the recurring problems and values of mankind into an organised "form", have entrusted thoughts, projects and wishes to the study of representational techniques. In this way, they have also disclosed a unique view of reality and, at the same time, a "way of being" towards the meaning of design itself. On the relationship between architecture and representational techniques, Brunelleschi said that "perspicere" was no longer just the science of optics, but also the science that contained the lines of research on geometry and shape that he was the first to exploit in design. Centuries later, in the axonometric representation advocated by De Stijl and intended for factories and industries, the object, shown in all its parts and easy to reconstruct even in the space to which it referred, revealed with extreme clarity the materials and systems of mass-production building and assembly. Digital representational media make a forceful entrance into the heuristic process, unsettling established signs and enhancing its quality. The result is an ever-changing, computerised architecture, dominated by curvilinear, wavy shapes that flow from a generative process made of the deformations, additions, and interference of different volumes.
series ACADIA
last changed 2003/10/30 15:20

_id ecec
authors Requicha, Aristides A.G. and Voelcker, H.B.
year 1977
title Constructive Solid Geometry
source November, 1977. [3] 36 p. : ill. includes bibliography: p. 31-33
summary The term 'constructive solid geometry' denotes a class of schemes for describing solid objects as compositions (usually 'additions' and 'subtractions') of primitive solid 'building blocks.' The notion of adding and subtracting solids has been used by mechanical designers and others for generations, but attempts to embody it in computer-based modelling systems have been hindered by the absence of a firm mathematical foundation. This paper provides such a foundation by drawing on established results in modern axiomatic geometry and point set topology. The paper also initiates a broader discussion, to be continued in subsequent papers, of three seminal topics: mathematical modelling of solids, representation of solids, and calculation of geometrical properties of solids.
keywords solid modeling, computational geometry, geometric modeling, CSG, topology, mathematics, representation
series CADline
last changed 2003/06/02 11:58
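The 'additions' and 'subtractions' of primitive solids have a direct reading as point-set operations. A hypothetical Python sketch, assuming each solid is modelled by a point-membership predicate (the names `Solid` and `sphere` are illustrative, not from the paper):

```python
class Solid:
    """A solid defined by a point-membership test p -> bool."""
    def __init__(self, contains):
        self.contains = contains

    def __or__(self, other):   # CSG 'addition' (set union)
        return Solid(lambda p: self.contains(p) or other.contains(p))

    def __sub__(self, other):  # CSG 'subtraction' (set difference)
        return Solid(lambda p: self.contains(p) and not other.contains(p))

def sphere(cx, cy, cz, r):
    """Primitive building block: a closed ball of radius r."""
    return Solid(lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r)
```

Point-membership classification like this is only one of the representation questions the paper formalizes; a production modeller would also need regularized set operations to avoid dangling faces and edges.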

_id ecaadesigradi2019_140
id ecaadesigradi2019_140
authors Zahedi, Ata and Petzold, Frank
year 2019
title Interaction with analysis and simulation methods via minimized computer-readable BIM-based communication protocol
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 1, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 241-250
summary The early stages of building design are characterized by a continuous endeavor to develop variants and to evaluate and consistently detail them. The concept of adaptive detailing aims to enable the architect to evaluate and compare design variants that are partially incomplete and vague (Zahedi and Petzold 2018b). This paper discusses a minimized communication protocol based on BIM, which enables computer-readable interactions between the architect and different domain experts (representing various analysis and simulation procedures) (Zahedi and Petzold 2018a). This comprises the selection of simulation procedures as well as any necessary consolidation of the information content according to the requirements of the simulations. Any additions required on the part of the simulation procedures are visually prepared, globally or space- and component-oriented respectively, in order to detail a building model in a targeted way. Moreover, this paper proposes various supportive methods for the visual representation and exploration of analysis results.
keywords Building Information Modeling (BIM); Early Stages of Design; Adaptive Detailing; Minimized Communication Protocol
series eCAADeSIGraDi
last changed 2019/08/26 20:24
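A minimized computer-readable request for information missing from a BIM model might be serialized as a small structured message. A purely hypothetical Python sketch (the field names, message shape and JSON encoding are assumptions for illustration, not the protocol defined in the paper):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DetailingRequest:
    element_id: str   # identifier of the BIM element the expert needs refined
    attribute: str    # missing attribute, e.g. a thermal property
    reason: str       # which analysis or simulation requires it

def encode(requests):
    """Serialize a batch of requests for transport to the architect's tool."""
    return json.dumps([asdict(r) for r in requests])

def decode(payload):
    """Reconstruct requests on the receiving side."""
    return [DetailingRequest(**d) for d in json.loads(payload)]
```

Each decoded request could then be visually anchored to its element in the model, matching the paper's idea of space- and component-oriented preparation of required additions.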

No more hits.
