CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 61

_id af53
authors Boyer, E. and Mitgang, L.
year 1996
title Building community: a new future for architecture education and practice
source Carnegie Foundation for the Advancement of Teaching
summary Internships, before and after graduation, are the most essential link connecting students to the world of practice. Yet, by all accounts, internship is perhaps the most troubled phase of the continuing education of architects. During this century, as architectural knowledge grew more complex, the apprenticeship system withered away and schools assumed much of the responsibility for preparing architects for practice. However, schools cannot do the whole job. It is widely acknowledged that certain kinds of technical and practical knowledge are best learned in the workplace itself, under the guidance of experienced professionals. All state accrediting boards require a minimum period of internship-usually about three years-before a person is eligible to take the licensing exam. The National Council of Architectural Registration Boards (NCARB) allows students to earn up to two years of work credit prior to acquisition of an accredited degree. The Intern Development Program (IDP), launched by NCARB and the American Institute of Architects in 1979, provides the framework for internship in some forty states. The program was designed to assure that interns receive adequate mentoring, that experiences are well-documented, and that employers and interns allocate enough time to a range of educational and vocational experiences to prepare students for eventual licensure. As the IDP Guidelines state, "The shift from school to office is not a transition from theory to pragmatism. It is a period when theory merges with pragmatism.... It's a time when you: apply your formal education to the daily realities of architectural practice; acquire comprehensive experience in basic practice areas; explore specialized areas of practice; develop professional judgment; continue your formal education in architecture; and refine your career goals." Whatever its accomplishments, however, we found broad consensus that the Intern Development Program has not, by itself, solved the problems of internship. Though we found mutually satisfying internship programs at several of the firms we visited or heard about around the country, at many others interns told us they were not receiving the continuing education and experience they needed. The truth is that architecture has serious, unsolved problems compared with other fields when it comes to supplying on-the-job learning experiences to induct students into the profession on a massive scale. Medicine has teaching hospitals. Beginning teachers work in actual classrooms, supported by school taxes. Law offices are, for the most part, in a better financial position to support young lawyers and pay them living wages. The architecture profession, by contrast, must support a required system of internship prior to licensure in an industry that has neither the financial resources of law or medicine, the stability and public support of teaching, nor a network of locations like hospitals or schools where education and practice can be seamlessly connected. And many employers acknowledged those problems. "The profession has all but undermined the traditional relationship between the profession and the academy," said Neil Frankel, FAIA, executive vice president of Perkins & Will, a multinational firm with offices in New York, Chicago, Washington, and London. "Historically, until the advent of the computer, the profession said, 'Okay, go to school, then we in the profession will teach you what the real world is like.' 
With the coming of the computer, the profession needed a skill that students had, and has left behind the other responsibilities." One intern told us she had been stuck for months doing relatively menial tasks such as toilet elevations. Another intern at a medium-sized firm told us he had been working sixty to seventy hours per week for a year and a half. "Then my wife had a baby and I 'slacked off' to fifty hours. The partner called me in and I got called on the carpet for not working hard enough." "The whole process of internship is being outmoded by economics," one frustrated intern told us. "There's not the time or the money. There's no conception of people being groomed for careers. The younger staff are chosen for their value as productive workers." "We just don't have the best structure here to use an intern's abilities to their best," said a Mississippi architect. "The people who come out of school are really problems. I lost patience with one intern who was demanding that I switch him to another section so that he could learn what he needed for his IDP. I told him, 'It's not my job to teach you. You are here to produce.'" What steps might help students gain more satisfying work opportunities, both during and after graduation?
series other
last changed 2003/04/23 15:14

_id 69b3
authors Markelin, Antero
year 1993
title Efficiency of Model Endoscopic Simulation - An Experimental Research at the University of Stuttgart
source Endoscopy as a Tool in Architecture [Proceedings of the 1st European Architectural Endoscopy Association Conference / ISBN 951-722-069-3] Tampere (Finland), 25-28 August 1993, pp. 31-34
summary At the Institute of Urban Planning at the University of Stuttgart, early experiments with endoscopes were made in the late 1970s. The intention was to find new instruments to visualize urban design projects. The first experiment used a 16 mm film of a 1:170 scale model of the market place at Karlsruhe, including design alternatives (with trees, without trees, etc.). The film was shown to the Karlsruhe authorities, who had to decide between the alternatives. It was said that the film was a great help for the decision-making, and that a design proposal had never before been presented in such an understandable way. In 1975-77, with the support of the Deutsche Forschungsgemeinschaft (German Research Foundation), an investigation was carried out into existing endoscopic simulation facilities, such as those in Wageningen, Lund and Berkeley. The resulting publication was mainly concerned with technical installations and their applications. However, a key question remained: "Can reality be simulated with endoscopy?" In 1979-82, in order to answer that question, the Institute carried out the most extensive research of the time into the validity of endoscopic simulation. Of special importance was the inclusion of social scientists and psychologists from the Universities of Heidelberg and Mannheim. A report was produced in 1983. The research was concerned with the theory of model simulation, its ways of use and its users, and then with the establishment of requirements for effective model simulation. For the main research work with models and simulation films, psychological tests were developed which enabled a test person to give accurate responses without getting involved in alien technical terminology. It was also thought that the use of semantic differentials would make the work imprecise or arbitrary.

keywords Architectural Endoscopy
series EAEA
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43

_id caadria2018_033
id caadria2018_033
authors Bai, Nan and Huang, Weixin
year 2018
title Quantitative Analysis on Architects Using Culturomics - Pattern Study of Pritzker Winners Based on Google N-gram Data
doi https://doi.org/10.52842/conf.caadria.2018.2.257
source T. Fukuda, W. Huang, P. Janssen, K. Crolla, S. Alhadidi (eds.), Learning, Adapting and Prototyping - Proceedings of the 23rd CAADRIA Conference - Volume 2, Tsinghua University, Beijing, China, 17-19 May 2018, pp. 257-266
summary Quantitative studies using the Google Ngram corpus, known as Culturomics, have been used to analyze the implicit patterns of cultural change. Being the top prize in the field of architecture since 1979, the Pritzker Prize has become increasingly diversified in recent years. This study uses the method of Culturomics, based on the Google Ngram corpus, to reveal the implicit patterns of Pritzker winners and the relationship between the signs of their fame and the fact of prize-winning. 48 architects, 32 awarded and 16 promising, are analyzed in the printed English-language corpus between 1900 and 2008. Multiple regression models and multiple imputation methods are used during data processing. A Self-Organizing Map is used to define clusters among the awarded and promising architects. Six main clusters are detected, forming a 3×2 network of fame patterns. Most promising architects can be distinguished by the clustering, according to their similarity to the more typical prize winners. The method of Culturomics could broaden the scope of architectural study, giving more possibilities to reveal the implicit patterns of the existing empirical world.
keywords Culturomics; Google Ngram; Pritzker Prize; Fame Pattern; Self-Organizing Map
series CAADRIA
email bn16@mails.tsinghua.edu.cn
last changed 2022/06/07 07:54
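
A minimal sketch of the kind of Self-Organizing Map clustering the record above describes, assuming the third-party minisom package and a made-up frequency matrix; the variable names, map size and data are illustrative, not taken from the paper.

# Toy illustration of SOM clustering of name-frequency time series
# (not the authors' actual pipeline or data).
import numpy as np
from minisom import MiniSom  # third-party package, assumed available

# Hypothetical data: one row per architect, one column per year (1900-2008),
# values standing in for normalized Ngram frequencies of the architect's name.
rng = np.random.default_rng(0)
freq = rng.random((48, 109))           # 48 architects x 109 years (dummy values)

som = MiniSom(3, 2, input_len=freq.shape[1], sigma=1.0, learning_rate=0.5)
som.random_weights_init(freq)
som.train_random(freq, num_iteration=5000)

# Each architect is assigned to its best-matching unit on the 3x2 map,
# giving six clusters analogous to the "fame patterns" in the abstract.
clusters = {}
for i, row in enumerate(freq):
    clusters.setdefault(som.winner(row), []).append(i)
for unit, members in sorted(clusters.items()):
    print(unit, members)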

_id 6890
authors Eastman, Charles M. and Weiler, Kevin
year 1979
title Geometric Modeling Using the Euler Operators
source 12 p. : ill. Pittsburgh: Institute of Physical Planning, Carnegie Mellon University, February, 1979. includes bibliography
summary A recent advance in the modeling of three-dimensional shapes is the joint development of bounded shape models, capable of representing complete and well-formed arbitrary polyhedra, and operators for manipulating them. Two approaches have been developed thus far in forming bounded shape models: to combine a given fixed set of primitive shapes into other possibly more complex ones using the spatial set operators, and/or to apply lower level operators that define and combine faces, edges, loops and vertices to directly construct a shape. The name that has come to be applied to these latter operators is the Euler operators. This paper offers a description of the Euler operators, in a form expected to be useful for prospective implementers and others wishing to better understand their function and behavior. It includes considerations regarding their specification in terms of being able to completely describe different classes of shapes, how to properly specify them and the extent of their well-formedness, especially in terms of their interaction with geometric operations. Example specifications are provided as well as some useful applications. The Euler operators provide different capabilities from the spatial set operators. An extensible CAD/CAM facility needs them both
keywords Euler operators, boolean operations, CSG, geometric modeling, CAD, CAM, B-rep, solid modeling, theory
series CADline
email chuck.eastman@arch.gatech.edu
last changed 2003/05/17 10:15
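
As a rough companion to the record above, the sketch below tracks only the entity counts of a boundary model and checks the Euler-Poincaré relation V - E + F = 2(S - H) after each operator. A real modeller also maintains the full topology (loops, half-edges) and geometry; the mvfs/mev/mef names follow common usage and are an assumption, not a transcription of the paper.

# Minimal sketch: Euler operators as bookkeeping on entity counts only.
# Verifies that each operator preserves V - E + F = 2 * (S - H)
# (rings/inner loops ignored for simplicity).
from dataclasses import dataclass

@dataclass
class Counts:
    V: int = 0   # vertices
    E: int = 0   # edges
    F: int = 0   # faces
    S: int = 0   # shells (solids)
    H: int = 0   # through-holes (genus)

    def ok(self) -> bool:
        return self.V - self.E + self.F == 2 * (self.S - self.H)

    def mvfs(self):   # make vertex, face, solid: start a new shell
        self.V += 1; self.F += 1; self.S += 1; assert self.ok()

    def mev(self):    # make edge and vertex
        self.V += 1; self.E += 1; assert self.ok()

    def mef(self):    # make edge and face (splits an existing face)
        self.E += 1; self.F += 1; assert self.ok()

# Build the counts of a tetrahedron: 4 vertices, 6 edges, 4 faces, 1 shell.
c = Counts()
c.mvfs()                      # 1 vertex, 1 face, 1 shell
for _ in range(3): c.mev()    # add the remaining 3 vertices and 3 edges
for _ in range(3): c.mef()    # add 3 edges, closing the remaining 3 faces
print(c, c.ok())              # Counts(V=4, E=6, F=4, S=1, H=0) True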

_id 4966
authors Kaplan, Michael and Greenberg, Donald P.
year 1979
title Parallel Processing Techniques for Hidden Surface Removal
source SIGGRAPH '79 Conference Proceedings. 1979. vol. 13 ; no. 2: pp. 300-307 : ill. includes bibliography
summary Previous work in the hidden-surface problem has revealed two key concepts. First, the removal of non-visible surfaces is essentially a sorting problem. Second, some form of coherence is essential for the efficient solution of this problem. In order to provide real-time simulations, it is not only the amount of sorting which must be reduced, but the total time required for computation. One potentially economic strategy to attain this goal is the use of parallel processor systems. This approach implies that the computational time will no longer be dependent on the total amount of sorting, but more on the appropriate division of responsibility. This paper investigates two existing algorithmic approaches to the hidden-surface problem with a view towards their applicability to implementation on a parallel machine organization. In particular, the statistical results of a parallel processor implementation indicate the difficulties stemming from a loss of coherence and imply potentially important design criteria for a parallel configuration
keywords computer graphics, rendering, display, hidden surfaces, parallel processing, algorithms
series CADline
last changed 2003/06/02 13:58
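
The record above treats hidden-surface removal as a sorting problem and asks how to divide it across processors. The toy sketch below does a per-pixel depth comparison over point-sampled polygons, split into horizontal screen bands that could in principle be handed to separate processors; the data structures are invented for illustration and are not those of the paper.

# Toy hidden-surface removal: per-pixel depth comparison (a z-buffer),
# organized in horizontal bands as a stand-in for a parallel split.
# Polygons are given as (depth_at(x, y), covers(x, y), colour) callables;
# real systems rasterize edges incrementally and exploit coherence.
WIDTH, HEIGHT, BANDS = 16, 8, 2
FAR = float("inf")

def square(x0, y0, size, depth, colour):
    return (lambda x, y: depth,
            lambda x, y: x0 <= x < x0 + size and y0 <= y < y0 + size,
            colour)

scene = [square(2, 1, 5, depth=4.0, colour="A"),
         square(4, 2, 6, depth=2.0, colour="B")]   # B is nearer, hides part of A

def render_band(y_lo, y_hi):
    band = []
    for y in range(y_lo, y_hi):
        row = []
        for x in range(WIDTH):
            z_best, c_best = FAR, "."
            for depth_at, covers, colour in scene:
                if covers(x, y) and depth_at(x, y) < z_best:
                    z_best, c_best = depth_at(x, y), colour
            row.append(c_best)
        band.append("".join(row))
    return band

# Bands are independent, so they could be rendered by separate processors.
for b in range(BANDS):
    for line in render_band(b * HEIGHT // BANDS, (b + 1) * HEIGHT // BANDS):
        print(line)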

_id 0b63
authors Ledgard, H.F., Hueras, J.F. and Nagin, P.A.
year 1979
title Pascal with Style : Programming Proverbs
source 210 p. : ill. Rochelle Park, New Jersey: Hayden Book Company, Inc., 1979. include bibliography: p. 205-206 and index. -- (Hayden computer programming series)
summary For PASCAL programmers, it offers short rules and guidelines for writing more accurate, error-free programs. It includes many samples of PASCAL programs and introduces superior methods of program design and construction, such as the top-down approach
keywords PASCAL, languages, programming, education
series CADline
last changed 2003/06/02 14:41

_id ddss2006-hb-187
id DDSS2006-HB-187
authors Lidia Diappi and Paola Bolchi
year 2006
title Gentrification Waves in the Inner-City of Milan - A multi agent / cellular automata model based on Smith's Rent Gap theory
source Van Leeuwen, J.P. and H.J.P. Timmermans (eds.) 2006, Innovations in Design & Decision Support Systems in Architecture and Urban Planning, Dordrecht: Springer, ISBN-10: 1-4020-5059-3, ISBN-13: 978-1-4020-5059-6, p. 187-201
summary The aim of this paper is to investigate the gentrification process by applying an urban spatial model of gentrification based on Smith's (1979; 1987; 1996) Rent Gap theory. The rich sociological literature on the topic mainly assumes gentrification to be a cultural phenomenon, namely the result of demand pressure from the suburban middle and upper class willing to return to the city (Ley, 1980; Lipton, 1977; May, 1996). Little attempt has been made to investigate and build a sound economic explanation of the causes of the process. The Rent Gap theory (RGT) of Neil Smith still represents an important contribution in this direction. At the heart of Smith's argument is the assumption that gentrification takes place because capital returns to the inner city, creating opportunities for residential relocation and profit. This paper illustrates a dynamic model of Smith's theory through a multi-agent/cellular automata system approach (Batty, 2005) developed on a NetLogo platform. A set of behavioural rules is formalised for each agent involved (homeowner, landlord, tenant and developer, and the passive 'dwelling' agent with its rent and level of decay). The simulations show the surge of neighbourhood degradation or renovation and population turnover, starting from different initial states of decay and rent values. Consistent with a Self-Organized Criticality approach, the model shows that non-linear interactions at the local level may produce different configurations of the system at the macro level. This paper represents a further development of a previous version of the model (Diappi, Bolchi, 2005). The model proposed here includes some more realistic factors inspired by the features of housing market dynamics in the city of Milan: the shape of the potential rent according to city form and functions, the subdivision into areal submarkets according to current rents, and their maintenance levels. The model has a more realistic visualisation of the city and its form, and is able to show the different dynamics of the emergent neighbourhoods in Milan over the last ten years.
keywords Multi agent systems, Housing market, Gentrification, Emergent systems
series DDSS
last changed 2006/08/29 12:55
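
As a purely illustrative companion to the record above, the toy cellular automaton below lets each cell's capitalized rent decay until the gap to a fixed potential rent exceeds a threshold, at which point the cell is "redeveloped". The rules and parameters are invented and far simpler than the multi-agent NetLogo model the abstract describes.

# Toy rent-gap cellular automaton (illustrative only, not the authors' model).
# Each cell decays until the gap between potential and capitalized rent passes
# a threshold, then is redeveloped back to its potential rent.
import random

SIZE, STEPS = 10, 30
DECAY, GAP_THRESHOLD = 0.03, 0.5
random.seed(1)

potential = [[1.0 for _ in range(SIZE)] for _ in range(SIZE)]
capitalized = [[random.uniform(0.5, 1.0) for _ in range(SIZE)] for _ in range(SIZE)]

for step in range(STEPS):
    redeveloped = 0
    for i in range(SIZE):
        for j in range(SIZE):
            capitalized[i][j] *= (1.0 - DECAY)          # physical decay of the stock
            gap = potential[i][j] - capitalized[i][j]   # Smith's rent gap
            if gap > GAP_THRESHOLD:                     # reinvestment becomes profitable
                capitalized[i][j] = potential[i][j]
                redeveloped += 1
    print(f"step {step:2d}: {redeveloped} cells redeveloped")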

_id 98bd
authors Pea, R.
year 1993
title Practices of Distributed Intelligence and Designs for Education
source Distributed Cognitions, edited by G. Salomon. New York, NY: Cambridge University Press
summary Reading notes:
- Knowledge is commonly socially constructed, through collaborative efforts...
- Intelligence may also be distributed for use in designed artifacts as diverse as physical tools, representations such as diagrams, and computer-user interfaces to complex tasks.
- Leont'ev (1978) for activity theory, which argues forcibly for the centrality of people-in-action, activity systems, as units of analysis for deepening our understanding of thinking.
- Intelligence is distributed: the resources that shape and enable activity are distributed across people, environments, and situations.
- Intelligence is accomplished rather than possessed.
- Affordance refers to the perceived and actual properties of a thing, primarily those functional properties that determine how the thing could possibly be used.
- Norman (1988) on design and psychology: "the psychology of everyday things".
- We deploy effort-saving strategies in recognition of their cognitive economy and diminished opportunity for error.
- The affordances of artifacts may be more or less difficult to convey to novice users of these artifacts in the activities to which they contribute distributed intelligence.
- Starts with Norman's seven stages of action: forming a goal, an intention.
  - Task desire: clear goal and intention, an action and a means.
  - Mapping desire: unable to map the goal back to an action.
  - Circumstantial desire: no specific goal or intention, an opportunistic approach to a potential new goal.
  - Habitual desire: a familiar course of action, rapidly cycling through all seven stages of action.
- Differentiates inscriptional systems from representational or symbol systems, because inscriptional systems are completely external, while representational or symbol systems have been used in cognitive science as mental constructs.
- The situated properties of everyday cognition are highly inventive in exploiting features of the physical and social situation as resources for performing a task, thereby avoiding the need for mental symbol manipulation unless it is required by the task.
- Explicit recognition of the intelligence represented and representable in design, specifically in designed artifacts that play important roles in human activities.
- Once intelligence is designed into the affordance properties of artifacts, it both guides and constrains the likely contributions of that artifact to distributed intelligence in activity.
- Culturally valued designs for distributed intelligence will change over time, especially as new technology becomes associated with a task domain.
- If we treat distributed intelligence in action as the scientific unit of analysis for research and theory on learning and reasoning...
  - What is distributed?
  - What constraints govern the dynamics of such distributions on different time scales?
  - Through what reconfigurations of distributed intelligence might the performance of an activity system improve over time?
- Intelligence is manifest in activity and distributed in nature.
- Intelligent activities, in the real world, are often collaborative, depend on resources beyond an individual's long-term memory, and require the use of information-handling tools...
- Wartofsky (1979): the artifact is to cultural evolution what the gene is to biological evolution, the vehicle of information across generations.
- Systems of activity, involving persons, environment and tools, become the locus of developmental investigation.
- Disagrees with Salomon et al.'s entity-oriented approach, a language of containers holding things.
- Human cognition aspires to efficiency in distributing intelligence across individuals, environment, external symbolic representations, tools, and artifacts, as a means of coping with the complexity of activities we often call "mental."
series other
last changed 2003/04/23 15:14

_id ecaade2016_113
id ecaade2016_113
authors Poinet, Paul, Baharlou, Ehsan, Schwinn, Tobias and Menges, Achim
year 2016
title Adaptive Pneumatic Shell Structures - Feedback-driven robotic stiffening of inflated extensible membranes and further rigidification for architectural applications
doi https://doi.org/10.52842/conf.ecaade.2016.1.549
source Herneoja, Aulikki; Toni Österlund and Piia Markkanen (eds.), Complexity & Simplicity - Proceedings of the 34th eCAADe Conference - Volume 1, University of Oulu, Oulu, Finland, 22-26 August 2016, pp. 549-558
summary The paper presents the development of a design framework that aims to reduce the complexity of designing and fabricating free-form inflatable structures, which often results in the generation of very complex geometries. In previous research, the form-finding potential of actuated and constrained inflatable membranes has already been investigated, however without a focus on fabrication (Otto 1979). Consequently, in established design-to-fabrication approaches, complex geometry is typically post-rationalized into smaller parts, which are finally fabricated through methods that need to take into account cutting-pattern strategies and material constraints. The design framework developed and presented in this paper aims to transform a complex design process (one that always requires further post-rationalization) into a more integrated one that simultaneously unfolds in a physical and a digital environment - hence the term cyber-physical (Menges 2015). At full scale, a flexible material (an extensible membrane, e.g. latex) is actuated through inflation and modulated through additive stiffening processes, before being completely rigidified with glass fibres and working as a thin shell under compression.
wos WOS:000402063700060
keywords pneumatic systems; robotic fabrication; feedback strategy; cyber-physical; scanning processes
series eCAADe
email paul.poinet@kadk.dk
last changed 2022/06/07 08:00

_id 2c14
authors Sharji, E.A., Hussain, H. and Ahmad, R.E.
year 2002
title Electronic Gallery : Case Study of A New Design Approach in Malaysia
doi https://doi.org/10.52842/conf.ecaade.2002.370
source Connecting the Real and the Virtual - design e-ducation [20th eCAADe Conference Proceedings / ISBN 0-9541183-0-8] Warsaw (Poland) 18-20 September 2002, pp. 370-373
summary A building comprises more than the skin and the structural works. It is the soul, which comes in the form of SPACE, that is intriguing and provokes the mind. The experience of a building relies heavily on the spatial concept and internal layout: how one is captured right from the entrance and through the layering of space, of horizontal and vertical planes, and finally the euphoric or depressed feeling that concludes the tour, depending on the feeling intended (Miller, 1995). The common norm at present celebrates the outer skin and grandeur of facades. Not many include the hidden grids and fragmentation that can lead to a surprisingly good form AND space. Thus a number of them fail as sensuous buildings. ‘The circulation path can be conceived as the perceptual thread that links the spaces of a building, or any series of interior or exterior spaces, together. Since we move in TIME, through a SEQUENCE of SPACES, we experience a space in relation to where we’ve been, and where we anticipate going’ (Ching, 1979). This research intends to study and analyze the unconventional electronic gallery, or ‘e-gallery’, as a versatile hybrid container. The focus of the research will be on documenting spaces in the e-gallery, bringing to light the unlimited possibilities that can take place in such a space.
series eCAADe
email elyna.amir@mmu.edu.my
last changed 2022/06/07 07:59

_id ddss9503
id ddss9503
authors Wineman, Jean and Serrato, Margaret
year 1994
title Visual and Spatial Analysis in Office Design
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary The demands for rapid response to complex problems, flexibility, and other characteristics of today's workplace, such as a highly trained work force, have led many organizations to move from strict hierarchical structures to a more flexible project team organization. The organizational structure is broader and flatter, with greater independence given to organizational units, in this case the project teams. To understand the relationship between project team communication patterns and the design and layout of team space, a study was conducted of an architectural office before and after a move to new space. The study involved three project teams. Information was collected on individual communication patterns; perceptions of the ease of communication; and the effectiveness of the design and layout of physical space to support these communications. In order to provide guidance for critical decision-making in design, these communication data were correlated with a series of measures for the specification of team space enclosure and layout. These group/team space measures were adaptations of existing measures of individual work space, and included an enclosure measure, based on an enclosure measure developed by Stokols (1990); a measure of visual field, based on the "isovist" fields of Benedikt (1979); and an "integration" measure, based on the work of Hillier and Hanson (1984). Results indicate both linear and non-linear relationships between interaction patterns and physical space measures. This work is the initial stage of a research program to define a set of specific physical measures to guide the design of supportive work space for project teams and work groups within various types of organizations.
series DDSS
email jean.winem@arch.gatech.edu
last changed 2003/08/07 16:36
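
The "isovist" visual-field measure cited in the record above (Benedikt, 1979) can be approximated numerically by ray casting. The sketch below estimates an isovist polygon and its area for a viewpoint inside a room described by wall segments; the room geometry is a made-up example and the code is a simplification of the published measure.

# Approximate 2D isovist (Benedikt-style visual field) by casting rays from a
# viewpoint and clipping each ray at the nearest wall segment it hits.
import math

def ray_hit(o, d, p, q):
    """Distance along ray o + t*d to segment p-q, or None if it misses."""
    rx, ry = d
    sx, sy = q[0] - p[0], q[1] - p[1]
    denom = rx * sy - ry * sx
    if abs(denom) < 1e-12:
        return None
    qx, qy = p[0] - o[0], p[1] - o[1]
    t = (qx * sy - qy * sx) / denom
    u = (qx * ry - qy * rx) / denom
    return t if t >= 0 and 0.0 <= u <= 1.0 else None

def isovist(viewpoint, walls, n_rays=720):
    pts = []
    for k in range(n_rays):
        a = 2 * math.pi * k / n_rays
        d = (math.cos(a), math.sin(a))
        hits = [t for w in walls if (t := ray_hit(viewpoint, d, *w)) is not None]
        if hits:
            t = min(hits)
            pts.append((viewpoint[0] + t * d[0], viewpoint[1] + t * d[1]))
    # Shoelace formula for the area of the (angle-ordered) isovist polygon.
    area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))
    return pts, area

# Made-up room: a 10 x 6 rectangle with one internal partition.
walls = [((0, 0), (10, 0)), ((10, 0), (10, 6)), ((10, 6), (0, 6)), ((0, 6), (0, 0)),
         ((4, 0), (4, 4))]
_, area = isovist((2, 3), walls)
print(f"isovist area: {area:.2f}")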

_id ga0015
id ga0015
authors Daru, R., Vreedenburgh, E. and Scha, R.
year 2000
title Architectural Innovation as an evolutionary process
source International Conference on Generative Art
summary Traditionally in art and architectural history, innovation is treated as a history of ideas of individuals (pioneers), movements and schools. The monograph is in that context one of the most used forms of scientific exercise. History of architecture is then mostly seen as a succession of dominant architectural paradigms imposed by great architectural creators fighting at the beginning against mainstream establishment until they themselves come to be recognised. However, there have been attempts to place architectural innovation and creativity in an evolutionary perspective. Charles Jencks for example, has described the evolution of architectural and art movements according to a diagram inspired by ecological models. Philip Steadman, in his book "The Evolution of Designs. Biological analogy in architecture and the applied arts" (1979), sketches the history of various biological analogies and their impact on architectural theory: the organic, classificatory, anatomical, ecological and Darwinian or evolutionary analogies. This last analogy "explains the design of useful objects and buildings, particularly in primitive society and in the craft tradition, in terms of a sequence of repeated copyings (corresponding to inheritance), with small changes made at each stage ('variations'), which are then subjected to a testing process when the object is put into use ('selection')." However, Steadman has confined his study to a literature survey as the basis of a history of ideas. Since this pioneering work, new developments like Dawkins' concept of memes allow further steps in the field of cultural evolution of architectural innovation. The application of the concept of memes to architectural design has been put forward in a preceding "Generative Art" conference (Daru, 1999), showing its application in a pilot study on the analysis of projects of and by architectural students. This first empirical study is now followed by a study of 'real life' architectural practice. The case taken has a double implication for the evolutionary analogy. It takes a specific architectural innovative concept as a 'meme' and develops the analysis of the trajectory of this meme in the individual context of the designer and at large. At the same time, the architect involved (Eric Vreedenburgh, Archipel Ontwerpers) is knowledgeable about the theory of memetic evolution and is applying a computer tool (called 'Artificial') together with Remko Scha, the authoring computer scientist of the program who collaborates frequently with artists and architects. This case study (the penthouse in Dutch town planning and the application of 'Artificial') shall be discussed in the paper as presented. The theoretical and methodological problems of various models of diffusion of memes shall be discussed and a preliminary model shall be presented as a framework to account for not only Darwinian but also Lamarckian processes, and for individual as well as collective transmission, consumption and creative transformation of memes.
keywords evolutionary design, architectural innovation, memetic diffusion, CAAD, penthouses, Dutch design, creativity, Darwinian and Lamarckian processes
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
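
The "copying with variation and selection" analogy that Steadman's book (quoted in the record above) applies to craft design can be stated as a very small search loop. The sketch below is a generic illustration of that analogy with an invented numeric fitness; it does not reproduce the 'Artificial' tool mentioned in the abstract.

# Generic copy-vary-select loop illustrating the Darwinian analogy quoted in
# the record above (inheritance = copying, variation = small changes,
# selection = testing in use). Encoding and fitness are invented for the demo.
import random
random.seed(0)

TARGET = [3.0, 1.0, 4.0, 1.0, 5.0]          # stand-in for "performance in use"

def fitness(design):
    return -sum((a - b) ** 2 for a, b in zip(design, TARGET))

population = [[random.uniform(0, 6) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                              # selection
    population = [
        [gene + random.gauss(0, 0.1) for gene in random.choice(parents)]  # copy + vary
        for _ in range(20)
    ]
print("best design:", [round(g, 2) for g in max(population, key=fitness)])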

_id c6a9
authors Kay, Douglas Scott and Greenberg, Donald P.
year 1979
title Transparency for Computer Synthesized Images
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 158-164 : ill. (some col.). includes bibliography
summary Simple transparency algorithms which assume a linear transparency over an entire surface are the type most often employed to produce computer synthesized images of transparent objects with curved surfaces. Although most of the images created with these algorithms do give the impression of transparency, they usually do not look realistic. One of the most serious problems is that the intensity of the light that is transmitted through the objects is generally not proportional to the amount of material through which it must pass. Another problem is that the image seen behind the objects is not distorted as would naturally occur when the light is refracted as it passes through a material of different density. Use of a non-linear transparency algorithm can provide a great improvement in the realism of an image at a small additional cost. Making the transparency proportional to the normal to the surface causes it to decrease towards the edges of the surface where the path of the light through the object is longer. The exact simulation of refraction, however, requires that each sight ray be individually traced from the observer, through the picture plane and through each transparent object until an opaque surface is intersected. Since the direction of the ray would change as each material of differing optical density was entered, the hidden surface calculations required would be very time consuming. However, if a few assumptions are made about the geometry of each object and about the conditions under which they are viewed, a much simpler algorithm can be used to approximate the refractive effect. This method proceeds in a back-to-front order, mapping the current background image onto the next surface, until all surfaces have been considered
keywords computer graphics, shading, transformation, display, visualization, algorithms, realism
series CADline
last changed 2003/06/02 13:58
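
A hedged sketch of the kind of non-linear transparency argued for in the record above: transmission is reduced where the surface normal turns away from the viewer, so the silhouette edges of a curved object transmit less light. The particular falloff function and parameter names are illustrative assumptions, not the exact formula of the paper.

# Illustrative non-linear transparency: less transmission near silhouette edges,
# where light travels through more material. The falloff exponent is a guess,
# not the formula published by Kay and Greenberg.
def transparency(nz, t_min=0.2, t_max=0.9, falloff=2.0):
    """nz: |z component| of the unit surface normal in eye space (1 = facing viewer)."""
    nz = max(0.0, min(1.0, abs(nz)))
    return t_min + (t_max - t_min) * (1.0 - (1.0 - nz) ** falloff)

def shade(surface_colour, background_colour, nz):
    t = transparency(nz)
    # Simple linear blend of the surface with whatever lies behind it.
    return tuple(t * b + (1.0 - t) * s
                 for s, b in zip(surface_colour, background_colour))

# Facing the viewer (nz = 1) the surface is most transparent; toward the
# silhouette (nz -> 0) the transmitted contribution drops toward t_min.
for nz in (1.0, 0.7, 0.3, 0.05):
    print(nz, round(transparency(nz), 3))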

_id sigradi2007_af93
id sigradi2007_af93
authors Sperling, David; Ruy Sardinha
year 2007
title Dislocations of the spatial experience: From earthwork to liquid architecture [Deslocamentos da experiência espacial: De earthwork a arquitetura líquida]
source SIGraDi 2007 - [Proceedings of the 11th Iberoamerican Congress of Digital Graphics] México D.F. - México 23-25 October 2007, pp. 423-427
summary This article reflects on the contemporary notion of “spatial experience” that can be drawn from the emergence of the “expanded field” in the arts (Rosalind Krauss, 1979). To put this notion in perspective and to problematize it in the present, we intend to establish a counterpoint between two historic moments related to the expansion of the spatial field and its experience: the 1960s, with the focus on immanent space in artistic propositions, and the present day, with the occurrence of “fusional fields” of art-architecture-landscape-digital media. As a strategy for constructing the question, we bring together two works that are paradigmatic of their respective epochs: the earthwork Spiral Jetty by Robert Smithson, constructed in 1970 in the Great Salt Lake (Utah, USA), and the ephemeral architecture of the Blur Building by Elizabeth Diller and Ricardo Scofidio, constructed in 2002 on Lake Neuchâtel for the Swiss Expo (Yverdon-les-Bains, Switzerland).
keywords Art; architecture; media; expanded field; spatial experience
series SIGRADI
email sperling@sc.usp.br
last changed 2016/03/10 10:01

_id ea14
authors Anson, Ed
year 1979
title The Semantics of Graphical Input
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 113- 120. includes bibliography
summary This paper describes the semantics of action, an approach to describing input devices that allows the full utilization of all useful device characteristics and provides a high degree of hardware device independence. Part one discusses the semantics of graphical input devices. The second part shows how to create hierarchies of devices which provide a large measure of hardware independence. The third part applies these concepts to some typical problems to demonstrate their completeness
keywords computer graphics, user interface, semantics
series CADline
last changed 1999/02/12 15:07

_id f42f
authors Baer, A., Eastman, C. and Henrion, M.
year 1979
title Geometric modeling: a survey
source Computer Aided Design; 11: 253
summary Computer programs are being developed to aid the design of physical systems ranging from individual mechanical parts to entire buildings or ships. These efforts highlight the importance of computer models of three dimensional objects. Issues and alternatives in geometric modelling are discussed and illustrated with comparisons of 11 existing modelling systems, in particular coherently-structured models of polyhedral solids where the faces may be either planar or curved. Four categories of representation are distinguished: data representations that store full, explicit shape information; definition languages with which the user can enter descriptions of shapes into the system, and which can constitute procedural representations; special subsets of the information produced by application programs; and conceptual models that define the logical structure of the data representation and/or definition language.
series journal paper
last changed 2003/04/23 15:14

_id 60d4
authors Baer, A., Eastman, C.M. and Henrion, M.
year 1979
title Geometric Modeling : a Survey
source Business Press. September, 1979. vol. 11: pp. 253-271 : ill. includes bibliography
summary Computer programs are being developed to aid the design of physical systems ranging from individual mechanical parts to entire buildings or ships. These efforts highlight the importance of computer models of three dimensional objects. Issues and alternatives in geometric modeling are discussed and illustrated with comparisons of 11 existing modelling systems, in particular coherently-structured models of polyhedral solids where the faces may be either planar or curved. Four categories of representation are distinguished: data representations that store full, explicit shape information; definition languages with which the user can enter descriptions of shapes into the system, and which can constitute procedural representations; special subsets of the information produced by application programs; and conceptual models that define the logical structure of the data representation and/or definition language
keywords solid modeling, B-rep, CSG, languages, CAD, programming, data structures, boolean operations, polyhedra
series CADline
email chuck.eastman@arch.gatech.edu
last changed 2003/05/17 10:15

_id 00f3
authors Baybars, Ilker and Eastman, Charles M.
year 1979
title Generating the Underlying Graphs for Architectural Arrangements
source 10 p. : ill. Pittsburgh: School of Urban and Public Affairs, Carnegie Mellon University, April, 1979. Research report No.79. Includes bibliography
summary The mathematical correspondence to a floorplan is a Metric Planar Graph. Several methods for systematic direct generation of metric planar graphs have been developed including polyominoes, March and Matela and shape grammars. Another approach has been to develop a spatial composition in two separate steps. The first step involves discrete variables, and consists of enumerating a defined set of non-metric planar graphs. The second step involves spatial dimensions, e.g. continuous variables, and maps the graphs onto the Euclidean plane, from which a satisfactory or optimal one is selected. This paper focusses on the latter 2-step process. It presents a general method of solving the first step, that is the exhaustive enumeration of a set of planar graphs. The paper consists of three sections: The first section is an introduction to graph theory. The second section presents the generation of maximal planar graphs. The last section summarizes the presentation and comments on the appropriateness of the method
keywords graphs, floor plans, architecture, design, automation, space allocation
series CADline
email chuck.eastman@arch.gatech.edu
last changed 2003/05/17 10:15
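
For the first step described in the record above (enumerating candidate adjacency graphs before any dimensions are assigned), a brute-force sketch for a handful of rooms is given below, assuming the networkx library is available; Baybars and Eastman's own enumeration is far more economical than exhaustive search.

# Brute-force enumeration of maximal planar graphs on a few "rooms",
# as a stand-in for step one of the two-step process in the record above.
# Practical only for very small n; the paper's method is much more efficient.
from itertools import combinations
import networkx as nx   # third-party library, assumed available

def maximal_planar_graphs(n):
    nodes = range(n)
    all_edges = list(combinations(nodes, 2))
    results = []
    for k in range(len(all_edges) + 1):
        for edges in combinations(all_edges, k):
            G = nx.Graph()
            G.add_nodes_from(nodes)
            G.add_edges_from(edges)
            if not nx.check_planarity(G)[0]:
                continue
            # Maximal: adding any missing edge destroys planarity.
            missing = [e for e in all_edges if not G.has_edge(*e)]
            if all(not nx.check_planarity(nx.Graph(list(edges) + [e]))[0]
                   for e in missing):
                results.append(G)
    return results

graphs = maximal_planar_graphs(5)   # five rooms
print(len(graphs), "labelled maximal planar graphs on 5 nodes")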

_id fcd6
authors Berger, S.R.
year 1979
title Artificial Intelligence and its Impact on Computer-Aided Design
source Design Studies, vol 1, no. 3
summary This paper provides, for readers unfamiliar with the field, an introductory account of research which has been carried out in artificial intelligence. It attempts to distinguish between an artificial intelligence approach and a conventional computing approach, and to assess the future influence of the former on computer-aided design.
series journal paper
last changed 2003/04/23 15:14

_id 6733
authors Bettels, Juergen and Myers, David R.
year 1986
title The PIONS Graphics System
source IEEE Computer Graphics and Applications. July, 1986. vol. 6: pp. 30-38 : col. ill. includes a short bibliography
summary During 1979, CERN began to evaluate how interactive computer graphics displays could aid the analysis of high-energy physics experiments at the new Super Proton Synchrotron collider. This work led to PIONS, a 3D graphics system, which features the ability to store and view hierarchical graphics structures in a directed-acyclic-graph database. It is possible to change the attributes of these structures by making selections on nongraphical information also stored in the database. PIONS is implemented as an object-oriented message-passing system based on SmallTalk design principles. It supports multiple viewing transformations, logical input devices, and 2D and 3D primitives. The design allows full use to be made of display hardware that provides dynamic 3D picture transformation
keywords visualization, computer graphics, database, systems, modeling
series CADline
last changed 2003/06/02 13:58
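
In the spirit of the hierarchical, attribute-tagged structures described in the record above, the sketch below shows a minimal directed-acyclic scene graph whose nodes carry non-graphical attributes that can drive selections; the names and fields are invented for illustration and do not reflect the actual PIONS interfaces.

# Minimal directed-acyclic scene graph with non-graphical attributes,
# loosely in the spirit of the record above (invented names, not the PIONS API).
class Node:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes     # non-graphical data, e.g. momentum, charge
        self.children = []               # shared children make this a DAG, not a tree

    def add(self, child):
        self.children.append(child)
        return child

    def select(self, predicate, _seen=None):
        """Yield every node (visiting shared nodes once) whose attributes match."""
        seen = _seen if _seen is not None else set()
        if id(self) in seen:
            return
        seen.add(id(self))
        if predicate(self.attributes):
            yield self
        for child in self.children:
            yield from child.select(predicate, seen)

event = Node("event")
tracker = event.add(Node("tracker"))
calorimeter = event.add(Node("calorimeter"))
track = Node("track-1", charge=-1, momentum=12.3)
tracker.add(track)
calorimeter.add(track)                  # the same track is shared: a DAG edge

negative = list(event.select(lambda a: a.get("charge") == -1))
print([n.name for n in negative])       # ['track-1']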
