CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 62

_id 00f3
authors Baybars, Ilker and Eastman, Charles M.
year 1979
title Generating the Underlying Graphs for Architectural Arrangements
source 10 p. : ill. Pittsburgh: School of Urban and Public Affairs, Carnegie Mellon University, April, 1979. Research report No.79. Includes bibliography
summary The mathematical correspondence to a floorplan is a Metric Planar Graph. Several methods for the systematic direct generation of metric planar graphs have been developed, including polyominoes, March and Matela, and shape grammars. Another approach has been to develop a spatial composition in two separate steps. The first step involves discrete variables, and consists of enumerating a defined set of non-metric planar graphs. The second step involves spatial dimensions, i.e. continuous variables, and maps the graphs onto the Euclidean plane, from which a satisfactory or optimal one is selected. This paper focuses on the latter two-step process. It presents a general method of solving the first step, that is, the exhaustive enumeration of a set of planar graphs. The paper consists of three sections: The first section is an introduction to graph theory. The second section presents the generation of maximal planar graphs. The last section summarizes the presentation and comments on the appropriateness of the method
keywords graphs, floor plans, architecture, design, automation, space allocation
series CADline
email
last changed 2003/05/17 10:15
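
The two-step process described in the abstract above separates topology from dimensioning. As a minimal sketch of the first step (exhaustively enumerating the non-metric planar adjacency graphs for a small room set), the following assumes networkx and invented room names; it illustrates the idea, not the authors' implementation:

```python
from itertools import combinations
import networkx as nx

rooms = ["living", "kitchen", "dining", "bed", "bath"]   # hypothetical room set
possible_edges = list(combinations(rooms, 2))             # 10 candidate adjacencies

planar_candidates = []
# enumerate every connected adjacency graph on all rooms, keep the planar ones
for r in range(len(rooms) - 1, len(possible_edges) + 1):
    for edges in combinations(possible_edges, r):
        g = nx.Graph(edges)
        if g.number_of_nodes() == len(rooms) and nx.is_connected(g):
            is_planar, _ = nx.check_planarity(g)
            if is_planar:                 # e.g. K5 (all 10 edges) is rejected
                planar_candidates.append(g)

print(f"{len(planar_candidates)} connected planar adjacency graphs "
      f"for {len(rooms)} rooms")
```

Each surviving graph is a candidate topology for the second, dimensioning step, in which the graph is mapped onto the plane and room sizes are assigned.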

_id 9d45
authors Ching, F.D.K.
year 1979
title Architecture: Form, Space and Order
source Van Nostrand Reinhold. New York
summary The Second Edition of this classic introduction to the principles of architecture is everything you would expect from the celebrated architect, author, and illustrator, Francis D. K. Ching. Each page has been meticulously revised to incorporate contemporary examples of the principles of form, space, and order, the fundamental vocabulary of every designer. The result is a beautifully illustrated volume that embraces today's forms and looks at conventional models with a fresh perspective. Here, Ching examines every principle of architecture, juxtaposing images that span centuries and cross cultural boundaries to create a design vocabulary that is both elemental and timeless. Among the topics covered are point, line, plane, volume, proportion, scale, circulation, and the interdependence of form and space. While this revision continues to be a comprehensive primer on the ways form and space are interrelated and organized in the shaping of our environment, it has been refined to amplify and clarify concepts. In addition, the Second Edition contains: * Numerous new hand-rendered drawings * Expanded sections on openings and scale * Expanded chapter on design principles * New glossary and index categorized by the author * New 8 1/2 × 11 upright trim In the Second Edition of Architecture: Form, Space, and Order, the author has opted for a larger format and crisper images. Mr. Ching has retained the style of his hand-lettered text, a hallmark of each of his books. This rich source of architectural prototypes, each rendered in Mr. Ching's signature style, also serves as a guide to architectural drawing. Doubtless, many will want this handsome volume for the sheer beauty of it. Architects and students alike will treasure this book for its wealth of practical information and its precise illustrations. Mr. Ching has once again created a visual reference that illuminates the world of architectural form.
series other
last changed 2003/04/23 15:14

_id 0868
authors Gero, John S. and Volfneuk, M.
year 1979
title Building Fuzzy CAD Systems
source 1979? pp. 74-79 : ill. includes bibliography
summary The paper introduces the need to include subjectivities in computer aided design systems. It commences with the differences between uncertainty, which has been used to model subjectivity, and imprecision. The former provides the basis of probability theory, whilst the latter provides the basis of fuzzy set theory. The thesis is that subjectivities introduce imprecision. It shows that subjectivities can be included in the description of the interactions between parts of the system. After presenting a brief introduction to fuzzy set theory, the paper shows how a fuzzy CAD system can be built. An example is presented which demonstrates the approach
keywords CAD, fuzzy logic
series CADline
email
last changed 2003/06/02 13:58
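
To make the abstract's distinction concrete: fuzzy set theory replaces crisp true/false membership with grades in [0, 1], and conventionally takes intersection and union as min and max. The membership functions and sample values below are invented for illustration; they are not from the paper:

```python
def spacious(area_m2: float) -> float:
    """Graded membership of a room in the fuzzy set 'spacious' (0..1)."""
    return max(0.0, min(1.0, (area_m2 - 10.0) / 20.0))

def well_lit(window_ratio: float) -> float:
    """Membership in 'well lit', rising with window-to-floor ratio."""
    return max(0.0, min(1.0, window_ratio / 0.2))

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)   # intersection of fuzzy sets

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)   # union of fuzzy sets

room = {"area_m2": 18.0, "window_ratio": 0.12}
quality = fuzzy_and(spacious(room["area_m2"]), well_lit(room["window_ratio"]))
print(f"degree to which the room is spacious AND well lit: {quality:.2f}")
```

A fuzzy CAD system in the paper's sense attaches such graded judgements to the interactions between parts of the design, rather than forcing every subjective criterion into a yes/no test.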

_id 2ccd
authors Kalisperis, Loukas N.
year 1994
title 3D Visualization in Design Education
doi https://doi.org/10.52842/conf.acadia.1994.177
source Reconnecting [ACADIA Conference Proceedings / ISBN 1-880250-03-9] Washington University (Saint Louis / USA) 1994, pp. 177-184
summary It has been said that "The beginning of architecture is empty space." (Mitchell 1990) This statement typifies a design education philosophy in which the concepts of space and form are separated and defined respectively as the negative and positive of the physical world, a world where solid objects exist and void, the mere absence of substance, is a surrounding atmospheric emptiness. Since the beginning of the nineteenth century, however, there has been an alternative concept of space as a continuum: that there is a continuously modified surface between the pressures of form and space in which the shape of the space in our lungs is directly connected to the shape of the space within which we exist. (Porter 1979). The nature of the task of representing architecture alters to reflect the state of architectural understanding at each period of time. The construction of architectural space and form represents a fundamental achievement of humans in their environment and has always involved effort and materials requiring careful planning, preparation, and forethought. In architecture there is a necessary conversion to that which is habitable, experiential, and functional from an abstraction in an entirely different medium. It is often an imperfect procedure that centers on the translation rather than the actual design. Design of the built environment is an art of distinctions within the continuum of space, for example: between solid and void, interior and exterior, light and dark, or warm and cold. It is concerned with the physical organization and articulation of space. The amount and shape of the void contained and generated by the building create the fabric and substance of the built environment. Architecture as a design discipline, therefore, can be considered as a creative expression of the coexistence of form and space on a human scale. As Frank Ching writes in Architecture: Form, Space, and Order, "These elements of form and space are the critical means of architecture. While the utilitarian concerns of function and use can be relatively short lived, and symbolic interpretations can vary from age to age, these primary elements of form and space comprise timeless and fundamental vocabulary of the architectural designer." (1979)

series ACADIA
email
last changed 2022/06/07 07:52

_id ddss2006-hb-187
id DDSS2006-HB-187
authors Lidia Diappi and Paola Bolchi
year 2006
title Gentrification Waves in the Inner-City of Milan - A multi agent / cellular automata model based on Smith's Rent Gap theory
source Van Leeuwen, J.P. and H.J.P. Timmermans (eds.) 2006, Innovations in Design & Decision Support Systems in Architecture and Urban Planning, Dordrecht: Springer, ISBN-10: 1-4020-5059-3, ISBN-13: 978-1-4020-5059-6, p. 187-201
summary The aim of this paper is to investigate the gentrification process by applying an urban spatial model of gentrification, based on Smith's (1979; 1987; 1996) Rent Gap theory. The rich sociological literature on the topic mainly assumes gentrification to be a cultural phenomenon, namely the result of a demand pressure of the suburban middle and upper class, willing to return to the city (Ley, 1980; Lipton, 1977; May, 1996). Little attempt has been made to investigate and build a sound economic explanation of the causes of the process. The Rent Gap theory (RGT) of Neil Smith still represents an important contribution in this direction. At the heart of Smith's argument there is the assumption that gentrification takes place because capital returns to the inner city, creating opportunities for residential relocation and profit. This paper illustrates a dynamic model of Smith's theory through a multi-agent / cellular automata system approach (Batty, 2005) developed on a NetLogo platform. A set of behavioural rules for each agent involved (homeowner, landlord, tenant and developer, and the passive 'dwelling' agent with their rent and level of decay) are formalised. The simulations show the surge of neighbourhood degradation or renovation and population turnover, starting with different initial states of decay and estate rent values. Consistent with a Self-Organized Criticality approach, the model shows that non-linear interactions at the local level may produce different configurations of the system at the macro level. This paper represents a further development of a previous version of the model (Diappi, Bolchi, 2005). The model proposed here includes some more realistic factors inspired by the features of housing market dynamics in the city of Milan. It includes the shape of the potential rent according to city form and functions, the subdivision into areal submarkets according to the current rents, and their maintenance levels. The model has a more realistic visualisation of the city and its form, and is able to show the different dynamics of the emergent neighbourhoods in the last ten years in Milan.
keywords Multi agent systems, Housing market, Gentrification, Emergent systems
series DDSS
last changed 2006/08/29 12:55
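
A toy reading of the rent-gap mechanism this model builds on: each dwelling carries a potential rent and a capitalized rent eroded by disrepair, and redevelopment fires when the relative gap crosses a threshold. All rates and thresholds below are invented, and the neighbourhood coupling and multiple agent types that make the authors' NetLogo model a true multi-agent cellular automaton are omitted:

```python
import random

class Dwelling:
    def __init__(self, potential_rent: float):
        self.potential_rent = potential_rent
        self.capitalized_rent = potential_rent * random.uniform(0.5, 1.0)
        self.decay = random.uniform(0.0, 0.5)

    def step(self, reinvest_threshold: float = 0.4) -> None:
        # maintenance neglect erodes the rent actually capitalized
        self.capitalized_rent *= (1.0 - 0.05 * self.decay)
        gap = (self.potential_rent - self.capitalized_rent) / self.potential_rent
        if gap > reinvest_threshold:
            # redevelopment closes the gap and resets the decay level
            self.capitalized_rent = self.potential_rent
            self.decay = 0.0
        else:
            self.decay = min(1.0, self.decay + 0.02)

block = [Dwelling(potential_rent=1000.0) for _ in range(100)]
for _ in range(200):
    for d in block:
        d.step()
renovated = sum(1 for d in block if d.decay < 0.1)
print(f"{renovated} of {len(block)} dwellings recently renovated")
```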

_id 98bd
authors Pea, R.
year 1993
title Practices of Distributed Intelligence and Designs for Education
source Distributed Cognitions, edited by G. Salomon. New York, NY: CambridgeUniversity Press
summary
- Knowledge is commonly socially constructed, through collaborative efforts...
- Intelligence may also be distributed for use in designed artifacts as diverse as physical tools, representations such as diagrams, and computer-user interfaces to complex tasks.
- Leont'ev 1978 for activity theory, which argues forcibly for the centrality of people-in-action, activity systems, as units of analysis for deepening our understanding of thinking.
- Intelligence is distributed: the resources that shape and enable activity are distributed across people, environments, and situations.
- Intelligence is accomplished rather than possessed.
- Affordance refers to the perceived and actual properties of a thing, primarily those functional properties that determine how the thing could possibly be used.
- Norman 1988 on design and psychology - "the psychology of everyday things".
- We deploy effort-saving strategies in recognition of their cognitive economy and diminished opportunity for error.
- The affordances of artifacts may be more or less difficult to convey to novice users of these artifacts in the activities to which they contribute distributed intelligence.
- Starts with Norman's seven stages of action:
  - Forming a goal; an intention.
    - Task desire: clear goal and intention - an action and a means.
    - Mapping desire: unable to map goal back to action.
    - Circumstantial desire: no specific goal or intention - opportunistic approach to a potential new goal.
    - Habitual desire: familiar course of action - rapidly cycles all seven stages of action.
- Differentiates inscriptional systems from representational or symbol systems, because inscriptional systems are completely external, while representational or symbol systems have been used in cognitive science as mental constructs.
- The situated properties of everyday cognition are highly inventive in exploiting features of the physical and social situation as resources for performing a task, thereby avoiding the need for mental symbol manipulations unless they are required by that task.
- Explicit recognition of the intelligence represented and representable in design, specifically in designed artifacts that play important roles in human activities.
- Once intelligence is designed into the affordance properties of artifacts, it both guides and constrains the likely contributions of that artifact to distributed intelligence in activity.
- Culturally valued designs for distributed intelligence will change over time, especially as new technology becomes associated with a task domain.
- If we treat distributed intelligence in action as the scientific unit of analysis for research and theory on learning and reasoning:
  - What is distributed?
  - What constraints govern the dynamics of such distributions in different time scales?
  - Through what reconfigurations of distributed intelligence might the performance of an activity system improve over time?
- Intelligence is manifest in activity and distributed in nature.
- Intelligent activities ...in the real world... are often collaborative, depend on resources beyond an individual's long-term memory, and require the use of information-handling tools...
- Wartofsky 1979 - the artifact is to cultural evolution what the gene is to biological evolution: the vehicle of information across generations.
- Systems of activity - involving persons, environment, tools - become the locus of developmental investigation.
- Disagrees with Salomon et al.'s entity-oriented approach - a language of containers holding things.
- Human cognition aspires to efficiency in distributing intelligence - across individuals, environment, external symbolic representations, tools, and artifacts - as a means of coping with the complexity of activities we often call "mental."
series other
last changed 2003/04/23 15:14

_id ecaade2016_113
id ecaade2016_113
authors Poinet, Paul, Baharlou, Ehsan, Schwinn, Tobias and Menges, Achim
year 2016
title Adaptive Pneumatic Shell Structures - Feedback-driven robotic stiffening of inflated extensible membranes and further rigidification for architectural applications
doi https://doi.org/10.52842/conf.ecaade.2016.1.549
source Herneoja, Aulikki; Toni Österlund and Piia Markkanen (eds.), Complexity & Simplicity - Proceedings of the 34th eCAADe Conference - Volume 1, University of Oulu, Oulu, Finland, 22-26 August 2016, pp. 549-558
summary The paper presents the development of a design framework that aims to reduce the complexity of designing and fabricating free-form inflatable structures, which often results in the generation of very complex geometries. In previous research, the form-finding potential of actuated and constrained inflatable membranes has already been investigated, although without a focus on fabrication (Otto 1979). Consequently, in established design-to-fabrication approaches, complex geometry is typically post-rationalized into smaller parts, which are then fabricated through methods that need to take into account cutting-pattern strategies and material constraints. The design framework developed and presented in this paper aims to transform a complex design process (that always requires further post-rationalization) into a more integrated one that simultaneously unfolds in a physical and digital environment - hence the term cyber-physical (Menges 2015). At full scale, a flexible material (extensible membrane, e.g. latex) is actuated through inflation and modulated through additive stiffening processes, before being completely rigidified with glass fibers and working as a thin shell under compression.
wos WOS:000402063700060
keywords pneumatic systems; robotic fabrication; feedback strategy; cyber-physical; scanning processes
series eCAADe
email
last changed 2022/06/07 08:00

_id ddss9503
id ddss9503
authors Wineman, Jean and Serrato, Margaret
year 1994
title Visual and Spatial Analysis in Office Design
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary The demands for rapid response to complex problems, flexibility, and other characteristics of today's workplace, such as a highly trained work force, have led many organizations to move from strict hierarchical structures to a more flexible project team organization. The organizational structure is broader and flatter, with greater independence given to organizational units, in this case the project teams. To understand the relationship between project team communication patterns and the design and layout of team space, a study was conducted of an architectural office before and after a move to new space. The study involved three project teams. Information was collected on individual communication patterns; perceptions of the ease of communication; and the effectiveness of the design and layout of physical space to support these communications. In order to provide guidance for critical decision-making in design, these communication data were correlated with a series of measures for the specification of team space enclosure and layout. These group/team space measures were adaptations of existing measures of individual work space, and included an enclosure measure based on that developed by Stokols (1990); a measure of visual field, based on the "isovist" fields of Benedikt (1979); and an "integration" measure, based on the work of Hillier and Hanson (1984). Results indicate both linear and non-linear relationships between interaction patterns and physical space measures. This work is the initial stage of a research program to define a set of specific physical measures to guide the design of supportive work space for project teams and work groups within various types of organizations.
series DDSS
email
last changed 2003/08/07 16:36
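
Benedikt's isovist, one of the measures adapted above, is the region of space visible from a point. A minimal 2D sketch, assuming an invented wall layout and a fan-of-rays approximation of the isovist boundary:

```python
import math

def ray_segment_hit(px, py, dx, dy, ax, ay, bx, by):
    """Distance along ray (px,py)+(dx,dy)*t to segment a-b, or None."""
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None                                     # ray parallel to wall
    t = ((ax - px) * ey - (ay - py) * ex) / denom       # along the ray
    u = ((ax - px) * dy - (ay - py) * dx) / denom       # along the segment
    return t if t > 0 and 0.0 <= u <= 1.0 else None

def isovist(px, py, walls, n_rays=360):
    """Boundary of the visible region: nearest wall hit per ray direction."""
    pts = []
    for i in range(n_rays):
        ang = 2 * math.pi * i / n_rays
        dx, dy = math.cos(ang), math.sin(ang)
        hits = [h for w in walls
                if (h := ray_segment_hit(px, py, dx, dy, *w)) is not None]
        if hits:
            t = min(hits)
            pts.append((px + dx * t, py + dy * t))
    return pts

# a square room with one partition wall (hypothetical plan)
walls = [(0, 0, 10, 0), (10, 0, 10, 10), (10, 10, 0, 10), (0, 10, 0, 0),
         (4, 0, 4, 6)]
boundary = isovist(5.0, 8.0, walls)
# shoelace formula gives the isovist area from the ordered boundary polygon
area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2)
                     in zip(boundary, boundary[1:] + boundary[:1])))
print(f"isovist area from (5, 8): {area:.1f} square units")
```

Measures like the visual-field metric in the study can then be read off the isovist polygon (area, perimeter, compactness) for each workstation.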

_id fcd6
authors Berger, S.R.
year 1979
title Artificial Intelligence and its impact on Computer-Aided Design
source Design Studies, vol 1, no. 3
summary This paper provides, for readers unfamiliar with the field, an introductory account of research which has been carried out in artificial intelligence. It attempts to distinguish between an artificial intelligence approach and a conventional computing approach, and to assess the future influence of the former on computer-aided design.
series journal paper
last changed 2003/04/23 15:14

_id 6733
authors Bettels, Juergen and Myers, David R.
year 1986
title The PIONS Graphics System
source IEEE Computer Graphics and Applications. July, 1986. vol. 6: pp. 30-38 : col. ill. includes a short bibliography
summary During 1979, CERN began to evaluate how interactive computer graphics displays could aid the analysis of high-energy physics experiments at the new Super Proton Synchrotron collider. This work led to PIONS, a 3D graphics system, which features the ability to store and view hierarchical graphics structures in a directed-acyclic-graph database. It is possible to change the attributes of these structures by making selections on nongraphical information also stored in the database. PIONS is implemented as an object-oriented message-passing system based on Smalltalk design principles. It supports multiple viewing transformations, logical input devices, and 2D and 3D primitives. The design allows full use to be made of display hardware that provides dynamic 3D picture transformation
keywords visualization, computer graphics, database, systems, modeling
series CADline
last changed 2003/06/02 13:58
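
A toy version of the structure the PIONS abstract describes: graphics nodes stored in a directed acyclic graph that also carries non-graphical data, so display attributes can be changed by selecting on that data. All names and fields below are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SceneNode:
    name: str
    meta: Dict[str, float] = field(default_factory=dict)   # non-graphical info
    attrs: Dict[str, str] = field(default_factory=dict)    # display attributes
    children: List["SceneNode"] = field(default_factory=list)

def select(node: SceneNode, predicate, hits=None) -> List[SceneNode]:
    """Walk the DAG and collect nodes whose metadata satisfies the predicate."""
    hits = [] if hits is None else hits
    if node not in hits and predicate(node.meta):
        hits.append(node)
    for child in node.children:
        select(child, predicate, hits)
    return hits

# one detector module shared by two assemblies (a DAG, not a tree)
module = SceneNode("module", meta={"energy_gev": 12.5})
event = SceneNode("event", children=[
    SceneNode("arm_left", children=[module]),
    SceneNode("arm_right", children=[module]),
])

# highlight everything above an energy cut by setting a display attribute
for node in select(event, lambda m: m.get("energy_gev", 0.0) > 10.0):
    node.attrs["colour"] = "red"
print(module.attrs)   # {'colour': 'red'}
```

Because the module node is shared, changing its attribute once affects every assembly that references it, which is the point of storing structures in a DAG rather than duplicating them.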

_id 6d0b
authors Brown, Bruce Eric
year 1979
title Computer Graphics for Large Scale Two- and Three-Dimensional Analysis of Complex Geometries
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 33-40 : ill. includes bibliography
summary A comprehensive set of programs has been developed for the analysis of complex two- and three-dimensional geometries. State-of-the-art finite element and hydrodynamic codes are being used for the analytical portion of the work. Several additional codes depending heavily on graphics have been developed to assist the analytical effort. These are basically used for the pre- and post-processing of the data. Prior to running any analysis, the geometry of the body of interest must be represented in the form of small 'finite elements.' After the analysis is run, the data must be post-processed. Both spatial and temporal data exist in the database. It is the database between the analysis codes and the post-processors which allows a wide variety of analysis codes to use the same post-processors. The temporal plotting codes produce time histories for specified quantities (i.e. temperature, pressure, velocity, stress, etc.) at various locations within the body. They may also produce cross-plots of these variables (i.e. stress vs. strain at a particular position). One of the two codes used for plotting of the spatial data is for two-dimensional geometries and the other for three-dimensional models. For three dimensions, the Watkins hidden-surface/line processor is utilized for plots. The spatial plotters display contour lines on vector output devices and color fringes (or gray values) on raster output devices. They both may also display deformed geometries. Further, the three-dimensional code has extensive animation capabilities for movie productions
keywords computer graphics, finite elements, modeling, engineering, database, animation, mechanical engineering
series CADline
last changed 1999/02/12 15:07

_id 4435
authors Cheatham, Th.E., Townley, J.A. and Holloway, G.H.
year 1979
title A System for Program Refinement
source 1979. pp. 53-62. includes bibliography
summary The Program Development System (PDS) is a programming environment, an integrated collection of interactive tools that support the process of program definition, testing, and maintenance. The PDS is intended to aid the development of large programs, especially program families whose members must be maintained in synchrony. The system facilitates implementation by stepwise refinement, and it keeps a refinement history that allows program modifications made at a high level of abstraction to be reflected efficiently and automatically in the corresponding low level code. Analysis tools are used both to support program validation and to guide program refinement
keywords user interface, software, systems, programming, tools
series CADline
last changed 2003/06/02 14:41

_id ga0015
id ga0015
authors Daru, R., Vreedenburgh, E. and Scha, R.
year 2000
title Architectural Innovation as an evolutionary process
source International Conference on Generative Art
summary Traditionally in art and architectural history, innovation is treated as a history of ideas of individuals (pioneers), movements and schools. The monograph is in that context one of the most used forms of scientific exercise. History of architecture is then mostly seen as a succession of dominant architectural paradigms imposed by great architectural creators fighting at the beginning against mainstream establishment until they themselves come to be recognised. However, there have been attempts to place architectural innovation and creativity in an evolutionary perspective. Charles Jencks, for example, has described the evolution of architectural and art movements according to a diagram inspired by ecological models. Philip Steadman, in his book "The Evolution of Designs. Biological analogy in architecture and the applied arts" (1979), sketches the history of various biological analogies and their impact on architectural theory: the organic, classificatory, anatomical, ecological and Darwinian or evolutionary analogies. This last analogy "explains the design of useful objects and buildings, particularly in primitive society and in the craft tradition, in terms of a sequence of repeated copyings (corresponding to inheritance), with small changes made at each stage ('variations'), which are then subjected to a testing process when the object is put into use ('selection')." However, Steadman has confined his study to a literature survey as the basis of a history of ideas. Since this pioneering work, new developments like Dawkins' concept of memes allow further steps in the field of cultural evolution of architectural innovation. The application of the concept of memes to architectural design has been put forward at a preceding "Generative Art" conference (Daru, 1999), showing its application in a pilot study on the analysis of projects of and by architectural students. This first empirical study is now followed by a study of 'real life' architectural practice. The case taken has a double implication for the evolutionary analogy. It takes a specific architectural innovative concept as a 'meme' and develops the analysis of the trajectory of this meme in the individual context of the designer and at large. At the same time, the architect involved (Eric Vreedenburgh, Archipel Ontwerpers) is knowledgeable about the theory of memetic evolution and is applying a computer tool (called 'Artificial') together with Remko Scha, the authoring computer scientist of the program who collaborates frequently with artists and architects. This case study (the penthouse in Dutch town planning and the application of 'Artificial') shall be discussed in the paper as presented. The theoretical and methodological problems of various models of diffusion of memes shall be discussed and a preliminary model shall be presented as a framework to account for not only Darwinian but also Lamarckian processes, and for individual as well as collective transmission, consumption and creative transformation of memes.
keywords evolutionary design, architectural innovation, memetic diffusion, CAAD, penthouses, Dutch design, creativity, Darwinian and Lamarckian processes
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 6890
authors Eastman, Charles M. and Weiler, Kevin
year 1979
title Geometric Modeling Using the Euler Operators
source 12 p. : ill. Pittsburgh: Institute of Physical Planning, Carnegie Mellon University, February, 1979. includes bibliography
summary A recent advance in the modeling of three-dimensional shapes is the joint development of bounded shape models, capable of representing complete and well-formed arbitrary polyhedra, and operators for manipulating them. Two approaches have been developed thus far in forming bounded shape models: to combine a given fixed set of primitive shapes into other possibly more complex ones using the spatial set operators, and/or to apply lower level operators that define and combine faces, edges, loops and vertices to directly construct a shape. The name that has come to be applied to these latter operators is the Euler operators. This paper offers a description of the Euler operators, in a form expected to be useful for prospective implementers and others wishing to better understand their function and behavior. It includes considerations regarding their specification in terms of being able to completely describe different classes of shapes, how to properly specify them and the extent of their well-formedness, especially in terms of their interaction with geometric operations. Example specifications are provided as well as some useful applications. The Euler operators provide different capabilities from the spatial set operators. An extensible CAD/CAM facility needs them both
keywords Euler operators, boolean operations, CSG, geometric modeling, CAD, CAM, B-rep, solid modeling, theory
series CADline
email
last changed 2003/05/17 10:15
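
For readers unfamiliar with the topic of the abstract above: Euler operators construct boundary models in steps that each preserve the Euler-Poincare relation V - E + F = 2 for simple polyhedra. The counts-only sketch below (the operator names mvfs, mev, mef follow common solid-modeling usage) tracks only the tallies, not the face/edge/vertex structures a real implementation such as the paper's would maintain:

```python
class EulerShell:
    def __init__(self):
        # mvfs: make vertex, face, shell - the seed of every model
        self.V, self.E, self.F = 1, 0, 1

    def mev(self):
        """Make edge + vertex: extends the boundary graph."""
        self.V += 1
        self.E += 1

    def mef(self):
        """Make edge + face: closes a loop, splitting a face."""
        self.E += 1
        self.F += 1

    def euler(self) -> int:
        return self.V - self.E + self.F

# build a tetrahedron's topology: 4 vertices, 6 edges, 4 faces
shell = EulerShell()
for _ in range(3):
    shell.mev()        # V=4, E=3, F=1
for _ in range(3):
    shell.mef()        # E=6, F=4
assert (shell.V, shell.E, shell.F) == (4, 6, 4)
assert shell.euler() == 2   # the invariant holds after every operator
```

Because each operator changes the counts in a way that leaves V - E + F fixed, any sequence of them yields a topologically well-formed shell, which is the guarantee the paper contrasts with the spatial set (Boolean) operators.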

_id eb8e
authors Fowler, Robert J. and Little, James J.
year 1979
title Automatic Extraction of Irregular Network Digital Terrain Models
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 199- 207 : ill. includes bibliography
summary For representation of terrain, an efficient alternative to dense grids is the Triangulated Irregular Network (TIN), which represents a surface as a set of non-overlapping contiguous triangular facets of irregular size and shape. Digital terrain data increasingly comes from dense raster models produced by automated orthophoto machines or by direct sensors such as synthetic aperture radar. A method is described for automatically extracting a TIN model from dense raster data. An initial approximation is constructed by automatically triangulating a set of feature points derived from the raster model. The method works by local incremental refinement of this model by the addition of new points until a uniform approximation of specified tolerance is obtained. Empirical results show that substantial savings in storage can be obtained
keywords GIS, mapping, computational geometry, data structures, mapping, representation, computer graphics, triangulation
series CADline
last changed 2003/06/02 13:58
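
The incremental refinement described above is essentially greedy insertion: repeatedly add the raster sample the current TIN fits worst, until the fit is within tolerance. A sketch under the assumption that scipy's LinearNDInterpolator (which triangulates its input points internally) stands in for the paper's own data structures:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# synthetic "dense raster" terrain: a smooth bump sampled on a 50x50 grid
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
zs = np.exp(-((xs - 0.5) ** 2 + (ys - 0.5) ** 2) / 0.05)
raster = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])

# initial TIN: just the four corners of the raster
sel = [0, 49, 50 * 49, 50 * 50 - 1]
tolerance = 0.02

while True:
    pts = raster[sel]
    surf = LinearNDInterpolator(pts[:, :2], pts[:, 2])
    approx = surf(raster[:, :2])
    errors = np.abs(raster[:, 2] - np.nan_to_num(approx))
    worst = int(np.argmax(errors))
    if errors[worst] <= tolerance:
        break
    sel.append(worst)     # insert the worst-fit sample into the TIN

print(f"TIN uses {len(sel)} of {len(raster)} raster points "
      f"(max error {errors[worst]:.3f})")
```

The storage saving the abstract reports corresponds to the ratio of selected points to raster points; a production implementation would update the triangulation locally instead of rebuilding the interpolant each pass.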

_id 4517
authors Fuchs, Henry, Kedem, Zvi M. and Naylor, Bruce F.
year 1979
title Predetermining Visibility Priority in 3-D Scenes
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 175-181 : ill. includes bibliography
summary The principal calculation performed by all visible surface algorithms is the determination of the visible polygon at each pixel in the image. Of the many possible speedups and efficiencies found for this problem, only one published algorithm (developed almost a decade ago by a group at General Electric) took advantage of an observation that many visibility calculations could be performed without knowledge of the eventual viewing position and orientation -- once for all possible images. The method is based on a 'potential obscuration' relation between polygons in the simulated environment. Unfortunately, the method worked only for certain objects; unmanageable objects had to be manually (and expertly!) subdivided into manageable pieces. Described in this paper is a solution to this problem which allows substantial a-priori visibility determination for all possible objects without any manual intervention. The method also identifies the (hopefully, few) visibility calculations which remain to be performed after the viewing position is specified. Also discussed is the development of still stronger solutions which could further reduce the number of these visibility calculations remaining at image generation time
keywords algorithms, hidden lines, hidden surfaces, computer graphics
series CADline
last changed 2003/06/02 13:58
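
The key property described in this abstract, visibility order fixed before the viewpoint is known, is easiest to see in the binary-space-partitioning form the same authors formalized shortly afterwards: a fixed tree of splitting planes yields a correct back-to-front order for any eye position, with the viewpoint only steering which subtree is visited first. The scene and plane choices below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional, List, Tuple

@dataclass
class Node:
    plane: Tuple[float, float, float, float]   # ax + by + cz + d = 0
    polygon: str                               # stand-in for polygon data
    front: Optional["Node"] = None
    back: Optional["Node"] = None

def side(plane, eye):
    a, b, c, d = plane
    x, y, z = eye
    return a * x + b * y + c * z + d

def back_to_front(node: Optional[Node], eye, out: List[str]) -> None:
    """Painter's-algorithm order: far subtree, node polygon, near subtree."""
    if node is None:
        return
    if side(node.plane, eye) >= 0:            # eye in the front half-space
        back_to_front(node.back, eye, out)
        out.append(node.polygon)
        back_to_front(node.front, eye, out)
    else:                                     # eye in the back half-space
        back_to_front(node.front, eye, out)
        out.append(node.polygon)
        back_to_front(node.back, eye, out)

# two walls split by the planes x = 0 and y = 0 (hypothetical scene)
scene = Node((1, 0, 0, 0), "wall_x0",
             front=Node((0, 1, 0, 0), "wall_y0_east"),
             back=Node((0, 1, 0, 0), "wall_y0_west"))

order: List[str] = []
back_to_front(scene, eye=(3.0, -2.0, 1.0), out=order)
print(" -> ".join(order))   # painted far-to-near for this viewpoint
```

All the expensive splitting and ordering work happens once when the tree is built; per frame, only the cheap half-space tests above remain, which is the a-priori saving the paper is after.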

_id c6a9
authors Kay, Douglas Scott and Greenberg, Donald P.
year 1979
title Transparency for Computer Synthesized Images
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 158-164 : ill. (some col.). includes bibliography
summary Simple transparency algorithms which assume a linear transparency over an entire surface are the type most often employed to produce computer synthesized images of transparent objects with curved surfaces. Although most of the images created with these algorithms do give the impression of transparency, they usually do not look realistic. One of the most serious problems is that the intensity of the light that is transmitted through the objects is generally not proportional to the amount of material through which it must pass. Another problem is that the image seen behind the objects is not distorted as would naturally occur when the light is refracted as it passes through a material of different density. Use of a non-linear transparency algorithm can provide a great improvement in the realism of an image at a small additional cost. Making the transparency proportional to the normal to the surface causes it to decrease towards the edges of the surface where the path of the light through the object is longer. The exact simulation of refraction, however, requires that each sight ray be individually traced from the observer, through the picture plane and through each transparent object until an opaque surface is intersected. Since the direction of the ray would change as each material of differing optical density was entered, the hidden surface calculations required would be very time consuming. However, if a few assumptions are made about the geometry of each object and about the conditions under which they are viewed, a much simpler algorithm can be used to approximate the refractive effect. This method proceeds in a back-to-front order, mapping the current background image onto the next surface, until all surfaces have been considered
keywords computer graphics, shading, transformation, display, visualization, algorithms, realism
series CADline
last changed 2003/06/02 13:58
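
One plausible reading of the non-linear transparency idea above, in code: scale each surface's transparency by the z-component of its unit normal so that transmission falls off toward silhouettes, where light traverses more material, then composite back-to-front. The falloff exponent and sample values are invented; this is a hedged sketch, not the paper's exact formula:

```python
import math

def nonlinear_transparency(t_min: float, t_max: float,
                           normal: tuple, power: float = 2.0) -> float:
    """Transparency as a function of how squarely the surface faces the eye."""
    nz = abs(normal[2]) / math.sqrt(sum(c * c for c in normal))
    return t_min + (t_max - t_min) * (1.0 - (1.0 - nz) ** power)

def composite(background: float, surface: float, t: float) -> float:
    """Blend one channel back-to-front: t is the fraction transmitted."""
    return t * background + (1.0 - t) * surface

bg, glass = 0.8, 0.2
for normal in [(0.0, 0.0, 1.0), (0.7, 0.0, 0.7), (0.95, 0.0, 0.1)]:
    t = nonlinear_transparency(0.1, 0.9, normal)
    print(f"normal {normal}: t = {t:.2f}, pixel = {composite(bg, glass, t):.2f}")
```

The back-to-front mapping pass the abstract describes would apply this per pixel while warping the background image onto each successive transparent surface to approximate refraction.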

_id 8023
authors Lang, M.S., Cohen, R.L. and Eschenberg, K.E. (et al)
year 1979
title Implementation of An Interactive Computer Graphics Environment at NASA/JSC
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 246-252 : ill. includes bibliography
summary The implementation of visually-oriented software for graphics support on the high-performance computer graphics hardware at NASA's Johnson Space Center is the latest step in the evolution of an interactive computer applications technology being developed by the Computer Graphics Group at The Applied Research Laboratory of Penn State University. This technology is designed to aid the typical scientist or engineer in learning and using computer graphics productively, including writing his own programs and interfacing to software specialists who will write and maintain his programs. Key aspects of the current development include the creation and incorporation of a visually-oriented learning package for graphics geometric perception and graphics programming, as well as a sophisticated control environment which aids the user in obtaining a quick understanding of and access to the system. Preliminary results indicate that this software support can substantially reduce the start-up time for a novice graphics user with some background in Fortran
keywords computer graphics, user interface, software, learning, programming, control, education
series CADline
last changed 2003/06/02 13:58

_id ecaade2020_402
id ecaade2020_402
authors Leibovich, Liz, Nitzan-Shiftan, Alona and Sprecher, Aaron
year 2020
title Cybernetic Methodologies for Flexible and Generative Architectural Systems - the case of Fun Palace and Pattern Language
doi https://doi.org/10.52842/conf.ecaade.2020.1.703
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 703-708
summary The study focuses on early attempts to deal with complex physical environments through a comparative analysis of two canonic projects that combine architectural design with cybernetic theories: (1) "The Fun Palace", by British architect Cedric Price, 1962; and (2) "A Pattern Language", by architectural theorist Christopher Alexander, 1979. This study suggests that both projects dared to advance the relationship between architecture and cybernetics in order to create active reciprocity between architectural design and cybernetic system theories. Drawing on ideas and terms from systems theory, we suggest using a cybernetic system diagram to compare the two projects. We compare the work of Alexander and Price through the terminology of current technologies in order to better understand the reciprocity between the two fields. Such terms include feedback loop, optimization and translation processes, input and output, influence on the environment, automation and user interaction.
keywords Cybernetic; Architecture; System; Feedback
series eCAADe
email
last changed 2022/06/07 07:52

_id 69b3
authors Markelin, Antero
year 1993
title Efficiency of Model Endoscopic Simulation - An Experimental Research at the University of Stuttgart
source Endoscopy as a Tool in Architecture [Proceedings of the 1st European Architectural Endoscopy Association Conference / ISBN 951-722-069-3] Tampere (Finland), 25-28 August 1993, pp. 31-34
summary At the Institute of Urban Planning at the University of Stuttgart, early experiments were made with the help of endoscopes in the late 1970s. The intention was to find new instruments to visualize urban design projects. The first experiment included the use of a 16 mm film of a 1:170 scale model of the market place at Karlsruhe, including design alternatives (with trees, without trees, etc.). The film was shown to the Karlsruhe authorities, who had to make the decision about the alternatives. It was said that the film was of great help in the decision-making, and that a design proposal had never before been presented in such an understandable way. In 1975-77, with the support of the Deutsche Forschungsgemeinschaft (German Research Foundation), an investigation was carried out into existing endoscopic simulation facilities, such as those in Wageningen, Lund and Berkeley. The resulting publication was mainly concerned with technical installations and their applications. However, a key question remained: ”Can reality be simulated with endoscopy?” In 1979-82, in order to answer that question, the Institute carried out the most extensive research of the time into the validity of endoscopic simulation. Of special importance was the inclusion of social scientists and psychologists from the University of Heidelberg and Mannheim. A report was produced in 1983. The research was concerned with the theory of model simulation, its ways of use and its users, and then the establishment of requirements for effective model simulation. For the main research work with models or simulation films, psychological tests were developed which enabled a tested person to give accurate responses or evidence without getting involved in alien technical terminology. It was also thought that the use of semantic differentials would make the work imprecise or arbitrary.

keywords Architectural Endoscopy
series EAEA
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43
