CumInCAD is a cumulative index of publications in Computer Aided Architectural Design, supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD, and CAAD Futures


Hits 1 to 20 of 61

_id 98bd
authors Pea, R.
year 1993
title Practices of Distributed Intelligence and Designs for Education
source Distributed Cognitions, edited by G. Salomon. New York, NY: Cambridge University Press
summary Notes on key points:
- Knowledge is commonly socially constructed, through collaborative efforts...
- Intelligence may also be distributed for use in designed artifacts as diverse as physical tools, representations such as diagrams, and computer-user interfaces to complex tasks.
- Cites Leont'ev (1978) for activity theory, which argues forcefully for the centrality of people-in-action, activity systems, as units of analysis for deepening our understanding of thinking.
- Intelligence is distributed: the resources that shape and enable activity are distributed across people, environments, and situations.
- Intelligence is accomplished rather than possessed.
- Affordance refers to the perceived and actual properties of a thing, primarily those functional properties that determine how the thing could possibly be used.
- Cites Norman (1988) on design and psychology, "the psychology of everyday things."
- We deploy effort-saving strategies in recognition of their cognitive economy and diminished opportunity for error.
- The affordances of artifacts may be more or less difficult to convey to novice users of these artifacts in the activities to which they contribute distributed intelligence.
- Starts with Norman's seven stages of action, beginning with forming a goal, an intention, and distinguishes four kinds of desire:
  - Task desire: clear goal and intention, an action and a means.
  - Mapping desire: unable to map the goal back to an action.
  - Circumstantial desire: no specific goal or intention, an opportunistic approach to a potential new goal.
  - Habitual desire: a familiar course of action, rapidly cycling through all seven stages of action.
- Differentiates inscriptional systems from representational or symbol systems, because inscriptional systems are completely external, while representational or symbol systems have been used in cognitive science as mental constructs.
- The situated properties of everyday cognition are highly inventive in exploiting features of the physical and social situation as resources for performing a task, thereby avoiding the need for mental symbol manipulations unless they are required by that task.
- Calls for explicit recognition of the intelligence represented and representable in design, specifically in designed artifacts that play important roles in human activities.
- Once intelligence is designed into the affordance properties of artifacts, it both guides and constrains the likely contributions of that artifact to distributed intelligence in activity.
- Culturally valued designs for distributed intelligence will change over time, especially as new technology becomes associated with a task domain.
- If we treat distributed intelligence in action as the scientific unit of analysis for research and theory on learning and reasoning:
  - What is distributed?
  - What constraints govern the dynamics of such distributions on different time scales?
  - Through what reconfigurations of distributed intelligence might the performance of an activity system improve over time?
- Intelligence is manifest in activity and distributed in nature.
- Intelligent activities... in the real world... are often collaborative, depend on resources beyond an individual's long-term memory, and require the use of information-handling tools...
- Cites Wartofsky (1979): the artifact is to cultural evolution what the gene is to biological evolution, the vehicle of information across generations.
- Systems of activity, involving persons, environment, and tools, become the locus of developmental investigation.
- Disagrees with Salomon et al.'s entity-oriented approach, a language of containers holding things.
- Human cognition aspires to efficiency in distributing intelligence across individuals, environment, external symbolic representations, tools, and artifacts, as a means of coping with the complexity of activities we often call "mental."
series other
last changed 2003/04/23 15:14

_id e7b8
authors Dahl, Veronica
year 1983
title Logic Programming as a Representation of Knowledge
source IEEE Computer. IEEE Computer Society, October, 1983. vol. 16: pp. 106-110 : ill. includes bibliography
summary Logic has traditionally provided a firm conceptual framework for representing knowledge. As it can formally deal with the notion of logical consequence, the introduction of Prolog has made it possible to represent knowledge in terms of logic and also to expect appropriate inferences to be drawn from it automatically. This article illustrates and explores these ideas with respect to two central representational issues: problem-solving knowledge and database knowledge. The technical aspects of both subjects have been covered elsewhere (Kowalski, R., Logic for Problem Solving, North-Holland, 1979; Dahl, V., on database system development through logic, ACM Trans., vol. 7, no. 3, Mar. 1982, p. 102). This explanation uses simple, nontechnical terms
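The knowledge-as-logic idea is easy to demonstrate even outside Prolog. Below is a minimal forward-chaining sketch in Python, with hypothetical facts and one hand-coded rule (not drawn from the paper), showing how new knowledge can be inferred automatically from stored facts:

```python
# Minimal forward-chaining sketch over Horn-clause-style rules.
# Facts are tuples; the rule derives a new fact whenever its body holds.
# Predicates and names here are hypothetical, not from Dahl's paper.

facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparent_rule(fs):
    """Derive ('grandparent', X, Z) from ('parent', X, Y) and ('parent', Y, Z)."""
    derived = set()
    parents = [f for f in fs if f[0] == "parent"]
    for (_, x, y1) in parents:
        for (_, y2, z) in parents:
            if y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

# Apply rules to a fixed point, as a logic-programming system would.
while True:
    new = grandparent_rule(facts) - facts
    if not new:
        break
    facts |= new

print(("grandparent", "tom", "ann") in facts)  # True
```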
keywords PROLOG, knowledge, representation, logic, programming, problem solving, database
series CADline
last changed 1999/02/12 15:08

_id 7e54
authors Akin, Ömer
year 1979
title Models of Architectural Knowledge - An Information Processing Model of Design
source Carnegie Mellon University, College of Fine Arts, Pittsburgh
summary Throughout the history of art the position of the artist towards his goals and his product has been constantly redefined. The two opposing views in the above quotation, those of German Romanticism and Classicism, are typical of the temperamental nature of the state of the art. Today's artist uses intuition as well as reason in his creative work. Similarly, whether we consider the architect an artist or a scientist, he is constantly required to use his intellectual as well as emotional resources while designing. I do not intend to endorse an attitude for the architect which condones only one of those sources at the expense of the other. Today there is a real opportunity for understanding the reasoning used in problem-solving and applying it to the area of architectural design. The opportunity arises from the large amount of knowledge accumulated in the area of human problem-solving, and from methods of analyzing and developing models of human problem-solving behavior. The most frequently referred-to points of departure in this area are Simon's pioneering work in the area of decision-making (1944) and Newell, Shaw and Simon's work on "heuristics" (1957).
series thesis:PhD
email
last changed 2003/02/12 22:39

_id 4eb9
authors Brown, Kevin Q.
year 1979
title Dynamic Programming in Computer Science
source 44 p. : ill. Pittsburgh, PA: Department of Computer Science, CMU, February, 1979. CMU-CS-79-106. Includes bibliography
summary This paper is a survey of dynamic programming algorithms for problems in computer science. For each of the problems the author derives the functional equations and provides numerous references to related results. For many of the problems a dynamic programming algorithm is explicitly given. In addition, the author presents several new problems and results
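As a generic illustration of the functional-equation style the survey describes (the standard rod-cutting problem, not one of the paper's own examples), a dynamic program states f(n) = max over i of price[i] + f(n - i), with f(0) = 0, and memoizes it directly:

```python
from functools import lru_cache

# Rod cutting: a standard dynamic-programming example, used here only to
# illustrate the functional-equation style; the prices are hypothetical.

price = {1: 1, 2: 5, 3: 8, 4: 9}  # value of a rod piece of each length

@lru_cache(maxsize=None)
def best(n):
    """Maximum value obtainable from a rod of length n."""
    if n == 0:
        return 0
    return max(price[i] + best(n - i) for i in range(1, n + 1) if i in price)

print(best(4))  # 10: two pieces of length 2 beat selling length 4 whole
```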
keywords algorithms, problem solving, dynamic programming
series CADline
last changed 2003/06/02 10:24

_id 4517
authors Fuchs, Henry, Kedem, Zvi M. and Naylor, Bruce F.
year 1979
title Predetermining Visibility Priority in 3-D Scenes
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 175-181 : ill. includes bibliography
summary The principal calculation performed by all visible surface algorithms is the determination of the visible polygon at each pixel in the image. Of the many possible speedups and efficiencies found for this problem, only one published algorithm (developed almost a decade ago by a group at General Electric) took advantage of an observation that many visibility calculations could be performed without knowledge of the eventual viewing position and orientation -- once for all possible images. The method is based on a 'potential obscuration' relation between polygons in the simulated environment. Unfortunately, the method worked only for certain objects; unmanageable objects had to be manually (and expertly!) subdivided into manageable pieces. Described in this paper is a solution to this problem which allows substantial a-priori visibility determination for all possible objects without any manual intervention. The method also identifies the (hopefully, few) visibility calculations which remain to be performed after the viewing position is specified. Also discussed is the development of still stronger solutions which could further reduce the number of these visibility calculations remaining at image generation time
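The paper's "potential obscuration" machinery is not reproduced here, but the underlying idea, that a view-independent spatial partition yields a correct back-to-front drawing order for any later eye position, can be sketched as follows. This is a minimal illustration assuming a prebuilt tree of splitting planes, not the authors' algorithm:

```python
# Minimal sketch: a view-independent tree of splitting planes yields, for
# any eye position, a correct back-to-front polygon order at display time.

class Node:
    def __init__(self, plane, front, back, polygons):
        self.plane = plane          # (a, b, c, d) for ax + by + cz + d = 0
        self.front = front          # subtree on the positive side
        self.back = back            # subtree on the negative side
        self.polygons = polygons    # polygons lying on the plane

def side(plane, point):
    a, b, c, d = plane
    x, y, z = point
    return a * x + b * y + c * z + d

def back_to_front(node, eye):
    """Yield polygons farthest-first relative to the eye position."""
    if node is None:
        return
    if side(node.plane, eye) >= 0:      # eye on front side: draw back first
        yield from back_to_front(node.back, eye)
        yield from node.polygons
        yield from back_to_front(node.front, eye)
    else:                               # eye on back side: draw front first
        yield from back_to_front(node.front, eye)
        yield from node.polygons
        yield from back_to_front(node.back, eye)

# Planes z = 0 (root), z = 2 (front child), z = -2 (back child).
root = Node((0, 0, 1, 0),
            Node((0, 0, 1, -2), None, None, ["near"]),
            Node((0, 0, 1, 2), None, None, ["far"]),
            ["middle"])
print(list(back_to_front(root, (0, 0, 5))))  # ['far', 'middle', 'near']
```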
keywords algorithms, hidden lines, hidden surfaces, computer graphics
series CADline
last changed 2003/06/02 13:58

_id 69b3
authors Markelin, Antero
year 1993
title Efficiency of Model Endoscopic Simulation - An Experimental Research at the University of Stuttgart
source Endoscopy as a Tool in Architecture [Proceedings of the 1st European Architectural Endoscopy Association Conference / ISBN 951-722-069-3] Tampere (Finland), 25-28 August 1993, pp. 31-34
summary At the Institute of Urban Planning at the University of Stuttgart, early experiments were made with the help of endoscopes in the late 1970s. The intention was to find new instruments to visualize urban design projects. The first experiment included the use of a 16 mm film of a 1:170 scale model of the market place at Karlsruhe, including design alternatives (with trees, without trees, etc.). The film was shown to the Karlsruhe authorities, who had to make the decision about the alternatives. It was said that the film was a great help in decision-making, and that a design proposition had never before been presented in such an understandable way. In 1975-77, with the support of the Deutsche Forschungsgemeinschaft (German Research Foundation), an investigation was carried out into existing endoscopic simulation facilities, such as those in Wageningen, Lund and Berkeley. The resulting publication was mainly concerned with technical installations and their applications. However, a key question remained: "Can reality be simulated with endoscopy?" In 1979-82, in order to answer that question, the Institute carried out the most extensive research of the time into the validity of endoscopic simulation. Of special importance was the inclusion of social scientists and psychologists from the Universities of Heidelberg and Mannheim. A report was produced in 1983. The research was concerned with the theory of model simulation, its ways of use and its users, and with establishing requirements for effective model simulation. For the main research work with models or simulation films, psychological tests were developed which enabled a test person to give accurate responses or evidence without getting involved in alien technical terminology. It was also thought that the use of semantic differentials would make the work imprecise or arbitrary.
keywords Architectural Endoscopy
series EAEA
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43

_id cebc
authors Rhodes, Michael L.
year 1979
title An Algorithmic Approach to Controlling Search in Three-Dimensional Image Data
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 134-141 : ill. includes bibliography
summary In many three-dimensional imaging applications random shaped objects, reconstructed from serial sections, are isolated to display their overall structure in a single view. This paper presents an algorithm to control an ordered search strategy for locating all contours of random shaped objects intersected by a series of cross-section image planes. Classic search techniques in AI problem solving and software for image processing and computer graphics are combined here to aid program initialization and automate the search process thereafter. Using three-dimensional region growing, this method isolates all spatially connected pixels forming a structure's volume and enters image planes the least number of times to do so. An algorithmic description is given to generalize the process for controlling search in 3-D image data where little core memory is available. Phantom and medical computer tomographic data are used to illustrate the algorithm's performance
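A minimal sketch of the three-dimensional region-growing step described above, assuming the volume is a dense array in memory (the paper's contribution is precisely controlling search when it is not):

```python
from collections import deque
import numpy as np

# Three-dimensional region growing: from a seed voxel, collect all
# 6-connected voxels at or above a threshold. A generic illustration,
# not the paper's memory-constrained search-control algorithm.

def grow_region(volume, seed, threshold):
    region, frontier = set(), deque([seed])
    while frontier:
        voxel = frontier.popleft()
        if voxel in region:
            continue
        z, y, x = voxel
        if not (0 <= z < volume.shape[0] and 0 <= y < volume.shape[1]
                and 0 <= x < volume.shape[2]):
            continue
        if volume[z, y, x] < threshold:
            continue
        region.add(voxel)
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            frontier.append((z + dz, y + dy, x + dx))
    return region

vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = 1.0                      # a 2x2x2 block of "tissue"
print(len(grow_region(vol, (1, 1, 1), 0.5)))  # 8 connected voxels
```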
keywords algorithms, AI, image processing, computer graphics, methods, search
series CADline
last changed 2003/06/02 10:24

_id af53
authors Boyer, E. and Mitgang, L.
year 1996
title Building community: a new future for architecture education and practice
source Carnegie Foundation for the Advancement of Teaching
summary Internships, before and after graduation, are the most essential link connecting students to the world of practice. Yet, by all accounts, internship is perhaps the most troubled phase of the continuing education of architects. During this century, as architectural knowledge grew more complex, the apprenticeship system withered away and schools assumed much of the responsibility for preparing architects for practice. However, schools cannot do the whole job. It is widely acknowledged that certain kinds of technical and practical knowledge are best learned in the workplace itself, under the guidance of experienced professionals. All state accrediting boards require a minimum period of internship, usually about three years, before a person is eligible to take the licensing exam. The National Council of Architectural Registration Boards (NCARB) allows students to earn up to two years of work credit prior to acquisition of an accredited degree. The Intern Development Program (IDP), launched by NCARB and the American Institute of Architects in 1979, provides the framework for internship in some forty states. The program was designed to assure that interns receive adequate mentoring, that experiences are well-documented, and that employers and interns allocate enough time to a range of educational and vocational experiences to prepare students for eventual licensure. As the IDP Guidelines state, "The shift from school to office is not a transition from theory to pragmatism. It is a period when theory merges with pragmatism.... It's a time when you: apply your formal education to the daily realities of architectural practice; acquire comprehensive experience in basic practice areas; explore specialized areas of practice; develop professional judgment; continue your formal education in architecture; and refine your career goals." Whatever its accomplishments, however, we found broad consensus that the Intern Development Program has not, by itself, solved the problems of internship. Though we found mutually satisfying internship programs at several of the firms we visited or heard about around the country, at many others interns told us they were not receiving the continuing education and experience they needed. The truth is that architecture has serious, unsolved problems compared with other fields when it comes to supplying on-the-job learning experiences to induct students into the profession on a massive scale. Medicine has teaching hospitals. Beginning teachers work in actual classrooms, supported by school taxes. Law offices are, for the most part, in a better financial position to support young lawyers and pay them living wages. The architecture profession, by contrast, must support a required system of internship prior to licensure in an industry that has neither the financial resources of law or medicine, the stability and public support of teaching, nor a network of locations like hospitals or schools where education and practice can be seamlessly connected. And many employers acknowledged those problems. "The profession has all but undermined the traditional relationship between the profession and the academy," said Neil Frankel, FAIA, executive vice president of Perkins & Will, a multinational firm with offices in New York, Chicago, Washington, and London. "Historically, until the advent of the computer, the profession said, 'Okay, go to school, then we in the profession will teach you what the real world is like.' With the coming of the computer, the profession needed a skill that students had, and has left behind the other responsibilities." One intern told us she had been stuck for months doing relatively menial tasks such as toilet elevations. Another intern at a medium-sized firm told us he had been working sixty to seventy hours per week for a year and a half. "Then my wife had a baby and I 'slacked off' to fifty hours. The partner called me in and I got called on the carpet for not working hard enough." "The whole process of internship is being outmoded by economics," one frustrated intern told us. "There's not the time or the money. There's no conception of people being groomed for careers. The younger staff are chosen for their value as productive workers." "We just don't have the best structure here to use an intern's abilities to their best," said a Mississippi architect. "The people who come out of school are really problems. I lost patience with one intern who was demanding that I switch him to another section so that he could learn what he needed for his IDP. I told him, 'It's not my job to teach you. You are here to produce.'" What steps might help students gain more satisfying work opportunities, both during and after graduation?
series other
last changed 2003/04/23 15:14

_id ddss9201
id ddss9201
authors Van Bakel, A.P.M.
year 1993
title Personality assessment in regard to design strategies
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary This paper discusses some preliminary results of several knowledge-acquisition and documentation-structuring techniques that were used to assess the working styles of architects. The focus of this assessment was on their strategic design behaviour. Hettema's Interactive Personality Model (Hettema 1979, 1989) was used to explain and interpret these results. The methods used to acquire the necessary data are protocol analysis, card sorting and interviews. The results suggest that at least three parameters can be used to explain and differentiate the strategic design behaviour of architects. These parameters are S (site-oriented), B (brief-oriented) and C (concept-oriented). A priority hierarchy of these parameters reveals six major distinguishable working styles. These results are captured in a new design model that can be used in data bank implementations.
series DDSS
last changed 2003/08/07 16:36

_id c949
authors Even, Simon
year 1979
title Graph Algorithms
source ix, 249 p. : ill. Potomac, MD: Computer Science Press Inc., 1979. includes bibliography and index -- (Computer Software Engineering series)
summary Surveys recent progress on efficient algorithms for graph processing and graph theory. Each chapter has a set of exercises, which makes it suitable as a textbook
keywords algorithms, graphs, theory
series CADline
last changed 1999/02/12 15:08

_id 00f3
authors Baybars, Ilker and Eastman, Charles M.
year 1979
title Generating the Underlying Graphs for Architectural Arrangements
source 10 p. : ill. Pittsburgh: School of Urban and Public Affairs, Carnegie Mellon University, April, 1979. Research report No.79. Includes bibliography
summary The mathematical correspondence to a floorplan is a metric planar graph. Several methods for systematic direct generation of metric planar graphs have been developed, including polyominoes, March and Matela, and shape grammars. Another approach has been to develop a spatial composition in two separate steps. The first step involves discrete variables and consists of enumerating a defined set of non-metric planar graphs. The second step involves spatial dimensions, i.e. continuous variables, and maps the graphs onto the Euclidean plane, from which a satisfactory or optimal one is selected. This paper focuses on the latter two-step process. It presents a general method of solving the first step, that is, the exhaustive enumeration of a set of planar graphs. The paper consists of three sections: the first is an introduction to graph theory; the second presents the generation of maximal planar graphs; the last summarizes the presentation and comments on the appropriateness of the method
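A brute-force sketch of that first step, enumerating adjacency graphs over a set of rooms and keeping the planar, connected ones, can be written with networkx. This is illustrative only; the paper's direct enumeration of maximal planar graphs avoids this combinatorial filtering entirely, and the room names are hypothetical:

```python
import itertools
import networkx as nx  # third-party: pip install networkx

# Enumerate every graph on the room set and filter for planarity and
# connectivity: the non-metric step, before any dimensions are assigned.

rooms = ["hall", "kitchen", "living", "bed"]
pairs = list(itertools.combinations(rooms, 2))

planar_graphs = []
for r in range(len(pairs) + 1):
    for edges in itertools.combinations(pairs, r):
        g = nx.Graph(list(edges))
        g.add_nodes_from(rooms)
        if nx.is_connected(g) and nx.check_planarity(g)[0]:
            planar_graphs.append(g)

print(len(planar_graphs))  # 38: every connected graph on 4 rooms is planar
```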
keywords graphs, floor plans, architecture, design, automation, space allocation
series CADline
email
last changed 2003/05/17 10:15

_id c3b5
authors Hinds, John K. and Kuan, L.P.
year 1979
title Sculptured Surface Technology as a Unified Approach to Geometric Definition
source CASA - The Computer and Automated System Association of SME. 23 p. : ill Dearborn: SME, 1979. MS79-146. includes bibliography.
summary The purpose of this paper is to describe a comprehensive approach to representing and machining complex surface shapes in an APT programming system. The APT (Automatically Programmed Tools) user language was extended to permit the definition of a hierarchy of curves and surfaces. Much of the logic has been implemented using matrix canonical forms which are closed under the full family of projective transformations, permitting family of parts storage and retrieval and part compensation. The area of numerical control machining was addressed, but the solutions for tool positioning were only partially successful due to the complexity of the algorithmic problem. This paper first outlines some of the mathematical methods adopted and then illustrates how these have been implemented with an APT part programming example
keywords curved surfaces, representation, geometric modeling, mechanical engineering, CAM
series CADline
last changed 2003/06/02 13:58

_id 4966
authors Kaplan, Michael and Greenberg, Donald P.
year 1979
title Parallel Processing Techniques for Hidden Surface Removal
source SIGGRAPH '79 Conference Proceedings. 1979. vol. 13 ; no. 2: pp. 300-307 : ill. includes bibliography
summary Previous work in the hidden-surface problem has revealed two key concepts. First, the removal of non-visible surfaces is essentially a sorting problem. Second, some form of coherence is essential for the efficient solution of this problem. In order to provide real-time simulations, it is not only the amount of sorting which must be reduced, but the total time required for computation. One potentially economic strategy to attain this goal is the use of parallel processor systems. This approach implies that the computational time will no longer be dependent on the total amount of sorting, but more on the appropriate division of responsibility. This paper investigates two existing algorithmic approaches to the hidden-surface problem with a view towards their applicability to implementation on a parallel machine organization. In particular, the statistical results of a parallel processor implementation indicate the difficulties stemming from a loss of coherence and imply potentially important design criteria for a parallel configuration
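A toy sketch of one division of responsibility in this spirit: split the screen into tiles and resolve visibility in each tile independently, in parallel. Surfaces are reduced to constant-depth rectangles here, and nothing in it reflects the authors' actual configuration:

```python
from concurrent.futures import ProcessPoolExecutor

# Image-space parallelism: each tile independently finds the nearest
# surface per pixel. Surfaces are axis-aligned rectangles at constant
# depth, a toy stand-in for polygons.

SURFACES = [  # (xmin, xmax, ymin, ymax, depth, id)
    (0, 8, 0, 8, 5.0, "A"),
    (2, 6, 2, 6, 3.0, "B"),
]

def shade_tile(tile):
    x0, x1, y0, y1 = tile
    out = {}
    for y in range(y0, y1):
        for x in range(x0, x1):
            hits = [(d, sid) for (xa, xb, ya, yb, d, sid) in SURFACES
                    if xa <= x < xb and ya <= y < yb]
            out[(x, y)] = min(hits)[1] if hits else None
    return out

if __name__ == "__main__":
    tiles = [(0, 4, 0, 8), (4, 8, 0, 8)]  # two tiles, two workers
    image = {}
    with ProcessPoolExecutor() as pool:
        for part in pool.map(shade_tile, tiles):
            image.update(part)
    print(image[(4, 4)])  # "B": the nearer surface wins
```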
keywords computer graphics, rendering, display, hidden surfaces, parallel processing, algorithms
series CADline
last changed 2003/06/02 13:58

_id c6a9
authors Kay, Douglas Scott and Greenberg, Donald P.
year 1979
title Transparency for Computer Synthesized Images
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 158-164 : ill. (some col.). includes bibliography
summary Simple transparency algorithms which assume a linear transparency over an entire surface are the type most often employed to produce computer synthesized images of transparent objects with curved surfaces. Although most of the images created with these algorithms do give the impression of transparency, they usually do not look realistic. One of the most serious problems is that the intensity of the light that is transmitted through the objects is generally not proportional to the amount of material through which it must pass. Another problem is that the image seen behind the objects is not distorted as would naturally occur when the light is refracted as it passes through a material of different density. Use of a non-linear transparency algorithm can provide a great improvement in the realism of an image at a small additional cost. Making the transparency proportional to the normal to the surface causes it to decrease towards the edges of the surface where the path of the light through the object is longer. The exact simulation of refraction, however, requires that each sight ray be individually traced from the observer, through the picture plane and through each transparent object until an opaque surface is intersected. Since the direction of the ray would change as each material of differing optical density was entered, the hidden surface calculations required would be very time consuming. However, if a few assumptions are made about the geometry of each object and about the conditions under which they are viewed, a much simpler algorithm can be used to approximate the refractive effect. This method proceeds in a back-to-front order, mapping the current background image onto the next surface, until all surfaces have been considered
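The non-linear idea can be stated compactly: scale transparency by the screen-facing component of the unit normal, so transmission falls off toward silhouette edges where the light path through the object is longer. A sketch, with illustrative exponent and bounds rather than the paper's exact formulation:

```python
# Non-linear transparency from the surface normal. The exponent and
# the t_min/t_max bounds are illustrative choices, not the paper's.

def nonlinear_transparency(t_min, t_max, normal, power=2.0):
    """normal: unit surface normal, with normal[2] toward the viewer."""
    nz = abs(normal[2])
    return t_min + (t_max - t_min) * nz ** power

print(nonlinear_transparency(0.1, 0.9, (0.0, 0.0, 1.0)))  # 0.9, facing viewer
print(nonlinear_transparency(0.1, 0.9, (1.0, 0.0, 0.0)))  # 0.1, edge-on
```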
keywords computer graphics, shading, transformation, display, visualization, algorithms, realism
series CADline
last changed 2003/06/02 13:58

_id 4ec4
authors Rosenthal, David S.H., Stone, David and Bijl, Aart
year 1979
title Integrated CAAD Systems
source Edinburgh: March, 1979. [17] p. includes bibliography
summary A study of the fundamental design considerations underlying CAAD systems is presented. Although this study does not concentrate on specific computing applications, nor on the problem of implementing applications in specific design offices, it does present insights into computing systems which should be of interest to architects, and provides a basis for informed judgements on the future of CAAD
keywords CAD, architecture, design, methods, integration, systems
series CADline
last changed 2003/06/02 13:58

_id ea14
authors Anson, Ed
year 1979
title The Semantics of Graphical Input
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 113-120. includes bibliography
summary This paper describes the semantics of action, an approach to describing input devices which allows full utilization of all useful device characteristics and provides a high degree of hardware device independence. Part one discusses the semantics of graphical input devices. The second shows how to create hierarchies of devices which provide a large measure of hardware independence. The third part applies these concepts to some typical problems to demonstrate their completeness
keywords computer graphics, user interface, semantics
series CADline
last changed 1999/02/12 15:07

_id f42f
authors Baer, A., Eastman, C. and Henrion, M.
year 1979
title Geometric modeling: a survey
source Computer Aided Design; 11: 253
summary Computer programs are being developed to aid the design of physical systems ranging from individual mechanical parts to entire buildings or ships. These efforts highlight the importance of computer models of three dimensional objects. Issues and alternatives in geometric modelling are discussed and illustrated with comparisons of 11 existing modelling systems, in particular coherently-structured models of polyhedral solids where the faces may be either planar or curved. Four categories of representation are distinguished: data representations that store full, explicit shape information; definition languages with which the user can enter descriptions of shapes into the system, and which can constitute procedural representations; special subsets of the information produced by application programs; and conceptual models that define the logical structure of the data representation and/or definition language.
series journal paper
last changed 2003/04/23 15:14

_id 60d4
authors Baer, A., Eastman, C.M. and Henrion, M.
year 1979
title Geometric Modeling : a Survey
source Business Press. September, 1979. vol. 11: pp. 253-271 : ill. includes bibliography
summary Computer programs are being developed to aid the design of physical systems ranging from individual mechanical parts to entire buildings or ships. These efforts highlight the importance of computer models of three-dimensional objects. Issues and alternatives in geometric modeling are discussed and illustrated with comparisons of 11 existing modeling systems, in particular coherently-structured models of polyhedral solids where the faces may be either planar or curved. Four categories of representation are distinguished: data representations that store full, explicit shape information; definition languages with which the user can enter descriptions of shapes into the system, and which can constitute procedural representations; special subsets of the information produced by application programs; and conceptual models that define the logical structure of the data representation and/or definition language
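The first category, data representations storing full, explicit shape information, can be illustrated with a toy boundary representation; the systems surveyed are far richer (curved faces, full topology records, and so on):

```python
from dataclasses import dataclass, field

# Toy boundary representation: explicit vertices plus faces given as
# vertex-index loops. Illustrative only, not any surveyed system's schema.

@dataclass
class BRep:
    vertices: list = field(default_factory=list)  # [(x, y, z), ...]
    faces: list = field(default_factory=list)     # [[v0, v1, v2, ...], ...]

    def add_vertex(self, xyz):
        self.vertices.append(xyz)
        return len(self.vertices) - 1

# A single unit-square face, built explicitly vertex by vertex.
solid = BRep()
quad = [solid.add_vertex(p) for p in
        [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]]
solid.faces.append(quad)
print(len(solid.vertices), len(solid.faces))  # 4 1
```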
keywords solid modeling, B-rep, CSG, languages, CAD, programming, data structures, boolean operations, polyhedra
series CADline
email
last changed 2003/05/17 10:15

_id caadria2018_033
id caadria2018_033
authors Bai, Nan and Huang, Weixin
year 2018
title Quantitative Analysis on Architects Using Culturomics - Pattern Study of Pritzker Winners Based on Google N-gram Data
doi https://doi.org/10.52842/conf.caadria.2018.2.257
source T. Fukuda, W. Huang, P. Janssen, K. Crolla, S. Alhadidi (eds.), Learning, Adapting and Prototyping - Proceedings of the 23rd CAADRIA Conference - Volume 2, Tsinghua University, Beijing, China, 17-19 May 2018, pp. 257-266
summary Quantitative studies using the Google Ngram corpus, known as Culturomics, have been used to analyze implicit patterns of cultural change. The Pritzker Prize, the premier prize in the field of architecture since 1979, has become increasingly diversified in recent years. This study uses the method of Culturomics, based on the Google Ngram corpus, to reveal the implicit patterns of Pritzker winners and the relationship between signs of their fame and the fact of prize-winning. 48 architects, 32 awarded and 16 promising, are analyzed in the printed English-language corpus between 1900 and 2008. Multiple regression models and multiple imputation methods are used during data processing. A Self-Organizing Map is used to define clusters among the awarded and promising architects. Six main clusters are detected, forming a 3×2 network of fame patterns. Most promising architects can be identified from the clustering, according to their similarity to the more typical prize winners. The method of Culturomics could expand the scope of architectural study, offering more possibilities to reveal implicit patterns in the existing empirical world.
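The clustering step could look roughly like the following sketch, which maps per-architect feature vectors onto a 3×2 Self-Organizing Map with the third-party minisom package; the random data stands in for the study's Google Ngram features:

```python
import numpy as np
from minisom import MiniSom  # third-party: pip install minisom

# Map 48 per-architect feature vectors onto a 3x2 SOM and read cluster
# membership from each vector's winning cell. The grid size mirrors the
# paper's six clusters; the features here are random stand-ins.

rng = np.random.default_rng(0)
features = rng.random((48, 10))  # 48 architects, 10 features each

som = MiniSom(3, 2, input_len=10, sigma=1.0, learning_rate=0.5)
som.random_weights_init(features)
som.train_random(features, 1000)

clusters = [som.winner(v) for v in features]  # (row, col) cell per architect
print(clusters[:5])
```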
keywords Culturomics; Google Ngram; Pritzker Prize; Fame Pattern; Self-Organizing Map
series CAADRIA
email
last changed 2022/06/07 07:54

_id fcd6
authors Berger, S.R.
year 1979
title Artificial Intelligence and its Impact on Computer-Aided Design
source Design Studies, vol 1, no. 3
summary This paper provides, for readers unfamiliar with the field, an introductory account of research which has been carried out in artificial intelligence. It attempts to distinguish between an artificial intelligence approach and a conventional computing approach, and to assess the future influence of the former on computer-aided design.
series journal paper
last changed 2003/04/23 15:14
