CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 145

_id c898
authors Gero, John S.
year 1986
title An Overview of Knowledge Engineering and its Relevance to CAAD
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 107-119
summary Computer-aided architectural design (CAAD) has come to mean a number of often disparate activities. These can be placed into one of two categories: using the computer as a drafting and, to a lesser extent, modelling system; and using it as a design medium. The distinction between the two categories is often blurred. Using the computer as a drafting and modelling tool relies on computing notions concerned with representing objects and structures numerically and with ideas of computer programs as procedural algorithms. Similar notions underlie the use of computers as a design medium. We shall return to these later. Clearly, all computer programs contain knowledge, whether methodological knowledge about processes or knowledge about structural relationships in models or databases. However, this knowledge is so intertwined with the procedural representation within the program that it can no longer be seen or found. Architecture is concerned with much more than numerical descriptions of buildings. It is concerned with concepts, ideas, judgement and experience. All these appear to be outside the realm of traditional computing. Yet architects, in their discourse, use models of buildings largely unrelated to either numerical descriptions or procedural representations. They make use of knowledge - about objects, events and processes - and make nonprocedural (declarative) statements that can only be described symbolically. The limits of traditional computing are the limits of traditional computer-aided design systems, namely, that they are unable directly to represent and manipulate declarative, nonalgorithmic knowledge or to perform symbolic reasoning. Developments in artificial intelligence have opened up ways of increasing the applicability of computers by acquiring and representing knowledge in computable forms. These approaches supplement rather than supplant existing uses of computers. They begin to allow the explicit representation of human knowledge. The remainder of this chapter provides a brief introduction to this field and describes, through applications, its relevance to computer-aided architectural design.
series CAAD Futures
email
last changed 2003/05/16 20:58
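
The procedural/declarative contrast drawn in the abstract above can be made concrete with a toy sketch. The facts, predicates and query function below are invented for illustration (they are not from Gero's chapter): design knowledge is stored as nonprocedural statements and interrogated symbolically, rather than being buried inside a procedural algorithm.

```python
# A minimal sketch of declarative (nonprocedural) knowledge representation.
# The facts and predicates are invented for illustration; they are not from the paper.

FACTS = {
    ("adjacent", "kitchen", "dining_room"),
    ("adjacent", "dining_room", "living_room"),
    ("requires_daylight", "living_room"),
    ("requires_daylight", "kitchen"),
}

def query(pattern):
    """Return variable bindings for facts matching a pattern; '?x'-style terms are variables."""
    results = []
    for fact in FACTS:
        if len(fact) != len(pattern):
            continue
        bindings = {}
        for p, f in zip(pattern, fact):
            if p.startswith("?"):
                if bindings.get(p, f) != f:
                    break
                bindings[p] = f
            elif p != f:
                break
        else:
            results.append(bindings)
    return results

# Which rooms must receive daylight?
print(query(("requires_daylight", "?room")))
# What is adjacent to the dining room?
print(query(("adjacent", "?room", "dining_room")))
```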

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A digital Procedure of Building Construction: A practical project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times, before computers were well developed, there was already research on representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). Until the appearance of 2D drawings, these could only express abstract visual thinking and a visually conceptualized vocabulary (Goldschmidt, 1999). Then, with the massive use of physical models in the Renaissance, the form and space of architecture were given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) observed that humans increasingly rely on other specialists, computational agents, and materials to augment their cognitive abilities. This discourse has been confirmed by recent research on the conception of design and its expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architectural design gives rise to a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), the Internet, and the restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most applications of computers to construction are restricted to simulations of the building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage of the building construction process (Madrazo, 2000; Dave, 2000) and to communication (Haymaker, 2000). In architectural design, concept design is achieved through drawings and models (Mitchell, 1997), while working drawings and even shop drawings are developed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend towards 3D visualization (Johnson and Clayton, 1998) and the differences between designing in the physical environment and in the virtual environment (Maher et al., 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process in the near future (just as physical models played a critical role in the early design process of the Renaissance). This research is combined with two practical building projects, following the progress of construction and using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems.
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings and some imagination. The purpose of this paper is to describe all the related elements with precision and correctness, to discuss every possibility of different thinking in the design of electro-mechanical engineering, to receive feedback from construction projects in the real world, and to compare the digital models with conventional drawings. Through the application of this research, the subtle relations between conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and a standard process are proposed, using conventional drawings, digital models and physical buildings. By introducing digital media into the design process of working drawings and shop drawings, there is an opportune chance to use digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id c50a
authors Bartschi, Martin
year 1985
title An Overview of Information Retrieval Subjects
source IEEE Computer. May, 1985. vol. 18: pp. 67-84 : ill. includes bibliography
summary The aim of an information retrieval system is to find information items relevant to an information need. As relevance is a kind of similarity relation between the concepts represented by the information item and those represented by the formulation of the information need, it is not astonishing to discover that the class of possible query forms - formulations of the information needs - is the same as the class of possible representations of information items. This article overviews current research problems in information structure and query evaluation
keywords database, information, queries, systems
series CADline
last changed 1999/02/12 15:07
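
Bartschi's observation that queries and items share one representation class, with relevance as a similarity relation between the two, underlies the vector-space view of retrieval. A minimal sketch under that reading (the documents, query and term-frequency weighting are illustrative choices, not taken from the article):

```python
# A sketch of relevance as similarity: query and documents share one representation
# (bag-of-words term-frequency vectors), and relevance is their cosine similarity.
import math
from collections import Counter

def vectorize(text):
    """Represent a document or a query identically: as a term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Similarity relation between two representations of the same class."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = {
    "d1": "parametric curves and surfaces for geometric modeling",
    "d2": "knowledge based systems for architectural design",
}
query = vectorize("geometric modeling of curves")
ranked = sorted(docs, key=lambda d: cosine(query, vectorize(docs[d])), reverse=True)
print(ranked)  # d1 ranks above d2 for this query
```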

_id 29ff
authors Farouki, Rida T. and Hinds, John K.
year 1985
title A Hierarchy of Geometric Forms
source IEEE Computer Graphics and Applications. May, 1985. vol. 5: pp. 51-78 : ill. includes bibliography
summary This article describes a unified approach to geometric modeling based on the mathematics of parametric polynomial functions. Such a unified scheme for geometric representation and computation provides a natural base for a geometric modeler of considerable versatility and robustness
keywords geometric modeling, parametrization, representation, curves, curved surfaces, B-splines
series CADline
last changed 2003/06/02 13:58
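
The parametric polynomial functions on which this hierarchy of forms rests can be illustrated with the simplest member of the family, a Bezier curve evaluated by de Casteljau's algorithm. A minimal sketch (the control points are arbitrary; B-spline and rational forms generalize the same idea):

```python
def de_casteljau(control_points, t):
    """Evaluate a parametric polynomial (Bezier) curve at parameter t in [0, 1]."""
    pts = [list(p) for p in control_points]
    n = len(pts)
    for r in range(1, n):
        for i in range(n - r):
            # Repeated linear interpolation between neighbouring points.
            pts[i] = [(1 - t) * a + t * b for a, b in zip(pts[i], pts[i + 1])]
    return pts[0]

# A cubic curve in the plane, sampled at a few parameter values.
ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, de_casteljau(ctrl, t))
```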

_id 00ed
authors O'Leary, Dianne and Stewart, G.W.
year 1985
title Data-Flow Algorithms for Parallel Matrix Computations
source Communications of the ACM August, 1985. vol. 28: pp. 840-853. includes bibliography.
summary In this article the authors develop some algorithms and tools for solving matrix problems on parallel processing computers. Operations are synchronized through data-flow alone, which makes global synchronization unnecessary and enables the algorithms to be implemented on machines with very simple operating systems and communication protocols. As examples, algorithms that form the main modules for solving Liapounov matrix equations are presented. The authors compare this approach to wavefront array processors and systolic arrays, and note its advantages in handling missized problems, in evaluating variations of algorithms or architectures, in moving algorithms from system to system, and in debugging parallel algorithms on sequential machines
keywords tools, algorithms, mathematics, parallel processing
series CADline
last changed 2003/06/02 13:58
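
The data-flow idea described above, in which an operation fires as soon as its operands arrive and no global barrier is needed, can be sketched with futures. This is not the authors' Liapounov solver, only a toy wavefront recurrence whose scheduling is driven purely by data dependencies:

```python
# A toy illustration of data-flow scheduling: each cell of a table is a task that
# fires as soon as the cells it depends on are done; no global synchronization is used.
# The recurrence itself is a placeholder, not the authors' algorithm.
from concurrent.futures import ThreadPoolExecutor

N = 4

def cell(north, west):
    up = north.result() if north else 0    # block only until this operand arrives
    left = west.result() if west else 0
    return up + left + 1                   # placeholder computation

futures = {}
with ThreadPoolExecutor() as pool:
    for i in range(N):
        for j in range(N):
            north = futures.get((i - 1, j))
            west = futures.get((i, j - 1))
            futures[(i, j)] = pool.submit(cell, north, west)
    print(futures[(N - 1, N - 1)].result())
```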

_id 6c66
authors Perlin, Ken
year 1985
title An Image Synthesizer
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 287- 296 : ill. includes bibliography
summary The author introduces the concept of a Pixel Stream Editor. This forms the basis for an interactive synthesizer for designing highly realistic Computer Generated Imagery. The designer works in an interactive Very High Level programming environment which provides a very fast concept/implement/view iteration cycle. Naturalistic visual complexity is built up by composition of non-linear functions, as opposed to the more conventional texture mapping or growth model algorithms. Powerful primitives are included for creating controlled stochastic effects. The concept of 'solid texture' is introduced to the field of CGI. The author has used this system to create very convincing representations of clouds, fire, water, stars, marble, wood, rock, soap films and crystals. The algorithms created with this paradigm are generally extremely fast, highly realistic, and asynchronously parallelizable at the pixel level
keywords computer graphics, programming, algorithms, synthesis, realism
series CADline
last changed 1999/02/12 15:09
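
The core idea above, a scalar 'solid texture' function defined over 3D space and built up by composing nonlinear primitives, can be sketched as follows. This uses hash-based value noise summed over octaves, not Perlin's actual gradient-noise construction, and is meant only to illustrate the principle:

```python
# A sketch of the 'solid texture' idea: a scalar function defined over 3D space,
# built by composing a cheap noise primitive across octaves. This is hash-based
# value noise, not Perlin's gradient noise.
import math

def _hash(ix, iy, iz):
    """Deterministic pseudo-random value in [0, 1] for an integer lattice point."""
    h = (ix * 374761393 + iy * 668265263 + iz * 2147483647) & 0xFFFFFFFF
    h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def _lerp(a, b, t):
    return a + t * (b - a)

def value_noise(x, y, z):
    """Trilinearly interpolated lattice noise."""
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - ix, y - iy, z - iz
    def corner(dx, dy, dz):
        return _hash(ix + dx, iy + dy, iz + dz)
    x00 = _lerp(corner(0, 0, 0), corner(1, 0, 0), fx)
    x10 = _lerp(corner(0, 1, 0), corner(1, 1, 0), fx)
    x01 = _lerp(corner(0, 0, 1), corner(1, 0, 1), fx)
    x11 = _lerp(corner(0, 1, 1), corner(1, 1, 1), fx)
    return _lerp(_lerp(x00, x10, fy), _lerp(x01, x11, fy), fz)

def turbulence(x, y, z, octaves=4):
    """Compose the primitive across octaves for controlled stochastic detail."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency, z * frequency)
        amplitude *= 0.5
        frequency *= 2.0
    return total

# Sample the solid texture at an arbitrary point in space (e.g. a surface point).
print(turbulence(1.3, 4.7, 2.1))
```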

_id avocaad_2001_16
id avocaad_2001_16
authors Yu-Ying Chang, Yu-Tung Liu, Chien-Hui Wong
year 2001
title Some Phenomena of Spatial Characteristics of Cyberspace
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary "Space," which has long been an important concept in architecture (Bloomer & Moore, 1977; Mitchell, 1995, 1999), has attracted interest of researchers from various academic disciplines in recent years (Agnew, 1993; Benko & Strohmayer, 1996; Chang, 1999; Foucault, 1982; Gould, 1998). Researchers from disciplines such as anthropology, geography, sociology, philosophy, and linguistics regard it as the basis of the discussion of various theories in social sciences and humanities (Chen, 1999). On the other hand, since the invention of Internet, Internet users have been experiencing a new and magic "world." According to the definitions in traditional architecture theories, "space" is generated whenever people define a finite void by some physical elements (Zevi, 1985). However, although Internet is a virtual, immense, invisible and intangible world, navigating in it, we can still sense the very presence of ourselves and others in a wonderland. This sense could be testified by our naming of Internet as Cyberspace -- an exotic kind of space. Therefore, as people nowadays rely more and more on the Internet in their daily life, and as more and more architectural scholars and designers begin to invest their efforts in the design of virtual places online (e.g., Maher, 1999; Li & Maher, 2000), we cannot help but ask whether there are indeed sensible spaces in Internet. And if yes, these spaces exist in terms of what forms and created by what ways?To join the current interdisciplinary discussion on the issue of space, and to obtain new definition as well as insightful understanding of "space", this study explores the spatial phenomena in Internet. We hope that our findings would ultimately be also useful for contemporary architectural designers and scholars in their designs in the real world.As a preliminary exploration, the main objective of this study is to discover the elements involved in the creation/construction of Internet spaces and to examine the relationship between human participants and Internet spaces. In addition, this study also attempts to investigate whether participants from different academic disciplines define or experience Internet spaces in different ways, and to find what spatial elements of Internet they emphasize the most.In order to achieve a more comprehensive understanding of the spatial phenomena in Internet and to overcome the subjectivity of the members of the research team, the research design of this study was divided into two stages. At the first stage, we conducted literature review to study existing theories of space (which are based on observations and investigations of the physical world). At the second stage of this study, we recruited 8 Internet regular users to approach this topic from different point of views, and to see whether people with different academic training would define and experience Internet spaces differently.The results of this study reveal that the relationship between human participants and Internet spaces is different from that between human participants and physical spaces. In the physical world, physical elements of space must be established first; it then begins to be regarded as a place after interaction between/among human participants or interaction between human participants and the physical environment. In contrast, in Internet, a sense of place is first created through human interactions (or activities), Internet participants then begin to sense the existence of a space. 
Therefore, it seems that, among the many spatial elements of Internet we found, "interaction/reciprocity" Ñ either between/among human participants or between human participants and the computer interface Ð seems to be the most crucial element.In addition, another interesting result of this study is that verbal (linguistic) elements could provoke a sense of space in a degree higher than 2D visual representation and no less than 3D visual simulations. Nevertheless, verbal and 3D visual elements seem to work in different ways in terms of cognitive behaviors: Verbal elements provoke visual imagery and other sensory perceptions by "imagining" and then excite personal experiences of space; visual elements, on the other hand, provoke and excite visual experiences of space directly by "mapping".Finally, it was found that participants with different academic training did experience and define space differently. For example, when experiencing and analyzing Internet spaces, architecture designers, the creators of the physical world, emphasize the design of circulation and orientation, while participants with linguistics training focus more on subtle language usage. Visual designers tend to analyze the graphical elements of virtual spaces based on traditional painting theories; industrial designers, on the other hand, tend to treat these spaces as industrial products, emphasizing concept of user-center and the control of the computer interface.The findings of this study seem to add new information to our understanding of virtual space. It would be interesting for future studies to investigate how this information influences architectural designers in their real-world practices in this digital age. In addition, to obtain a fuller picture of Internet space, further research is needed to study the same issue by examining more Internet participants who have no formal linguistics and graphical training.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 678e
authors Aish, Robert
year 1986
title Three-dimensional Input and Visualization
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 68-84
summary The aim of this chapter is to investigate techniques by which man-computer interaction could be improved, specifically in the context of architectural applications of CAD. In this application the object being designed is often an assembly of defined components. Even if the building is not actually fabricated from such components, it is usually conceptualized in these terms. In a conventional graphics-based CAD system these components are usually represented by graphical icons which are displayed on the graphics screen and arranged by the user. The system described here consists of three-dimensional modelling elements which the user physically assembles to form his design. Unlike conventional architectural models which are static (i.e. cannot be changed by the users) and passive (i.e. cannot be read by a CAD system), this model is both 'user generated' and 'machine readable'. The user can create, edit and view the model by simple, natural modelling activities and without the need to learn complex operating commands often associated with CAD systems. In particular, the user can view the model, altering his viewpoint and focus of attention in a completely natural way. Conventional computer graphics within an associated CAD system are used to represent the detailed geometry which the different three-dimensional icons may represent. In addition, computer graphics are also used to present the output of the performance attributes of the objects being modelled. In the architectural application described in this chapter an energy-balance evaluation is displayed for a building designed using the modelling device. While this system is not intended to offer a completely free-form input facility it can be considered to be a specialist man-machine interface of particular relevance to architects or engineers.
series CAAD Futures
email
last changed 2003/11/21 15:15

_id a6f1
authors Bridges, A.H.
year 1986
title Any Progress in Systematic Design?
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 5-15
summary In order to discuss this question it is necessary to reflect awhile on design methods in general. The usual categorization discusses 'generations' of design methods, but Levy (1981) proposes an alternative approach. He identifies five paradigm shifts during the course of the twentieth century which have influenced the design methods debate. The first paradigm shift was achieved by 1920, when concern with industrial arts could be seen to have replaced concern with craftsmanship. The second shift, occurring in the early 1930s, resulted in the conception of a design profession. The third happened in the 1950s, when the design methods debate emerged; the fourth took place around 1970 and saw the establishment of 'design research'. Now, in the 1980s, we are going through the fifth paradigm shift, associated with the adoption of a holistic approach to design theory and with the emergence of the concept of design ideology. A major point in Levy's paper was the observation that most of these paradigm shifts were associated with radical social reforms or political upheavals. For instance, we may associate concern about public participation with the 1970s shift and the possible use (or misuse) of knowledge, information and power with the 1980s shift. What has emerged, however, from the work of colleagues engaged since the 1970s in attempting to underpin the practice of design with a coherent body of design theory is increasing evidence of the fundamental nature of a person's engagement with the design activity. This includes evidence of the existence of two distinctive modes of thought, one of which can be described as cognitive modelling and the other as rational thinking. Cognitive modelling is imagining, seeing in the mind's eye. Rational thinking is linguistic thinking, engaging in a form of internal debate. Cognitive modelling is externalized through action, and through the construction of external representations, especially drawings. Rational thinking is externalized through verbal language and, more formally, through mathematical and scientific notations. Cognitive modelling is analogic, presentational, holistic, integrative and based upon pattern recognition and pattern manipulation. Rational thinking is digital, sequential, analytical, explicatory and based upon categorization and logical inference. There is some relationship between the evidence for two distinctive modes of thought and the evidence of specialization in the cerebral hemispheres (Cross, 1984). Design methods have tended to focus upon the rational aspects of design and have, therefore, neglected the cognitive aspects. By recognizing that there are peculiar 'designerly' ways of thinking, combining both types of thought process to perceive, construct and comprehend design representations mentally and then transform them into an external manifestation, current work in design theory promises at last to have some relevance to design practice.
series CAAD Futures
email
last changed 2003/11/21 15:16

_id 0533
authors Clemons, Eric K. and Greenfield, Arnold J.
year 1985
title The SAGE System Architecture: A System for the Rapid Development of Graphics Interfaces for Decision Support
source IEEE Computer Graphics and Applications. November, 1985. vol. 5: pp. 38-50 : ill. includes bibliography
summary Graphics interfaces support the decision maker in sensitivity analysis - the exploration of proposed solutions and the examination of alternatives. The authors present an architecture for the rapid preparation of graphics interfaces for large classes of management sciences, operations research, and expert systems models. This architecture is based on a detailed study of sensitivity analysis requests, which is also presented. The architecture was the basis of a prototype, now operational, which is illustrated through a case study of sensitivity analysis in a vehicle-routing system
keywords expert systems, user interface, operations research
series CADline
last changed 2003/06/02 10:24

_id 298e
authors Dave, Bharat and Woodbury, Robert
year 1990
title Computer Modeling: A First Course in Design Computing
source The Electronic Design Studio: Architectural Knowledge and Media in the Computer Era [CAAD Futures ‘89 Conference Proceedings / ISBN 0-262-13254-0] Cambridge (Massachusetts / USA), 1989, pp. 61-76
summary Computation in design has long been a focus in our department. In recent years our faculty has paid particular attention to the use of computation in professional architectural education. The result is a shared vision of computers in the curriculum [Woodbury 1985] and a set of courses, some with considerable history and others just now being initiated. We (Dave and Woodbury) have jointly developed, and at various times over the last seven years have taught, Computer Modeling, the most introductory of these courses. This is a required course for all incoming freshman students in the department. In this paper we describe Computer Modeling: its context, the issues and topics it addresses, the tasks it requires of students, and the questions and opportunities that it raises. Computer Modeling is a course about concepts, about ways of explicitly understanding design and its relation to computation. Procedural skills and algorithmic problem-solving techniques are given only secondary emphasis. In essential terms, the course is about models: of design processes, of designed objects, of computation and of computational design. Its lessons are intended to communicate a structure of such models to students and, through this structure, to demonstrate a relationship between computation and design. It is hoped that this structure can be used as a framework around which students can continue to develop an understanding of computers in design.
series CAAD Futures
email
last changed 2003/05/16 20:58

_id 0faa
authors Duelund Mortensen, Peder
year 1991
title THE FULL-SCALE MODEL WORKSHOP
source Proceedings of the 3rd European Full-Scale Modelling Conference / ISBN 91-7740044-5 / Lund (Sweden) 13-16 September 1990, pp. 10-11
summary The workshop is an institution, available for use by the public, established at the Laboratory of Housing in the Art Academy's School of Architecture for a three-year trial period beginning April 1985. This resumé contains brief descriptions of a variety of representative model projects and an overview of all projects carried out so far, including the pilot projects from 1983 and planned projects up to and including January 1987. The Full Scale Model Workshop builds full-size models of buildings, rooms and parts of buildings. The purpose of the Full Scale Model Workshop is to promote communication among a building's users. The workshop is a tool in an attempt to build bridges between theory and practice in research, experimentation and the communication of research results. New ideas and experiments of various sorts can be tried out cheaply, quickly and efficiently through the building of full-scale models. Changes can be made on the spot as a planned part of the project and on the basis of ideas and experiments gained through the model work itself. Buildings and their spaces can thus be communicated directly to all involved persons, regardless of technical background or training in the evaluation of building projects.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 15:23

_id 25de
authors Ervamaa, Pekka
year 1993
title Integrated Visualization
source Endoscopy as a Tool in Architecture [Proceedings of the 1st European Architectural Endoscopy Association Conference / ISBN 951-722-069-3] Tampere (Finland), 25-28 August 1993, pp. 157-160
summary The Video and Multimedia studio at VTT, the Technical Research Centre of Finland, started with endoscopic photography of scale models. Video recordings have been made since 1985 and computer graphics since 1989. New visualization methods and techniques have been taken into use as part of research projects, but mainly we have been working on clients' commissions. The theoretical background for the visualizations is strong: research professor Hilkka Lehtonen has published several papers on the theory of visualization in urban planning. This studio is the only professional-level video unit at the Technical Research Centre, which is a large polytechnic research unit. We produce video tapes for many other research units. All kinds of integrated visualization methods are useful in these video productions, too.
keywords Architectural Endoscopy
series EAEA
email
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43

_id 78ca
authors Friedland, P. (Ed.)
year 1985
title Special Section on Architectures for Knowledge-Based Systems
source CACM (28), 9, September
summary A fundamental shift in the preferred approach to building applied artificial intelligence (AI) systems has taken place since the late 1960s. Previous work focused on the construction of general-purpose intelligent systems; the emphasis was on powerful inference methods that could function efficiently even when the available domain-specific knowledge was relatively meager. Today the emphasis is on the role of specific and detailed knowledge, rather than on reasoning methods. The first successful application of this method, which goes by the name of knowledge-based or expert-system research, was the DENDRAL program at Stanford, a long-term collaboration between chemists and computer scientists for automating the determination of molecular structure from empirical formulas and mass spectral data. The key idea is that knowledge is power, for experts, be they human or machine, are often those who know more facts and heuristics about a domain than lesser problem solvers. The task of building an expert system, therefore, is predominantly one of "teaching" a system enough of these facts and heuristics to enable it to perform competently in a particular problem-solving context. Such a collection of facts and heuristics is commonly called a knowledge base. Knowledge-based systems are still dependent on inference methods that perform reasoning on the knowledge base, but experience has shown that simple inference methods like generate and test, backward-chaining, and forward-chaining are very effective in a wide variety of problem domains when they are coupled with powerful knowledge bases. If this methodology remains preeminent, then the task of constructing knowledge bases becomes the rate-limiting factor in expert-system development. Indeed, a major portion of the applied AI research in the last decade has been directed at developing techniques and tools for knowledge representation. We are now in the third generation of such efforts. The first generation was marked by the development of enhanced AI languages like Interlisp and PROLOG. The second generation saw the development of knowledge representation tools at AI research institutions; Stanford, for instance, produced EMYCIN, The Unit System, and MRS. The third generation is now producing fully supported commercial tools like KEE and S.1. Each generation has seen a substantial decrease in the amount of time needed to build significant expert systems. Ten years ago prototype systems commonly took on the order of two years to show proof of concept; today such systems are routinely built in a few months. Three basic methodologies - frames, rules, and logic - have emerged to support the complex task of storing human knowledge in an expert system. Each of the articles in this Special Section describes and illustrates one of these methodologies. "The Role of Frame-Based Representation in Reasoning," by Richard Fikes and Tom Kehler, describes an object-centered view of knowledge representation, whereby all knowledge is partitioned into discrete structures (frames) having individual properties (slots). Frames can be used to represent broad concepts, classes of objects, or individual instances or components of objects. They are joined together in an inheritance hierarchy that provides for the transmission of common properties among the frames without multiple specification of those properties.
The authors use the KEE knowledge representation and manipulation tool to illustrate the characteristics of frame-based representation for a variety of domain examples. They also show how frame-based systems can be used to incorporate a range of inference methods common to both logic and rule-based systems. "Rule-Based Systems," by Frederick Hayes-Roth, chronicles the history and describes the implementation of production rules as a framework for knowledge representation. In essence, production rules use IF conditions THEN conclusions and IF conditions THEN actions structures to construct a knowledge base. The author catalogs a wide range of applications for which this methodology has proved natural and (at least partially) successful for replicating intelligent behavior. The article also surveys some already-available computational tools for facilitating the construction of rule-based knowledge bases and discusses the inference methods (particularly backward- and forward-chaining) that are provided as part of these tools. The article concludes with a consideration of the future improvement and expansion of such tools. The third article, "Logic Programming," by Michael Genesereth and Matthew Ginsberg, provides a tutorial introduction to the formal method of programming by description in the predicate calculus. Unlike traditional programming, which emphasizes how computations are to be performed, logic programming focuses on the what of objects and their behavior. The article illustrates the ease with which incremental additions can be made to a logic-oriented knowledge base, as well as the automatic facilities for inference (through theorem proving) and explanation that result from such formal descriptions. A practical example of diagnosing digital device malfunctions is used to show how significant and complex problems can be represented in the formalism. A note to the reader who may infer that the AI community is being split into competing camps by these three methodologies: although each provides advantages in certain specific domains (logic where the domain can be readily axiomatized and where complete causal models are available, rules where most of the knowledge can be conveniently expressed as experiential heuristics, and frames where complex structural descriptions are necessary to adequately describe the domain), the current view is one of synthesis rather than exclusivity. Both logic and rule-based systems commonly incorporate frame-like structures to facilitate the representation of large amounts of factual information, and frame-based systems like KEE allow both production rules and predicate calculus statements to be stored within and activated from frames to do inference. The next generation of knowledge representation tools may even help users to select appropriate methodologies for each particular class of knowledge, and then automatically integrate the various methodologies so selected into a consistent framework for knowledge.
series journal paper
last changed 2003/04/23 15:14
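
The production-rule scheme summarized above (IF conditions THEN conclusions, evaluated by forward-chaining) is easy to illustrate. The toy engine below is a generic sketch of the methodology, not EMYCIN, KEE, S.1 or any other tool named in the abstract, and its facts and rules are invented:

```python
# A toy forward-chaining production-rule engine: rules of the form
# IF conditions THEN conclusion are applied repeatedly until no new facts appear.
# This sketches the methodology only; it is not any of the tools named above.

RULES = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "is_winter"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule fires and extends the knowledge base
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "is_winter"}, RULES))
```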

_id 76ce
authors Grimson, W.
year 1985
title Computational Experiments with a Feature Based Stereo Algorithm
source IEEE Trans. Pattern Anal. Machine Intell., Vol. PAMI-7, No. 1
summary Computational models of the human stereo system can provide insight into general information processing constraints that apply to any stereo system, either artificial or biological. In 1977, Marr and Poggio proposed one such computational model, which was characterized as matching certain feature points in difference-of-Gaussian filtered images, and using the information obtained by matching coarser resolution representations to restrict the search space for matching finer resolution representations. An implementation of the algorithm and its testing on a range of images was reported in 1980. Since then a number of psychophysical experiments have suggested possible refinements to the model and modifications to the algorithm. As well, recent computational experiments applying the algorithm to a variety of natural images, especially aerial photographs, have led to a number of modifications. In this article, we present a version of the Marr-Poggio-Grimson algorithm that embodies these modifications and illustrate its performance on a series of natural images.
series journal paper
last changed 2003/04/23 15:14
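
The feature points matched by the algorithm come from difference-of-Gaussian (DoG) filtered images. A minimal sketch of that filtering step is given below, using NumPy/SciPy and a random test image; the coarse-to-fine disparity matching itself is not reproduced:

```python
# A sketch of the difference-of-Gaussian filtering that produces candidate feature
# points (zero-crossings of the band-pass response). The stereo matching itself is
# not shown; the sigma ratio of 1.6 is a common choice, not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma, ratio=1.6):
    """Band-pass filter: fine Gaussian blur minus coarse Gaussian blur."""
    return gaussian_filter(image, sigma) - gaussian_filter(image, sigma * ratio)

def zero_crossings(dog):
    """Mark pixels where the DoG response changes sign along rows or columns."""
    sign = np.sign(dog)
    horiz = np.zeros_like(dog, dtype=bool)
    vert = np.zeros_like(dog, dtype=bool)
    horiz[:, :-1] = sign[:, :-1] != sign[:, 1:]
    vert[:-1, :] = sign[:-1, :] != sign[1:, :]
    return horiz | vert

rng = np.random.default_rng(0)
image = rng.random((64, 64))
features = zero_crossings(difference_of_gaussians(image, sigma=2.0))
print(int(features.sum()), "candidate feature points")
```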

_id c361
authors Logan, Brian S.
year 1986
title Representing the Structure of Design Problems
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 158-170
summary In recent years several experimental CAD systems have emerged which focus specifically on the structure of design problems rather than on solution generation or appraisal (Sussman and Steele, 1980; McCallum, 1982). However, the development of these systems has been hampered by the lack of an adequate theoretical basis. There is little or no argument as to what the statements comprising these models actually mean, or on the types of operations that should be provided. This chapter describes an attempt to develop a semantically adequate basis for a model of the structure of design problems and presents a representation of this model in formal logic.
series CAAD Futures
last changed 1999/04/03 17:58

_id 85d0
authors Peachey, Darwyn R.
year 1985
title Solid Texturing of Complex Surfaces
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 279-286 : ill. includes bibliography
summary Texturing is an effective method of simulating surface detail at relatively low cost. Traditionally, texture functions have been defined on the two-dimensional surface coordinate systems of individual surface patches. This paper introduces the notion of 'solid texturing.' Solid texturing uses texture functions defined throughout a region of three-dimensional space. Many nonhomogeneous materials, including wood and stone, may be more realistically rendered using solid texture functions. In addition, solid texturing can easily be applied to complex surfaces which are difficult to texture using two-dimensional texture functions. The paper gives examples of solid texture functions based on Fourier synthesis, stochastic texture models, projections of two-dimensional textures, and combinations of other solid textures
keywords shading, texture mapping, solid modeling, objects, computer graphics, rendering, visualization
series CADline
last changed 2003/06/02 10:24
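
One of the solid-texture families named above is based on Fourier synthesis: the texture value at a surface point is a sum of sinusoids of its 3D position, so no 2D surface parameterization is needed. A minimal sketch of that idea (the frequencies, phases and marble-like mapping are arbitrary choices, not Peachey's parameters):

```python
# A sketch of a Fourier-synthesis solid texture: the value at a surface point is a
# function of its 3D position alone. Frequencies, phases and the marble-like
# mapping below are arbitrary illustrative choices.
import math

WAVES = [
    # (frequency vector, amplitude, phase)
    ((1.0, 0.0, 0.0), 1.00, 0.0),
    ((0.0, 2.0, 0.0), 0.50, 1.3),
    ((3.0, 1.0, 2.0), 0.25, 0.7),
]

def fourier_texture(x, y, z):
    """Sum of sinusoids of position; returns a scalar used to modulate colour."""
    value = 0.0
    for (fx, fy, fz), amplitude, phase in WAVES:
        value += amplitude * math.sin(2 * math.pi * (fx * x + fy * y + fz * z) + phase)
    return value

def marble_shade(x, y, z):
    """Map the texture value to a 0..1 shade, giving banded, marble-like variation."""
    return 0.5 + 0.5 * math.sin(x * 6.0 + 2.0 * fourier_texture(x, y, z))

print(marble_shade(0.3, 0.8, 0.1))
```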

_id 62ff
authors Peckham, R. J.
year 1985
title Shading Evaluations with General Three- Dimensional Models
source Computer Aided Design. September, 1985. vol. 17: pp. 305-310 : ill. includes bibliography
summary The SHADOWPACK package of computer programs has been developed to facilitate shading evaluations, for the direct component of solar radiation, with general 3D models. An interactive solid modelling program allows the user to construct and view the 3D model before saving it for further analysis and display. Other programs permit the graphical display of the shading situation throughout the year, the quantitative assessment of energy received on different faces of the model, and the display of the distribution of energy received on particular faces by means of contour plots. The use of the computer graphics approach has proved particularly convenient because of the similarity between the techniques used for graphical and numerical algorithms
keywords shading, solid modeling, evaluation, energy, computer graphics
series CADline
last changed 2003/06/02 13:58
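
For an unshaded face, the quantitative assessment mentioned above reduces to scaling the beam irradiance by the cosine of the incidence angle between the sun direction and the face normal. A minimal sketch of that step (the sun vector and the 800 W/m2 beam value are assumed inputs; shadow testing against the 3D model is not included):

```python
# A sketch of the quantitative step only: direct irradiance on a face is the beam
# irradiance scaled by the cosine of the incidence angle (zero if the face points
# away from the sun). Shadow testing against the 3D model is not included, and the
# sun vector would come from a solar-position routine for the chosen date and time.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def direct_irradiance(face_normal, sun_direction, beam_irradiance):
    """W/m^2 received on the face from the direct (beam) component only."""
    n = normalize(face_normal)
    s = normalize(sun_direction)
    cos_incidence = sum(a * b for a, b in zip(n, s))
    return beam_irradiance * max(0.0, cos_incidence)

# A south-facing vertical face and a low sun, with an assumed 800 W/m^2 beam value.
print(direct_irradiance((0, -1, 0), (0.3, -0.8, 0.5), 800.0))
```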

_id 452c
authors Vanier, D. J. and Worling, Jamie
year 1986
title Three-dimensional Visualization: A Case Study
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 92-102
summary Three-dimensional computer visualization has intrigued both building designers and computer scientists for decades. Research and conference papers present an extensive list of existing and potential uses of three-dimensional geometric data for the building industry (Baer et al., 1979). Early studies on visualization include urban planning (Rogers, 1980), tree-shading simulation (Schiler and Greenberg, 1980), sun studies (Anon, 1984), finite element analysis (Proulx, 1983), and facade texture rendering (Nizzolese, 1980). With the advent of better interfaces, faster computer processing speeds and better application packages, there has been interest on the part of both researchers and practitioners in three-dimensional models for energy analysis (Pittman and Greenberg, 1980), modelling with transparencies (Hebert, 1982), super-realistic rendering (Greenberg, 1984), visual impact (Bridges, 1983), interference clash checking (Trickett, 1980), and complex object visualization (Haward, 1984). The Division of Building Research is currently investigating the application of geometric modelling in the building delivery process using sophisticated software (Evans, 1985). The first stage of the project (Vanier, 1985), a feasibility study, deals with the aesthetics of the model. It identifies two significant requirements for geometric modelling systems: the need for a comprehensive data structure and the requirement for realistic accuracies and tolerances. This chapter presents the results of the second phase of this geometric modelling project, which is the construction of 'working' and 'presentation' models for a building.
series CAAD Futures
email
last changed 2003/05/16 20:58

_id ce52
authors Abram, Greg, Westover, Lee and Whitted, Turner
year 1985
title Efficient Alias-Free Rendering using Bit-masks and Look-up Tables
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 53-59 : ill. (some col.). includes bibliography
summary The authors demonstrate methods of rendering alias-free synthetic images using a precomputed convolution integral. The method is based on the observation that a visible polygon fragment's contribution to an image is solely a function of its position and shape, and that, within a reasonable level of accuracy, a limited number of shapes represent the majority of cases encountered in commonly rendered images. The basic technique has been applied to several different rendering algorithms. A version of the new non-uniform sampling technique, implemented in the same program but with different table values, is also introduced
keywords algorithms, computer graphics, anti-aliasing
series CADline
last changed 2003/06/02 13:58
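
The bit-mask and look-up-table idea can be sketched as follows: a fragment's subpixel coverage of a pixel is stored as a bit-mask, and a precomputed table maps every possible mask to a filter weight, so per-pixel filtering becomes a table look-up. The toy below uses a 4x4 sample grid and a box filter in place of the paper's precomputed convolution integral:

```python
# A sketch of the bit-mask / look-up-table idea: each pixel gets a 16-bit mask of
# covered subpixel samples, and a precomputed table maps any mask to its filter
# weight. A box filter over a 4x4 grid stands in for the paper's convolution table.

# Precompute: weight of every possible 16-bit coverage mask (population count / 16).
WEIGHT = [bin(mask).count("1") / 16.0 for mask in range(1 << 16)]

def coverage_mask(covers_sample):
    """Build a 16-bit mask from a predicate over the 4x4 subpixel sample grid."""
    mask = 0
    for sy in range(4):
        for sx in range(4):
            if covers_sample((sx + 0.5) / 4.0, (sy + 0.5) / 4.0):
                mask |= 1 << (sy * 4 + sx)
    return mask

# A fragment covering the left half of the pixel: its filter weight is a table
# look-up rather than an analytic coverage integration.
mask = coverage_mask(lambda x, y: x < 0.5)
print(hex(mask), WEIGHT[mask])   # exactly half the samples are covered: weight 0.5
```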
