CumInCAD is a Cumulative Index of publications about Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 24

_id a6f1
authors Bridges, A.H.
year 1986
title Any Progress in Systematic Design?
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 5-15
summary In order to discuss this question it is necessary to reflect awhile on design methods in general. The usual categorization discusses 'generations' of design methods, but Levy (1981) proposes an alternative approach. He identifies five paradigm shifts during the course of the twentieth century which have influenced the design methods debate. The first paradigm shift was achieved by 1920, when concern with industrial arts could be seen to have replaced concern with craftsmanship. The second shift, occurring in the early 1930s, resulted in the conception of a design profession. The third happened in the 1950s, when the design methods debate emerged; the fourth took place around 1970 and saw the establishment of 'design research'. Now, in the 1980s, we are going through the fifth paradigm shift, associated with the adoption of a holistic approach to design theory and with the emergence of the concept of design ideology. A major point in Levy's paper was the observation that most of these paradigm shifts were associated with radical social reforms or political upheavals. For instance, we may associate concern about public participation with the 1970s shift, and the possible use (or misuse) of knowledge, information and power with the 1980s shift. What has emerged, however, from the work of colleagues engaged since the 1970s in attempting to underpin the practice of design with a coherent body of design theory is increasing evidence of the fundamental nature of a person's engagement with the design activity. This includes evidence of the existence of two distinctive modes of thought, one of which can be described as cognitive modelling and the other as rational thinking. Cognitive modelling is imagining, seeing in the mind's eye. Rational thinking is linguistic thinking, engaging in a form of internal debate. Cognitive modelling is externalized through action and through the construction of external representations, especially drawings. Rational thinking is externalized through verbal language and, more formally, through mathematical and scientific notations. Cognitive modelling is analogic, presentational, holistic, integrative and based upon pattern recognition and pattern manipulation. Rational thinking is digital, sequential, analytical, explicatory and based upon categorization and logical inference. There is some relationship between the evidence for two distinctive modes of thought and the evidence of specialization in the cerebral hemispheres (Cross, 1984). Design methods have tended to focus upon the rational aspects of design and have, therefore, neglected the cognitive aspects. By recognizing that there are peculiar 'designerly' ways of thinking, combining both types of thought process to perceive, construct and comprehend design representations mentally and then transform them into an external manifestation, current work in design theory promises at last to have some relevance to design practice.
series CAAD Futures
email
last changed 2003/11/21 15:16

_id 02c6
authors Wheeler, B.J.Q.
year 1986
title A Unified Model for Building
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 200-231
summary It is commonly recognized that the time-honoured procedure for preparing an architectural design for building on site is inefficient. Each member of a team of consultant professionals makes an independently documented contribution. For a typical project involving an architect and structural, electrical, mechanical and public services engineers there will be at least five separate sets of general-arrangement drawings, each forming a model of the building, primarily illustrating one discipline but often having to include elements of others in order to make the drawing readable. For example, an air-conditioning duct-work layout is more easily understood when superimposed on the room layout it serves, which the engineer is not responsible for but has to understand. Both during their parallel evolution and later, when changes have to be made during the detailed design and production drawing stages, it is difficult and time-consuming to keep all versions coordinated. Complete coordination is rarely achieved in time, and conflicts between one discipline and another have to be rectified when encountered on site, with resulting contractual implications. Add the interior designer, the landscape architect and other specialized consultants at one end of the list and contractors' shop drawings relating to the work of all the consultants at the other, and the number of different versions of the same thing grows, escalating the concomitant task of coordination. The potential for disputes over the current status of the design is enormous, first amongst the consultants and second between the consultants and the contractor. When amendments are made by one party, delay and confusion tend to follow during the period it takes the other parties to update their versions to include them. The idea of solving this problem by using a common computer-based model to which all members of the project team can directly contribute is surely a universally assumed goal amongst all those involved in computer-aided building production. The architect produces a root drawing or model, the 'Architect's base plan', to which the other consultants have read-only access and on top of which they can add their own write-protected files. Every time they access the model to write in the outcome of their work on the project they see the current version of the 'Architect's base plan' and can thus respond immediately to recent changes and avoid wasting time on redundant work. The architect meanwhile adds uniquely architectural material in his own overlaid files and maintains the root model as everybody's work requires. The traditional working pattern is maintained while all the participants have the ability to see their colleagues' work but only make changes to those parts for which they are responsible.
series CAAD Futures
last changed 1999/04/03 17:58

_id cbd0
authors Brown, David C.
year 1985
title Failure Handling in a Design Expert System
source Computer-Aided Design. November, 1985. vol. 17: pp. 436-442 : ill. Includes bibliography
summary This paper is concerned with how to handle the failures that occur during design problem-solving. Failure handlers and redesigners are introduced. Failure recovery action and the knowledge involved is presented for each agent. The role of suggestions and redesign strategies is discussed. The handling of plan failures is also presented. The paper concludes by surveying other methods of failure handling from the literature
keywords expert systems, problem solving, mechanical engineering, planning, constraints, design, techniques
series CADline
last changed 2003/06/02 13:58

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A Digital Procedure of Building Construction: A Practical Project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times, before computers were well developed, there was already research on representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). Until the appearance of 2D drawings, representations could only express abstract visual thinking and a visually conceptualized vocabulary (Goldschmidt, 1999). Then, with the massive use of physical models in the Renaissance, the form and space of architecture were given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) observed that humans increasingly rely on other specialists, computational agents, and reference materials to augment their cognitive abilities. This discourse was verified by recent research on the conception of design and its expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architectural design has aroused a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), the Internet, and the restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most of the application of computers to construction is restricted to simulations of the building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage of the building construction process (Madazo, 2000; Dave, 2000) and to communication (Haymaker, 2000). In architectural design, concept design is achieved through drawings and models (Mitchell, 1997), while working drawings and even shop drawings are developed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend of 3D visualization (Johnson and Clayton, 1998) and the difference between designing in the physical environment and in the virtual environment (Maher et al. 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process in the near future (just as physical models played a critical role in the early design process in the Renaissance). This research is combined with two practical building projects, following the progress of construction and using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems.
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings and some imagination. The purpose of this paper is to describe all the related elements with precision and correctness, to discuss every possibility of different thinking in the design of electrical-mechanical engineering, to receive feedback from construction projects in the real world, and to compare the digital models with conventional drawings. Through the application of this research, the subtle relations between conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and standard process is proposed using conventional drawings, digital models and physical buildings. By introducing digital media into the design process of working drawings and shop drawings, there is an opportunity to use digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id ascaad2006_paper20
id ascaad2006_paper20
authors Chougui, Ali
year 2006
title The Digital Design Process: reflections on architectural design positions on complexity and CAAD
source Computing in Architecture / Re-Thinking the Discourse: The Second International Conference of the Arab Society for Computer Aided Architectural Design (ASCAAD 2006), 25-27 April 2006, Sharjah, United Arab Emirates
summary Architecture is presently engaged in an impatient search for solutions to critical questions about the nature and the identity of the discipline, and digital technology is a key agent for prevailing innovations in architectural design. The problem of complexity underlies all design problems. With the advent of CAD, however, architects' ability to truly represent complexity has increased considerably. Another source that provides information about dealing with complexity is architectural theory. As Rowe (1987) states, architectural theory constitutes "a corpus of principles that are agreed upon and therefore worthy of emulation". Architectural theory is often a mixed reflection on the nature of architectural design and design processes, made in descriptive and prescriptive terms (see Kruft 1985). Complexity is obviously not a new issue in architectural theory. Since it is an inherent characteristic of design problems, it has been dealt with in many different ways throughout history. Contemporary architects incorporate the computer in their design process. They produce architecture that is generated by the use of particle systems, simulation software and animation software, but also the more standard modelling tools. The architects reflect on the impact of the computer in their theories, and display changes in style by using information modelling techniques that have become versatile enough to encompass the complexity of information in the architectural design process. In this way, architectural style and theory can provide directions to further develop CAD. Most notable is the acceptance of complexity as a given fact, not as a phenomenon to oppose in systems of organization, but as a structuring principle to begin with. No matter what information modelling paradigm is used, complex and huge amounts of information need to be processed by designers. A key aspect in the combination of CAD, complexity, and architectural design is the role of the design representation. The way the design is presented and perceived during the design process is instrumental to understanding the design task. More architects are trying to reformulate this working of the representation. The intention of this paper is to present and discuss the current state of the art in architectural design positions on complexity and CAAD, and to reflect in particular on the role of digital design representations in this discussion. We also try to investigate how complexity can be dealt with by looking at architects, in particular their styles and theories. The way architects use digital media and graphic representations can be informative as to how units of information can be formed and used in the design process. Case studies are the design processes of concrete architects such as Peter Eisenman, Rem Koolhaas, van Berkel, Lynn, and Frank Gehry, who embrace complexity and make it a focal point in their design rather than viewing it as a problematic issue, using the computer as an indispensable instrument in their approaches.
series ASCAAD
email
last changed 2007/04/08 19:47

_id ga0024
id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence, in 1991, at the age of 8, being the direct consequence of years of attempts by a group of computer science professionals to use digital computer technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of "Littérature potentielle" deployed by Oulipo in France and Oplepo in Italy (see "La Littérature potentielle (Créations Re-créations Récréations)", published in France by Gallimard in 1973). During the last decade, TEAnO has been involved in the generation of "artistic goods" in aesthetic domains such as literature, music, theatre and painting. In all those artefacts the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets or to generate antonymic music) and, sometimes, it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered as "manufactured" goods. A great part of such creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the use of the digital computer's massive combinatory power: S+n, edge extraction, phonetic manipulation, re-writing of well-known masterpieces, random generation of plots, etc. Despite these apparently simple underlying generation mechanisms, the systematic use of computer-based tools, as well as the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer-based generation of aesthetic goods: (1) the deep structure of an aesthetic work persists even through the more "destructive" manipulations (such as the antonymic transformation of the melody and lyrics of a music work) and becomes evident as a sort of profound, earliest and distinctive constraint; (2) the intensive flux of computer-generated "raw" material seems to confirm and to bring to our attention the existence of what Walter Benjamin indicated as the different way in which nature talks to a camera and to our eye, and what Franco Vaccari called "technological unconsciousness". (A minimal illustration of the S+n constraint follows this record.) Essential references: R. Campagnoli, Y. Hersant, "Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)", 1985; R. Campagnoli, "Oupiliana", 1995; TEAnO, "Quaderno n. 2 Antologia di letteratura potenziale", 1996; W. Benjamin, "Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit", 1936; F. Vaccari, "Fotografia e inconscio tecnologico", 1994
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
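The S+n constraint mentioned in the record above lends itself to a compact illustration. The following is a minimal sketch, not TEAnO's actual software: it shifts every lexicon word in a text by n places within a sorted word list (the classic Oulipian S+7 when n = 7 and the lexicon is a dictionary of nouns). The toy lexicon and the whole-word matching rule are assumptions made for the example.

```python
import re

def s_plus_n(text, lexicon, n=7):
    """Oulipo S+n: replace each word found in the lexicon with the word
    n positions later in the (sorted) lexicon, wrapping around."""
    words = sorted(set(w.lower() for w in lexicon))
    index = {w: i for i, w in enumerate(words)}

    def shift(match):
        w = match.group(0)
        i = index.get(w.lower())
        if i is None:
            return w  # words outside the lexicon are left untouched
        out = words[(i + n) % len(words)]
        return out.capitalize() if w[0].isupper() else out

    return re.sub(r"[A-Za-z]+", shift, text)

# toy lexicon; a real S+7 uses a full dictionary of nouns
lexicon = ["cat", "dog", "garden", "house", "moon", "road", "sea", "tree"]
print(s_plus_n("The cat sat in the garden.", lexicon, n=3))
# -> "The house sat in the road."
```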

_id sigradi2007_af13
id sigradi2007_af13
authors Granero, Adriana Edith; Alicia Barrón; María Teresa Urruti
year 2007
title Transformations in the Educational System: Influence of Digital Graphics [Transformaciones en el sistema educacional, influencia de la Gráfica Digital]
source SIGraDi 2007 - [Proceedings of the 11th Iberoamerican Congress of Digital Graphics] México D.F. - México 23-25 October 2007, pp. 182-186
summary The educational proposal is based on a synthesis of experience accumulated during the last two semester courses, 2/2006-1/2007. The proposal corresponds to a mixed methodology (in-person / via the Internet). Drawing on the theory of games (Eric Berne 1960) and on different theories such as multiple intelligences (Howard Gardner 1983), emotional intelligence (Peter Salovey and John Mayer 1990, Goleman 1998), social intelligence (Goleman 2006), the triarchic theory of intelligence (Sternberg, R.J. 1985, 1997) and "the hand of the human power", it is established that the power of the voice, of the imagination, of reward, of commitment and of association produces a significant increase in productivity (Rosabeth Moss Kanter 2000), alongside the constructive processes of knowledge (the new constructivist pedagogical concepts of Ormrod J.E. 2003 and Tim O'Reilly 2004).
series SIGRADI
email
last changed 2016/03/10 09:52

_id 68aa
authors Greenberg, Donald P.
year 1986
title Computer Graphics and Visualization
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 63-67
summary The field of computer graphics has made enormous progress during the past decade. It is rapidly approaching the time when we will be able to create images of such realism that it will be possible to 'walk through' nonexistent spaces and to evaluate their aesthetic quality based on the simulations. In this chapter we wish to document the historical development of computer graphics image creation and describe some techniques which are currently being developed. We will try to explain some pilot projects that we are just beginning to undertake at the Program of Computer Graphics and the Center for Theory and Simulation in Science and Engineering at Cornell University.
series CAAD Futures
last changed 1999/04/03 17:58

_id e799
authors Howes, Jaki
year 1986
title Computer Education in Schools of Architecture and the Needs of Practice
doi https://doi.org/10.52842/conf.ecaade.1986.045
source Teaching and Research Experience with CAAD [4th eCAADe Conference Proceedings] Rome (Italy) 11-13 September 1986, pp. 45-48
summary In April 1985 there was a meeting (at Huddersfield Polytechnic) of representatives from 26 Schools of Architecture. At this, concern was expressed about the lack of direction from the RIBA with regard to the appropriate level of computer teaching on architectural courses. In addition, it was felt essential that at least one member of a Visiting Board panel should be computer literate and in a position to give advice. These points were raised at the RIBA Computer Committee later in 1985, and the committee's attention was also drawn to comments contained in the report by HM Inspector on Public Sector Education in Architecture (1985), based on investigations made during 1984.
series eCAADe
email
last changed 2022/06/07 07:50

_id e8ec
authors Weber, Benz
year 1991
title LEARNING FROM THE FULL-SCALE LABORATORY
source Proceedings of the 3rd European Full-Scale Modelling Conference / ISBN 91-7740044-5 / Lund (Sweden) 13-16 September 1990, pp. 12-19
summary The team from the LEA at Lausanne was not actually involved in the construction of the laboratory itself. During the past five years we have been discovering the qualities and limitations of the lab step by step through the experiments we performed. The way in which we use it is quite different from that of its creators. Since 1985 the external service has been limited to clients coming to the laboratory on their own; we help them only with basic instructions for the use of the equipment. Most of these experiments are motivated by the excellent possibility of discussing the design of a new hospital or home for the elderly with the people directly affected by it, such as patients, nurses, doctors and specialists for the technical equipment. The main issues discussed in these meetings are the dimensions and functional organisation of the spaces. The entire process for a normal room, including construction, discussions and dismantling of the full-scale model, takes between three and five days. Today these types of experiments occupy the lab only about twenty days a year.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 15:23

_id 2928
authors Barsky, Brian A. and De Rose, Tony D.
year 1985
title The Beta2-spline : A Special Case of the Beta-spline Curve and Surface Representation
source IEEE Computer Graphics and Applications September, 1985. vol. 5: pp. 46-58 : ill. includes bibliography.
summary This article develops a special case of the Beta-spline curve and surface technique called the Beta2-spline. While a general Beta-spline has two parameters (B1 and B2) controlling its shape, the special case presented here has only the single parameter B2. Experience has shown this to be a simple but very useful special case that is computationally more efficient than the general case. Optimized algorithms for the evaluation of the Beta2-spline basis functions and rendering of Beta2-spline curves and surfaces via subdivision are presented. This technique is proving to be quite useful in the modeling of complex shapes. The representation is sufficiently general and flexible so as to be capable of modeling irregular curved-surface objects such as automobile bodies, aircraft fuselages, ship hulls, turbine blades, and bottles. (A minimal sketch of the B2 = 0 special case follows this record.)
keywords B-splines, curved surfaces, computational geometry, representation, algorithms, computer graphics, rendering
series CADline
last changed 2003/06/02 14:41
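As a point of reference for the record above: the Beta-spline family reduces to the uniform cubic B-spline when its shape parameters are neutral (B1 = 1, B2 = 0), so the B2 = 0 case of the Beta2-spline is the familiar basis sketched below. This is a minimal sketch of that special case only, not the paper's optimized Beta2 evaluation or subdivision algorithms.

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline at t in [0, 1].
    This is the B2 = 0 limit of the Beta2-spline basis; the four basis
    weights always sum to 1, so the point stays in the control hull."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# four control points define one curve segment; sample it at 21 points
pts = [(0, 0), (1, 2), (3, 3), (4, 0)]
curve = [cubic_bspline_point(*pts, t / 20) for t in range(21)]
```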

_id ddss9408
id ddss9408
authors Bax, Thijs and Trum, Henk
year 1994
title A Taxonomy of Architecture: Core of a Theory of Design
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary The authors developed a taxonomy of concepts in architectural design. It was accepted by the Advisory Committee for education in the field of architecture, a committee advising the European Commission and Member States, as a reference for their task to harmonize architectural education in Europe. The taxonomy is based on Domain theory, a theory developed by the authors on the basis of General Systems Theory and the notion of structure according to French Structuralism, which takes a participatory viewpoint for the integration of knowledge and interests by parties in the architectural design process. The paper discusses recent developments of the taxonomy, firstly as a result of a confrontation with similar endeavours to structure the field of architectural design, secondly as a result of applications in education and architectural design practice, and thirdly as a result of the application of some views derived from the philosophical work of Charles Sanders Peirce. Developments concern the structural form of the taxonomy, comprising basic concepts and level-bound scale concepts, and the specification of the content of the fields which these concepts represent. The confrontation with similar endeavours concerns mainly the work of an ARCUK working party, chaired by Tom Marcus, based on the European Directive from 1985. The application concerns experiences with a taxonomy-based enquiry to represent the profile of educational programmes of schools and faculties of architecture in Europe in qualitative and quantitative terms. This enquiry was carried out in order to achieve a basis for comparison and judgement, and a basis for future guidelines including quantitative aspects. Views of Peirce, more specifically his views on triarchy as a way of ordering and structuring processes of thinking, provide keys for a re-definition of concepts as building stones of the taxonomy in terms of the form-function-process triad, which strengthens the coherence of the taxonomy, allowing for a more regular representation in the form of a hierarchically ordered matrix.
series DDSS
last changed 2003/08/07 16:36

_id 23bc
authors Demko, Stephen, Hodges, Laurie and Naylor, Bruce F.
year 1985
title Construction of Fractal Objects with Iterated Function Systems
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 271-278 : ill. col. includes bibliography
summary In computer graphics, geometric modeling of complex objects is a difficult process. An important class of complex objects arises from natural phenomena: trees, plants, clouds, mountains, etc. Researchers are investigating a variety of techniques for extending modeling capabilities to include these as well as other classes. One mathematical concept that appears to have significant potential for this is fractals. Much interest currently exists in the general scientific community in using fractals as a model of complex natural phenomena. However, only a few methods for generating fractal sets are known. We have been involved in the development of a new approach to computing fractals. Any set of linear maps (affine transformations) and an associated set of probabilities determines an Iterated Function System (IFS). Each IFS has a unique 'attractor' which is typically a fractal set (object). Specification of only a few maps can produce very complicated objects. Design of fractal objects is made relatively simple and intuitive by the discovery of an important mathematical property relating the fractal sets to the IFS. The method also offers the possibility of solving the inverse problem: given the geometry of an object, determine an IFS that will (approximately) generate that geometry. This paper presents the application of the theory of IFS to geometric modeling. (A minimal chaos-game sketch follows this record.)
keywords computer graphics, geometric modeling, fractals, visualization
series CADline
last changed 2003/06/02 13:58
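The record above defines an IFS as a set of affine maps with associated probabilities whose unique attractor is a fractal. A minimal sketch of the random-iteration ("chaos game") rendering follows; the Barnsley fern coefficients are a standard published example of an IFS, not one taken from this paper.

```python
import random

# Barnsley fern IFS: four affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# each chosen with probability p; the attractor is a fern-shaped fractal.
MAPS = [  # (a, b, c, d, e, f, p)
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(n=100_000):
    """Approximate the IFS attractor by iterating randomly chosen maps;
    after a short transient, every visited point lies near the attractor."""
    weights = [m[6] for m in MAPS]
    x = y = 0.0
    pts = []
    for _ in range(n):
        a, b, c, d, e, f, _p = random.choices(MAPS, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts  # plot these points to see the fern emerge

points = chaos_game()
```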

_id 78ca
authors Friedland, P. (Ed.)
year 1985
title Special Section on Architectures for Knowledge-Based Systems
source CACM (28), 9, September
summary A fundamental shift in the preferred approach to building applied artificial intelligence (AI) systems has taken place since the late 1960s. Previous work focused on the construction of general-purpose intelligent systems; the emphasis was on powerful inference methods that could function efficiently even when the available domain-specific knowledge was relatively meager. Today the emphasis is on the role of specific and detailed knowledge, rather than on reasoning methods. The first successful application of this method, which goes by the name of knowledge-based or expert-system research, was the DENDRAL program at Stanford, a long-term collaboration between chemists and computer scientists for automating the determination of molecular structure from empirical formulas and mass spectral data. The key idea is that knowledge is power, for experts, be they human or machine, are often those who know more facts and heuristics about a domain than lesser problem solvers. The task of building an expert system, therefore, is predominantly one of "teaching" a system enough of these facts and heuristics to enable it to perform competently in a particular problem-solving context. Such a collection of facts and heuristics is commonly called a knowledge base. Knowledge-based systems are still dependent on inference methods that perform reasoning on the knowledge base, but experience has shown that simple inference methods like generate-and-test, backward-chaining, and forward-chaining are very effective in a wide variety of problem domains when they are coupled with powerful knowledge bases. If this methodology remains preeminent, then the task of constructing knowledge bases becomes the rate-limiting factor in expert-system development. Indeed, a major portion of the applied AI research in the last decade has been directed at developing techniques and tools for knowledge representation. We are now in the third generation of such efforts. The first generation was marked by the development of enhanced AI languages like Interlisp and PROLOG. The second generation saw the development of knowledge representation tools at AI research institutions; Stanford, for instance, produced EMYCIN, The Unit System, and MRS. The third generation is now producing fully supported commercial tools like KEE and S.1. Each generation has seen a substantial decrease in the amount of time needed to build significant expert systems. Ten years ago prototype systems commonly took on the order of two years to show proof of concept; today such systems are routinely built in a few months. Three basic methodologies (frames, rules, and logic) have emerged to support the complex task of storing human knowledge in an expert system. Each of the articles in this Special Section describes and illustrates one of these methodologies. "The Role of Frame-Based Representation in Reasoning," by Richard Fikes and Tom Kehler, describes an object-centered view of knowledge representation, whereby all knowledge is partitioned into discrete structures (frames) having individual properties (slots). Frames can be used to represent broad concepts, classes of objects, or individual instances or components of objects. They are joined together in an inheritance hierarchy that provides for the transmission of common properties among the frames without multiple specification of those properties.
The authors use the KEE knowledge representation and manipulation tool to illustrate the characteristics of frame-based representation for a variety of domain examples. They also show how frame-based systems can be used to incorporate a range of inference methods common to both logic and rule-based systems. "Rule-Based Systems," by Frederick Hayes-Roth, chronicles the history and describes the implementation of production rules as a framework for knowledge representation. In essence, production rules use IF conditions THEN conclusions and IF conditions THEN actions structures to construct a knowledge base. The author catalogs a wide range of applications for which this methodology has proved natural and (at least partially) successful for replicating intelligent behavior. The article also surveys some already-available computational tools for facilitating the construction of rule-based knowledge bases and discusses the inference methods (particularly backward- and forward-chaining) that are provided as part of these tools. The article concludes with a consideration of the future improvement and expansion of such tools. The third article, "Logic Programming," by Michael Genesereth and Matthew Ginsberg, provides a tutorial introduction to the formal method of programming by description in the predicate calculus. Unlike traditional programming, which emphasizes how computations are to be performed, logic programming focuses on the what of objects and their behavior. The article illustrates the ease with which incremental additions can be made to a logic-oriented knowledge base, as well as the automatic facilities for inference (through theorem proving) and explanation that result from such formal descriptions. A practical example of diagnosis of digital device malfunctions is used to show how significant and complex problems can be represented in the formalism. A note to the reader who may infer that the AI community is being split into competing camps by these three methodologies: although each provides advantages in certain specific domains (logic where the domain can be readily axiomatized and where complete causal models are available, rules where most of the knowledge can be conveniently expressed as experiential heuristics, and frames where complex structural descriptions are necessary to adequately describe the domain), the current view is one of synthesis rather than exclusivity. Both logic and rule-based systems commonly incorporate frame-like structures to facilitate the representation of large amounts of factual information, and frame-based systems like KEE allow both production rules and predicate calculus statements to be stored within and activated from frames to do inference. The next generation of knowledge representation tools may even help users to select appropriate methodologies for each particular class of knowledge, and then automatically integrate the various methodologies so selected into a consistent framework for knowledge. (A minimal forward-chaining sketch follows this record.)
series journal paper
last changed 2003/04/23 15:14
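The rule-based methodology described above (IF conditions THEN conclusions, driven by forward- or backward-chaining) can be illustrated in a few lines. This is a minimal sketch of naive forward chaining over ground facts; the facts and rules are invented for illustration, and real shells such as EMYCIN or KEE are far richer.

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions all hold, adding their
    conclusions to the fact set, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# illustrative knowledge base, not taken from the article
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]
print(forward_chain({"has_feathers", "can_fly"}, rules))
# -> {'has_feathers', 'can_fly', 'is_bird', 'nests_in_trees'}
```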

_id 027b
authors Griffiths, J.G.
year 1985
title Table-Driven Algorithms for Generating Space-Filling Curves
source Computer Aided Design. January/ February, 1985. vol. 17: pp. 37-41 : ill. includes bibliography
summary A simple general method for constructing space-filling curves is presented, based on the use of tables. It is shown how the use of Hilbert's curve can enhance the performance of Warnock's algorithm. A procedure is given which generates Hilbert curves or Sierpinski curves. A second procedure is given which generates Warnock's windows in Hilbert order. (A minimal index-to-coordinate sketch follows this record.)
keywords computer graphics, rendering, algorithms, curves, representation, display
series CADline
last changed 2003/06/02 13:58
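The record above generates Hilbert curves and visits Warnock's windows in Hilbert order. A minimal sketch follows that maps a distance along the Hilbert curve to grid coordinates; it uses the familiar bit-manipulation formulation rather than the paper's table-driven one, so take it as an illustration of the ordering, not of the paper's method.

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve filling a 2^order x 2^order
    grid to (x, y) cell coordinates."""
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                 # rotate/reflect the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx                 # move into the current quadrant
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# successive cells are always grid-adjacent, which is why visiting
# Warnock's windows in Hilbert order improves coherence
cells = [hilbert_d2xy(2, d) for d in range(16)]
```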

_id 76ce
authors Grimson, W.
year 1985
title Computational Experiments with a Feature Based Stereo Algorithm
source IEEE Trans. Pattern Anal. Machine Intell., Vol. PAMI-7, No. 1
summary Computational models of the human stereo system can provide insight into general information processing constraints that apply to any stereo system, either artificial or biological. In 1977, Marr and Poggio proposed one such computational model, characterized as matching certain feature points in difference-of-Gaussian filtered images, and using the information obtained by matching coarser-resolution representations to restrict the search space for matching finer-resolution representations. An implementation of the algorithm and its testing on a range of images was reported in 1980. Since then a number of psychophysical experiments have suggested possible refinements to the model and modifications to the algorithm. As well, recent computational experiments applying the algorithm to a variety of natural images, especially aerial photographs, have led to a number of modifications. In this article, we present a version of the Marr-Poggio-Grimson algorithm that embodies these modifications and illustrate its performance on a series of natural images. (A simplified coarse-to-fine matching sketch follows this record.)
series journal paper
last changed 2003/04/23 15:14
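The coarse-to-fine idea in the record above can be sketched in one dimension: features are zero-crossings of difference-of-Gaussian filtered signals, and a wide-range coarse match restricts the search space of the fine match. This is a heavily simplified illustration on synthetic data, not the Marr-Poggio-Grimson implementation; the scales, search radii, and median-based prediction are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def features(signal, sigma):
    """Zero-crossings of a difference-of-Gaussian filtered signal, the
    kind of feature points matched by Marr-Poggio style algorithms."""
    f = gaussian_filter1d(signal, sigma) - gaussian_filter1d(signal, 1.6 * sigma)
    return np.where(np.diff(np.sign(f)) != 0)[0]

def match(left, right, sigma, center, radius):
    """Match each left feature to the nearest right feature whose
    disparity lies within `radius` of the predicted `center`."""
    fl, fr = features(left, sigma), features(right, sigma)
    out = {}
    for x in fl:
        cand = [r - x for r in fr if abs((r - x) - center) <= radius]
        if cand:
            out[x] = min(cand, key=lambda d: abs(d - center))
    return out

# synthetic 1-D "scanline" pair with a true disparity of 5 pixels
rng = np.random.default_rng(0)
left = rng.standard_normal(512).cumsum()
right = np.roll(left, 5)

# the coarse pass searches a wide disparity range; its median estimate
# then restricts the search space of the fine pass (coarse-to-fine)
coarse = match(left, right, sigma=8.0, center=0, radius=32)
guess = float(np.median(list(coarse.values()))) if coarse else 0.0
fine = match(left, right, sigma=2.0, center=guess, radius=4)
```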

_id 2dd3
authors Hall, Theodore W.
year 1985
title Design-Aided Computing: Adapting Old Spaces to New Uses
doi https://doi.org/10.52842/conf.acadia.1985.025
source ACADIA Workshop ‘85 [ACADIA Conference Proceedings] Tempe (Arizona / USA) 2-3 November 1985, pp. 25-34
summary The introduction of computer-aided design to an architecture school requires many departures from tradition—not only in the curriculum, but also in the facilities. Although there is an abundance of technical information available for the design of new computer rooms, building one from scratch is a luxury that few architecture schools can afford. To catch up with the computer revolution—and, it is to be hoped, come to lead it—colleges must engage in the adaptive re-use of spaces that are often not particularly well-suited to the special needs of computing. This paper describes some of the issues that should be considered when an architecture school takes its first plunge into computing. It is not a technical reference, but rather an overview. General guidelines are discussed, followed by a detailed case history of our own mixed experience. The emphasis is on the need for developing specific plans regarding computer applications before making any big commitments.
series ACADIA
email
last changed 2022/06/07 07:50

_id e234
authors Kalay, Yehuda E. and Harfmann, Anton C.
year 1985
title An Integrative Approach to Computer-Aided Design Education in Architecture
source February, 1985. [17] p. : [8] p. of ill
summary With the advent of CAD, schools of architecture are now obliged to prepare their graduates for using the emerging new design tools and methods in architectural practices of the future. In addition to this educational obligation, schools of architecture (possibly in partnership with practicing firms) are also the most appropriate agents for pursuing research in CAD that will lead to the development of better CAD software for use by the profession as a whole. To meet these two rather different obligations, two kinds of CAD education curricula are required: one which prepares tool-users, and another that prepares tool-builders. The first educates students about the use of CAD tools for the design of buildings, whereas the second educates them about the design of CAD tools themselves. The School of Architecture and Planning in SUNY at Buffalo has recognized these two obligations, and in Fall 1982 began to meet them by planning and implementing an integrated CAD environment. This environment now consists of 3 components: a tool-building sequence of courses, an advanced research program, and a general tool-users architectural curriculum. Students in the tool-building course sequence learn the principles of CAD and may, upon graduation, become researchers and the managers of CAD systems in practicing offices. While in school they form a pool of research assistants who may be employed in the research component of the CAD environment, thereby facilitating the design and development of advanced CAD tools. The research component, through its various projects, develops and provides state-of-the-art tools to be used by practitioners as well as by students in the school, in such courses as architectural studio, environmental controls, performance programming, and basic design courses. Students in these courses who use the tools developed by the research group constitute the tool-users component of the CAD environment. While they are being educated in the methods they will be using throughout their professional careers, they also act as a 'real-world' laboratory for testing the software and thereby provide feedback to the research component. The School of Architecture and Planning in SUNY at Buffalo has been the first school to incorporate such a comprehensive CAD environment in its curriculum, thereby successfully fulfilling its obligation to train students in the innovative methods of design that will be used in architectural practices of the future, and at the same time making a significant contribution to the profession of architecture as a whole. This paper describes the methodology and illustrates the history of the CAD environment's implementation in the School
keywords CAD, architecture, education
series CADline
email
last changed 2003/06/02 13:58

_id 244d
authors Monedero, J., Casaus, A. and Coll, J.
year 1992
title From Barcelona. Chronicle and Provisional Evaluation of a New Course on Architectural Solid Modelling by Computerized Means
doi https://doi.org/10.52842/conf.ecaade.1992.351
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 351-362
summary The first step made at the ETSAB in the computer field goes back to 1965, when professors Margarit and Buxade acquired an IBM computer, an electromechanical machine which used perforated cards and which was used to produce an innovative method of structural calculation. This method was incorporated in the academic courses and, at that time, the recurring question "should students learn programming?" was readily answered: the exercises required some knowledge of Fortran and every student needed this knowledge to do the exercises. This method, well known in Europe at that time, also provided a service for professional practice and marked the beginning of what is now the CC (Centro de Calculo) of our school. In 1980 the School bought a PDP11/34, a computer which had 256 Kb of RAM, two disks of 5 Mb and one of 10 Mb, and a multiplexor of 8 lines. Some time later the general policy of the UPC changed course, and this was related to the purchase of a VAX which is still the base of the CC and carries most of the administrative burden of the school. 1985 was probably the first year in which we can talk of a general policy of the school directed towards computers. A report was made that year, which included an inquiry addressed to the six Departments of the School (Graphic Expression, Projects, Structures, Construction, Composition and Urbanism) and which contained interesting data. According to the report, there were four departments which used computers in their current courses, while the two others (Projects and Composition) did not use them at all. The main user was the Department of Structures, while the incidence of the remaining three was rather sporadic. The kinds of problems detected in this report are very typical: lack of resources for hardware and software and for maintenance of the few computers that the school had at that moment, and a demand (posed by the students) greatly exceeding the supply (computers and teachers). The main problem appeared to be the lack of computer graphic devices and proper software.

series eCAADe
email
last changed 2022/06/07 07:58

_id cd92
authors Pavlidis, Theo and Van Wyk, Christopher J.
year 1985
title An Automatic Beautifier for Drawings and Illustrations
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 225- 230. includes bibliography
summary A method is described for inferring constraints that are desirable for a given (rough) drawing and then modifying the drawing to satisfy the constraints wherever possible. The method has been implemented as part of an online graphics editor running under the UNIX operating system, and it has undergone modifications in response to user input. Although the framework discussed is general, the current implementation is polygon-oriented. The relations examined are: approximate equality of the slope or length of sides, collinearity of sides, and vertical and horizontal alignment of points. (A minimal snapping sketch follows this record.)
keywords drafting, computer graphics, algorithms
series CADline
last changed 2003/06/02 13:58
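The beautifier described above infers constraints from a rough drawing and then enforces them. A minimal sketch of one such constraint class follows: snapping nearly-horizontal and nearly-vertical polygon sides. The tolerance rule and the averaging repair are assumptions made for the example; the paper's system handles a wider set of relations (equal slopes and lengths, collinearity, point alignment).

```python
def beautify(polygon, tol=0.1):
    """Infer nearly-horizontal / nearly-vertical sides of a rough polygon
    and enforce the constraint by averaging the offending coordinates.
    Sides are processed in order, so earlier snaps feed later ones."""
    pts = [list(p) for p in polygon]
    n = len(pts)
    for i in range(n):
        j = (i + 1) % n
        (x0, y0), (x1, y1) = pts[i], pts[j]
        dx, dy = x1 - x0, y1 - y0
        if abs(dy) <= tol * max(abs(dx), 1e-9):    # nearly horizontal
            pts[i][1] = pts[j][1] = (y0 + y1) / 2
        elif abs(dx) <= tol * max(abs(dy), 1e-9):  # nearly vertical
            pts[i][0] = pts[j][0] = (x0 + x1) / 2
    return [tuple(p) for p in pts]

# a hand-drawn "rectangle" straightens into an axis-aligned one
rough = [(0.0, 0.0), (4.02, 0.05), (3.98, 3.1), (-0.03, 2.96)]
print(beautify(rough))
```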
