CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.

Hits 1 to 16 of 16

_id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence in 1991, at the age of 8, as the direct consequence of years of attempts by a group of computer science professionals to use digital computer technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of “Littérature potentielle” deployed by Oulipo in France and Oplepo in Italy (see “La Littérature potentielle (Créations Re-créations Récréations)”, published in France by Gallimard in 1973). During the last decade TEAnO has been involved in the generation of “artistic goods” in aesthetic domains such as literature, music, theatre and painting. In all those artefacts the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets or to generate antonymic music) and sometimes it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered “manufactured” goods. A great part of this creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo and pushed to their limits by the massive combinatory power of the digital computer: S+n, edge extraction, phonetic manipulation, re-writing of well-known masterpieces, random generation of plots, etc. Despite these apparently simple underlying generation mechanisms, the systematic use of computer-based tools, as well as the analysis of the produced results, has highlighted two findings which can significantly affect the practice of computer-based generation of aesthetic goods: (1) the deep structure of an aesthetic work persists even through the more “destructive” manipulations (such as the antonymic transformation of the melody and lyrics of a musical work) and becomes evident as a sort of profound, earliest and distinctive constraint; (2) the intensive flux of computer-generated “raw” material seems to confirm, and to bring to our attention, the existence of what Walter Benjamin indicated as the different way in which nature talks to a camera and to our eye, and what Franco Vaccari called “technological unconsciousness”. Essential references: R. Campagnoli, Y. Hersant, “Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)”, 1985; R. Campagnoli, “Oupiliana”, 1995; TEAnO, “Quaderno n. 2 Antologia di letteratura potenziale”, 1996; W. Benjamin, “Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit”, 1936; F. Vaccari, “Fotografia e inconscio tecnologico”, 1994.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
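
The "S+n" constraint named in this abstract is the Oulipian substitution rule in which each noun (in the classic S+7, each substantive) is replaced by the word n entries later in a chosen dictionary. The Python sketch below simplifies this to replacing any word that appears in a small lexicon; the lexicon, the regular expression and the function name are invented for illustration and are not part of TEAnO's actual tools.

# Minimal sketch of the Oulipian "S+n" substitution: every word found in a
# reference lexicon is replaced by the entry n positions further on.
# The lexicon and the treatment of capitalisation are illustrative assumptions.
import re

def s_plus_n(text, lexicon, n=7):
    """Replace each word that appears in `lexicon` with the word n entries later."""
    ordered = sorted(lexicon)                       # dictionary order
    index = {word: i for i, word in enumerate(ordered)}

    def substitute(match):
        word = match.group(0)
        i = index.get(word.lower())
        if i is None:                               # word not in the lexicon: keep it
            return word
        replacement = ordered[(i + n) % len(ordered)]
        return replacement.capitalize() if word[0].isupper() else replacement

    return re.sub(r"[A-Za-z']+", substitute, text)

if __name__ == "__main__":
    lexicon = ["cat", "dog", "house", "moon", "river", "sonnet", "stone", "tree"]
    print(s_plus_n("The cat sat by the river near the house.", lexicon, n=3))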

_id ddssar0206
authors Bax, M.F.Th. and Trum, H.M.G.J.
year 2002
title Faculties of Architecture
source Timmermans, Harry (Ed.), Sixth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Avegoor, The Netherlands), 2002
summary In order to be inscribed in the European Architect's register, the study program leading to the diploma 'Architect' has to meet the criteria of the EC Architect's Directive (1985). The criteria are enumerated in the 11 principles of Article 3 of the Directive. The Advisory Committee established by the European Council was given the task of examining such diplomas in cases where doubts are raised by other Member States. To carry out this task a matrix was designed as an independent interpreting framework that mediates between the principles of Article 3 and the actual study program of a faculty. Such a tool was needed because of inconsistencies in the list of principles, differences between linguistic versions of the Directive, and problems in quantifying the time devoted to the principles in the study programs. The core of the matrix, its headings, is a categorisation of the principles on a higher level of abstraction in the form of a taxonomy of domains and corresponding concepts. Filling in the matrix means that each study element of the study programs is analysed according to its content in terms of domains; the summation of study time devoted to the various domains results in a so-called 'profile of a faculty'. Judgement of that profile takes place by a committee of peers. The domains of the taxonomy are intrinsically the same as the concepts and categories needed for the description of an architectural design object: the faculties of architecture. This correspondence relates the taxonomy to the field of design theory and philosophy. The taxonomy is an application of Domain theory. This theory, developed by the authors since 1977, takes the view that the architectural object can only be described fully as an integration of all types of domains. The theory supports the idea of a participatory and interdisciplinary approach to design, which proved to be rewarding from both a scientific and a social point of view. All types of domains have in common that they are measured in three dimensions: form, function and process, connecting the material aspects of the object with its social and procedural aspects. In the taxonomy the function dimension is emphasised. It will be argued in the paper that the taxonomy is a categorisation following the pragmatist philosophy of Charles Sanders Peirce. It will also be demonstrated that the taxonomy is easy to handle, by giving examples of its application in various countries over the last 5 years. The taxonomy proved to be an adequate tool for the judgement of study programs and their subsequent improvement, as constituted by the faculties of a Faculty of Architecture. The matrix is described as the result of theoretical reflection on, and practical application of, a matrix already in use since 1995. The major improvement of the matrix is its direct connection with Peirce's universal categories and the self-explanatory character of its structure. The connection with Peirce's categories gave the matrix a more universal character, which enables application in other fields where the term 'architecture' is used as a metaphor for artefacts.
series DDSS
last changed 2003/11/21 15:16
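
Computationally, the "profile of a faculty" described above is a straightforward aggregation: each study element is classified over the domains of the taxonomy and its study time is summed per domain. The sketch below illustrates only that bookkeeping step; the domain names, study elements and hours are invented, not taken from the paper.

# Sketch of building a "profile of a faculty": sum the study time that each
# study element contributes to each domain of the taxonomy.
# Domain names, study elements and hours below are invented for illustration.
from collections import defaultdict

# Each study element: total hours and the fraction of its content per domain.
study_elements = {
    "Design studio 1": (200, {"form": 0.5, "function": 0.3, "process": 0.2}),
    "Building physics": (80,  {"function": 0.7, "process": 0.3}),
    "History of architecture": (60, {"form": 0.6, "function": 0.4}),
}

def faculty_profile(elements):
    profile = defaultdict(float)
    for hours, shares in elements.values():
        for domain, share in shares.items():
            profile[domain] += hours * share
    return dict(profile)

if __name__ == "__main__":
    for domain, hours in sorted(faculty_profile(study_elements).items()):
        print(f"{domain:10s} {hours:6.1f} h")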

_id 62ff
authors Peckham, R. J.
year 1985
title Shading Evaluations with General Three- Dimensional Models
source Computer Aided Design. September, 1985. vol. 17: pp. 305-310 : ill. includes bibliography
summary The SHADOWPACK package of computer programs has been developed to facilitate shading evaluations, for the direct component of solar radiation, with general 3D models. An interactive solid modelling program allows the user to construct and view the 3D model before saving it for further analysis and display. Other programs permit the graphical display of the shading situation throughout the year, the quantitative assessment of energy received on different faces of the model, and the display of the distribution of energy received on particular faces by means of contour plots. The use of the computer graphics approach has proved particularly convenient because of the similarity between the techniques used for graphical and numerical algorithms
keywords shading, solid modeling, evaluation, energy, computer graphics
series CADline
last changed 2003/06/02 13:58
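
The core operation in a shading evaluation for the direct solar component is deciding whether a face receives sunlight at a given moment: the face must be oriented towards the sun, and the ray from the face towards the sun must not be blocked by any other surface. The sketch below illustrates that test with a standard ray-triangle intersection; it is a generic illustration, not SHADOWPACK's actual algorithm, and all geometry in the example is invented.

# Minimal direct-sun shading test: a face centre is sunlit if its normal faces
# the sun and the ray towards the sun hits no occluding triangle.
# Not SHADOWPACK's algorithm; geometry and tolerances are illustrative.

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def ray_hits_triangle(origin, direction, triangle, eps=1e-9):
    """Moller-Trumbore test: does the ray from origin along direction hit the triangle?"""
    v0, v1, v2 = triangle
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:                      # ray parallel to the triangle plane
        return False
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return False
    return f * dot(e2, q) > eps           # hit must lie in front of the origin

def is_sunlit(face_centre, face_normal, sun_direction, occluders):
    """True if the face points towards the sun and no occluder blocks the sun ray."""
    if dot(face_normal, sun_direction) <= 0.0:
        return False
    return not any(ray_hits_triangle(face_centre, sun_direction, tri) for tri in occluders)

if __name__ == "__main__":
    centre, normal = (0.0, 0.0, 1.0), (0.0, 1.0, 0.0)      # a point on a wall (invented)
    overhang = [((-1.0, -1.0, 2.0), (1.0, -1.0, 2.0), (0.0, 1.0, 2.0))]
    sun = (0.0, 0.3, 1.0)                                   # direction towards the sun
    print("face is sunlit:", is_sunlit(centre, normal, sun, overhang))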

_id cc15
authors Ansaldi, Silvia, De Floriani, Leila and Falcidieno, Bianca
year 1985
title Geometric Modeling of Solid Objects by Using a Face Adjacency Graph Representation
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 131-139 : ill. includes bibliography
summary A relational graph structure based on a boundary representation of solid objects is described. In this structure, called Face Adjacency Graph, nodes represent object faces, whereas edges and vertices are encoded into arcs and hyperarcs. Based on the face adjacency graph, the authors define a set of primitive face-oriented Euler operators, and a set of macro operators for face manipulation, which allow a compact definition and an efficient updating of solid objects. The authors briefly describe a hierarchical graph structure based on the face adjacency graph, which provides a representation of an object at different levels of detail. Thus it is consistent with the stepwise refinement process through which the object description is produced
keywords geometric modeling, graphs, objects, representation, data structures,B-rep, solid modeling, Euler operators
series CADline
last changed 2003/06/02 10:24
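
In a face adjacency graph the faces of the solid are the nodes, and each edge of the solid becomes an arc connecting the two faces it separates. The sketch below shows only that basic structure; the class name and the cube example are invented, and the paper's Euler and macro operators are not reproduced.

# Minimal face adjacency graph: nodes are faces, arcs record which edge makes
# two faces adjacent. Illustrative only; the Euler/macro operators of the
# paper are not implemented.
from collections import defaultdict

class FaceAdjacencyGraph:
    def __init__(self):
        self.adjacency = defaultdict(dict)   # face -> {neighbour face: shared edge}

    def add_edge(self, face_a, face_b, edge):
        """Record that `edge` is shared by (i.e. makes adjacent) face_a and face_b."""
        self.adjacency[face_a][face_b] = edge
        self.adjacency[face_b][face_a] = edge

    def neighbours(self, face):
        return list(self.adjacency[face])

if __name__ == "__main__":
    g = FaceAdjacencyGraph()
    # Four of the twelve edges of a cube, named by their endpoints.
    g.add_edge("top", "north", ("v1", "v2"))
    g.add_edge("top", "east",  ("v2", "v3"))
    g.add_edge("north", "east", ("v2", "v6"))
    g.add_edge("bottom", "east", ("v7", "v3"))
    print("faces adjacent to 'east':", g.neighbours("east"))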

_id a6f1
authors Bridges, A.H.
year 1986
title Any Progress in Systematic Design?
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 5-15
summary In order to discuss this question it is necessary to reflect awhile on design methods in general. The usual categorization discusses 'generations' of design methods, but Levy (1981) proposes an alternative approach. He identifies five paradigm shifts during the course of the twentieth century which have influenced the design methods debate. The first paradigm shift was achieved by 1920, when concern with industrial arts could be seen to have replaced concern with craftsmanship. The second shift, occurring in the early 1930s, resulted in the conception of a design profession. The third happened in the 1950s, when the design methods debate emerged; the fourth took place around 1970 and saw the establishment of 'design research'. Now, in the 1980s, we are going through the fifth paradigm shift, associated with the adoption of a holistic approach to design theory and with the emergence of the concept of design ideology. A major point in Levy's paper was the observation that most of these paradigm shifts were associated with radical social reforms or political upheavals. For instance, we may associate concern about public participation with the 1970s shift and the possible use (or misuse) of knowledge, information and power with the 1980s shift. What has emerged, however, from the work of colleagues engaged since the 1970s in attempting to underpin the practice of design with a coherent body of design theory is increasing evidence of the fundamental nature of a person's engagement with the design activity. This includes evidence of the existence of two distinctive modes of thought, one of which can be described as cognitive modelling and the other as rational thinking. Cognitive modelling is imagining, seeing in the mind's eye. Rational thinking is linguistic thinking, engaging in a form of internal debate. Cognitive modelling is externalized through action, and through the construction of external representations, especially drawings. Rational thinking is externalized through verbal language and, more formally, through mathematical and scientific notations. Cognitive modelling is analogic, presentational, holistic, integrative and based upon pattern recognition and pattern manipulation. Rational thinking is digital, sequential, analytical, explicatory and based upon categorization and logical inference. There is some relationship between the evidence for two distinctive modes of thought and the evidence of specialization in cerebral hemispheres (Cross, 1984). Design methods have tended to focus upon the rational aspects of design and have, therefore, neglected the cognitive aspects. By recognizing that there are peculiar 'designerly' ways of thinking, combining both types of thought process to perceive, construct and comprehend design representations mentally and then transform them into an external manifestation, current work in design theory is promising at last to have some relevance to design practice.
series CAAD Futures
email a.h.bridges@strath.ac.uk
last changed 2003/11/21 15:16

_id 78ca
authors Friedland, P. (Ed.)
year 1985
title Special Section on Architectures for Knowledge-Based Systems
source CACM (28), 9, September
summary A fundamental shift in the preferred approach to building applied artificial intelligence (AI) systems has taken place since the late 1960s. Previous work focused on the construction of general-purpose intelligent systems; the emphasis was on powerful inference methods that could function efficiently even when the available domain-specific knowledge was relatively meager. Today the emphasis is on the role of specific and detailed knowledge, rather than on reasoning methods. The first successful application of this method, which goes by the name of knowledge-based or expert-system research, was the DENDRAL program at Stanford, a long-term collaboration between chemists and computer scientists for automating the determination of molecular structure from empirical formulas and mass spectral data. The key idea is that knowledge is power, for experts, be they human or machine, are often those who know more facts and heuristics about a domain than lesser problem solvers. The task of building an expert system, therefore, is predominantly one of "teaching" a system enough of these facts and heuristics to enable it to perform competently in a particular problem-solving context. Such a collection of facts and heuristics is commonly called a knowledge base. Knowledge-based systems are still dependent on inference methods that perform reasoning on the knowledge base, but experience has shown that simple inference methods like generate and test, backward-chaining, and forward-chaining are very effective in a wide variety of problem domains when they are coupled with powerful knowledge bases. If this methodology remains preeminent, then the task of constructing knowledge bases becomes the rate-limiting factor in expert-system development. Indeed, a major portion of the applied AI research in the last decade has been directed at developing techniques and tools for knowledge representation. We are now in the third generation of such efforts. The first generation was marked by the development of enhanced AI languages like Interlisp and PROLOG. The second generation saw the development of knowledge representation tools at AI research institutions; Stanford, for instance, produced EMYCIN, The Unit System, and MRS. The third generation is now producing fully supported commercial tools like KEE and S.1. Each generation has seen a substantial decrease in the amount of time needed to build significant expert systems. Ten years ago prototype systems commonly took on the order of two years to show proof of concept; today such systems are routinely built in a few months. Three basic methodologies - frames, rules, and logic - have emerged to support the complex task of storing human knowledge in an expert system. Each of the articles in this Special Section describes and illustrates one of these methodologies. "The Role of Frame-Based Representation in Reasoning," by Richard Fikes and Tom Kehler, describes an object-centered view of knowledge representation, whereby all knowledge is partitioned into discrete structures (frames) having individual properties (slots). Frames can be used to represent broad concepts, classes of objects, or individual instances or components of objects. They are joined together in an inheritance hierarchy that provides for the transmission of common properties among the frames without multiple specification of those properties.
The authors use the KEE knowledge representation and manipulation tool to illustrate the characteristics of frame-based representation for a variety of domain examples. They also show how frame-based systems can be used to incorporate a range of inference methods common to both logic and rule-based systems. "Rule-Based Systems," by Frederick Hayes-Roth, chronicles the history and describes the implementation of production rules as a framework for knowledge representation. In essence, production rules use IF conditions THEN conclusions and IF conditions THEN actions structures to construct a knowledge base. The author catalogs a wide range of applications for which this methodology has proved natural and (at least partially) successful for replicating intelligent behavior. The article also surveys some already-available computational tools for facilitating the construction of rule-based knowledge bases and discusses the inference methods (particularly backward- and forward-chaining) that are provided as part of these tools. The article concludes with a consideration of the future improvement and expansion of such tools. The third article, "Logic Programming," by Michael Genesereth and Matthew Ginsberg, provides a tutorial introduction to the formal method of programming by description in the predicate calculus. Unlike traditional programming, which emphasizes how computations are to be performed, logic programming focuses on the what of objects and their behavior. The article illustrates the ease with which incremental additions can be made to a logic-oriented knowledge base, as well as the automatic facilities for inference (through theorem proving) and explanation that result from such formal descriptions. A practical example of diagnosis of digital device malfunctions is used to show how significant and complex problems can be represented in the formalism. A note to the reader who may infer that the AI community is being split into competing camps by these three methodologies: Although each provides advantages in certain specific domains (logic where the domain can be readily axiomatized and where complete causal models are available, rules where most of the knowledge can be conveniently expressed as experiential heuristics, and frames where complex structural descriptions are necessary to adequately describe the domain), the current view is one of synthesis rather than exclusivity. Both logic and rule-based systems commonly incorporate frame-like structures to facilitate the representation of large amounts of factual information, and frame-based systems like KEE allow both production rules and predicate calculus statements to be stored within and activated from frames to do inference. The next generation of knowledge representation tools may even help users to select appropriate methodologies for each particular class of knowledge, and then automatically integrate the various methodologies so selected into a consistent framework for knowledge.
series journal paper
last changed 2003/04/23 15:14
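
Of the three methodologies surveyed, the frame idea is the easiest to show in a few lines: knowledge is stored in discrete structures with slots, and a slot that is not filled locally is inherited from a parent frame. The sketch below is a generic illustration of that mechanism under assumed names; it is not the API of KEE, EMYCIN or any other tool mentioned in the article.

# Minimal frame system: frames hold slots and inherit missing slots from a
# parent frame. Generic illustration only; not the KEE / Unit System API.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = dict(slots)

    def get(self, slot):
        """Return the slot value, walking up the inheritance chain if needed."""
        frame = self
        while frame is not None:
            if slot in frame.slots:
                return frame.slots[slot]
            frame = frame.parent
        raise KeyError(f"{self.name} has no slot '{slot}'")

if __name__ == "__main__":
    building = Frame("building", storeys=1, use="generic")
    office = Frame("office-building", parent=building, use="offices")
    headquarters = Frame("hq", parent=office, storeys=12)
    print(headquarters.get("use"))      # inherited from office-building
    print(headquarters.get("storeys"))  # local value overrides the default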

_id a18b
authors Samet, Hanan and Webber, Robert E.
year 1985
title Storing a Collection of Polygons Using Quadtrees
source ACM Transactions on Graphics July, 1985. vol. 4: pp. 182-222 : some ill. includes bibliography.
summary An adaptation of the quadtree data structure that represents polygonal maps (i.e., collections of polygons, possibly containing holes) is described in a manner that is also useful for the manipulation of arbitrary collections of straight line segments. The goal is to store these maps without the loss of information that results from digitization, and to obtain a worst-case execution time that is not overly sensitive to the positioning of the map. A regular decomposition variant of the region quadtree is used to organize the vertices and edges of the maps. A number of related data organizations are proposed in an iterative manner until a method is obtained that meets the stated goals. The result is termed a PM (Polygonal Map) quadtree and is based on a regular decomposition Point Space quadtree (PS quadtree) that stores additional information about the edges at its terminal nodes. Algorithms are given for inserting and deleting line segments from a PM quadtree. Use of the PM quadtree to perform point location, dynamic line insertion, and map overlay is discussed. An empirical comparison of the PM quadtree with other quadtree-based representations for polygonal maps is also provided
keywords data structures, quadtree, polygons, representation, point inclusion, algorithms
series CADline
last changed 2003/06/02 10:24
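
The regular-decomposition idea behind the PM quadtree can be illustrated with a much simplified variant: a square region is split recursively into four quadrants until each leaf overlaps at most a few segments. The sketch below is only a bucket-style approximation of that decomposition; the PM quadtree of the paper uses stricter, vertex- and edge-based leaf criteria, and the overlap test here is deliberately crude.

# Simplified regular-decomposition quadtree over line segments: split a square
# into four quadrants until a leaf overlaps at most `capacity` segments.
# Approximation of the idea only; not the PM quadtree's actual leaf criteria.

def segment_overlaps_box(seg, box, samples=16):
    """Crude overlap test: sample points along the segment (illustrative only)."""
    (x1, y1), (x2, y2) = seg
    xmin, ymin, size = box
    for i in range(samples + 1):
        t = i / samples
        x, y = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
        if xmin <= x <= xmin + size and ymin <= y <= ymin + size:
            return True
    return False

def build(segments, box, capacity=2, depth=0, max_depth=6):
    inside = [s for s in segments if segment_overlaps_box(s, box)]
    if len(inside) <= capacity or depth == max_depth:
        return {"box": box, "segments": inside}          # leaf node
    xmin, ymin, size = box
    half = size / 2
    children = [build(inside, (xmin + dx, ymin + dy, half), capacity, depth + 1, max_depth)
                for dx in (0, half) for dy in (0, half)]
    return {"box": box, "children": children}            # internal node

if __name__ == "__main__":
    segs = [((0.1, 0.1), (0.9, 0.2)), ((0.2, 0.8), (0.8, 0.9)), ((0.5, 0.0), (0.5, 1.0))]
    tree = build(segs, (0.0, 0.0, 1.0), capacity=1)
    print("root is a leaf:", "segments" in tree)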

_id a217
authors Bhatt, Rajesh V., Fisher, Edward L. and Rasdorf, William J.
year 1985
title Information Retrieval Architectures For Expert System/DBMS Communication
source Industrial Engineering Fall Conference Proceedings. December, 1985. pp. 315-320. CADLINE has abstract only
summary The development of expert systems (ES) for manufacturing problems indicates a need to interact with potentially large amounts of data, much of which resides elsewhere in the ES user's organization. A large amount of information required for planning, design, and control operations can be made available through an existing database management system (DBMS). The need for an ES to access that data is critical. This paper presents two approaches to the development of ES-DBMS interfaces, both query-language based. One approach uses a procedural attachment to the ES language to obtain the required data via the DBMS query language, while the other one uses a separate interface program between the ES and the query language of the DBMS. The procedural attachment is able to acquire data from a DBMS at a faster rate than the interface program; however, the procedural attachment lacks knowledge of the DBMS schema. On the other hand, the interface program sacrifices speed but promotes flexibility, as it has the capability of selecting which DBMS to extract the required data from and allowing augmentation of schema knowledge outside of the ES. A disadvantage of the interface approach is the amount of time involved in data retrieval. The process of writing information to disk files is I/O intensive. This can be quite slow, particularly in PROLOG, the language used to implement the ES. Thus the use of such an interface is only suitable in applications such as design, where extremely fast I/O is not required
keywords design, engineering, expert systems, information, database, DBMS
series CADline
last changed 2003/06/02 10:24
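
The second approach described above, a separate interface program standing between the expert system and the DBMS query language, can be pictured as a small translator: it receives a data request from the ES, builds a query, runs it against the database and hands the rows back. The Python/SQLite sketch below illustrates that pattern under invented table, column and request names; the paper's actual PROLOG implementation is not reproduced.

# Sketch of an ES <-> DBMS "interface program": the expert system states what
# data it needs, the interface builds and runs the query, and returns the rows.
# Table, columns and the request format are invented for illustration.
import sqlite3

def setup_demo_db():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE parts (name TEXT, material TEXT, weight_kg REAL)")
    con.executemany("INSERT INTO parts VALUES (?, ?, ?)",
                    [("beam-1", "steel", 120.0), ("beam-2", "steel", 95.5),
                     ("panel-7", "aluminium", 14.2)])
    return con

def interface_program(con, request):
    """Translate a simple ES data request into SQL and return the result rows."""
    # Field and table names come from the ES side and are trusted in this sketch.
    sql = f"SELECT {', '.join(request['fields'])} FROM {request['table']} WHERE {request['where']} = ?"
    return con.execute(sql, (request["equals"],)).fetchall()

if __name__ == "__main__":
    con = setup_demo_db()
    # The "expert system" side only states what it needs, not how to fetch it.
    request = {"table": "parts", "fields": ["name", "weight_kg"],
               "where": "material", "equals": "steel"}
    print(interface_program(con, request))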

_id 4532
authors Bono, Peter R.
year 1985
title A Survey of Graphics Standards and Their Role in Information Interchange
source IEEE Computer. October, 1985. vol. 18: pp. 63-75 : ill. ; tables. includes bibliography
summary The survey describes each graphic standard and explains the interrelationships among the standards. The role and commercial impact of PCs serving as workstations in a distributed, networked, multimedia environment is emphasized. It is shown that current graphics standardization activity is focused on three principal areas: the application interface, the device interface, and picture exchange. The operator interface and hardware interfaces are expected to be the subjects of standardization in the future. In addition, picture exchange will be replaced by information exchange, where information includes text, image, and voice components merged with graphics to create an integrated whole
keywords computer graphics, standards, GKS, communication
series CADline
last changed 2003/06/02 13:58

_id 0533
authors Clemons, Eric K. and Greenfield, Arnold J.
year 1985
title The SAGE System Architecture: A System for the Rapid Development of Graphics Interfaces for Decision Support
source IEEE Computer Graphics and Applications. November, 1985. vol. 5: pp. 38-50 : ill. includes bibliography
summary Graphics interfaces support the decision maker in sensitivity analysis - the exploration of proposed solutions and examination of alternatives. The authors present an architecture for rapid preparation of graphics interfaces for large classes of management science, operations research, and expert systems models. This architecture is based on a detailed study of sensitivity analysis requests, which is also presented. The architecture was the basis of a prototype, now operational, which is illustrated through a case study of sensitivity analysis in a vehicle-routing system
keywords expert systems, user interface, operations research
series CADline
last changed 2003/06/02 10:24

_id 0711
authors Kunnath, S.K., Reinhorn, A.M. and Abel, J.F.
year 1990
title A Computational Tool for Evaluation of Seismic Performance of RC Buildings
source February, 1990. [1] 15 p. : ill. graphs, tables. includes bibliography: p. 10-11
summary Recent events have demonstrated the damaging power of earthquakes on structural assemblages resulting in immense loss of life and property (Mexico City, 1985; Armenia, 1988; San Francisco, 1989). While the present state-of-the-art in inelastic seismic response analysis of structures is capable of estimating response quantities in terms of deformations, stresses, etc., it has not established a physical qualification of these end-results into measures of damage sustained by the structure wherein system vulnerability is ascertained in terms of serviceability, repairability, and/or collapse. An enhanced computational tool is presented in this paper for evaluation of reinforced concrete structures (such as buildings and bridges) subjected to seismic loading. The program performs a series of tasks to enable a complete evaluation of the structural system: (a) elastic collapse-mode analysis to determine the base shear capacity of the system; (b) step-by-step time history analysis using a macromodel approach in which the inelastic behavior of RC structural components is incorporated; (c) reduction of the response quantities to damage indices so that a physical interpretation of the response is possible. The program is built around two graphical interfaces: one for preprocessing of structural and loading data; and the other for visualization of structural damage following the seismic analysis. This program can serve as an invaluable tool in estimating the seismic performance of existing RC buildings and for designing new structures within acceptable levels of damage
keywords seismic, structures, applications, evaluation, civil engineering, CAD
series CADline
last changed 2003/06/02 14:41
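
Step (c) above, the reduction of response quantities to damage indices, is commonly done with an index of the Park-Ang form, which combines the peak deformation demand with the dissipated hysteretic energy, both normalised by the member's ultimate capacity. The abstract does not state which index the program uses, so the sketch below is only a generic illustration with invented numbers.

# Generic Park-Ang style damage index: peak deformation demand plus a weighted
# hysteretic-energy term, both normalised by the member's ultimate capacity.
# Illustrative only; the abstract does not state which index the program uses.
def park_ang_damage_index(peak_deformation, ultimate_deformation,
                          hysteretic_energy, yield_force, beta=0.1):
    deformation_term = peak_deformation / ultimate_deformation
    energy_term = beta * hysteretic_energy / (yield_force * ultimate_deformation)
    return deformation_term + energy_term

if __name__ == "__main__":
    # Invented response quantities for one RC column.
    di = park_ang_damage_index(peak_deformation=0.035, ultimate_deformation=0.06,
                               hysteretic_energy=12.0, yield_force=250.0)
    print(f"damage index = {di:.2f}  (roughly: low values repairable, above 1.0 collapse)")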

_id ae09
authors Lieberman, Henry
year 1985
title There's More to Menu Systems Than Meets the Screen
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 181-189 : ill. includes bibliography
summary Love playing with those fancy menu-based graphical user interfaces, but afraid to program one yourself for your own application? Do windows seem opaque to you? Are you scared of Mice? Like what-you-see-is-what-you-get but don't know how to get-what-you-want-to-see on the screen? Everyone agrees using systems like graphical document illustrators, circuit designers, and iconic file systems is fun, but programming user interfaces for these systems isn't as much fun as it should be. Systems like the Lisp Machines, Xerox D-Machines, and Apple Macintosh provide powerful graphics primitives, but the casual applications designer is often stymied by the difficulty of mastering the details of window specification, multiple processes, interpreting mouse input, etc. This paper presents a kit called EZWin, which provides many services common to implementing a wide variety of interfaces, described as generalized editors for sets of graphical objects. An individual application is programmed simply by creating objects to represent the interface itself, each kind of graphical object, and each command. A unique interaction style is established which is insensitive to whether commands are chosen before or after their arguments. The system anticipates the types of arguments needed by commands, preventing selection mistakes which are a common source of frustrating errors. Displayed objects are made 'mouse-sensitive' only if selection of the object is appropriate in the current context. The implementation of a graphical interface for a computer network simulation is described to illustrate how EZWin works
keywords user interface, computer graphics
series CADline
last changed 1999/02/12 15:09
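
The order-insensitive interaction style described above reduces to a simple rule: each command declares the types of arguments it needs, and an object on the screen is selectable only if it can still fill a missing argument slot. The sketch below illustrates that rule with invented command and object types; it is not EZWin's actual Lisp implementation.

# Sketch of order-insensitive command invocation: a command declares its
# argument types; selections (made before or after choosing the command) are
# accepted only if they fill a still-missing slot. Not EZWin's implementation.
class Command:
    def __init__(self, name, arg_types, action):
        self.name, self.arg_types, self.action = name, arg_types, action
        self.args = {}

    def selectable(self, obj):
        """An object is 'mouse-sensitive' only if some empty slot accepts its type."""
        return any(t not in self.args and isinstance(obj, t) for t in self.arg_types)

    def select(self, obj):
        for t in self.arg_types:
            if t not in self.args and isinstance(obj, t):
                self.args[t] = obj
                break
        if len(self.args) == len(self.arg_types):          # all slots filled: run it
            self.action(*(self.args[t] for t in self.arg_types))

class Node(str): pass
class Link(str): pass

if __name__ == "__main__":
    connect = Command("connect", (Node, Link),
                      lambda n, l: print(f"connect {n} via {l}"))
    print(connect.selectable(Link("cable-3")))   # True: the Link slot is still empty
    connect.select(Link("cable-3"))              # argument chosen before the other
    connect.select(Node("host-A"))               # command fires once complete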

_id a127
authors Rasdorf, William J. and Salley, George C.
year 1985
title Generative Engineering Databases - Toward Expert Systems
source Computers and Structures. Pergamon Press, 1985. vol. 22: pp. 11-15
summary CADLINE has abstract only. Engineering data management, incorporating concepts of optimization with data representation, is receiving increasing attention. Research in this area promises advantages for many engineering applications, particularly those which use data innovatively. This paper presents a framework for a comprehensive, relational database management system that combines a knowledge base (KB) of design constraints with a database (DB) of engineering data items to achieve a 'generative database' - one which automatically generates new engineering design data according to the design constraints stored in the knowledge base. Thus, in addition to the designer and engineering design and analysis application programs, the database itself contributes to the design process. The KB/DB framework proposed here requires a database that is able to store all of the data normally associated with engineering design and to accurately represent the interactions between constraints and the stored data while guaranteeing its integrity. The framework also requires a knowledge base that is able to store all the constraints imposed upon the engineering design process. The goal sought is a central integrated repository of data, supporting interfaces to a wide variety of application programs and supporting processing capabilities for maintaining integrity while generating new data. The resulting system permits the unaided generation of constrained data values, thereby serving as an active design assistant. This paper suggests this new conceptual framework as a means of improving engineering data representation, generation, use, and management
keywords management, optimization, synthesis, database, expert systems, civil engineering
series CADline
last changed 2003/06/02 10:24
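
The "generative database" idea amounts to storing derivation constraints alongside the data and letting the database compute missing values from them, instead of leaving all derivation to application programs. The sketch below is a minimal illustration under invented field names and an invented design rule; it is not the KB/DB framework proposed in the paper.

# Sketch of a "generative" record store: a knowledge base of constraints
# derives missing values from stored ones. Field names and rules are invented.
class GenerativeDB:
    def __init__(self):
        self.records = {}
        self.constraints = {}     # derived field -> (required fields, formula)

    def add_constraint(self, field, requires, formula):
        self.constraints[field] = (requires, formula)

    def insert(self, key, **data):
        record = dict(data)
        changed = True
        while changed:                                   # derive until nothing new appears
            changed = False
            for field, (requires, formula) in self.constraints.items():
                if field not in record and all(r in record for r in requires):
                    record[field] = formula(record)
                    changed = True
        self.records[key] = record
        return record

if __name__ == "__main__":
    db = GenerativeDB()
    # Invented design rule: required beam section modulus from moment and stress.
    db.add_constraint("section_modulus", ["moment", "allowable_stress"],
                      lambda r: r["moment"] / r["allowable_stress"])
    print(db.insert("beam-1", moment=50_000.0, allowable_stress=165.0))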

_id 020d
authors Shaviv, Edna
year 1986
title Layout Design Problems: Systematic Approaches
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 28-52
summary The complexity of the layout design problems known as the 'spatial allocation problems' gave rise to several approaches, which can be generally classified into two main streams. The first attempts to use the computer to generate solutions of the building layout, while in the second, computers are used only to evaluate manually generated solutions. In both classes the generation or evaluation of the layout is performed systematically. Computer algorithms for 'spatial allocation problems' first appeared more than twenty-five years ago (Koopmans, 1957). From 1957 to 1970 over thirty different programs were developed for generating the floor plan layout automatically, as is summarized in CAP-Computer Architecture Program, Vol. 2 (Stewart et al., 1970). It seems that any architect who entered the area of CAAD felt that it was his responsibility to find a solution to this prime architectural problem. Most of the programs were developed for batch processing, and were run on a mainframe without any sophisticated input/output devices. It is interesting to mention that, because of the lack of these sophisticated input/output devices, early researchers used the approach of automatic generation of optimal or quasi-optimal layout solutions under given constraints. Gradually, we find a recession and slowdown in the development of computer programs for generation of layout solutions. With the improvement of interactive input/output devices and user interfaces, the inclination today is to develop integrated systems in which the architectural solution is obtained manually by the architect and is introduced to the computer for the appraisal of the designer's layout solution (Maver, 1977). The man-machine integrative systems could work well, but it seems that in most of the integrated systems today, and in the commercial ones in particular, there is no route to any appraisal technique of the layout problem. Without any evaluation techniques in commercial integrated systems it seems that the geometrical database exists just to create working drawings and sometimes also perspectives.
series CAAD Futures
email arredna@techunix.technion.ac.il
last changed 2003/05/16 20:58
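
The evaluation stream described above, appraising a manually produced layout rather than generating one, is often expressed as a weighted sum of distances between activities that should be close together. The sketch below scores a layout under that assumed objective; the rooms, coordinates and adjacency weights are invented, and the objective is a common textbook form rather than the appraisal technique of any particular system cited in the abstract.

# Sketch of appraising a layout: score = sum over room pairs of
# (desired-adjacency weight) x (distance between the rooms). Lower is better.
# Rooms, coordinates and weights are invented for illustration.
from math import dist
from itertools import combinations

def layout_cost(positions, adjacency_weight):
    cost = 0.0
    for a, b in combinations(sorted(positions), 2):
        weight = adjacency_weight.get((a, b), adjacency_weight.get((b, a), 0.0))
        cost += weight * dist(positions[a], positions[b])
    return cost

if __name__ == "__main__":
    layout = {"entrance": (0, 0), "office": (10, 0), "archive": (10, 8), "meeting": (2, 6)}
    weights = {("entrance", "office"): 3.0, ("office", "archive"): 2.0,
               ("office", "meeting"): 1.0}
    print(f"layout cost = {layout_cost(layout, weights):.1f}")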

_id e235
authors Van Norman, Mark
year 1985
title THE USER INTERFACE IN PROGRAMS FOR DESIGN EDUCATION: ISSUES AND CRITERIA
doi https://doi.org/10.52842/conf.acadia.1985.155
source ACADIA Workshop ‘85 [ACADIA Conference Proceedings] Tempe (Arizona / USA) 2-3 November 1985, pp. 155-168
summary Due to inexpensive mass-marketed microcomputers and CAAD software, the type of "clients" we serve as CAAD educators will soon change. In addition to teaching CAAD programming to 20 students a semester, we may soon be serving a much larger group of casual users from design studios and technical courses. These casual users will require that we provide programs and hardware which allow them to design a better product more swiftly and with less effort than by hand. The most crucial factor in meeting these criteria is the quality of the user interface of the programs and equipment we provide.

At Harvard, we have studied the user interfaces of more than 80 programs used in 10 areas of design. This paper is a summary of a 90-page report in which issues are raised, the answers to which determine the quality of the user interface of a program. In the summarized report, different approaches to resolving each issue are discussed, but no "answers" are provided. In our roles as authors, teachers, and now, consumers of CAAD programs, we must - explicitly or by default - address these issues before designing or purchasing programs and hardware for design education.

series ACADIA
type normal paper
last changed 2022/06/07 07:58

_id 452c
authors Vanier, D. J. and Worling, Jamie
year 1986
title Three-dimensional Visualization: A Case Study
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 92-102
summary Three-dimensional computer visualization has intrigued both building designers and computer scientists for decades. Research and conference papers present an extensive list of existing and potential uses for three-dimensional geometric data for the building industry (Baer et al., 1979). Early studies on visualization include urban planning (Rogers, 1980), tree-shading simulation (Schiler and Greenberg, 1980), sun studies (Anon, 1984), finite element analysis (Proulx, 1983), and facade texture rendering (Nizzolese, 1980). With the advent of better interfaces, faster computer processing speeds and better application packages, there has been interest on the part of both researchers and practitioners in three-dimensional models for energy analysis (Pittman and Greenberg, 1980), modelling with transparencies (Hebert, 1982), super-realistic rendering (Greenberg, 1984), visual impact (Bridges, 1983), interference clash checking (Trickett, 1980), and complex object visualization (Haward, 1984). The Division of Building Research is currently investigating the application of geometric modelling in the building delivery process using sophisticated software (Evans, 1985). The first stage of the project (Vanier, 1985), a feasibility study, deals with the aesthetics of the model. It identifies two significant requirements for geometric modelling systems: the need for a comprehensive data structure and the requirement for realistic accuracies and tolerances. This chapter presents the results of the second phase of this geometric modelling project, which is the construction of 'working' and 'presentation' models for a building.
series CAAD Futures
email Dana.Vanier@nrc-cnrc.gc.ca
last changed 2003/05/16 20:58

No more hits.
