CumInCAD is a cumulative index of publications in Computer-Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 32

_id cc15
authors Ansaldi, Silvia, De Floriani, Leila and Falcidieno, Bianca
year 1985
title Geometric Modeling of Solid Objects by Using a Face Adjacency Graph Representation
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 131-139 : ill. includes bibliography
summary A relational graph structure based on a boundary representation of solid objects is described. In this structure, called Face Adjacency Graph, nodes represent object faces, whereas edges and vertices are encoded into arcs and hyperarcs. Based on the face adjacency graph, the authors define a set of primitive face-oriented Euler operators, and a set of macro operators for face manipulation, which allow a compact definition and an efficient updating of solid objects. The authors briefly describe a hierarchical graph structure based on the face adjacency graph, which provides a representation of an object at different levels of detail. Thus it is consistent with the stepwise refinement process through which the object description is produced
keywords geometric modeling, graphs, objects, representation, data structures,B-rep, solid modeling, Euler operators
series CADline
last changed 2003/06/02 10:24
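
The face adjacency graph itself is simple to sketch: one node per face, with an arc wherever two faces share an edge. Below is a minimal Python sketch of that idea, assuming faces are given as vertex loops; it is an illustration only, not the authors' implementation, and it omits the hyperarcs that encode vertices.

# Minimal sketch of a face adjacency graph (FAG): nodes are faces,
# arcs connect faces that share an edge. Illustrative only.
from collections import defaultdict
from itertools import combinations

def face_adjacency(faces):
    """faces: list of vertex tuples, one loop per face.
    Returns a mapping from face index to the set of adjacent faces."""
    edge_to_faces = defaultdict(list)
    for i, face in enumerate(faces):
        n = len(face)
        for k in range(n):
            edge = tuple(sorted((face[k], face[(k + 1) % n])))
            edge_to_faces[edge].append(i)
    adjacency = defaultdict(set)
    for faces_on_edge in edge_to_faces.values():
        for a, b in combinations(faces_on_edge, 2):
            adjacency[a].add(b)
            adjacency[b].add(a)
    return adjacency

# A tetrahedron: four triangular faces over vertices 0..3.
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(dict(face_adjacency(tet)))  # every face is adjacent to the other three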

_id 644f
authors Bijl, Aart
year 1986
title Designing with Words and Pictures in a Logic Modelling Environment
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 128-145
summary At EdCAAD we are interested in design as something people do. Designed artefacts, the products of designing, are interesting only in so far as they tell us something about design. An extreme expression of this position is to say that the world of design is the thoughts in the heads of designers, plus the skills of designers in externalizing their thoughts; design artifacts, once perceived and accepted in the worlds of other people, are no longer part of the world of design. We can describe design, briefly, as a process of synthesis. Design has to achieve a fusion between parts to create new parts, so that the products are recognized, as having a right and proper place in the world of people. Parts should be understood as referring to anything - physical objects, abstract ideas, aspirations. These parts occur in some design environment from which parts are extracted, designed upon and results replaced; in the example of buildings, the environment is people and results have to be judged by reference to that environment. It is characteristic of design that both the process and the product are not subject to explicit and complete criteria. This view of design differs sharply from the more orthodox understanding of scientific and technological endeavours which rely predominantly on a process of analysis. In the latter case, the approach is to decompose a problem into parts until individual parts are recognized as being amenable to known operations and results are reassembled into a solution. This process has a peripheral role in design when evaluating selected aspects of tentative design proposals, but the absence of well-defined and widely recognized criteria for design excludes it from the main stream of analytical developments.
series CAAD Futures
last changed 2003/11/21 15:16

_id e235
authors Van Norman, Mark
year 1985
title THE USER INTERFACE IN PROGRAMS FOR DESIGN EDUCATION: ISSUES AND CRITERIA
source ACADIA Workshop ‘85 [ACADIA Conference Proceedings] Tempe (Arizona / USA) 2-3 November 1985, pp. 155-168
doi https://doi.org/10.52842/conf.acadia.1985.155
summary Due to inexpensive mass-marketed microcomputers and CAAD software the type of "clients" we serve as CAAD educators will soon change. In addition to teaching CAAD programming to 20 students a semester, we may soon be serving a much larger group of casual users from design studios and technical courses. These casual users will require that we provide programs and hardware which allow them to design a better product more swiftly and with less effort than by hand. The most crucial factor in meeting these criteria is the quality of the user interface of the programs and equipment we provide.

At Harvard, we have studied the user interfaces of more than 80 programs used in 10 areas of design. This paper is a summary of a 90-page report in which issues are raised, the answers to which determine the quality of the user interface of a program. In the summarized report, different approaches to resolving each issue are discussed, but no "answers" are provided. In our roles as authors, teachers, and now, consumers of CAAD programs, we must - explicitly or by default - address these issues before designing or purchasing programs and hardware for design education.

series ACADIA
type normal paper
last changed 2022/06/07 07:58

_id ddss9408
authors Bax, Thijs and Trum, Henk
year 1994
title A Taxonomy of Architecture: Core of a Theory of Design
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary The authors developed a taxonomy of concepts in architectural design. It was accepted by the Advisory Committee for education in the field of architecture, a committee advising the European Commission and Member States, as a reference for their task to harmonize architectural education in Europe. The taxonomy is based on Domain theory, a theory developed by the authors, based on General Systems Theory and the notion of structure according to French Structuralism, takes a participatory viewpoint for the integration of knowledge and interests by parties in the architectural design process. The paper discusses recent developments of the taxonomy, firstly as a result of a confrontation with similar endeavours to structure the field of architectural design, secondly as a result of applications of education and architectural design practice, and thirdly as a result of theapplication of some views derived from the philosophical work from Charles Benjamin Peirce. Developments concern the structural form of the taxonomy comprising basic concepts and levelbound scale concepts, and the specification of the content of the fields which these concepts represent. The confrontation with similar endeavours concerns mainly the work of an ARCUK workingparty, chaired by Tom Marcus, based on the European Directive from 1985. The application concerns experiences with a taxonomy-based enquiry in order to represent the profile of educational programmes of schools and faculties of architecture in Europe in qualitative and quantitative terms. This enquiry was carried out in order to achieve a basis for comparison and judgement, and a basis for future guidelines including quantitative aspects. Views of Peirce, more specifically his views on triarchy as a way of ordering and structuring processes of thinking,provide keys for a re-definition of concepts as building stones of the taxonomy in terms of the form-function-process-triad, which strengthens the coherence of the taxonomy, allowing for a more regular representation in the form of a hierarchical ordered matrix.
series DDSS
last changed 2003/08/07 16:36

_id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence, in 1991, at the age of 8, being the direct consequence of years of attempts by a group of computer science professionals to use the digital computers technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of “Littérature potentielle” deployed by Oulipo in France and Oplepo in Italy (see “La Littérature potentielle (Créations Re-créations Récréations) published in France by Gallimard in 1973). During the last decade, TEAnO has been involving in the generation of “artistic goods” in aesthetic domains such as literature, music, theatre and painting. In all those artefacts in the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets of to generate antonymic music) and, sometimes it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered as “manufactured” goods. A great part of such creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the use of the digital computer massive combinatory power: S+n, edge extraction, phonetic manipulation, re-writing of well known masterpieces, random generation of plots, etc. Regardless this apparently simple underlying generation mechanisms, the systematic use of computer based tools, as weel the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer based generation of aesthetic goods: ? the deep structure of an aesthetic work persists even through the more “desctructive” manipulations, (such as the antonymic transformation of the melody and lyrics of a music work) and become evident as a sort of profound, earliest and distinctive constraint; ? the intensive flux of computer generated “raw” material seems to confirm and to bring to our attention the existence of what Walter Benjamin indicated as the different way in which the nature talk to a camera and to our eye, and Franco Vaccari called “technological unconsciousness”. Essential references R. Campagnoli, Y. Hersant, “Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)”, 1985 R. Campagnoli “Oupiliana”, 1995 TEAnO, “Quaderno n. 2 Antologia di letteratura potenziale”, 1996 W. Benjiamin, “Das Kunstwerk im Zeitalter seiner technischen Reprodizierbarkeit”, 1936 F. Vaccari, “Fotografia e inconscio tecnologico”, 1994
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id sigradi2013_41
authors Luhan, Gregory A.; Robert Gregory
year 2013
title Across Disciplines: Triggering Frame Awareness in Design Education
source SIGraDi 2013 [Proceedings of the 17th Conference of the Iberoamerican Society of Digital Graphics - ISBN: 978-956-7051-86-1] Chile - Valparaíso 20 - 22 November 2013, pp. 619 - 623
summary Tacit knowledge is paradoxical: something we know yet don't know we know, knowledge we sense but can't articulate. In Polanyi’s definition of tacit knowledge, “we know more than we can say" (1966/2009; Scott, 1985; Gelwick, 1977). It's important to see that tacit knowledge is part of a sequence; mental structures, in awareness when first learned, eventually become tacit, operating thenceforth as unquestioned assumptions. These tacit structures pose a problem for professional education in disciplines that encourage creativity. This paper examines the design and re-design of an interdisciplinary course intended to help make these tacit structures visible, to trigger frame awareness.
keywords Tacit knowledge; Design thinking; Sustainability; Systems thinking; Frame reflection
series SIGRADI
email
last changed 2016/03/10 09:55

_id 448d
authors Schmitt, Gerhard N.
year 1985
title Architectural Expert Systems: Definition, Application Areas and Practical Examples
source ACADIA Workshop ‘85 [ACADIA Conference Proceedings] Tempe (Arizona / USA) 2-3 November 1985, pp. 43-51
doi https://doi.org/10.52842/conf.acadia.1985.043
summary Knowledge Based Expert Systems (KBES) have emerged as a new tool for decision making in scientific disciplines. From the definition of the term and from previous experiences in geology, computer science, engineering, and medicine, it seems that they could develop into an important tool for architectural design and the building industry. This paper gives a very general overview over existing expert systems and potential application areas in architecture. It then presents in more detail two of the prototype systems that are under development in the Department of Architecture at Carnegie - Mellon University to gain practical experience.

series ACADIA
email
last changed 2022/06/07 07:57

_id avocaad_2001_16
authors Yu-Ying Chang, Yu-Tung Liu, Chien-Hui Wong
year 2001
title Some Phenomena of Spatial Characteristics of Cyberspace
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary "Space," which has long been an important concept in architecture (Bloomer & Moore, 1977; Mitchell, 1995, 1999), has attracted interest of researchers from various academic disciplines in recent years (Agnew, 1993; Benko & Strohmayer, 1996; Chang, 1999; Foucault, 1982; Gould, 1998). Researchers from disciplines such as anthropology, geography, sociology, philosophy, and linguistics regard it as the basis of the discussion of various theories in social sciences and humanities (Chen, 1999). On the other hand, since the invention of Internet, Internet users have been experiencing a new and magic "world." According to the definitions in traditional architecture theories, "space" is generated whenever people define a finite void by some physical elements (Zevi, 1985). However, although Internet is a virtual, immense, invisible and intangible world, navigating in it, we can still sense the very presence of ourselves and others in a wonderland. This sense could be testified by our naming of Internet as Cyberspace -- an exotic kind of space. Therefore, as people nowadays rely more and more on the Internet in their daily life, and as more and more architectural scholars and designers begin to invest their efforts in the design of virtual places online (e.g., Maher, 1999; Li & Maher, 2000), we cannot help but ask whether there are indeed sensible spaces in Internet. And if yes, these spaces exist in terms of what forms and created by what ways?To join the current interdisciplinary discussion on the issue of space, and to obtain new definition as well as insightful understanding of "space", this study explores the spatial phenomena in Internet. We hope that our findings would ultimately be also useful for contemporary architectural designers and scholars in their designs in the real world.As a preliminary exploration, the main objective of this study is to discover the elements involved in the creation/construction of Internet spaces and to examine the relationship between human participants and Internet spaces. In addition, this study also attempts to investigate whether participants from different academic disciplines define or experience Internet spaces in different ways, and to find what spatial elements of Internet they emphasize the most.In order to achieve a more comprehensive understanding of the spatial phenomena in Internet and to overcome the subjectivity of the members of the research team, the research design of this study was divided into two stages. At the first stage, we conducted literature review to study existing theories of space (which are based on observations and investigations of the physical world). At the second stage of this study, we recruited 8 Internet regular users to approach this topic from different point of views, and to see whether people with different academic training would define and experience Internet spaces differently.The results of this study reveal that the relationship between human participants and Internet spaces is different from that between human participants and physical spaces. In the physical world, physical elements of space must be established first; it then begins to be regarded as a place after interaction between/among human participants or interaction between human participants and the physical environment. In contrast, in Internet, a sense of place is first created through human interactions (or activities), Internet participants then begin to sense the existence of a space. 
Therefore, among the many spatial elements of the Internet we found, "interaction/reciprocity" -- either between/among human participants or between human participants and the computer interface -- seems to be the most crucial element. In addition, another interesting result of this study is that verbal (linguistic) elements could provoke a sense of space to a degree higher than 2D visual representation and no less than 3D visual simulations. Nevertheless, verbal and 3D visual elements seem to work in different ways in terms of cognitive behaviors: verbal elements provoke visual imagery and other sensory perceptions by "imagining" and then excite personal experiences of space; visual elements, on the other hand, provoke and excite visual experiences of space directly by "mapping". Finally, it was found that participants with different academic training did experience and define space differently. For example, when experiencing and analyzing Internet spaces, architectural designers, the creators of the physical world, emphasize the design of circulation and orientation, while participants with linguistics training focus more on subtle language usage. Visual designers tend to analyze the graphical elements of virtual spaces based on traditional painting theories; industrial designers, on the other hand, tend to treat these spaces as industrial products, emphasizing the concept of user-centredness and the control of the computer interface. The findings of this study seem to add new information to our understanding of virtual space. It would be interesting for future studies to investigate how this information influences architectural designers in their real-world practices in this digital age. In addition, to obtain a fuller picture of Internet space, further research is needed to study the same issue by examining more Internet participants who have no formal linguistics or graphical training.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id a619
authors Bentley, Jon L. and McGeoch, Catherine C.
year 1985
title Amortized Analyses of Self-Organizing Sequential Search ; Heuristics Programming Techniques and Data Structures
source communications of the ACM April, 1985. vol. 28: pp. 404-411 : ill. includes bibliography.
summary Amortization is used to analyze the heuristics in a worst- case sense. The relative merit of the heuristics in this analysis is different in the probabilistic analyses. Experiments show that the behavior of the heuristics on real data is more closely described by the amortized analyses than by the probabilistic analyses
keywords economics, analysis, search, heuristics
series CADline
last changed 2003/06/02 13:58
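
One of the heuristics analyzed here, move-to-front, is easy to state: after each successful search, the accessed item is moved to the head of the list, so frequently requested items drift toward the front. A minimal sketch of that heuristic (an illustration, not the paper's code):

# Move-to-front heuristic for self-organizing sequential search.
def mtf_search(items, key):
    """Linear search; on a hit, move the key to the front.
    Returns the number of comparisons made."""
    for i, item in enumerate(items):
        if item == key:
            items.insert(0, items.pop(i))  # the self-organizing step
            return i + 1
    return len(items)  # miss: scanned the whole list

lst = ["d", "c", "b", "a"]
for k in ["a", "a", "a"]:
    print(mtf_search(lst, k), lst)
# The first access to "a" costs 4 comparisons; later ones cost 1.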

_id 4316
authors Bentley, Jon L.
year 1985
title Associative Arrays -- Programming Pearls
source communications of the ACM. June, 1985. vol. 28: pp. 570-576 : ill
summary Anthropological studies have shown that one's language has a profound effect on one's view of the world. This column is about a language feature outside the Algol heritage: associative arrays. The column examines the associative arrays provided by the AWK language
keywords techniques, programming, algorithms, data structures
series CADline
last changed 2003/06/02 13:58
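
AWK's associative arrays are arrays subscripted by strings rather than integers; the canonical illustration is counting word frequencies with count[word]++. A rough Python equivalent of that idiom (a sketch, not taken from the column itself):

# The classic AWK idiom  count[$i]++  translated to Python: an
# associative array subscripted by strings, auto-initialized to zero.
from collections import defaultdict

def word_counts(lines):
    counts = defaultdict(int)  # behaves like AWK's auto-initializing array
    for line in lines:
        for word in line.split():
            counts[word] += 1
    return counts

text = ["the quick brown fox", "the lazy dog", "the fox"]
for word, n in sorted(word_counts(text).items()):
    print(word, n)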

_id a217
authors Bhatt, Rajesh V., Fisher, Edward L. and Rasdorf, William J.
year 1985
title Information Retrieval Architectures For Expert System/DBMS Communication
source Industrial Engineering Fall Conference Proceedings. December, 1985. pp. 315-320. CADLINE has abstract only
summary The development of expert systems (ES) for manufacturing problems indicates a need to interact with potentially large amounts of data, much of which resides elsewhere in the ES user's organization. A large amount of information required for planning, design, and control operations can be made available through an existing database management system (DBMS). The need for an ES to access that data is critical. This paper presents two approaches to the development of ES- DBMS interfaces, both query-language based. One approach uses a procedural attachment to the ES language to obtain the required data via the DBMS query language, while the other one uses a separate interface program between the ES and the query language of the DBMS. The procedural attachment is able to acquire data from a DBMS at a faster rate than the interface program; however, the procedural attachment lacks knowledge of the DBMS schema. On the other hand, the interface program sacrifices speed but promotes flexibility, as it has the capability of selecting which DBMS to extract the required data from and allowing augmentation of schema knowledge outside of the ES. A disadvantage of the interface approach is the amount of time involved in data retrieval. The process of writing information to disk files is I/O intensive. This can be quite slow, particularly in PROLOG, the language used to implement the ES. Thus the use of such an interface is only suitable in applications such as design, where extremely fast I/O is not required
keywords design, engineering, expert systems, information, database, DBMS
series CADline
last changed 2003/06/02 10:24
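
The two architectures contrasted in this abstract can be caricatured in a few lines: in one, the expert system issues the DBMS query itself (fast, but the ES code must embed the schema); in the other, a separate interface program mediates (slower, but schema knowledge stays outside the ES). A hedged sketch, with an in-memory SQLite table standing in for the organization's DBMS and every name invented; the paper's systems were PROLOG-based:

import sqlite3

# Toy database standing in for the organization's DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (name TEXT, cost REAL)")
conn.execute("INSERT INTO parts VALUES ('gear', 12.5)")

def procedural_attachment(part):
    # Style 1: the ES issues the DBMS query directly. Fast, but the
    # schema ('parts', 'cost') is wired into the ES code.
    return conn.execute(
        "SELECT cost FROM parts WHERE name = ?", (part,)).fetchone()[0]

def interface_program(request):
    # Style 2: a separate interface maps an abstract request onto a
    # query. Slower (an extra hand-off), but the schema knowledge and
    # the choice of target DBMS live here, outside the ES.
    table, field, key = "parts", "cost", request["part"]
    row = conn.execute(
        f"SELECT {field} FROM {table} WHERE name = ?", (key,)).fetchone()
    return row[0]

print(procedural_attachment("gear"))        # 12.5
print(interface_program({"part": "gear"}))  # 12.5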

_id 66b3
authors Bollinger, Elizabeth
year 1985
title Integrating CADD into the AEC Process - A Case Study
source ACADIA Workshop ‘85 [ACADIA Conference Proceedings] Tempe (Arizona / USA) 2-3 November 1985, pp. 13-24
doi https://doi.org/10.52842/conf.acadia.1985.013
summary A research grant was awarded to the Graduate School of Architecture at the University of Houston by Nash Phillips/Copus, a large homebuilding corporation, to study the integration of computer aided design into the entire building process. A computer aided design system had been utilized by the firm's department of architecture and planning for several months. A team of University faculty and graduate students studied the organization of the firm with respect to functions that could be automated. Its determination was that by utilizing an integrated data base, with information to be extracted from the computer generated drawings, the entire process of bidding and building a structure could be made more efficient and cost effective. The research team developed a system in which cost estimating could be done directly from the drawings. As drawings were modified, new reports could be automatically generated. More design solutions could be studied from the impact of cost as well as aesthetics. Additionally, once plans were drawn, a program written by students would automatically generate elevations of wall panels to be sent to the construction department for its use, and which would also generate material reports. The team also studied techniques of computer modelling for usage by the architectural planning department in client presentations.
series ACADIA
email
last changed 2022/06/07 07:54

_id 8a90
authors Buchmann, Alejandro P. and Gerzso, Miguel J.
year 1985
title Handling Heterogeneously Formatted Data in an Object Oriented Database Environment
source NCGA - National Computer Graphics Association Conference Proceedings. 1985. vol. 3: pp. 645-655 : ill. includes bibliography
summary The paper discussed the problems associated with handling heterogeneously formatted data and the interfacing of the subsystems of a CAD system that intervene in the handling of these data: the database management system, the graphic display system and application programs. Object-oriented languages with message passing capabilities were offered as a feasible solution which was illustrated through examples in the language TM
keywords CAD, systems, languages, computer graphics, database
series CADline
last changed 2003/06/02 10:24

_id 78ca
authors Friedland, P. (Ed.)
year 1985
title Special Section on Architectures for Knowledge-Based Systems
source CACM (28), 9, September
summary A fundamental shift in the preferred approach to building applied artificial intelligence (AI) systems has taken place since the late 1960s. Previous work focused on the construction of general-purpose intelligent systems; the emphasis was on powerful inference methods that could function efficiently even when the available domain-specific knowledge was relatively meager. Today the emphasis is on the role of specific and detailed knowledge, rather than on reasoning methods.The first successful application of this method, which goes by the name of knowledge-based or expert-system research, was the DENDRAL program at Stanford, a long-term collaboration between chemists and computer scientists for automating the determination of molecular structure from empirical formulas and mass spectral data. The key idea is that knowledge is power, for experts, be they human or machine, are often those who know more facts and heuristics about a domain than lesser problem solvers. The task of building an expert system, therefore, is predominantly one of teaching" a system enough of these facts and heuristics to enable it to perform competently in a particular problem-solving context. Such a collection of facts and heuristics is commonly called a knowledge base. Knowledge-based systems are still dependent on inference methods that perform reasoning on the knowledge base, but experience has shown that simple inference methods like generate and test, backward-chaining, and forward-chaining are very effective in a wide variety of problem domains when they are coupled with powerful knowledge bases. If this methodology remains preeminent, then the task of constructing knowledge bases becomes the rate-limiting factor in expert-system development. Indeed, a major portion of the applied AI research in the last decade has been directed at developing techniques and tools for knowledge representation. We are now in the third generation of such efforts. The first generation was marked by the development of enhanced AI languages like Interlisp and PROLOG. The second generation saw the development of knowledge representation tools at AI research institutions; Stanford, for instance, produced EMYCIN, The Unit System, and MRS. The third generation is now producing fully supported commercial tools like KEE and S.1. Each generation has seen a substantial decrease in the amount of time needed to build significant expert systems. Ten years ago prototype systems commonly took on the order of two years to show proof of concept; today such systems are routinely built in a few months. Three basic methodologies-frames, rules, and logic-have emerged to support the complex task of storing human knowledge in an expert system. Each of the articles in this Special Section describes and illustrates one of these methodologies. "The Role of Frame-Based Representation in Reasoning," by Richard Fikes and Tom Kehler, describes an object-centered view of knowledge representation, whereby all knowldge is partitioned into discrete structures (frames) having individual properties (slots). Frames can be used to represent broad concepts, classes of objects, or individual instances or components of objects. They are joined together in an inheritance hierarchy that provides for the transmission of common properties among the frames without multiple specification of those properties. 
The authors use the KEE knowledge representation and manipulation tool to illustrate the characteristics of frame-based representation for a variety of domain examples. They also show how frame-based systems can be used to incorporate a range of inference methods common to both logic and rule-based systems.""Rule-Based Systems," by Frederick Hayes-Roth, chronicles the history and describes the implementation of production rules as a framework for knowledge representation. In essence, production rules use IF conditions THEN conclusions and IF conditions THEN actions structures to construct a knowledge base. The autor catalogs a wide range of applications for which this methodology has proved natural and (at least partially) successful for replicating intelligent behavior. The article also surveys some already-available computational tools for facilitating the construction of rule-based knowledge bases and discusses the inference methods (particularly backward- and forward-chaining) that are provided as part of these tools. The article concludes with a consideration of the future improvement and expansion of such tools.The third article, "Logic Programming, " by Michael Genesereth and Matthew Ginsberg, provides a tutorial introduction to the formal method of programming by description in the predicate calculus. Unlike traditional programming, which emphasizes how computations are to be performed, logic programming focuses on the what of objects and their behavior. The article illustrates the ease with which incremental additions can be made to a logic-oriented knowledge base, as well as the automatic facilities for inference (through theorem proving) and explanation that result from such formal descriptions. A practical example of diagnosis of digital device malfunctions is used to show how significantand complex problems can be represented in the formalism.A note to the reader who may infer that the AI community is being split into competing camps by these three methodologies: Although each provides advantages in certain specific domains (logic where the domain can be readily axiomatized and where complete causal models are available, rules where most of the knowledge can be conveniently expressed as experiential heuristics, and frames where complex structural descriptions are necessary to adequately describe the domain), the current view is one of synthesis rather than exclusivity. Both logic and rule-based systems commonly incorporate frame-like structures to facilitate the representation of large amounts of factual information, and frame-based systems like KEE allow both production rules and predicate calculus statements to be stored within and activated from frames to do inference. The next generation of knowledge representation tools may even help users to select appropriate methodologies for each particular class of knowledge, and then automatically integrate the various methodologies so selected into a consistent framework for knowledge. "
series journal paper
last changed 2003/04/23 15:14
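
The production-rule methodology surveyed in the Hayes-Roth article reduces to IF conditions THEN conclusion structures plus an inference loop. A minimal forward-chaining sketch (facts and rules invented here for illustration, not drawn from the article):

# Forward chaining over IF-conditions-THEN-conclusion production rules:
# rules fire, asserting new facts, until no rule adds anything new.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires, conclusion asserted
                changed = True
    return facts

print(forward_chain({"has_feathers", "can_fly"}, rules))
# {'has_feathers', 'can_fly', 'is_bird', 'nests_in_trees'}
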

_id 4f6f
authors Kalay, Yehuda E.
year 1985
title Knowledge-Based Computer-Aided Design to Assist Designers of Physical Artifacts
source 1985. [15] p. : ill. includes bibliography
summary The objectives of this project are to increase the productivity of physical designers, and to improve the quality of designed artifacts and environments. The means for achieving these objectives include the development, implementation and verification of a broad-based methodology to be used for building context-sensitive computer-aided design systems to facilitate the design and fabrication of physical artifacts. Such systems will extend computer aides for design over the earliest phases of the design process and thus facilitate design-capture in addition to the common design-communication utilities they currently provide. They will thus constitute intelligent design assistants that will relieve the designer from the necessity to deal with some design details, as well as the need to explicitly manage the consistency of the design database. The project employs principles developed by Artificial Intelligence methods that are used in non-deterministic problem solving processes that represent data and knowledge in distributed networks. Principles such as object-centered data factorization and message-based change propagation techniques are implemented in an existing architectural computer-aided design system and field-tested in a practicing Architectural/Engineering office
keywords CAD, knowledge base, design methods, design process, architecture
series CADline
email
last changed 2003/06/02 13:58

_id e1a8
authors Kellogg, Richard E.
year 1985
title CAD-Spreadsheet Linkages for Design and Analysis
source ACADIA Workshop ‘85 [ACADIA Conference Proceedings] Tempe (Arizona / USA) 2-3 November 1985, pp. 109-118
doi https://doi.org/10.52842/conf.acadia.1985.109
summary This paper reports on two systems under development which link a CAD system with a spreadsheet. The first extracts areas and R-values from a special AutoCAD drawing and processes the information in a Lotus 1-2-3 spreadsheet to obtain total heatloss for a building. The second is a prototype expert system which uses space labels from an AutoCAD "bubble-diagram" to print lists of design recommendations extracted from a Lotus 1-2-3 data-base. These methods emphasize drawing as the primary design activity, while providing immediate factual feedback about the design proposal.

series ACADIA
email
last changed 2022/06/07 07:52
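
The first linkage rests on a standard relation: total envelope heat loss is the sum over components of area divided by R-value, times the indoor-outdoor temperature difference, Q = sum(A/R) * dT. A sketch of the spreadsheet side of that calculation, with invented figures (the paper's actual Lotus 1-2-3 layout is not reproduced here):

# Given (name, area, R-value) rows extracted from the drawing,
# compute building heat loss as Q = sum(A/R) * dT.
envelope = [
    ("wall",    820.0, 19.0),  # name, area (ft^2), R-value (h*ft^2*F/Btu)
    ("roof",    600.0, 30.0),
    ("glazing", 120.0,  2.0),
]

def total_heat_loss(components, delta_t):
    """Btu/h through the envelope for an indoor-outdoor delta_t (F)."""
    ua = sum(area / r_value for _, area, r_value in components)
    return ua * delta_t

print(round(total_heat_loss(envelope, 40.0)))  # e.g. 70 F inside, 30 F out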

_id 0711
authors Kunnath, S.K., Reinhorn, A.M. and Abel, J.F.
year 1990
title A Computational Tool for Evaluation of Seismic Performance of RC Buildings
source February, 1990. [1] 15 p. : ill. graphs, tables. includes bibliography: p. 10-11
summary Recent events have demonstrated the damaging power of earthquakes on structural assemblages resulting in immense loss of life and property (Mexico City, 1985; Armenia, 1988; San Francisco, 1989). While the present state-of-the-art in inelastic seismic response analysis of structures is capable of estimating response quantities in terms of deformations, stresses, etc., it has not established a physical qualification of these end-results into measures of damage sustained by the structure wherein system vulnerability is ascertained in terms of serviceability, repairability, and/or collapse. An enhanced computational tool is presented in this paper for evaluation of reinforced concrete structures (such as buildings and bridges) subjected to seismic loading. The program performs a series of tasks to enable a complete evaluation of the structural system: (a) elastic collapse- mode analysis to determine the base shear capacity of the system; (b) step-by-step time history analysis using a macromodel approach in which the inelastic behavior of RC structural components is incorporated; (c) reduction of the response quantities to damage indices so that a physical interpretation of the response is possible. The program is built around two graphical interfaces: one for preprocessing of structural and loading data; and the other for visualization of structural damage following the seismic analysis. This program can serve as an invaluable tool in estimating the seismic performance of existing RC buildings and for designing new structures within acceptable levels of damage
keywords seismic, structures, applications, evaluation, civil engineering, CAD
series CADline
last changed 2003/06/02 14:41

_id 244d
authors Monedero, J., Casaus, A. and Coll, J.
year 1992
title From Barcelona. Chronicle and Provisional Evaluation of a New Course on Architectural Solid Modelling by Computerized Means
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 351-362
doi https://doi.org/10.52842/conf.ecaade.1992.351
summary The first step made at the ETSAB in the computer field goes back to 1965, when professors Margarit and Buxade acquired an IBM computer, an electromechanical machine which used perforated cards and which was used to produce an innovative method of structural calculation. This method was incorporated in the academic courses and, at that time, this repeated question "should students learn programming?" was readily answered: the exercises required some knowledge of Fortran and every student needed this knowledge to do the exercises. This method, well known in Europe at that time, also provided a service for professional practice and marked the beginning of what is now the CC (Centro de Calculo) of our school. In 1980 the School bought a PDP1134, a computer which had 256 Kb of RAM, two disks of 5 Mb and one of lO Mb, and a multiplexor of 8 lines. Some time later the general politics of the UPC changed their course and this was related to the purchase of a VAX which is still the base of the CC and carries most of the administrative burden of the school. 1985 has probably been the first year in which we can talk of a general policy of the school directed towards computers. A report has been made that year, which includes an inquest adressed to the six Departments of the School (Graphic Expression, Projects, Structures, Construction, Composition and Urbanism) and that contains interesting data. According to the report, there were four departments which used computers in their current courses, while the two others (Projects and Composition) did not use them at all. The main user was the Department of Structures while the incidence of the remaining three was rather sporadic. The kind of problems detected in this report are very typical: lack of resources for hardware and software and for maintenance of the few computers that the school had at that moment; a demand (posed by the students) greatly exceeding the supply (computers and teachers). The main problem appeared to be the lack of computer graphic devices and proper software.

series eCAADe
email
last changed 2022/06/07 07:58

_id 00ed
authors O'Leary, Dianne and Stewart, G.W.
year 1985
title Data-Flow Algorithms for Parallel Matrix Computations
source Communications of the ACM August, 1985. vol. 28: pp. 840-853. includes bibliography.
summary In this article the authors develop some algorithms and tools for solving matrix problems on parallel processing computers. Operations are synchronized through data-flow alone, which makes global synchronization unnecessary and enables the algorithms to be implemented on machines with very simple operating systems and communication protocols. As examples, an algorithm that forms the main modules for solving Liapounov matrix equations is presented. The authors compare this approach to wave front array processors and systolic arrays, and note its advantages in handling missized problems, in evaluating variations of algorithms or architectures, in moving algorithms from system to system, and in debugging parallel algorithms on sequential machines
keywords tools, algorithms, mathematics, parallel processing
series CADline
last changed 2003/06/02 13:58
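
The data-flow idea described in this abstract can be illustrated apart from the matrix algebra: each operation fires as soon as all of its operands have arrived, so no global synchronization is needed. A toy sketch (an invented stand-in, far simpler than the paper's Liapounov solver modules):

# Each node fires when all of its input tokens are present; results
# flow as tokens to downstream nodes, with no global scheduler.
class Node:
    def __init__(self, n_inputs, op, downstream=None):
        self.n_inputs = n_inputs
        self.op = op
        self.downstream = downstream or []  # list of (node, slot) pairs
        self.waiting = {}                   # tokens received so far

    def receive(self, slot, value):
        self.waiting[slot] = value
        if len(self.waiting) == self.n_inputs:  # all operands present: fire
            result = self.op(*(self.waiting[i] for i in range(self.n_inputs)))
            for node, slot in self.downstream:
                node.receive(slot, result)      # token flows onward
            return result

printer = Node(1, lambda x: print("result:", x))
adder = Node(2, lambda a, b: a + b, downstream=[(printer, 0)])
adder.receive(0, 2.0)  # nothing fires yet: one operand still missing
adder.receive(1, 3.0)  # adder fires, token reaches printer -> result: 5.0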

_id ee4b
authors Ozel, Filiz
year 1985
title Using CAD in Fire Safety Research
source ACADIA Workshop ‘85 [ACADIA Conference Proceedings] Tempe (Arizona / USA) 2-3 November 1985, pp. 142-154
doi https://doi.org/10.52842/conf.acadia.1985.142
summary While architecture offices are increasingly using CADD systems for drafting purposes, architectural schools are pursuing projects that use the CAD data base for new applications in the analysis and evaluation of buildings. This paper summarizes two studies done at the University of Michigan, Architecture Research laboratory, where the CAD system was used to develop a fire safety code evaluation program, and an emergency egress behavior simulation.

The former takes the National Fire Protection Association (NFPA) Life Safety Code 101 as a basis, and generates the code compliance requirements of a given project. The other study regards people as information-processing beings and simulates their wayfinding behavior under emergency conditions. Both of these studies utilize the graphic characteristics of the CAD system, producing color displays on the CRT screen, and also outputting information in tabular form which refers to the display on the screen. Both of them also have plotting options.

series ACADIA
email
last changed 2022/06/07 08:00
