CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 3144

_id 91c4
authors Checkland, P.
year 1981
title Systems Thinking, Systems Practice
source John Wiley & Sons, Chichester
summary "Whether by design, accident or merely synchronicity, Checkland appears to have developed a habit of writing seminal publications near the start of each decade which establish the basis and framework for systems methodology research for that decade." (Hamish Rennie, Journal of the Operational Research Society, 1992) Thirty years ago Peter Checkland set out to test whether the Systems Engineering (SE) approach, highly successful in technical problems, could be used by managers coping with the unfolding complexities of organizational life. The straightforward transfer of SE to the broader situations of management was not possible, but by insisting on a combination of systems thinking strongly linked to real-world practice Checkland and his collaborators developed an alternative approach, Soft Systems Methodology (SSM), which enables managers of all kinds and at any level to deal with the subtleties and confusions of the situations they face. This work established the now accepted distinction between hard systems thinking, in which parts of the world are taken to be systems which can be engineered, and soft systems thinking, in which the focus is on making sure the process of inquiry into real-world complexity is itself a system for learning. Systems Thinking, Systems Practice (1981) and Soft Systems Methodology in Action (1990), together with an earlier paper, Towards a Systems-based Methodology for Real-World Problem Solving (1972), have long been recognized as classics in the field. Now Peter Checkland has looked back over the three decades of SSM development, brought the account of it up to date, and reflected on the whole evolutionary process which has produced a mature SSM. SSM: A 30-Year Retrospective, here included with Systems Thinking, Systems Practice, closes a chapter on what is undoubtedly the most significant single research programme on the use of systems ideas in problem solving.
Now retired from full-time university work, Peter Checkland continues his research as a Leverhulme Emeritus Fellow.
series other
last changed 2003/04/23 15:14

_id ddss9401
id ddss9401
authors Akin, Omer
year 1994
title Psychology of Early Design in Architecture
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary Lately there has been a good deal of emphasis on the early stages of the design process, particularly by developers of computer aids and quantitative design models for both evaluation and generation of designs in a variety of domains. Yet, there is little understanding of the early design process. While the early design process as manifested by human designers need not be the sole basis of the description of this phase, it certainly represents an important kernel of knowledge, especially for those who are interested in developing models, systems or merely interfaces for such systems. This paper focuses on the characterization of the psychology of the early design phase in architecture. It is described in terms of the general design strategies and problem-solving tactics used, and is contrasted against some of the process characteristics that
series DDSS
email
last changed 2003/08/07 16:36

_id ascaad2016_003
id ascaad2016_003
authors Al-Jokhadar, Amer; Wassim Jabi
year 2016
title Humanising the Computational Design Process - Integrating Parametric Models with Qualitative Dimensions
source Parametricism Vs. Materialism: Evolution of Digital Technologies for Development [8th ASCAAD Conference Proceedings ISBN 978-0-9955691-0-2] London (United Kingdom) 7-8 November 2016, pp. 9-18
summary Parametric design is a computational-based approach used for understanding the logic and the language embedded in the design process algorithmically and mathematically. Currently, the main focus of computational models, such as shape grammar and space syntax, is primarily limited to formal and spatial requirements of the design problem. Yet, qualitative factors, such as social, cultural and contextual aspects, are also important dimensions in solving architectural design problems. In this paper, an overview of the advantages and implications of the current methods is presented. It also puts forward a ‘structured analytical system’ that combines the formal and geometric properties of the design, with descriptions that reflect the spatial, social and environmental patterns. This syntactic-discursive model is applied for encoding vernacular courtyard houses in the hot-arid regions of the Middle East and North Africa, and utilising the potentials of these cases in reflecting the lifestyle and the cultural values of the society, such as privacy, human-spatial behaviour, the social life inside the house, the hierarchy of spaces, the segregation and seclusion of family members from visitors and the orientation of spaces. The output of this analytical phase prepares the groundwork for the development of socio-spatial grammar for contemporary tall residential buildings that gives the designer the ability to reveal logical spatial topologies based on socio-environmental restrictions, and to produce alternatives that have an identity while also respecting the context, place and needs of users.
series ASCAAD
email
last changed 2017/05/25 13:13

_id eb5f
authors Al-Sallal, Khaled A. and Degelman, Larry O.
year 1994
title A Hypermedia Model for Supporting Energy Design in Buildings
doi https://doi.org/10.52842/conf.acadia.1994.039
source Reconnecting [ACADIA Conference Proceedings / ISBN 1-880250-03-9] Washington University (Saint Louis / USA) 1994, pp. 39-49
summary Several studies have discussed the limitations of the available CAAD tools and have proposed solutions [Brown and Novitski 1987, Brown 1990, Degelman and Kim 1988, Schuman et al 1988]. The lack of integration between the different tasks that these programs address and the design process is a major problem. Schuman et al [1988] argued that in architectural design many issues must be considered simultaneously before the synthesis of a final product can take place. Studies by Brown and Novitski [1987] and Brown [1990] discussed the difficulties involved with integrating technical considerations in the creative architectural process. One aspect of the problem is the neglect of technical factors during the initial phase of the design that, as the authors argued, results from changing the work environment and the laborious nature of the design process. Many of the current programs require the user to input a great deal of the numerical values needed for the energy analysis. Although there are some programs that attempt to assist the user by setting default values, these programs distract the user with their extensive arrays of data. The appropriate design tool is the one that helps the user to easily view the principal components of the building design and specify their behaviors and interactions. Data abstraction and information parsimony are the key concepts in developing a successful design tool. Three different approaches for developing an appropriate CAAD tool were found in the literature. Although there are several similarities among them, each is unique in solving certain aspects of the problem. Brown and Novitski [1987] emphasize the learning factor of the tool as well as its highly graphical user interface. Degelman and Kim [1988] emphasize knowledge acquisition and the provision of simulation modules.
The Windows and Daylighting Group of Lawrence Berkeley Laboratory (LBL) emphasizes the dynamic structuring of information, the intelligent linking of data, the integrity of the different issues of design and the design process, and the extensive use of images [Schuman et al 1988]; these attributes incidentally define the word hypermedia. The LBL model, which uses hypermedia, seems to be the more promising direction for this type of research. However, there is still a need to establish a new model that integrates all aspects of the problem. The areas in which the present research departs from the LBL model can be listed as follows: it acknowledges the necessity of regarding the user as the center of the CAAD tool design, it develops a model that is based on one of the high level theories of human-computer interaction, and it develops a prototype tool that conforms to the model.

series ACADIA
email
last changed 2022/06/07 07:54

_id caadria2003_c2-4
id caadria2003_c2-4
authors Al-Sallal, Khaled A.
year 2003
title Integrating Energy Design Into CAAD Tools: Theoretical Limits and Potentials
doi https://doi.org/10.52842/conf.caadria.2003.323
source CAADRIA 2003 [Proceedings of the 8th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 974-9584-13-9] Bangkok Thailand 18-20 October 2003, pp. 323-340
summary The study is part of a research effort that aims to establish theoretical grounds essential for the development of user-efficient design tools for energy-conscious architectural design, based on theories in human factors of intelligent interfaces, problem solving, and architectural design. It starts by reviewing the shortcomings of the current energy design tools, from both architectural design and human factors points of view. It discusses the issues of energy integration with design from three different points of view: architectural, problem-solving, and human factors. It evaluates theoretically the potentials and limitations of the current approaches and technologies in artificial intelligence toward achieving the notion of "integrating energy design knowledge into the design process" in practice and education, based on research in the areas of problem solving, human factors, and usability. The study considers the user interface model that is based on the cognitive approach and can be implemented by the hierarchical structure and the object-oriented model to be a promising direction for future development, because this model regards the user as the center of the design tool. However, there are still limitations that require extensive research in both theoretical and implementation directions. The study concludes by discussing the important points for future research.
series CAADRIA
email
last changed 2022/06/07 07:54

_id 8629
authors Barzilay, Amos
year 1980
title Human Problem Solving on Master Mind
source Carnegie Mellon University
summary The purpose of this work is to analyze the task of playing Master Mind and to examine subjects' behavior in solving that task. The methods and ideas used in this work are the same as those found in the references for other tasks. The author wants to show that those ideas and methods can be used for this specific task as well; in other words, that subjects behave in such a domain as an information processing system. [includes bibliography]
keywords Psychology, Problem Solving
series CADline
last changed 1999/02/15 15:10

_id ascaad2021_074
id ascaad2021_074
authors Belkaid, Alia; Abdelkader Ben Saci, Ines Hassoumi
year 2021
title Human-Computer Interaction for Urban Rules Optimization
source Abdelmohsen, S, El-Khouly, T, Mallasi, Z and Bennadji, A (eds.), Architecture in the Age of Disruptive Technologies: Transformations and Challenges [9th ASCAAD Conference Proceedings ISBN 978-1-907349-20-1] Cairo (Egypt) [Virtual Conference] 2-4 March 2021, pp. 603-613
summary Faced with the complexity of manual and intuitive management of urban rules in architectural and urban design, this paper offers a collaborative and digital human-computer approach. It aims to produce an Authorized Bounding Volume (ABV) that uses the best target values of the urban rules. This is a distributed constraint optimization problem. The ABV Generative Model uses multi-agent systems. It offers an intelligent system of urban morphology able to transform the urban rules, on a given plot, into a morphological delimitation permitted by the planning regulations of a city. The overall functioning of this system is based on two approaches: construction and supervision. The first is conducted entirely by the machine; the second requires the intervention of the designer to collaborate with the machine. The morphological translation of urban rules is sometimes contradictory and may require additional external relevance to urban rules. Designer arbitration assists the artificial intelligence in accomplishing this task and solving the problem. The human-computer collaboration is achieved at the appropriate time and relies on the degree of constraint satisfaction with a fitness function. The resolution of the distributed constraint optimization problem is not limited to an automatic generation of urban rules, but involves also the production of multiple optimal ABVs conditioned both by urban constraints and by relevance, chosen by the designer.
series ASCAAD
email
last changed 2021/08/09 13:13

_id c088
authors Biermann, Alan W., Rodman, Robert D. and Rubin, David C. (et al)
year 1985
title Natural Language with Discrete Speech as a Mode for Human- to-Machine Communication
source Communications of the ACM June, 1985. vol. 28: pp. 628-636 : ill. includes bibliography.
summary A voice interactive natural language system, which allows users to solve problems with spoken English commands, has been constructed. The system utilizes a commercially available discrete speech recognizer which requires that each word be followed by approximately a 300 millisecond pause. In a test of the system, subjects were able to learn its use after about two hours of training. The system correctly processed about 77 percent of the over 6000 input sentences spoken in problem-solving sessions. Subjects spoke at the rate of about three sentences per minute and were able to effectively use the system to complete the given tasks. Subjects found the system relatively easy to learn and use, and gave a generally positive report of their experience
keywords user interface, natural languages, speech recognition, AI
series CADline
last changed 2003/06/02 13:58

_id 4202
authors Brown, Michael E. and Gallimore, Jennie J.
year 1995
title Visualization of Three-Dimensional Structure During Computer-Aided Design
source International Journal of Human-Computer Interaction 1995 v.7 n.1 pp. 37-56
summary The visual image presented to an engineer using a computer-aided design (CAD) system influences design activities such as decision making, problem solving, cognizance of complex relationships, and error correction. Because of the three-dimensional (3-D) nature of the object being created, an important attribute of the CAD visual interface concerns the various methods of presenting depth on the display's two-dimensional (2-D) surface. The objective of this research is to examine the effects of stereopsis on subjects' ability to (a) accurately transfer to, and retrieve from, long-term memory spatial information about 3-D objects; and (b) visualize spatial characteristics in a quick and direct manner. Subjects were instructed to memorize the shape of a 3-D object presented on a stereoscopic CRT during a study period. Following the study period, a series of static trial stimuli were shown. Each trial stimulus was rotated (relative to the original) about the vertical axis in one of six 36° increments between 0° and 180°. In each trial, the subject's task was to determine, as quickly and as accurately as possible, whether the trial object was the same shape as the memorized object or its mirrored image. One of the two cases was always true. To assess the relative merits associated with disparity and interposition, the two depth cues were manipulated in a within-subject manner during the study period and during the trials that followed. Subject response time and error rate were evaluated. Improved performance due to hidden surface is the most convincing experimental finding. Interposition is a powerful cue to object structure and should not be limited to late stages of design. The study also found a significant, albeit limited, effect of stereopsis. Under specific study object conditions, adding disparity to monocular trial objects significantly decreased response time. 
Response latency was also decreased by adding disparity information to stimuli in the study session.
series journal paper
last changed 2003/05/15 21:45

_id 235d
authors Catalano, Fernando
year 1990
title The Computerized Design Firm
source The Electronic Design Studio: Architectural Knowledge and Media in the Computer Era [CAAD Futures ‘89 Conference Proceedings / ISBN 0-262-13254-0] Cambridge (Massachusetts / USA), 1989, pp. 317-332
summary This paper is not just about the future of computerized design practice. It is about what to do today in contemplation of tomorrow: the issues of computer-centered practice and the courses of action open to us can be discerned by the careful observer. The realities of computerized design practice are different from the issues on which design education still fixes its attention. To educators, the present paper recommends further clinical research on computerized design firms and suggests that case studies on the matter be developed and utilized as teaching material. Research conducted by the author of this paper indicates that a new form of design firm is emerging, the computerized design firm, totally supported and augmented by the new information technology. The present paper proceeds by introducing an abridged case study of an actual, totally electronic, computerized design practice. Then, the paper concentrates on modelling the computerized design firm as an intelligent system, indicating non-trivial changes in its structure and strategy brought about by the introduction of the new information technology into its operations; among other considerations, different strategies and diverse conceptions of management and workgroup roles are highlighted. In particular, this paper points out that these structural and strategic changes reflect back on the technology of information, with pressures to redirect the present emphasis on the individual designer, working alone in an isolated workstation, to a more realistic conception of the designer as a member of an electronic workgroup. Finally, the paper underlines that this non-trivial conception demands that new hardware and software be developed to meet the needs of the electronic workgroup, which raises issues of human-machine interface.
Further, it raises the key issues of how to represent and expose knowledge to users in intelligent information-sharing systems, designed to include not only good user interfaces for supporting the problem-solving activities of individuals, but also good organizational interfaces for supporting the problem-solving activities of groups. The paper closes by charting promising directions for further research and with a few remarks about the computerized design firm's (near) future.
series CAAD Futures
last changed 1999/04/03 17:58

_id 00bc
authors Chen, Chen-Cheng
year 1991
title Analogical and inductive reasoning in architectural design computation
source Swiss Federal Institute of Technology, ETH Zurich
summary Computer-aided architectural design technology is now a crucial tool of modern architecture, from the viewpoint of higher productivity and better products. As technologies advance, the amount of information and knowledge that designers can apply to a project is constantly increasing. This requires development of more advanced knowledge acquisition technology to achieve higher functionality, flexibility, and efficient performance of the knowledge-based design systems in architecture. Human designers do not solve design problems from scratch, they utilize previous problem solving episodes for similar design problems as a basis for developmental decision making. This observation leads to the starting point of this research: First, we can utilize past experience to solve a new problem by detecting the similarities between the past problem and the new problem. Second, we can identify constraints and general rules implied by those similarities and the similar parts of similar situations. That is, by applying analogical and inductive reasoning we can advance the problem solving process. The main objective of this research is to establish the theory that (1) design process can be viewed as a learning process, (2) design innovation involves analogical and inductive reasoning, and (3) learning from a designer's previous design cases is necessary for the development of the next generation in a knowledge-based design system. This thesis draws upon results from several disciplines, including knowledge representation and machine learning in artificial intelligence, and knowledge acquisition in knowledge engineering, to investigate a potential design environment for future developments in computer-aided architectural design. This thesis contains three parts which correspond to the different steps of this research. Part I, discusses three different ways - problem solving, learning and creativity - of generating new thoughts based on old ones. 
In Part II, the problem statement of the thesis is made and a conceptual model of analogical and inductive reasoning in design is proposed. In Part III, three different methods of building design systems for solving an architectural design problem are compared: rule-based, example-based, and case-based. Finally, conclusions are made based on the current implementation of the work, and possible future extensions of this research are described. It reveals new approaches for knowledge acquisition, machine learning, and knowledge-based design systems in architecture.
series thesis:PhD
email
last changed 2003/05/10 05:42

_id 4a30
authors Chiu, Mao-Lin
year 1997
title Analogical Reasoning in Architectural Design: Comparison of Human Designers and Computers in Case Adaptation
doi https://doi.org/10.52842/conf.caadria.1997.205
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 205-215
summary Design cases were considered as the design solution or condensed knowledge of design experience. In the analogical reasoning process, case adaptation is the fundamental task for solving the problem. This paper is aimed to study the difference between human designers and computers in case adaptation. Two design experiments are undertaken for examining how designers apply dimensional and topological adaptation, exploring the difference of case adaptation by novice and experienced designers, and examining the difference between human judgement in case adaptation and the evaluation mechanism by providing similarity assessment. In conclusion, this study provides the comparative analysis from the above observation and implications on the development of case-based reasoning systems for designers.
series CAADRIA
email
last changed 2022/06/07 07:56

_id ga0007
id ga0007
authors Coates, Paul and Miranda, Pablo
year 2000
title Swarm modelling. The use of Swarm Intelligence to generate architectural form
source International Conference on Generative Art
summary '...neither the human purposes nor the architect's method are fully known in advance. Consequently, if this interpretation of the architectural problem situation is accepted, any problem-solving technique that relies on explicit problem definition, on distinct goal orientation, on data collection, or even on non-adaptive algorithms will distort the design process and the human purposes involved.' Stanford Anderson, "Problem-Solving and Problem-Worrying". The work concentrates on the use of the computer as a perceptive device, a sort of virtual hand or "sense", capable of probing an environment. From a set of data that constitutes the environment (in this case the geometrical representation of the form of the site), this perceptive device is capable of differentiating and generating distinct patterns in its behavior, patterns that an observer has to interpret as meaningful information. As Nicholas Negroponte explains, referring to the project GROPE in his Architecture Machine: 'In contrast to describing criteria and asking the machine to generate physical form, this exercise focuses on generating criteria from physical form.' 'The onlooking human or architecture machine observes what is "interesting" by observing GROPE's behavior rather than by receiving the testimony that this or that is "interesting".' The swarm as a learning device. In this case the work implements a swarm as a perceptive device. Swarms constitute a paradigm of parallel systems: a multitude of simple individuals aggregate in colonies or groups, giving rise to collaborative behaviors. The individual sensors can't learn, but the swarm as a system can evolve into more stable states. These states generate distinct patterns, a result of the inner mechanics of the swarm and of the particularities of the environment. The dynamics of the system allow it to learn and adapt to the environment; information is stored in the speed of the sensors (the more collisions, the slower), which acts as a memory.
The speed increases in the absence of collisions, providing the system with the ability to forget, indispensable for the differentiation of information and the emergence of patterns. The swarm is both a perceptive and a spatial phenomenon. To be able to interact with an environment, an observer requires some sort of embodiment. In the case of the swarm, its algorithms for moving, collision detection, and swarm mechanics constitute its perceptive body. The way this body interacts with its environment in the process of learning and differentiation of spatial patterns constitutes also a spatial phenomenon. The enactive space of the swarm. Enaction, a concept developed by Maturana and Varela for the description of perception in biological terms, is the understanding of perception as the result of the structural coupling of an environment and an observer. Enaction does not address cognition in the currently conventional sense as an internal manipulation of extrinsic 'information' or 'signals', but as the relation between environment and observer and the blurring of their identities. Thus, the space generated by the swarm is an enactive space, a space without explicit description, an invention of the swarm-environment structural coupling. If we consider a gestalt as 'some property, such as roundness, common to a set of sense data and appreciated by organisms or artefacts' (Gordon Pask), the swarm is also able to differentiate space 'gestalts', or spaces with certain characteristics, such as 'narrowness' or 'fluidness'. Implicit surfaces and the wrapping algorithm. One of the many ways of describing this space is through the use of implicit surfaces. An implicit surface may be imagined as an infinitesimally thin band of some measurable quantity such as color, density, temperature, pressure, etc. Thus, an implicit surface consists of those points in three-space that satisfy some particular requirement.
This allows us to wrap the regions of space where a difference of quantity has been produced, enclosing the spaces in which some particular events in the history of the swarm have occurred. The wrapping method allows complex topologies, such as manifoldness in one continuous surface. It is possible to transform the information generated by the swarm into a landscape that is the result of the particular reading of the site by the swarm. Working in real time. Because of the complex nature of the machine, the only possible way to evaluate the resulting behavior is in real time. For this purpose specific applications had to be developed, using OpenGL for the Windows programming environment. The package consisted of translators from DXF format to a specific format used by these applications and vice versa, the Swarm "engine", a simulated parallel environment, and the Wrapping programs to generate the implicit surfaces. Different versions of each have been produced at different stages of the development of the work.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id a718
authors Cuomo, Donna L. and Sharit, Joseph
year 1989
title A Study of Human Performance in Computer-Aided Architectural Design
source International Journal of Human-Computer Interaction. 1989. vol. 1: pp. 69-107 : ill. includes bibliography
summary This paper describes the development and application of a cognitively-based performance methodology for assessing human performance on computer-aided architectural design (CAAD) tasks. Two CAAD tasks were employed that were hypothesized to be different in terms of the underlying cognitive processes required for these tasks to be performed. Methods of manipulating task complexity within each of these tasks were then developed. Six architectural graduate students were trained on a commercially available CAAD system. Each student performed the two experimental design tasks at one of three levels of complexity. The data collected included protocols, video recordings of the computer screen, and an interactive script (a time-stamped record of every command input and the computer's textual response). Performance measures and methods of analysis were developed which reflected the cognitive processes used by the human during design (including problem-solving techniques, planning times, heuristics employed, etc.) and the role of the computer as a design aid. The analysis techniques used included graphical techniques, Markov process analysis, protocol analysis, and error classification and analysis. The results of the study indicated that some measures more directly reflected human design activity while others more directly reflected the efficiency of interaction between the computer and the human. The discussion of the results focuses primarily on the usefulness of the various measures comprising the performance methodology, the usefulness of the tasks employed including methods for manipulating task complexity, and the effectiveness of this system as well as CAAD systems in general for aiding human design processes
keywords protocol analysis, problem solving, planning, CAD, design process, performance, architecture
series CADline
last changed 2003/06/02 13:58

_id 56be
authors Dillon, Andrew and Marian, Sweeney
year 1988
title The Application of Cognitive Psychology to CAD Input/Output
source Proceedings of the HCI'88 Conference on People and Computers IV 1988 p.477-488
summary The design of usable human-computer interfaces is one of the primary goals of the HCI specialist. To date, however, interest has focussed mainly on office or text based systems such as word processors or databases. Computer aided design (CAD) represents a major challenge to the human factors community to provide suitable input and expertise in an area where the user's goals and requirements are cognitively distinct from more typical HCI. The present paper is based on psychological investigations of the engineering domain, involving an experimental comparison of designers using CAD and the more traditional drawing board. By employing protocol analytic techniques it is possible to shed light on the complex problem-solving nature of design and to demonstrate the crucial role of human factors in the development of interfaces which facilitate the designers in their task. A model of the cognition of design is proposed which indicates that available knowledge and guidelines alone are not sufficient to aid CAD developers, and that the distinct nature of the engineering designer's task merits specific attention.
keywords Cognitive Psychology; Interface Design; Protocol Analysis
series other
last changed 2002/07/07 16:01

_id ab23
authors Dromey, Geoff R.
year 1983
title Before Programming : On Teaching Introductory Computing
source 1983? 10 p. includes bibliography
summary In comparison with most other human intellectual activities, computing is in its infancy despite the progress we seem to have made in such a short time. Consequently, there has been insufficient time for the evolution of 'best ways' to transmit computing concepts and skills. It is therefore prudent to look to more mature disciplines for some guidelines on effective ways to introduce computing to beginners. In this respect the discipline of teaching people to read and write in a natural language is highly relevant. A fundamental characteristic of this latter discipline is that a substantial amount of time is devoted to teaching people to read long before they are asked to write stories, essays, etc. In teaching computing, people seem to have overlooked or neglected what corresponds to the reading stage in the process of learning to read and write. In the discussion which follows, the author looks at ways of economically giving students the 'computer-reading experience' and preparing them for the more difficult tasks of algorithm design and computer problem-solving.
keywords programming, education
series CADline
last changed 2003/06/02 13:58

_id 270d
authors Elezkurtaj, Tomor and Franck, Georg
year 2001
title Evolutionary Algorithms in Urban Planning
source CORP 2001, Vienna, pp. 269-272
summary The functions supported by commercial CAD software are drawing, construction and presentation. Until now, no programs supporting the creative part of architectural and urban problem solving are on the market. The grand hopes of symbolic AI of programming creative architectural and urban design have been disappointed. In the meantime, methods called New AI are available. Among these methods, evolutionary algorithms are particularly promising for solving design problems. The paper presents an approach to town planning and architectural problem solving that combines an evolutionary strategy (ES), a genetic algorithm (GA) and a Particle System. The problem that remains incapable of being solved algorithmically has to do with the fact that in architecture and urbanism form as well as function count. Because function relates to comfort, ease of use, and aesthetics as well, it is hopeless to fully specify the fitness function of architecture. The approach presented circumvents a full specification through dividing labor between the software and its user. The fitness function of town plans is defined in terms only of the proportions of the shapes, areas and buildings to be accommodated and topological relations between them. The rest is left to the human designer, who interactively intervenes in the evolution game as displayed on the screen.
series other
email
more www.corp.at
last changed 2002/12/19 12:17
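The division of labor the abstract describes, where the machine optimizes a partial fitness function over proportions while the designer judges the rest, can be sketched with a toy genetic algorithm. Everything here (the target area proportions, the averaging crossover, the mutation range) is an illustrative assumption, not the authors' implementation:

```python
import random

def fitness(layout, targets):
    """Score a candidate layout (list of room areas) against target
    area proportions; higher (closer to 0) is better."""
    total = sum(layout)
    return -sum(abs(a / total - t) for a, t in zip(layout, targets))

def evolve(targets, pop_size=30, generations=200):
    """Evolve area layouts toward the target proportions."""
    pop = [[random.uniform(1, 10) for _ in targets] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, targets), reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # averaging crossover
            i = random.randrange(len(child))
            child[i] *= random.uniform(0.8, 1.2)         # small mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, targets))

random.seed(0)  # reproducible run
best = evolve([0.5, 0.3, 0.2])  # hypothetical target proportions
```

In the interactive setting the abstract describes, the designer would inspect the evolving layouts on screen and steer or veto candidates, supplying the part of the fitness function that cannot be formalized.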

_id 5007
authors Elezkurtaj, Tomor and Franck, Georg
year 1999
title Genetic Algorithms in Support of Creative Architectural Design
doi https://doi.org/10.52842/conf.ecaade.1999.645
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 645-651
summary The functions supported by commercial CAAD software are drawing, construction and presentation. Up to now, few programs supporting the creative part of architectural problem solving have become available. The grand hopes of symbolic AI to program creative architectural design have been disappointing. In the meantime, methods referred to as New AI have become available. Such methods include genetic algorithms (GA). But GA, though successfully applied in other fields of engineering, still waits to be applied broadly in architectural design. A main problem lies in defining function in architecture. It is much harder to define the function of a building than that of a machine. Without specifying the function of the artifact, the fitness function of the design variants participating in the survival game of artificial evolution remains undetermined. It is impossible to fully specify the fitness function of architecture. The approach presented is one of circumventing a full specification through dividing labor between the GA software and its user. The fitness function of architectural ground plans is typically defined in terms only of the proportions of the rooms to be accommodated and certain topological relations between them. The rest is left to the human designer, who interactively intervenes in the evolution game as displayed on the screen.
keywords Genetic Algorithms, Creative Architectural Design
series eCAADe
email
last changed 2022/06/07 07:55

_id 4086
authors Ervin, Stephen M.
year 1988
title Computer-Aided Diagramming and the `Generator-Test' Cycle
source 1988. 22 p.: ill. includes bibliography
summary Simon's `generator-test' model is both a metaphor and a literal prescription for the organization of computer systems for designing. In most approaches to computer-aided design, one side of the cycle - generating or testing - is reserved to the human designer, the other side delegated to the computer. A more comfortable and comprehensive approach is to support switching these roles between designer and computer. This approach underlies a prototype system for computer-aided diagramming, the CBD (Constraint-Based Diagrammer). Diagramming is an important design activity, especially in preliminary design, as diagrams play a pivotal role between graphic and symbolic knowledge. Diagrams as a medium of knowledge representation and as means of inference have an ambivalent status in the generator-test model; they may serve either purpose. Examination of CBD sheds some light on Simon's model and on the requirements for sharing generating and testing with computational design tools
keywords problem solving, CAD, constraints, evaluation, synthesis
series CADline
last changed 2003/06/02 13:58

_id 78ca
authors Friedland, P. (Ed.)
year 1985
title Special Section on Architectures for Knowledge-Based Systems
source CACM (28), 9, September
summary A fundamental shift in the preferred approach to building applied artificial intelligence (AI) systems has taken place since the late 1960s. Previous work focused on the construction of general-purpose intelligent systems; the emphasis was on powerful inference methods that could function efficiently even when the available domain-specific knowledge was relatively meager. Today the emphasis is on the role of specific and detailed knowledge, rather than on reasoning methods. The first successful application of this method, which goes by the name of knowledge-based or expert-system research, was the DENDRAL program at Stanford, a long-term collaboration between chemists and computer scientists for automating the determination of molecular structure from empirical formulas and mass spectral data. The key idea is that knowledge is power, for experts, be they human or machine, are often those who know more facts and heuristics about a domain than lesser problem solvers. The task of building an expert system, therefore, is predominantly one of "teaching" a system enough of these facts and heuristics to enable it to perform competently in a particular problem-solving context. Such a collection of facts and heuristics is commonly called a knowledge base. Knowledge-based systems are still dependent on inference methods that perform reasoning on the knowledge base, but experience has shown that simple inference methods like generate and test, backward-chaining, and forward-chaining are very effective in a wide variety of problem domains when they are coupled with powerful knowledge bases. If this methodology remains preeminent, then the task of constructing knowledge bases becomes the rate-limiting factor in expert-system development. Indeed, a major portion of the applied AI research in the last decade has been directed at developing techniques and tools for knowledge representation. We are now in the third generation of such efforts.
The first generation was marked by the development of enhanced AI languages like Interlisp and PROLOG. The second generation saw the development of knowledge representation tools at AI research institutions; Stanford, for instance, produced EMYCIN, The Unit System, and MRS. The third generation is now producing fully supported commercial tools like KEE and S.1. Each generation has seen a substantial decrease in the amount of time needed to build significant expert systems. Ten years ago prototype systems commonly took on the order of two years to show proof of concept; today such systems are routinely built in a few months. Three basic methodologies - frames, rules, and logic - have emerged to support the complex task of storing human knowledge in an expert system. Each of the articles in this Special Section describes and illustrates one of these methodologies. "The Role of Frame-Based Representation in Reasoning," by Richard Fikes and Tom Kehler, describes an object-centered view of knowledge representation, whereby all knowledge is partitioned into discrete structures (frames) having individual properties (slots). Frames can be used to represent broad concepts, classes of objects, or individual instances or components of objects. They are joined together in an inheritance hierarchy that provides for the transmission of common properties among the frames without multiple specification of those properties. The authors use the KEE knowledge representation and manipulation tool to illustrate the characteristics of frame-based representation for a variety of domain examples. They also show how frame-based systems can be used to incorporate a range of inference methods common to both logic and rule-based systems. "Rule-Based Systems," by Frederick Hayes-Roth, chronicles the history and describes the implementation of production rules as a framework for knowledge representation.
In essence, production rules use IF conditions THEN conclusions and IF conditions THEN actions structures to construct a knowledge base. The author catalogs a wide range of applications for which this methodology has proved natural and (at least partially) successful for replicating intelligent behavior. The article also surveys some already-available computational tools for facilitating the construction of rule-based knowledge bases and discusses the inference methods (particularly backward- and forward-chaining) that are provided as part of these tools. The article concludes with a consideration of the future improvement and expansion of such tools. The third article, "Logic Programming," by Michael Genesereth and Matthew Ginsberg, provides a tutorial introduction to the formal method of programming by description in the predicate calculus. Unlike traditional programming, which emphasizes how computations are to be performed, logic programming focuses on the what of objects and their behavior. The article illustrates the ease with which incremental additions can be made to a logic-oriented knowledge base, as well as the automatic facilities for inference (through theorem proving) and explanation that result from such formal descriptions. A practical example of diagnosis of digital device malfunctions is used to show how significant and complex problems can be represented in the formalism. A note to the reader who may infer that the AI community is being split into competing camps by these three methodologies: Although each provides advantages in certain specific domains (logic where the domain can be readily axiomatized and where complete causal models are available, rules where most of the knowledge can be conveniently expressed as experiential heuristics, and frames where complex structural descriptions are necessary to adequately describe the domain), the current view is one of synthesis rather than exclusivity.
Both logic and rule-based systems commonly incorporate frame-like structures to facilitate the representation of large amounts of factual information, and frame-based systems like KEE allow both production rules and predicate calculus statements to be stored within and activated from frames to do inference. The next generation of knowledge representation tools may even help users to select appropriate methodologies for each particular class of knowledge, and then automatically integrate the various methodologies so selected into a consistent framework for knowledge.
series journal paper
last changed 2003/04/23 15:14
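The IF-conditions-THEN-conclusion structure and the forward-chaining inference method described in the abstract can be sketched in a few lines. The facts and rules below are invented for illustration; real systems of the era (EMYCIN, KEE) were of course far richer:

```python
def forward_chain(facts, rules):
    """Naive forward-chaining: repeatedly fire any
    IF-conditions-THEN-conclusion rule whose conditions all hold,
    until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rule base: (conditions, conclusion) pairs
rules = [
    ({"has_walls", "has_roof"}, "is_enclosure"),
    ({"is_enclosure", "is_habitable"}, "is_building"),
]
derived = forward_chain({"has_walls", "has_roof", "is_habitable"}, rules)
```

Backward-chaining, the other inference method the article surveys, runs the same rules in reverse: it starts from a goal conclusion and recursively seeks rules whose conclusion matches it.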
