CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 346

_id 6838
authors Berberidou-Kallivoka, Liana
year 1994
title An Open Daylighting Simulation Environment
source Carnegie Mellon University, Pittsburgh
summary Various studies have shown that performance simulation tools have not been integrated effectively in the architectural design process. The conventional lighting simulation tools have been characterized as decision verification tools rather than design support tools. Particularly in the early design stage, when crucial and often irreversible decisions are made, this evident lack of appropriate lighting simulation environments represents a serious drawback. The "mono-directionality" of the conventional simulation tools can be identified as one of the factors responsible for insufficient integration of computational lighting modeling tools in the design process. In response to this circumstance, this thesis presents the conceptual background and the prototypical realization of an "open" daylighting simulation environment (GESTALT) to support architectural lighting design and education. Open simulation environments aim at extension (and inversion) of the design-to-performance mapping mechanisms of the conventional building performance simulation tools. Toward this end, two fully operational versions of GESTALT have been implemented. GESTALT-01 is an explicit implementation based on invertible "fast-response" computational modules. GESTALT-02 is an implicit version that uses a comprehensive computational daylight simulator and investigative projection technique for performance-driven design exploration. Concepts, implementations, case studies, contributions and future directions are presented.
series thesis:PhD
last changed 2003/02/12 22:42
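
The inversion of a design-to-performance mapping described in the abstract above can be illustrated with a deliberately simple stand-in: the widely used average daylight factor approximation DF ~= T * Aw * theta / (Atot * (1 - R^2)), which is linear in the glazing area Aw and can therefore be solved for Aw given a target DF. This is only a sketch of the "open"/invertible idea, not GESTALT's actual computational modules; all parameter names and numeric values below are assumptions.

# Illustrative sketch only: a "fast-response" daylighting estimate and its inversion.
# Not GESTALT; uses the common average daylight factor approximation
# DF ~= T * A_window * theta / (A_total * (1 - R**2)), which is linear in A_window.

def daylight_factor(a_window, a_total, transmittance=0.7, sky_angle=65.0, mean_reflectance=0.5):
    """Forward mapping: design parameters -> average daylight factor (percent)."""
    return transmittance * a_window * sky_angle / (a_total * (1.0 - mean_reflectance ** 2))

def window_area_for_target(df_target, a_total, transmittance=0.7, sky_angle=65.0, mean_reflectance=0.5):
    """Inverse mapping: glazing area required to reach a target daylight factor."""
    return df_target * a_total * (1.0 - mean_reflectance ** 2) / (transmittance * sky_angle)

if __name__ == "__main__":
    a_total = 120.0                                 # total room surface area, m2 (assumed)
    aw = window_area_for_target(2.0, a_total)       # glazing area needed for DF = 2 %
    print(f"required window area: {aw:.2f} m2")
    print(f"check, forward DF: {daylight_factor(aw, a_total):.2f} %")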

_id ddss9414
id ddss9414
authors Bright, Elise N.
year 1994
title The "Allots" Model: A PC-Based Approach to Demand Distribution for Siting and Planning
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary This paper reports on the development and application of ALLOT: a user-friendly, flexible computer model which has been designed to help governmental jurisdictions and private landowners throughout the world to achieve more economically efficient and environmentally sound land use and development patterns in a short period of time. ALLOT has the potential to drastically change the way that land use planning is conducted, since it allows the incorporation of a wide variety of previously ignored environmental characteristics and up-to-date land use patterns. ALLOT, which is written in the SAS programming language, contains two major parts. The first part employs a GIS database to conduct land suitability analyses for the area. It then produces maps showing the most suitable areas for various land use types. The second part appears to be unique in the field of computerized land use planning models. It combines the results of the suitability analysis with forecasted demand for various land use types to produce "optimum" future land use patterns. The model is capable of quickly analyzing a wide variety of forecasts, allowing easy comparison of different growth scenarios; and it can also be modified to reflect community goals and objectives, such as protection of wildlife habitat or attraction of industry. This flexibility, combined with the fact that it runs on any IBM-compatible PC (286 or higher), makes it a powerful land use planning tool. The model has been successfully applied in two "real world" situations. First, three alternative future land use patterns were developed for a rural lakeside area. The area had rural characteristics and was lacking infrastructure, but a large influx of people was expected as the lake was filled. The success of this effort led to a decision to test its use as a method for facility siting (using landfill siting as an example).
series DDSS
last changed 2003/08/07 16:36
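
As a rough illustration of the two-part logic described in the abstract above (suitability analysis followed by demand distribution), the following sketch scores grid cells for each land use type and then allocates forecasted demand to the most suitable unassigned cells in priority order. It is a schematic analogue in Python, not the ALLOT model itself (which is written in SAS); the attributes, weights and demands are invented for the example.

# Schematic analogue of a suitability-then-allocation step (not the ALLOT model).
# Each cell has environmental attributes; each land use type has suitability weights
# and a forecasted demand expressed as a number of cells.

def suitability(cell, weights):
    """Weighted-sum suitability score of one cell for one land use type."""
    return sum(weights[attr] * value for attr, value in cell.items())

def allocate(cells, land_uses):
    """Assign each land use its most suitable, still-unassigned cells until demand is met."""
    assignment = {}
    for name, spec in land_uses.items():          # dict order = planning priority
        ranked = sorted(
            (cid for cid in cells if cid not in assignment),
            key=lambda cid: suitability(cells[cid], spec["weights"]),
            reverse=True,
        )
        for cid in ranked[: spec["demand_cells"]]:
            assignment[cid] = name
    return assignment

if __name__ == "__main__":
    cells = {
        "c1": {"slope": 0.9, "flood_risk": 0.1, "road_access": 0.8},
        "c2": {"slope": 0.4, "flood_risk": 0.7, "road_access": 0.9},
        "c3": {"slope": 0.8, "flood_risk": 0.2, "road_access": 0.3},
        "c4": {"slope": 0.2, "flood_risk": 0.9, "road_access": 0.6},
    }
    land_uses = {
        "residential": {"weights": {"slope": 1.0, "flood_risk": -2.0, "road_access": 1.0},
                        "demand_cells": 2},
        "industrial":  {"weights": {"slope": 0.5, "flood_risk": -1.0, "road_access": 2.0},
                        "demand_cells": 1},
    }
    print(allocate(cells, land_uses))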

_id ddss2004_ra-33
id ddss2004_ra-33
authors Diappi, L., P. Bolchim, and M. Buscema
year 2004
title Improved Understanding of Urban Sprawl Using Neural Networks
source Van Leeuwen, J.P. and H.J.P. Timmermans (eds.) Recent Advances in Design & Decision Support Systems in Architecture and Urban Planning, Dordrecht: Kluwer Academic Publishers, ISBN: 1-4020-2408-8, pp. 33-49
summary It is widely accepted that the spatial pattern of settlements is a crucial factor affecting quality of life and environmental sustainability, but few recent studies have attempted to examine the phenomenon of sprawl by modelling the process rather than adopting a descriptive approach. The issue was partly addressed by models of land use and transportation which were mainly developed in the UK and US in the 1970s and 1980s, but the major advances were made in the area of modelling transportation, while very little was achieved in the area of spatial and temporal land use. Models of land use and transportation are well-established tools, based on explicit, exogenously formulated rules within a theoretical framework. The new approaches of artificial intelligence, and in particular systems involving parallel processing (Neural Networks, Cellular Automata and Multi-Agent Systems), defined by the expression “Neurocomputing”, allow problems to be approached in the reverse, bottom-up direction by discovering rules, relationships and scenarios from a database. In this article we examine the hypothesis that territorial micro-transformations occur according to a local logic, i.e. according to use, accessibility, the presence of services and conditions of centrality, periphericity or isolation of each territorial “cell” relative to its surroundings. The prediction capabilities of different architectures of supervised Neural Networks are applied to the southern metropolitan area of Milan at two different temporal thresholds and discussed. Starting from data on land use in 1980 and 1994 and by subdividing the area into square cells on an orthogonal grid, the model produces a spatial and functional map of urbanisation in 2008. An application of SOM (Self-Organizing Map) processing to the database allows the typologies of transformation to be identified, i.e. the classes of area which are transformed in the same way and which give rise to territorial morphologies; this is an interesting by-product of the approach.
keywords Neural Networks, Self-Organizing Maps, Land-Use Dynamics, Supervised Networks
series DDSS
last changed 2004/07/03 22:13
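
The supervised step described in the abstract above (predicting a cell's later land use from its current use and local conditions) can be sketched with a small multilayer perceptron trained on synthetic grid-cell data. This is only a schematic analogue of the approach, not the authors' network or the Milan dataset; the feature names, the scikit-learn classifier and all values are assumptions.

# Schematic analogue: a supervised network mapping cell attributes at time t
# to an urbanisation outcome at time t+1 (synthetic data, not the Milan dataset).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 2000
# Assumed features per cell: current use (0/1 urbanised), accessibility,
# share of urbanised neighbours, distance to centre.
X = np.column_stack([
    rng.integers(0, 2, n_cells),
    rng.random(n_cells),
    rng.random(n_cells),
    rng.random(n_cells),
])
# Toy rule standing in for the unknown local logic: accessible cells surrounded
# by urbanised neighbours tend to urbanise.
p = 0.2 + 0.4 * X[:, 1] + 0.4 * X[:, 2] - 0.2 * X[:, 3]
y = (rng.random(n_cells) < np.clip(p, 0, 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))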

_id ddss9446
id ddss9446
authors Horgen, Turid
year 1994
title Post Occupancy Evaluation as a Strategy to Develop an Improved Work Environment
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary A post-occupancy evaluation is a formal way of finding out whether a recently occupied, remodelled, or built environment is performing as was intended in its programming or design, and a term which has been developed in the professional field in the United States over the last 20 years. The Scandinavian approach to the same question has emphasised surfacing the values of the users of the work environment as a tool for a more comprehensive approach to space planning and design. A recent case study of the Taubman Building at Harvard University's John F. Kennedy School of Government aimed at blending the two strategies for evaluation, defining post-occupancy evaluation as a dialogue with the client and as a process to help the client reflect on spatial and technological improvements, or alternate strategies for organisational locations in buildings; it offers an interesting example of a possible future direction for POEs. Sheila Sheridan, Director of Facilities and Services at the Kennedy School, commissioned the case study and has been using its results in her daily work. Jacqueline Vischer, who has developed a survey of seven key dimensions of work-place comfort for commercial office buildings throughout eastern North America, and Turid Horgen, who has developed tools for participatory environmental evaluation and programming, widely used in Scandinavia, carried out the study and facilitated the evaluation process. The study was also done in the context of the ongoing research on these issues in the Design Inquiry Group at the School of Architecture and Planning at MIT, which is involved in a larger program for developing strategies and tools for more effective programming and management of corporate space. This research defines the workplace environment as the interaction between four dimensions: space, technology, organisation and finance. Our approach is to integrate programming and evaluation with organisational planning and organisational transformation. Post-occupancy evaluation is seen as a way to inform the client about his organisational culture as he manages the fit between a facility and its uses, and as one of several tools to bridge the frameworks, viewpoints and the many "languages" which are brought into the decision-making process of designing the built environment.
series DDSS
last changed 2003/08/07 16:36

_id 480c
authors Hornyánszky Dalholm, Elisabeth and Rydberg Mitchell, Birgitta
year 1994
title Full-Scale Modelling - A Tool with Many Forms and Applications
source Beyond Tools for Architecture [Proceedings of the 5th European Full-scale Modeling Association Conference / ISBN 90-6754-375-6] Wageningen (The Netherlands) 6-9 September 1994, pp. 59-70
summary The significance of the full-scale mock-up as a tool depends, among other things, on the type and finish of the mock-up, the purpose of its use and the user. The qualities of the tool affect the way it can be used. By working with a new group of users, architecture students, and by supplementing our building system with blocks, we have now gained new experience. In the first part of this paper we present the projects that we carried out in teaching, partly inspired by the collaboration with EFA members. In the second part, we try to compare this experience with our previous work with lay-people. Since the outcome of full-scale modelling means different things to these two categories of users, it affects their relationship to the mock-up. A consequence of this is that the mock-up has to fulfil various demands and it is important to be aware of these and adjust the mock-up and the full-scale modelling procedure according to them.
keywords Model Simulation, Real Environments
series other
email
last changed 2003/08/25 10:12

_id 401c
authors Hornyánszky Dalholm, Elisabeth and Rydberg Mitchell, Birgitta
year 1994
title FULL-SCALE MODELLING - A TOOL WITH MANY FORMS AND APPLICATIONS
source Beyond Tools for Architecture [Proceedings of the 5th European Full-scale Modeling Association Conference / ISBN 90-6754-375-6] Wageningen (The Netherlands) 6-9 September 1994, pp. 83-94
summary The significance of the full-scale mock-up as a tool depends, among other things, on the type and finish of the mock-up, the purpose of its use and the user. The qualities of the tool affect the way it can be used. By working with a new group of users, architecture students, and by supplementing our building system with blocks, we have now gained new experience. In the first part of this paper we present the projects that we carried out in teaching, partly inspired by the collaboration with EFA members. In the second part, we try to compare this experience with our previous work with lay-people. Since the outcome of full-scale modelling means different things to these two categories of users, it affects their relationship to the mock-up. A consequence of this is that the mock-up has to fulfil various demands and it is important to be aware of these and adjust the mock-up and the full-scale modelling procedure according to them.
keywords Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 11:01

_id ddssup9610
id ddssup9610
authors Krafta, Romulo
year 1996
title Built form and urban configuration development simulation
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part two: Urban Planning Proceedings (Spa, Belgium), August 18-21, 1996
summary The "centrality/potential" model, proposed by Krafta (1994), for configurational development, aims at the simulation of inner city built form growth. This is generally achieved by simulating the uneven distribution of floor area increments, resulting from replacement of old buildings, considered "devalued capital" form new ones. The model considers two main variables - public urban space system and built form - and treats them unevenly; the former is extensively disaggregated whereas the latter is not. This feature enables the model to make just a rough account of intra-urban built form development. The issue of built form simulation is then taken further in the following way: a) Urban built form is disaggregated by types. Buildings are classified by a cross combination of scale, purpose, age and quality standard; b) The city is itself considered as a set of intertwined typologic cities. This means that each unit of public space is identified by its dominant built form type, producing a multilayered-discontinuous city. Each one has its own market characteristics: rentability, technological availability and demand size; c) The market constraints determine which layer-city has priority over the others, as well as each one's size of growth. References to rentability and demand size gives each built form type priorities for development d) Spatial conditions, in the form of particular evaluation of centrality and spatial opportunity measures, regulates the distribution of built form increments and typological succession. Locational values, denoted by centrality and spatial opportunity measures, area differently accounted for in each layer-city simulation. e) Simulation is obtained by "running" the model recursively. Each built form type is simulated separately and in hyerarquical order, so that priority and replacement of built form types is acknowledged properly.
series DDSS
email
last changed 2003/08/07 16:36
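
A heavily simplified reading of steps c) to e) in the abstract above is sketched below: for each built form type, taken in priority order, a floor area increment is distributed over spatial units in proportion to a locational score combining centrality and spatial opportunity. This illustrates only the distribution logic, not Krafta's model itself; the scores, weights and increments are invented.

# Simplified sketch of distributing floor area increments by locational value
# (illustration only, not the centrality/potential model itself).

def distribute_increment(units, total_increment, w_centrality=1.0, w_opportunity=1.0):
    """Split a floor area increment over spatial units, proportionally to their score."""
    scores = {u: w_centrality * v["centrality"] + w_opportunity * v["opportunity"]
              for u, v in units.items()}
    total = sum(scores.values())
    return {u: total_increment * s / total for u, s in scores.items()}

if __name__ == "__main__":
    units = {
        "street_a": {"centrality": 0.9, "opportunity": 0.3},
        "street_b": {"centrality": 0.5, "opportunity": 0.8},
        "street_c": {"centrality": 0.2, "opportunity": 0.6},
    }
    # Built form types simulated separately, in priority order (cf. step e).
    for btype, increment in [("commercial", 5000.0), ("residential", 12000.0)]:
        print(btype, {u: round(a) for u, a in distribute_increment(units, increment).items()})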

_id 4604
authors Laveau, S. and Faugeras, O.
year 1994
title 3D Scene Representation as a Collection of Images and Fundamental Matrices
source INRIA Report
summary The problem we solve in this paper is the following. Suppose we are given N views of a static scene obtained from different viewpoints, perhaps with different cameras. These viewpoints we call reference viewpoints since they are all we know of the scene. We would like to decide if it is possible to predict another view of the scene taken by a camera from a viewpoint which is arbitrary and a priori different from all the reference viewpoints. One method for doing this would be to use these viewpoints to construct a three-dimensional representation of the scene and reproject this representation on the retinal plane of the virtual camera. In order to achieve this goal, we would have to establish some sort of calibration of our system of cameras, fuse the three-dimensional representations obtained from, say, pairs of cameras, thereby obtaining a set of 3-D points, the scene. We would then have to approximate this set of points by surfaces, a segmentation problem which is still mostly unsolved, and then intersect the optical rays from the virtual camera with these surfaces. This is the most straightforward way of going from a set of images to a new image using the current computer vision paradigm of first building a three-dimensional representation of the environment from which the rest is derived. We do not claim that there does not exist any simpler way of using the three-dimensional representation than the one we just sketched, but this is just simply not our point. Our point is that it is possible to avoid entirely the explicit three-dimensional reconstruction process: the scene is represented by its images and by some basically linear relations that govern the way points can be put in correspondence between views when they are the images of the same scene-point. These images and their algebraic relations are all we need for predicting a new image. This approach is similar in spirit to the one that has been used in trinocular stereo. Hypotheses of correspondences between two of the images are used to predict features in the third. These predictions can then be checked to validate or invalidate the initial correspondence. This approach has proved to be quite efficient and accurate. Related to these ideas are those developed in the photogrammetric community under the name of transfer methods, which find for one or more image points in a given image set the corresponding points in some new image set.
series report
last changed 2003/04/23 15:50
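
The "basically linear relations" mentioned in the abstract above are epipolar constraints encoded by fundamental matrices. A minimal sketch of the transfer idea: if F13 and F23 map points of views 1 and 2 to their epipolar lines in view 3 (so that x3^T F13 x1 = 0 and x3^T F23 x2 = 0), a matched point pair (x1, x2) predicts the point in view 3 as the intersection of the two epipolar lines, computed with a cross product in homogeneous coordinates. The numerical values of the matrices and points below are placeholders, not real calibration data.

# Point transfer from two reference views to a third via epipolar geometry
# (sketch of the idea; F13, F23 and the points are placeholder values).
import numpy as np

def transfer_point(F13, F23, x1, x2):
    """Predict the view-3 position of a point seen at x1 (view 1) and x2 (view 2).

    F13 and F23 are fundamental matrices chosen so that x3^T F13 x1 = 0 and
    x3^T F23 x2 = 0; the prediction is the intersection of the two epipolar
    lines l1 = F13 x1 and l2 = F23 x2 in view 3 (homogeneous cross product).
    """
    l1 = F13 @ x1                      # epipolar line of x1 in view 3
    l2 = F23 @ x2                      # epipolar line of x2 in view 3
    x3 = np.cross(l1, l2)              # line intersection in homogeneous coords
    if abs(x3[2]) < 1e-12:
        raise ValueError("degenerate configuration: epipolar lines are parallel")
    return x3 / x3[2]                  # normalise to (u, v, 1)

if __name__ == "__main__":
    F13 = np.array([[0.0, -1e-4, 0.02], [1e-4, 0.0, -0.03], [-0.02, 0.03, 1.0]])
    F23 = np.array([[0.0, 2e-4, -0.01], [-2e-4, 0.0, 0.04], [0.01, -0.04, 1.0]])
    x1 = np.array([120.0, 85.0, 1.0])   # matched image points (homogeneous)
    x2 = np.array([132.0, 80.0, 1.0])
    print(transfer_point(F13, F23, x1, x2))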

_id ddss9476
id ddss9476
authors Porada, Mikhael and Porada, Sabine
year 1994
title "To See Ideas" or The Visualizing of Programmatic Data Reading Examples in Architecture and Town Planning
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary Whether images are still in the mind, metaphors, sketches or icons, they play a crucial role. They have always been the heuristic pivot around which the process of artefact design organizes itself, particularly in architecture and town-planning. "To see ideas" through computer ideograms is to experiment with an interesting new direction for "pictural approach" supported design. Cognitive psychology emphasizes the important part played by mental images in reasoning, imagination in the working of human intelligence and the construction of mental images as cognitive factors underlying reasoning. It also points out how close computerized objects and mental schemata are. "To reason over a situation is first to remember or build some mental models of this situation; second to make those models work or simulate them in order to observe what would happen in different circumstances and then verify whether they fit the experimental data; third to select the best model, a tool meant to sustain and amplify the elaboration of mental models, which is a spontaneous activity". We introduce our exploration of the direct transmission of mental models through computer ideograms. We study the "operative" and the "expressive" aspects, and this allows us to analyze how some aspects in a field of knowledge are represented by ideograms, schemata, icons, etc. Aid to imagination, reasoning and communication by means of a graphic language must be limited to some relevant figurative aspects of the domain considered; it should not aim at a realistic simulation. Therefore, the important role played by icons and the spatial schematic representation of knowledge is emphasized. Our hypothesis is that an architectural concept does not result from an inductive process, but rather is built to solve problems through the direct representation of ideas with ideograms. An experiment was conducted with a graphic language, a dynamic scenography and actor-objects. The language allows one to build and visualize models from the various domains of knowledge of the object. The dynamic scenography can explore and simulate kinetically those models by means of staging various narrations and visual scenarios. The actor-objects play various and complementary parts in order to make the image explicit and link it with the concept. We distinguish between two parallel levels of reality in computer ideographics: one concerns the model, which represents the visualization of a graphic model at a particular moment and according to a particular representation; the other concerns the ideogram.
series DDSS
last changed 2003/08/07 16:36

_id e1a1
authors Rodriguez, G.
year 1996
title REAL SCALE MODEL VS. COMPUTER GENERATED MODEL
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary Advances in electronic design and communication are already reshaping the way architecture is done. The development of more sophisticated and user-friendly Computer Aided Design (CAD) software and of cheaper and more powerful hardware is making computers more and more accessible to architects, planners and designers. These professionals are not only using them as a drafting tool but also as an instrument for visualization. Designers are "building" digital models of their designs and producing photo-like renderings of spaces that do not exist in the three-dimensional world.

The problem resides in how realistic these Computer Generated Models (CGM) are. Moss & Banks (1958) considered realism “the capacity to reproduce as exactly as possible the object of study without actually using it”. They consider that realism depends on: 1) The number of elements that are reproduced; 2) The quality of those elements; 3) The similarity of replication; and 4) Replication of the situation. CGM respond well to these considerations; they can be very realistic. But are they capable of reproducing the same impressions on people as a real space?

Research has debated the problems of the mode of representation and its influence on the judgements that are made. Wools (1970), Lau (1970) and Canter, Benyon & West (1973) have demonstrated that the perception of a space is influenced by the mode of presentation. CGM are two-dimensional representations of three-dimensional space. Canter (1973) considers the three-dimensionality of the stimuli crucial for their perception. So, can a CGM afford as much as a three-dimensional model?

The “Laboratorio de Experimentacion Espacial” (LEE) has been concerned with the problem of the reality of the models used by architects. We have studied the degree to which models can be used as reliable and representative of real situations by analyzing the Ecological Validity of several of them, especially the Real-Scale Model (Abadi & Cavallin, 1994). This kind of model has been found to be ecologically valid to represent real space. This research has two objectives: 1) to study the Ecological Validity of a Computer Generated Model; and 2) to compare it with the Ecological Validity of a Real Scale Model in representing a real space.

keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:42

_id ddss9507
id ddss9507
authors Zimring, C., Do, E., Domeshek, E. and Kolodner, J.
year 1994
title Using Post-Occupancy Evaluation to Aid Reflection in Conceptual Design: Creating a Case-Based Design Aid for Architecture
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary The design of large complex "real-world" objects such as buildings requires that the intentions of many potentially competing stakeholders be understood and reconciled. The process of conceptual design itself can be understood as a set of discourses among design team participants and between the designer and the design that gradually reveal these intentions and their relationships to design moves. Our goal is to aid this discourse by creating a Case-based Design Aid (CBDA) that provides design team participants access to specific evaluated cases of experience with previous buildings. This represents a merger of two sets of theories and methodologies: case-based reasoning (CBR) in artificial intelligence, and post-occupancy evaluation (POE) in architectural research. In developing our CBDA, we have focused on several problems in architectural design: understanding the interactions between intentions, and making links between various modes of understanding and communication, particularly between verbal description and visual representation. This has led to a particular way of parsing experience, and to several modes of entering and browsing the system. For instance, each case is accessible as a specific building, such as the Santa Clara County Hall of Justice, that can be explored much as an architect might browse a magazine article about the building, looking at a brief text description of the building, photos, and plans. However, each plan is annotated with "problematic situations" that are actually hypertext links into the discursive part of the program. By clicking on one of these buttons, the user reaches a "story" screen that lists the intentions of various stakeholders relevant to the problematic situation, a fuller text description of the general problematic situation with a diagram, text and diagram for a specific problematic situation as it operates in a specific building, several general design responses showing how one might respond to the problematic situations, and specific design responses from specific buildings. In addition, the user can browse the system by listing his or her interests and moving directly to stories about a given space type such as "courtroom" or issue such as "way finding." The designer can also access brief synopses of key issues in a building type, for a space type, or for an issue. We are currently implementing the system on the Macintosh using Common Lisp and are focusing on libraries and courthouses as initial building types. Initial feedback from designers has been encouraging. We believe that this approach provides a useful alternative to design guidelines, which often tend to be too prescriptive, and to the entirely inductive approach of many designers that may miss critical intentions.
series DDSS
email
last changed 2003/08/07 16:36
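
The way a case library like the one described above links buildings, problematic situations, stakeholder intentions and design responses can be sketched as a small data model with a browse-by-issue lookup. This is a schematic illustration only; the actual CBDA was implemented in Common Lisp, and the class names and example content below are invented.

# Schematic sketch of a case-based design aid's data model (not the actual CBDA).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Story:
    issue: str                      # e.g. "way finding"
    space_type: str                 # e.g. "courtroom"
    intentions: List[str]           # stakeholder intentions in play
    problem: str                    # the general problematic situation
    design_responses: List[str]     # general and building-specific responses

@dataclass
class Case:
    building: str
    description: str
    stories: List[Story] = field(default_factory=list)

def browse(cases: List[Case], issue: str) -> List[Story]:
    """Return every story across the library annotated with the given issue."""
    return [s for c in cases for s in c.stories if s.issue == issue]

if __name__ == "__main__":
    library = [Case(
        building="Example County Courthouse",          # invented example content
        description="Mid-rise courthouse with public and secure circulation.",
        stories=[Story(
            issue="way finding",
            space_type="lobby",
            intentions=["visitors locate courtrooms unaided", "staff keep secure routes separate"],
            problem="Public routes and staff routes cross at the main lobby.",
            design_responses=["separate vertical cores", "sight lines to courtroom entries"],
        )],
    )]
    for story in browse(library, "way finding"):
        print(story.space_type, "-", story.problem)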

_id eb5f
authors Al-Sallal, Khaled A. and Degelman, Larry O.
year 1994
title A Hypermedia Model for Supporting Energy Design in Buildings
doi https://doi.org/10.52842/conf.acadia.1994.039
source Reconnecting [ACADIA Conference Proceedings / ISBN 1-880250-03-9] Washington University (Saint Louis / USA) 1994, pp. 39-49
summary Several studies have discussed the limitations of the available CAAD tools and have proposed solutions [Brown and Novitski 1987, Brown 1990, Degelman and Kim 1988, Schuman et al 1988]. The lack of integration between the different tasks that these programs address and the design process is a major problem. Schuman et al [1988] argued that in architectural design many issues must be considered simultaneously before the synthesis of a final product can take place. Studies by Brown and Novitski [1987] and Brown [1990] discussed the difficulties involved in integrating technical considerations into the creative architectural process. One aspect of the problem is the neglect of technical factors during the initial phase of the design, which, as the authors argued, results from changing the work environment and the laborious nature of the design process. Many of the current programs require the user to input a large number of numerical values that are needed for the energy analysis. Although there are some programs that attempt to assist the user by setting default values, these programs distract the user with their extensive arrays of data. The appropriate design tool is the one that helps the user to easily view the principal components of the building design and specify their behaviors and interactions. Data abstraction and information parsimony are the key concepts in developing a successful design tool. Three different approaches for developing an appropriate CAAD tool were found in the literature. Although there are several similarities among them, each is unique in solving certain aspects of the problem. Brown and Novitski [1987] emphasize the learning factor of the tool as well as its highly graphical user interface. Degelman and Kim [1988] emphasize knowledge acquisition and the provision of simulation modules. The Windows and Daylighting Group of Lawrence Berkeley Laboratory (LBL) emphasizes the dynamic structuring of information, the intelligent linking of data, the integrity of the different issues of design and the design process, and the extensive use of images [Schuman et al 1988]; these attributes incidentally define the word hypermedia. The LBL model, which uses hypermedia, seems to be the most promising direction for this type of research. However, there is still a need to establish a new model that integrates all aspects of the problem. The areas in which the present research departs from the LBL model can be listed as follows: it acknowledges the necessity of regarding the user as the center of the CAAD tool design, it develops a model that is based on one of the high level theories of human-computer interaction, and it develops a prototype tool that conforms to the model.

series ACADIA
email
last changed 2022/06/07 07:54

_id 0ecc
authors Anh, Tran Hoai
year 1994
title APPLICATION OF FULL-SCALE MODELLING IN VIETNAM: AN OUTLINE FOR DISCUSSION
source Beyond Tools for Architecture [Proceedings of the 5th European Full-scale Modeling Association Conference / ISBN 90-6754-375-6] Wageningen (The Netherlands) 6-9 September 1994, pp. 59-70
summary This paper discusses the possibility of applying full-scale modelling in Vietnam, a non-Western, so-called developing country. It deals with two main questions: 1) Is the application of full-scale modelling restricted to the West only? 2) What are the possibilities, constraints and fields of application, with attention to the methodological validity and technical solutions for full-scale modelling in Vietnam? It is argued that since full-scale modelling is based on people-environment interaction, it should, in principle, apply to studies of the people-environment relation anywhere on earth. On methodological validity, it is argued that the application of full-scale modelling in Vietnam faces methodological problems similar to those encountered in European applications (such as people's behaviour in experiments, the ability to understand the abstraction of models, etc.), although at another level, as this paper will make clear. However, it would be necessary to design a modelling kit that is low-cost, adapted to the availability of local materials and suitable for the climatic conditions of Vietnam. Two fields of application are projected as most applicable in Vietnam: modelling in architectural education and research investigation. Application for users' participation in the design process will depend on the development of building policy in the country.
keywords Model Simulation, Real Environments
series other
type normal paper
last changed 2004/05/04 11:00

_id sigradi2008_049
id sigradi2008_049
authors Benamy, Turkienicz; Beck Mateus, Mayer Rosirene
year 2008
title Computing And Manipulation In Design - A Pedagogical Experience Using Symmetry
source SIGraDi 2008 - [Proceedings of the 12th Iberoamerican Congress of Digital Graphics] La Habana - Cuba 1-5 December 2008
summary The concept of symmetry has been usually restricted to bilateral symmetry, though in an extended sense it refers to any isometric transformation that maintains a certain shape invariant. Groups of operations such as translation, rotation, reflection and combinations of these originate patterns classified by modern mathematics as point groups, friezes and wallpapers (March and Steadman, 1974). This extended notion represents a tool for the recognition and reproduction of patterns, a primal aspect of the perception, comprehension and description of everything that we see. Another aspect of this process is the perception of shapes, primary and emergent. Primary shapes are the ones explicitly represented and emergent shapes are the ones implicit in the others (Gero and Yan, 1994). Some groups of shapes known as Semantic Shapes are especially meaningful in architecture, expressing visual features such as symmetry, rhythm, movement and balance. The extended understanding of the concept of symmetry might improve the development of cognitive abilities concerning the creation, recognition and meaning of forms and shapes, aspects of visual reasoning involved in the design process. This paper discusses the development of a pedagogical experience concerned with the application of the concept of symmetry in the creative generation of forms using computational tools and manipulation. The experience has been carried out since 1995 with 3rd year architectural design students. For the exploration of compositions based on symmetry operations with computational support we followed a method developed by Celani (2003) comprising the automatic generation and update of symmetry patterns using AutoCAD. The exercises with computational support were combined with other exercises in each semester. The first approach combined the creation of two-dimensional patterns with their application and their modeling in three dimensions. The second approach combined the work with computational support with work with physical models and mirrors and the analysis of the created patterns. The third approach combined the computational tasks with work with two-dimensional physical shapes and mirrors. The students' work was analyzed under aspects such as Discretion/Continuity (the creation of isolated groups of shapes or continuous overlapped patterns); Generation of Meta-Shapes (the emergence of new shapes from the geometrical relation between the generative shape and the structure of the symmetrical arrangement); Modes of Representation (the visual aspects of the generative shape such as color and shading); Visual Reasoning (the derivation of 3D compositions from 2D patterns by their progressive analysis and recognition); and Conscious Interaction (the simultaneous creation and analysis of symmetry compositions, whether with computational support or with physical shapes and mirrors). The combined work with computational support and with physical models and mirrors enhanced the students' understanding of the extended concept of symmetry. The conscious creation and analysis of the patterns also stimulated the students' understanding of the different semantic possibilities involved in the exploration of forms and shapes in two or three dimensions. The method allowed the development of both syntactic and semantic aspects of visual reasoning, enhancing the students' visual repertoire. This constitutes an important strategy in the building of the cognitive abilities used in the architectural design process.
keywords Symmetry, Cognition, Computing, Visual reasoning, Design teaching
series SIGRADI
email
last changed 2016/03/10 09:47
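
The extended notion of symmetry used in the exercise above (any isometry keeping a shape invariant, grouped into point groups, friezes and wallpapers) can be made concrete with a few lines that generate the point-group (dihedral) copies of a motif using rotation and reflection matrices. This is just a numerical illustration of the concept, not the AutoCAD routines the course actually used.

# Numerical illustration of a point group: the n rotations and n reflections
# of the dihedral group D_n applied to a 2D motif (not the course's AutoCAD tools).
import numpy as np

def dihedral_copies(points, n):
    """Return the 2n transformed copies of a motif under the dihedral group D_n."""
    copies = []
    for k in range(n):
        a = 2 * np.pi * k / n
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        refl = rot @ np.array([[1, 0], [0, -1]])   # reflection about the x-axis, then rotation
        copies.append(points @ rot.T)              # rotated copy
        copies.append(points @ refl.T)             # reflected copy
    return copies

if __name__ == "__main__":
    motif = np.array([[1.0, 0.2], [2.0, 0.2], [2.0, 0.8]])   # a small triangle
    for i, copy in enumerate(dihedral_copies(motif, 4)):     # D_4: 8 copies
        print(i, np.round(copy, 2))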

_id cf2011_p127
id cf2011_p127
authors Benros, Deborah; Granadeiro Vasco, Duarte Jose, Knight Terry
year 2011
title Integrated Design and Building System for the Provision of Customized Housing: the Case of Post-Earthquake Haiti
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 247-264.
summary The paper proposes integrated design and building systems for the provision of sustainable customized housing. It advances previous work by applying a methodology to generate these systems from vernacular precedents. The methodology is based on the use of shape grammars to derive and encode a contemporary system from the precedents. The combined set of rules can be applied to generate housing solutions tailored to specific user and site contexts. The provision of housing to shelter the population affected by the 2010 Haiti earthquake illustrates the application of the methodology. A computer implementation is currently under development in C# using the BIM platform provided by Revit. The world experiences a sharp increase in population and a strong urbanization process. These phenomena call for the development of effective means to solve the resulting housing deficit. The response of the informal sector to the problem, which relies mainly on handcrafted processes, has resulted in an increase of urban slums in many of the big cities, which lack sanitary and spatial conditions. The formal sector has produced monotonous environments based on the idea of mass production that one size fits all, which fails to meet individual and cultural needs. We propose an alternative approach in which mass customization is used to produce planned environments that possess qualities found in historical settlements. Mass customization, a new paradigm emerging due to the technological developments of the last decades, combines the economy of scale of mass production and the aesthetics and functional qualities of customization. Mass customization of housing is defined as the provision of houses that respond to the context in which they are built. The conceptual model used for the mass customization of housing departs from the idea of a housing type, which is the combined result of three systems (Habraken, 1988) -- spatial, building system, and stylistic -- and it includes a design system, a production system, and a computer system (Duarte, 2001). In previous work, this conceptual model was tested by developing a computer system for existing design and building systems (Benrós and Duarte, 2009). The current work advances it by developing new and original design, building, and computer systems for a particular context. The urgent need to build fast in the aftermath of catastrophes quite often overrides any cultural concerns. As a result, the shelters provided in such circumstances are indistinct and impersonal. However, taking individual and cultural aspects into account might lead to a better identification of the population with their new environment, thereby minimizing the rupture caused in their lives. As the methodology to develop new housing systems is based on the idea of architectural precedents, choosing existing vernacular housing as a precedent permits the incorporation of cultural aspects and facilitates an identification of people with the new housing. In the Haiti case study, we chose as a precedent a housetype called "gingerbread houses", which includes a wide range of houses from wealthy to very humble ones. Although the proposed design system was inspired by these houses, it was decided to adopt a contemporary take. The methodology to devise the new type was based on two ideas: precedents and transformations in design. In architecture, the use of precedents provides designers with typical solutions for particular problems and it constitutes a point of departure for a new design.
In our case, the precedent is an existing housetype. It has been shown (Duarte, 2001) that a particular housetype can be encoded by a shape grammar (Stiny, 1980) forming a design system. Studies in shape grammars have shown that the evolution of one style into another can be described as the transformation of one shape grammar into another (Knight, 1994). The methodology used departs from these ideas and comprises the following steps (Duarte, 2008): (1) Selection of precedents; (2) Derivation of an archetype; (3) Listing of rules; (4) Derivation of designs; (5) Cataloguing of solutions; (6) Derivation of tailored solutions.
keywords Mass customization, Housing, Building system, Sustainable construction, Life cycle energy consumption, Shape grammar
series CAAD Futures
email
last changed 2012/02/11 19:21
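
The derivation step in the methodology above (applying shape grammar rules to generate designs from an archetype) can be suggested with a toy rule-rewriting sketch: a design state is a list of labelled elements and each rule replaces a matched element with new ones. It is a symbolic stand-in far simpler than a real shape grammar and is not the gingerbread-house grammar; the rules and labels below are invented.

# Toy symbolic stand-in for a shape-grammar derivation (invented rules, not the Haiti grammar).
# A design state is a list of labelled elements; a rule replaces one label with others.

RULES = {
    "lot":  ["core", "porch"],            # start the house from the lot
    "core": ["room", "room", "roof"],     # subdivide the core
    "room": ["room_finished"],            # terminate a room
}

def derive(state, max_steps=10):
    """Apply the first applicable rule repeatedly, recording each step of the derivation."""
    history = [list(state)]
    for _ in range(max_steps):
        for i, label in enumerate(state):
            if label in RULES:
                state = state[:i] + RULES[label] + state[i + 1:]
                break
        else:                              # no rule applies: derivation is finished
            break
        history.append(list(state))
    return history

if __name__ == "__main__":
    for step, s in enumerate(derive(["lot"])):
        print(step, s)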

_id ddss9411
id ddss9411
authors Bouillé, Francois
year 1994
title Mastering Urban Network Intersection and Superimposition in an Object-Oriented Knowledge System Integrating Rules, Neurons and Processes
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary Many networks cover the urban texture, either superimposed at a variable distance, or really intersecting, or even in interconnection. We briefly recall the HBDS model, working on persistent abstract data types associated with graphical representations and carrying algorithms expressing conditions to be verified and/or actions to be performed. HBDS is also an integrated system, including a database, an expert system dealing with fuzzy rules and facts, a discrete simulation engine, and a neural engine; it has a general purpose programming language. Any urban network is associated with a given prototype, according to the same scheme, named prototype, with more specific components. These prototypes allow one to build the different thematic structures as instantiations of the prototypes. All possible cases of arc intersection, "pseudo-intersection" (simple superimposition) or interconnection are obtained owing to new prototypes. Moreover, such (pseudo-)intersections are automatically recognized and processed without human intervention, owing to classes of constraints and classes of rules. They deal with particular constraints concerning the location of some urban furniture, and rules concerning the path a cable or a pipe must follow according to the other pre-existing networks in a given area, the minimal distances, minimal or maximal depths, and some required equipment. Urban classes of (pseudo-)intersections inserted in the hyperclass "neuron", inheriting neural features, may be used for automated learning of urban knowledge; owing to their "behaviour", these neurons can communicate and perform actions on other components. Urban classes inserted in the hyperclass "process" may be used for building very large models simulating complex urban phenomena, thus allowing a better understanding of the real phenomena. As a conclusion, we emphasize the methodological aspects of object-oriented integration for an efficient processing of the urban context, based on prototyping and mixing rules, neurons and processes.
series DDSS
last changed 2003/08/07 16:36
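
The automatic recognition of intersections versus "pseudo-intersections" described above can be illustrated with a small geometric test: two network arcs either really cross (their 2D segments intersect in plan and their depths coincide) or are merely superimposed (they cross in plan but lie at different depths). This sketch only illustrates that distinction, not the HBDS prototypes; the segment representation and the depth tolerance are assumptions.

# Illustration of distinguishing a real intersection from a superimposition
# of two network arcs (not HBDS; arcs are plain 2D segments with a depth/level).

def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a1, a2, b1, b2):
    """True if segments a1-a2 and b1-b2 properly intersect in plan."""
    d1, d2 = _orient(b1, b2, a1), _orient(b1, b2, a2)
    d3, d4 = _orient(a1, a2, b1), _orient(a1, a2, b2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def classify(arc_a, arc_b, depth_tolerance=0.1):
    """Classify two arcs as 'intersection', 'superimposition' or 'disjoint'."""
    if not segments_cross(arc_a["p1"], arc_a["p2"], arc_b["p1"], arc_b["p2"]):
        return "disjoint"
    if abs(arc_a["depth"] - arc_b["depth"]) <= depth_tolerance:
        return "intersection"
    return "superimposition"

if __name__ == "__main__":
    water = {"p1": (0, 0), "p2": (10, 10), "depth": -1.2}   # depths in metres (assumed)
    cable = {"p1": (0, 10), "p2": (10, 0), "depth": -0.6}
    sewer = {"p1": (0, 10), "p2": (10, 0), "depth": -1.2}
    print("water/cable:", classify(water, cable))   # superimposition
    print("water/sewer:", classify(water, sewer))   # intersection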

_id diss_brewster
id diss_brewster
authors Brewster, S.A.
year 1994
title Providing a Structured Method for Integrating Non-Speech Audio into Human-Computer Interfaces
source Heslington, York: University of York
summary This thesis provides a framework for integrating non-speech sound into human-computer interfaces. Previously there was no structured way of doing this; it was done in an ad hoc manner by individual designers. This led to ineffective uses of sound. In order to add sounds to improve usability, two questions must be answered: What sounds should be used and where is it best to use them? With these answers a structured method for adding sound can be created. An investigation of earcons as a means of presenting information in sound was undertaken. A series of detailed experiments showed that earcons were effective, especially if musical timbres were used. Parallel earcons were also investigated (where two earcons are played simultaneously) and an experiment showed that they could increase sound presentation rates. From these results, guidelines were drawn up for designers to use when creating usable earcons. These formed the first half of the structured method for integrating sound into interfaces. An informal analysis technique was designed to investigate interactions to identify situations where hidden information existed and where non-speech sound could be used to overcome the associated problems. Interactions were considered in terms of events, status and modes to find hidden information. This information was then categorised in terms of the feedback needed to present it. Several examples of the use of the technique were presented. This technique formed the second half of the structured method. The structured method was evaluated by testing sonically-enhanced scrollbars, buttons and windows. Experimental results showed that sound could improve usability by increasing performance, reducing time to recover from errors and reducing workload. There was also no increased annoyance due to the sound. Thus the structured method for integrating sound into interfaces was shown to be effective when applied to existing interface widgets.
series thesis:PhD
email
more http://www.dcs.gla.ac.uk/~stephen/publications.shtml
last changed 2003/11/28 07:34
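
An earcon in the sense used above is a short, structured sequence of musical tones whose rhythm, pitch and timbre encode information. The sketch below synthesises a two-note motif with a simple harmonic-rich timbre and writes it to a WAV file; it only illustrates what such a sound object is and does not implement the design guidelines from the thesis. The filename and note mapping are invented.

# Minimal earcon synthesis: a short motif of harmonically rich tones written to a WAV file.
# Illustration only; it does not encode the earcon design guidelines from the thesis.
import numpy as np
import wave

SAMPLE_RATE = 44100

def tone(freq, duration, harmonics=(1.0, 0.5, 0.25)):
    """One note with a simple 'musical' timbre built from a few harmonics."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    signal = sum(a * np.sin(2 * np.pi * freq * (i + 1) * t) for i, a in enumerate(harmonics))
    envelope = np.minimum(1.0, 10 * np.minimum(t, duration - t))   # quick fade in/out
    return signal * envelope

def write_earcon(path, notes):
    """Render a sequence of (frequency_hz, duration_s) pairs as a 16-bit mono WAV."""
    audio = np.concatenate([tone(f, d) for f, d in notes])
    audio = (audio / np.max(np.abs(audio)) * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(audio.tobytes())

if __name__ == "__main__":
    # A rising two-note motif, e.g. signalling a completed action (invented mapping).
    write_earcon("earcon_done.wav", [(523.25, 0.15), (659.25, 0.25)])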

_id ddss9416
id ddss9416
authors Campbell, Noel and O'Reilly, Thomas
year 1994
title GIS: Science or Tool - The Built Environment Perspective
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary This paper attempts to locate GIS in the context of the built environment professions, rather than in the context of computer science, recognizing the integrated but limiting approach of viewingGIS from a strictly computer / spatial science perspective. The paper reviews the conflicts and tensions appearing in the GIS debate seeing them as reflecting the differences between the perceptions and interests of software developers and those of the professions. The "spatial science versus professional tool" dilemma is therefore critically assessed. Science is identified as the dominant paradigm within which GIS development has taken place. This encompasses the emphasis on GIS as spatial science; the interest in particular forms of spatial analysis; a narrow approach to the idea of information; the debate about the appropriate emphasis on the location for GIS in undergraduate education. The interests and activities of the professions cannot be encompassed within the pre-existing science paradigm. The paper identifies the interest the professions have had in broad geographical issues (as distinct from narrow spatial issues). It recognizes the different conventions and procedures used in recording and using geographical information, not all of them objective or scientific. It views the computer, not as a "scientific engine", but as a modern medium for representing and analyzing information. This includes storage and analysis, both internally (algorithmic manipulation) and outside (qualitative manipulation, beyond formal -"computer"- logic). This approach suggests a framework for research of a nature more sympathetic to the needs of the built environment professions in particular and an agenda which would include an examination of: (i) the conventions and procedures used in the professions to collect, store and process information and how these translate to computer technology; (ii) the types of software used and the way procedures may be accommodated by combining and integrating packages; (iii) the dynamism of GIS development (terms such as "dedicated", "mainframe", "PC-based", "distributed", "pseudo-", etc. are identified as indicativeof the need for professions-based approaches to GIS development); (iv) a critique of "information" (modelling of information flows within the professions, may yield valuable insights into the (modelling of information flows within the professions , may yield valuable insights into the similarity of requirements for a variety of "workplace scenarios").
series DDSS
email
last changed 2003/08/07 16:36

_id 7ed5
authors Corne, D., Smithers, T. and Ross, P.
year 1994
title Solving design problems by computational exploration
source J. S. Gero and E. Tyugu (eds), Formal Design Methods for CAD, NorthHolland, Amsterdam, pp. 249-270
summary Most real-world problems, especially design problems, are ill-structured, but formal approaches to problem-solving in AI have only really made progress into techniques for solving well-structured problems. Nevertheless, such research contains clues which illuminate the way towards formal approaches to solving ill-structured problems. This paper presents the foundations of an approach towards developing a better computational understanding of ill-structured problems and how to solve them computationally, with the eventual aim of giving AI problems a much greater and more useful role in the design process. The main issues which come up in this endeavour are the notions of different kinds of ill-structuredness, and the meaning of a 'solution' to an ill-structured (and hence possibly insoluble) problem. Some basic algorithmic recipes are proposed for dealing with the main kinds of ill-structuredness, and the initial design of a general computational technique which deals with general ill-structuredness is discussed.
series other
last changed 2003/04/23 15:14

_id ddss9422
id ddss9422
authors Daru, Roel and Snijder, Philip
year 1994
title Sketch-Trigger: A Specification for a Form Generator and Design Analysis Toolbox for Architectural Sketching
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary In order to develop design and decision support techniques for the early sketch design phases, we should (1) experience and (2) observe real behaviour in practice, (3) transform observations into ideas for improvement, (4) develop behaviour models to explain the sketch design activities, (5) evaluate between the proposals, (6) decide between the alternatives, and (7) implement the selected option in a supporting tool. Our paper reports on the results of step 3 in particular, in the first phase of a PhD project started this year. Our main objective is to amplify the effects of the sketch as a very effective instrument to generate original forms and to stimulate the mind to discover new shapes and meanings in the roughly sketched patterns. Instead of considering the sketch only as a representation of what the designer has in mind, as is usually assumed in CAD systems, we see sketching as form activation. Thus, we also want to offer triggering images to spark off the imagination of the designer while generating images which are practically impossible to create by hand, and certainly not at short notice. The main improvement proposed is the use of an evolutionary form breeding system: one or more sketched parent images (either ready-made 'partis' or basic schemes drafted by the designer) presented in the centre of the screen will generate surrounding mutated children, defined at random but constrained by default or by customization of the available transformations. By selecting one or more children, a next generation will be produced in the same way. At all times the designer can introduce or reduce constraints. To complement the usually offered 'classical' symmetrical, spatial and logical operations, we want to introduce dis-functional operations like dislocation, explosion, deformation, anti-logic etc., in short all kinds of antagonistic operations, among them the transformations applied in deconstructionist and post-modern design. Our expectation is that these operations will correspond roughly to the 'move' pertaining to a design entity as the operational unit most appropriate for design behaviour research, in particular the analysis of the chunking and parsing behaviour of the designer. The applicability of the 'move' approach has been shown experimentally by Habraken and others. Goldschmidt has abandoned the usual typology approach of protocol analysis based on moves and concentrated on the linking of moves, but has been hampered by the lack of a good representational instrument. This brings us to the representation of moves and linkages as a research instrument. The 'linkograph' approach as proposed by Goldschmidt is a first step towards a graphical representation of the designer's associative reasoning mode, necessary for tracking the heuristics of designers at the most basic level, but its practical implementation has remained incredibly laborious. What is proposed here is an instrument and approach which makes such registration and analysis possible within a structured software environment.
series DDSS
email
last changed 2003/08/07 16:36
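
The evolutionary form breeding loop described above (a parent image spawns mutated children, the designer selects, and the selection seeds the next generation) can be sketched as an interactive loop over a small parametric "form genome". This is a minimal stand-in for the proposed system, not the Sketch-Trigger tool; the genome, the mutation operator and the selection callback are invented for illustration.

# Minimal interactive-evolution sketch (not the proposed Sketch-Trigger tool):
# a parent "form genome" breeds mutated children; a selection function picks the next parent.
import random

def mutate(genome, strength=0.3):
    """Return a child genome: each parameter gets a small random perturbation."""
    return {key: value + random.uniform(-strength, strength) for key, value in genome.items()}

def breed(parent, n_children=8, generations=3, select=None):
    """Run a few generations; 'select' picks one child per generation (the designer's role)."""
    select = select or (lambda children: random.choice(children))
    for g in range(generations):
        children = [mutate(parent) for _ in range(n_children)]
        parent = select(children)
        print(f"generation {g + 1}: kept", {k: round(v, 2) for k, v in parent.items()})
    return parent

if __name__ == "__main__":
    # Invented genome: parameters that could drive a form generator (rotation, skew, scale...).
    seed = {"rotation": 15.0, "skew": 0.1, "scale": 1.0, "explode": 0.0}
    # Here selection is automatic (largest 'explode' value); in the real idea it is the designer.
    breed(seed, select=lambda cs: max(cs, key=lambda c: c["explode"]))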
