CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 79

_id 4990
authors Gips, J. and Stiny, G.
year 1980
title Production Systems and Grammars: A Uniform Characterization
source Environment and Planning B. 1980. vol. 7: pp. 399-408 : tables. includes bibliography
summary The common structure underlying production system formalisms is developed. A variety of production system formalisms are summarized in terms of this structure. The structure is useful both for understanding existing systems and for developing new ones
keywords systems, shape grammars, theory
series CADline
last changed 2003/06/02 13:58
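
A minimal sketch of the generic structure the abstract refers to, for orientation only: a working state, a set of condition-action rules, and a control loop that repeatedly applies an applicable rule. All names and the toy rewriting example are invented for illustration; this is not the paper's uniform characterization.

```python
# Minimal production-system skeleton: state + condition/action rules + control loop.
# Illustrative only; the formal characterization in the paper is more general.

def make_rule(condition, action):
    """A rule fires when condition(state) is true; action(state) returns a new state."""
    return (condition, action)

def run(state, rules, max_steps=100):
    """Apply the first applicable rule until none applies or the step limit is reached."""
    for _ in range(max_steps):
        for condition, action in rules:
            if condition(state):
                state = action(state)
                break
        else:
            return state          # no rule applicable: halt
    return state

# Toy example: repeatedly rewrite "ab" to "b" until no rule applies.
rules = [make_rule(lambda s: "ab" in s, lambda s: s.replace("ab", "b", 1))]
print(run("aaabbb", rules))       # -> "bbb"
```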

_id cf2011_p170
id cf2011_p170
authors Barros, Mário; Duarte, José; Chaparro, Bruno
year 2011
title Thonet Chairs Design Grammar: a Step Towards the Mass Customization of Furniture
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 181-200.
summary The paper presents the first phase of research currently under development that is focused on encoding the Thonet design style into a generative design system using a shape grammar. The ultimate goal of the work is the design and production of customizable chairs using computer-assisted tools, establishing a feasible practical model of the paradigm of mass customization (Davis, 1987). The current research phase encompasses the following three steps: (1) codification of the rules describing the Thonet design style into a shape grammar; (2) implementation of the grammar in a computer tool as a parametric design; and (3) rapid prototyping of customized chair designs within the style. Future phases will address the transformation of Thonet's grammar to create a new style and the production of real chair designs in this style using computer-aided manufacturing. Beginning in the 1830s, the Austrian furniture designer Michael Thonet began experimenting with steam-bending beech in order to produce lighter furniture using fewer components than the standards of the time. Using the same construction principles and standardized elements, Thonet produced different chair designs with a strong formal resemblance, creating his own design language. The kit-assembly principle, the reduced number of elements, industrial efficiency, and the modular approach to furniture design as a system of interchangeable elements that may be used to assemble different objects enabled him to become a pioneer of mass production (Noblet, 1993). The most paradigmatic example of this vision of furniture design is chair No. 14, produced in 1858 and composed of six structural elements. Due to its simplicity, lightness, and ability to be stored in flat or cubic packaging for individual or collective transportation, respectively, No. 14 became one of the best-selling chairs worldwide, and it is still in production today. Iconic examples of mass production are formally studied to provide insights for mass customization studies. The study of the shape grammar for the generation of Thonet chairs aimed to define rules that would make possible the reproduction of the selected corpus, as well as allow for the generation of new chairs within the developed grammar. Due to the wide variety of Thonet chairs, six chairs were randomly chosen to infer the grammar, which was then fine-tuned by checking whether it could account for the generation of other designs not in the original corpus. Shape grammars (Stiny and Gips, 1972) have been used with success both in the analysis and in the synthesis of designs at different scales, from product design to building and urban design. In particular, the use of shape grammars has been efficient in the characterization of objects' styles and in the generation of new designs within the analyzed style, and it makes design rules amenable to computer implementation (Duarte, 2005). The literature includes one other example of a grammar for chair design, by Knight (1980). In the second step of the current research phase, the outlined shape grammar was implemented in a computer program to assist the designer in conceiving and producing customized chairs using a digital design process. This implementation was developed in CATIA by converting the grammar into an equivalent parametric design model. In the third step, physical models of existing and new chair designs were produced using rapid prototyping.
The paper describes the grammar, its computer implementation as a parametric model, and the rapid prototyping of physical models. The generative potential of the proposed digital process is discussed in the context of enabling the mass customization of furniture. The role of the furniture designer in the new paradigm and ideas for further work are also discussed.
keywords Thonet; furniture design; chair; digital design process; parametric design; shape grammar
series CAAD Futures
email
last changed 2012/02/11 19:21
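
As a loose, purely illustrative analogue of rule-based generation within a constrained design language, the sketch below encodes a chair as a dictionary of labelled parts and applies rules that add or parameterize parts. It only mimics the flavour of grammar-based generation symbolically; the actual grammar in the paper operates on shapes and was implemented as a parametric CATIA model. All part names and dimensions are invented.

```python
# Hypothetical sketch of rule-based generation of chair descriptions (not the Thonet grammar).
import random

def start():
    return {"seat": {"diameter": 400}}                       # initial design (mm)

def add_legs(chair):
    if "legs" not in chair:
        chair = dict(chair, legs={"count": 4, "height": 470})
    return chair

def add_backrest(chair):
    if "legs" in chair and "back" not in chair:               # rule only applies after legs exist
        chair = dict(chair, back={"height": random.choice([350, 400, 450])})
    return chair

RULES = [add_legs, add_backrest]

def derive(steps=5, seed=1):
    """Apply randomly chosen rules a fixed number of times; inapplicable rules are no-ops."""
    random.seed(seed)
    chair = start()
    for _ in range(steps):
        chair = random.choice(RULES)(chair)
    return chair

print(derive())
```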

_id cf2011_p127
id cf2011_p127
authors Benros, Deborah; Granadeiro, Vasco; Duarte, Jose; Knight, Terry
year 2011
title Integrated Design and Building System for the Provision of Customized Housing: the Case of Post-Earthquake Haiti
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 247-264.
summary The paper proposes integrated design and building systems for the provision of sustainable customized housing. It advances previous work by applying a methodology to generate these systems from vernacular precedents. The methodology is based on the use of shape grammars to derive and encode a contemporary system from the precedents. The combined set of rules can be applied to generate housing solutions tailored to specific user and site contexts. The provision of housing to shelter the population affected by the 2010 Haiti earthquake illustrates the application of the methodology. A computer implementation is currently under development in C# using the BIM platform provided by Revit. The world is experiencing a sharp increase in population and a strong urbanization process. These phenomena call for the development of effective means to solve the resulting housing deficit. The response of the informal sector to the problem, which relies mainly on handcrafted processes, has resulted in the growth of urban slums in many big cities, which lack sanitary and spatial conditions. The formal sector has produced monotonous environments based on the mass-production idea that one size fits all, which fails to meet individual and cultural needs. We propose an alternative approach in which mass customization is used to produce planned environments that possess qualities found in historical settlements. Mass customization, a new paradigm emerging from the technological developments of the last decades, combines the economies of scale of mass production with the aesthetic and functional qualities of customization. Mass customization of housing is defined as the provision of houses that respond to the context in which they are built. The conceptual model used for the mass customization of housing departs from the idea of a housing type, which is the combined result of three systems (Habraken, 1988) -- spatial, building system, and stylistic -- and it includes a design system, a production system, and a computer system (Duarte, 2001). In previous work, this conceptual model was tested by developing a computer system for existing design and building systems (Benrós and Duarte, 2009). The current work advances it by developing new and original design, building, and computer systems for a particular context. The urgent need to build fast in the aftermath of catastrophes quite often overrides any cultural concerns. As a result, the shelters provided in such circumstances are indistinct and impersonal. However, taking individual and cultural aspects into account might lead to a better identification of the population with their new environment, thereby minimizing the rupture caused in their lives. As the methodology to develop new housing systems is based on the idea of architectural precedents, choosing existing vernacular housing as a precedent permits the incorporation of cultural aspects and facilitates the identification of people with the new housing. In the Haiti case study, we chose as a precedent a housetype called “gingerbread houses”, which includes a wide range of houses from wealthy to very humble ones. Although the proposed design system was inspired by these houses, it was decided to adopt a contemporary take. The methodology to devise the new type was based on two ideas: precedents and transformations in design. In architecture, the use of precedents provides designers with typical solutions for particular problems and constitutes a starting point for a new design.
In our case, the precedent is an existing housetype. It has been shown (Duarte, 2001) that a particular housetype can be encoded by a shape grammar (Stiny, 1980), forming a design system. Studies in shape grammars have shown that the evolution of one style into another can be described as the transformation of one shape grammar into another (Knight, 1994). The methodology used builds on these ideas and comprises the following steps (Duarte, 2008): (1) selection of precedents; (2) derivation of an archetype; (3) listing of rules; (4) derivation of designs; (5) cataloguing of solutions; (6) derivation of tailored solutions.
keywords Mass customization, Housing, Building system, Sustainable construction, Life cycle energy consumption, Shape grammar
series CAAD Futures
email
last changed 2012/02/11 19:21

_id 66eb
authors Grayer, Alan R.
year 1980
title Alternative Approaches in Geometric Modelling
source Computer Aided Design. July, 1980. vol. 12: pp. 189-192 : ill. includes bibliography
summary As systems for the computer-aided design and production of mechanical parts have developed, there has arisen a need for techniques for the comprehensive description of the desired part, including its three-dimensional shape. The creation and manipulation of shapes is generally known as Geometric Modelling, but some misconceptions have arisen as to the true meaning and import of the term. The paper argues for a broad, flexible approach to the subject, allowing the use of many techniques suited to particular applications and unifying them through common data structures
keywords data structures, geometric modeling, solids
series CADline
last changed 2003/06/02 10:24

_id acadia21_76
id acadia21_76
authors Smith, Rebecca
year 2021
title Passive Listening and Evidence Collection
source ACADIA 2021: Realignments: Toward Critical Computation [Proceedings of the 41st Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 979-8-986-08056-7]. Online and Global. 3-6 November 2021. edited by B. Bogosian, K. Dörfler, B. Farahi, J. Garcia del Castillo y López, J. Grant, V. Noel, S. Parascho, and J. Scott. 76-81.
doi https://doi.org/10.52842/conf.acadia.2021.076
summary In this paper, I present the commercial, urban-scale gunshot detection system ShotSpotter in contrast with a range of ecological sensing examples which monitor animal vocalizations. Gunshot detection sensors are used to alert law enforcement that a gunshot has occurred and to collect evidence. They are intertwined with processes of criminalization, in which the individual, rather than the collective, is targeted for punishment. Ecological sensors are used as a “passive” practice of information gathering which seeks to understand the health of a given ecosystem through monitoring population demographics, and to document the collective harms of anthropogenic change (Stowell and Sueur 2020). In both examples, the ability of sensing infrastructures to “join up and speed up” (Gabrys 2019, 1) is increasing with the use of machine learning to identify patterns and objects: a new form of expertise through which the differential agendas of these systems are implemented and made visible. I trace the differential agendas of these systems as they manifest through varied components: the spatial distribution of hardware in the existing urban environment and / or landscape; the software and other informational processes that organize and translate the data; the visualization of acoustical sensing data; the commercial factors surrounding the production of material components; and the apps, platforms, and other forms of media through which information is made available to different stakeholders. I take an interpretive and qualitative approach to the analysis of these systems as cultural artifacts (Winner 1980), to demonstrate how the political and social stakes of the technology are embedded throughout them.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id 0565
authors Oxman, Robert and Oxman, Rivka
year 1990
title The Computability of Architectural Knowledge
source The Electronic Design Studio: Architectural Knowledge and Media in the Computer Era [CAAD Futures ‘89 Conference Proceedings / ISBN 0-262-13254-0] Cambridge (Massachusetts / USA), 1989, pp. 171-185
summary In an important contribution to the theoretical foundation of design computing, Mitchell noted "an increasingly urgent need to establish a demonstrably sound, comprehensive, rigorously formalized theoretical foundation upon which to base practical software development efforts" (Mitchell, 1986). In this paper we propose such a theoretical framework. A basic assumption of this work is that the advancement of design computing is dependent upon the emergence of a rigorous formulation of knowledge in design. We present a model of knowledge in architectural design which suggests a promising conceptual basis for dealing with knowledge in computer-aided design systems. We require models which can represent the formal knowledge and manipulative operations of the designer in all of their complexity, that is, formal models rather than just geometric models. Shape grammars (Stiny, 1980) represent an example of such models, and constitute a relatively high level of design knowledge as compared to, for example, the use of symmetry operations to generate simple formal configurations. Building upon an understanding of the classes of design knowledge as the conceptual basis for formal modeling systems may contribute to a new realization of the potential of the medium for design. This will require a comprehensive approach to the definition of architectural and design knowledge. We consider here the implications of a well-defined body of architectural and design knowledge for design education and the potential mutual interaction, in a knowledge-rich environment, of design learning and CAAD learning. The computational factors connected with the representation of design knowledge and its integration in design systems are among the key problems of CAAD. Mitchell's model of knowledge in design incorporates formal knowledge in a comprehensive, multi-level, hierarchical structure in which types of knowledge are correlated with computational concepts. In the main focus of this paper we present a structured, multi-level model of design knowledge which we discuss with respect to current architectural theoretical considerations. Finally, we analyze the computational and educational relevance of such models.
series CAAD Futures
email
last changed 2003/05/16 20:58

_id ga9809
id ga9809
authors Kälviäinen, Mirja
year 1998
title The ideological basis of generative expression in design
source International Conference on Generative Art
summary This paper will discuss issues concerning the design ideology supporting the use and development of generative design. This design ideology is based on the unique qualities of craft production and on forms or ideas from nature or the natural characteristics of materials. The main ideology presented here is the ideology of 1980s art craft production in Finland. It is connected with the general Finnish design ideology and with the design ideology of other western countries. The ideology for these professions is based on the common background of design principles stated in 19th-century England. The early principles developed through the Arts and Crafts tradition, which had a great impact on design thinking in Europe and in the United States. The strong continuity of this design ideology from 19th-century England to the present computerized age can be detected. The application of these design principles through different eras shows the differences in interpretation and in the permission of natural decorative forms. The ideology of 1980s art craft in Finland supports the ideas and fulfilment of generative design in many ways. The reasons often given as the basis for making generative design with computers are in very many respects the same as the ideology for art craft. In Finland there is a strong connection between art craft and design ideology. The characteristics of craft have often been seen as the basis for industrial design skills. The main themes in the ideology of 1980s art craft in Finland can be compared to the ideas of generative design. The main issues in which the generative approach reflects a distinctive ideological thinking are: Way of life: The work is the communication of the maker's inner ideas. The concrete relationship with the environment, personality, uniqueness, communication, visionary qualities, and the development and growth of the maker are important. The experiments serve as a medium for learning. Taste and aesthetic education: The real love affair is created by the non-living object with the help of memories and thought. At their best, objects create, in their stability and communication, the basis for durable human relationships. People have warm relationships especially with handmade products in which they can detect unique qualities and the feeling that the product has been made solely for them. Counter-culture: The aim of the work is to produce alternatives to technobureaucracy and mechanical production and to bring subjective and unique experiences into the customer's monotonous life. This ideology rejects the usual standardized mass production of our times. Mythical character: There is a metamorphosis in the birth of the product. In many ways the design process is about birth and growth. The creative process is a development story of the maker. The complexity of communication is the expression of the moments that have been lived. If you can sense the process of making in the product, it makes it more real and nearer to life. Each piece of wood has its own beauty. Before you can work with it you must find the deep soul of its quality. The distinctive traits of the material, the technique and the object are an essential part of the metamorphosis which brings the product into life. The form is not only for form's sake but for other purposes, too. You cannot find loose forms in nature. Products have their beginnings in the material and are a part of nature.
This art craft ideology that supports the ideas of generative design can be applied either to handmade craft production or to production exploiting new technology. The unique characteristics of craft and the expression of material-based development are a way to broaden the expression and forms of industrial products. However, for a craftsperson it is not meaningful to fill the world with objects. In generative, computer-based production this is possible. But maybe the production of unique pieces is still slower and makes industrial production in that sense more ecological. People will be more attached to personal and unique objects, and thus the life cycle of the objects produced will be longer.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id cdc2008_243
id cdc2008_243
authors Loukissas, Yanni
year 2008
title Keepers of the Geometry: Architects in a Culture of Simulation
source First International Conference on Critical Digital: What Matter(s)? - 18-19 April 2008, Harvard University Graduate School of Design, Cambridge (USA), pp. 243-244
summary “Why do we have to change? We’ve been building buildings for years without CATIA?” Roger Norfleet, a practicing architect in his thirties, poses this question to Tim Quix, a generation older and an expert in CATIA, a computer-aided design tool developed by Dassault Systemes in the early 1980s for use by aerospace engineers. It is 2005 and CATIA has just come into use at Paul Morris Associates, the thirty-person architecture firm where Norfleet works; he is struggling with what it will mean for him, for his firm, and for his profession. Computer-aided design is about creativity, but also about jurisdiction, about who controls the design process. In Architecture: The Story of Practice, architectural theorist Dana Cuff writes that each generation of architects is educated to understand what constitutes a creative act and who in the system of their profession is empowered to use it and at what time. Creativity is socially constructed, and Norfleet is coming of age as an architect in a time of technological but also social transition. He must come to terms with the increasingly complex computer-aided design tools that have changed both creativity and the rules by which it can operate. In today’s practices, architects use computer-aided design software to produce three-dimensional geometric models. Sometimes they use off-the-shelf commercial software like CATIA, sometimes they customize this software through plug-ins and macros, and sometimes they work with software that they have themselves programmed. And yet, conforming to Larson’s idea that they claim the higher ground by identifying with art and not with science, contemporary architects do not often use the term “simulation.” Rather, they have held onto traditional terms such as “modeling” to describe the buzz of new activity with digital technology. But whether or not they use the term, simulation is creating new architectural identities and transforming relationships among a range of design collaborators: masters and apprentices, students and teachers, technical experts and virtuoso programmers. These days, constructing an identity as an architect requires that one define oneself in relation to simulation. Case studies, primarily from two architectural firms, illustrate the transformation of traditional relationships, in particular that of master and apprentice, and the emergence of new roles, including a new professional identity, the “keeper of the geometry,” defined by the fusion of person and machine. Like any profession, architecture may be seen as a system in flux. However, with their new roles and relationships, architects are learning that the fight for professional jurisdiction is increasingly a fight for jurisdiction over simulation. Computer-aided design is changing professional patterns of production in architecture, the very way in which professionals compete with each other by making new claims to knowledge. Even today, employees at Paul Morris squabble about the role that simulation software should play in the office. Among other things, they fight about the role it should play in promotion and firm hierarchy. They bicker about the selection of new simulation software, knowing that choosing software implies greater power for those who are expert in it. Architects and their collaborators are in a continual struggle to define the creative roles that can bring them professional acceptance and greater control over design. New technologies for computer-aided design do not change this reality; they become players in it.
email
last changed 2009/01/07 08:05

_id acadia19_392
id acadia19_392
authors Steinfeld, Kyle
year 2019
title GAN Loci
source ACADIA 19:UBIQUITY AND AUTONOMY [Proceedings of the 39th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-59179-7] (The University of Texas at Austin School of Architecture, Austin, Texas 21-26 October, 2019) pp. 392-403
doi https://doi.org/10.52842/conf.acadia.2019.392
summary This project applies techniques in machine learning, specifically generative adversarial networks (or GANs), to produce synthetic images intended to capture the predominant visual properties of urban places. We propose that imaging cities in this manner represents the first computational approach to documenting the Genius Loci of a city (Norberg-Schulz, 1980), which is understood to include those forms, textures, colors, and qualities of light that exemplify a particular urban location and that set it apart from similar places. Presented here are methods for the collection of urban image data, for the necessary processing and formatting of this data, and for the training of two known computational statistical models (StyleGAN (Karras et al., 2018) and Pix2Pix (Isola et al., 2016)) that identify visual patterns distinct to a given site and that reproduce these patterns to generate new images. These methods have been applied to image nine distinct urban contexts across six cities in the US and Europe, the results of which are presented here. While the product of this work is not a tool for the design of cities or building forms, but rather a method for the synthetic imaging of existing places, we nevertheless seek to situate the work in terms of computer-assisted design (CAD). In this regard, the project is demonstrative of a new approach to CAD tools. In contrast with existing tools that seek to capture the explicit intention of their user (Aish, Glynn, Sheil 2017), in applying computational statistical methods to the production of images that speak to the implicit qualities that constitute a place, this project demonstrates the unique advantages offered by such methods in capturing and expressing the tacit.
series ACADIA
type normal paper
email
last changed 2022/06/07 07:56
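
The record above relies on adversarial training of generative models over urban imagery. The sketch below shows only the generic generator/discriminator game, assuming PyTorch is installed; the data (random placeholder vectors), network sizes, and variable names are invented and bear no relation to the StyleGAN and Pix2Pix pipelines actually used in the paper.

```python
# Minimal GAN training loop on placeholder data (PyTorch assumed available).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_data = torch.rand(512, data_dim) * 2 - 1          # stand-in for image feature vectors

for step in range(200):
    real = real_data[torch.randint(0, 512, (32,))]
    fake = G(torch.randn(32, latent_dim))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator output 1 on generated samples.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"final d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```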

_id 8a27
authors Bentley, Jon L. and Carruthers, Wendy
year 1980
title Algorithms for Testing the Inclusion of Points in Polygons
source Allerton Conference on Communication, Control and Computing (18th : 1980). (10) p. includes bibliography
summary Determining whether a given point lies inside or outside a simple polygon is an important problem in many applications, including computer vision systems and computer-assisted political redistricting systems. In this paper the authors give algorithms for inclusion problems that are efficient for polygons that are 'close to convex' in a certain precise sense. An empirical study of polygons that arise in several applications shows that typical polygons are indeed 'close to convex,' and a program implementing the algorithm shows that it is extremely efficient on point sets of practical sizes
keywords point inclusion, polygons, algorithms, computational geometry
series CADline
last changed 2003/06/02 13:58
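
The abstract above does not give the authors' 'close to convex' algorithms, so as a point of reference the sketch below shows the standard ray-casting inclusion test, the classic O(n)-per-query baseline that specialized algorithms of this kind are compared against. The function name and test polygon are invented for illustration.

```python
# Classic ray-casting point-in-polygon test; not the paper's 'close to convex' method.

def point_in_polygon(px, py, polygon):
    """polygon: list of (x, y) vertices of a simple polygon, in order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray from (px, py) to +infinity cross edge (x1,y1)-(x2,y2)?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))   # True
print(point_in_polygon(5, 2, square))   # False
```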

_id 4580
authors Borgerson, B. R. and Johnson, Robert H.
year 1980
title Beyond CAD to Computer Aided Engineering
source (8) p. : ill. Manufacturing Data Systems Incorporated, 1980? includes bibliography
summary Current CAD systems significantly aid the drafting function, and many provide some aid to selected design activities. For the development of mechanical systems, much more can be done. Future systems will aid the interactive engineering process of design, analysis, control, documentation, and manufacturing engineering. Computer-based systems which address this broader spectrum of engineering activities are referred to as 'Computer Aided Engineering,' or 'CAE,' systems. CAE systems will use volumetric techniques to create and evaluate the individual components of a machine design in conjunction with database management schemas to support the interrelationships of the components of machines. This paper focuses on computer assistance for the engineering of mechanical systems
keywords mechanical engineering, CAE, solid modeling, objects
series CADline
last changed 2003/06/02 13:58

_id 0105
authors Bossan, Mario and Ronchi, Alfredo M.
year 1989
title Presentazione Esperienza Didattica del Dipartimento di Ingegneria dei Sistemi Edilizi e Territoriali - Politecnico di Milano
source CAAD: Education - Research and Practice [eCAADe Conference Proceedings / ISBN 87-982875-2-4] Aarhus (Denmark) 21-23 September 1989, pp. 9.8.1-9.8.19
doi https://doi.org/10.52842/conf.ecaade.1989.x.x4i
summary Didactic and research experience developed at the "Dipartimento di Ingegneria dei Sistemi Edilizi e Territoriali" of the Politecnico di Milano in the field of Computer Aided Architectural Design (CAAD). From the early 1980s, initially using at an experimental level the resources available at the departmental computing centre, various applications of CAD techniques in the building sector have been carried out at DISET (Dipartimento di Ingegneria dei Sistemi Edilizi e Territoriali, Politecnico di Milano). During 1983, after a three-year period of experimenting with these systems, it was decided to organise and activate a small computer-aided design centre within the department, the use of which was reserved for dissertation and research students.
series eCAADe
email
last changed 2022/06/07 07:50

_id cff2
authors Callen, John N.
year 1980
title Man-Machine Interfaces and their Constraints within Interactive CAD
source June, 1980. 18, [16] p. includes bibliography
summary As CAD systems become more widely accepted in the design environment, the interaction between the designer and the system, as supported by interfaces, should be analyzed to ensure the most natural means of communication. The purpose of this paper is to present various inherent constraints in users, in CAD software/hardware, and in the design process, so that designers of CAD systems can more clearly recognize these constraints and produce a more suitable interface design
keywords CAD, user interface, design, constraints
series CADline
last changed 2003/06/02 10:24

_id e952
authors Carrara, Gianfranco and Paoluzzi, Alberto
year 1980
title A Systems Approach to Building Program Planning
source Computer Aided Building Design Laboratory Research Report. 80 p. : ill. Rome, Italy: December, 1980. CABD LAB RR. 80-02. includes bibliography
summary In this paper, problems of design performance and of building program planning are considered from the viewpoint of general system theory. After formalizing the concepts of requirement, performance and performance specification, it is shown that a set of building objects (spaces and constructive elements) foreseeable within a program is a semilattice, and that therefore the ordering of constructive elements and spaces corresponds to an ordering of relations among feasible 'behaviors.' A set of feasible behaviors is then presented as an abstract system, eventually discussing some assumptions on which to base an input-state-output representation of it
keywords theory, methods, problem solving, architecture, design, knowledge
series CADline
last changed 2003/06/02 13:58
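
To make the semilattice claim concrete in a purely illustrative way: if building objects are modelled (as an assumption of this sketch, not the paper's formalization) as sets of the requirements they satisfy, then their "meet" under intersection is idempotent, commutative and associative, and closing the family under that meet yields exactly a meet-semilattice. Object names and requirement labels below are invented.

```python
from itertools import product

# Hypothetical building objects, each modelled as the frozenset of requirements it satisfies.
objects = {
    "room":     frozenset({"enclosure", "daylight", "access"}),
    "corridor": frozenset({"enclosure", "access"}),
    "wall":     frozenset({"enclosure"}),
    "window":   frozenset({"daylight"}),
}

def meet(a, b):
    return a & b          # greatest common set of satisfied requirements

# Close the family under the meet operation.
elems = set(objects.values())
while True:
    new = {meet(a, b) for a, b in product(elems, repeat=2)} - elems
    if not new:
        break
    elems |= new

assert all(meet(a, a) == a for a in elems)                                  # idempotent
assert all(meet(a, b) == meet(b, a) for a, b in product(elems, repeat=2))   # commutative
assert all(meet(meet(a, b), c) == meet(a, meet(b, c))
           for a, b, c in product(elems, repeat=3))                         # associative
print(len(elems), "elements, closed under meet: a meet-semilattice")
```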

_id ddss2004_ra-33
id ddss2004_ra-33
authors Diappi, L., P. Bolchim, and M. Buscema
year 2004
title Improved Understanding of Urban Sprawl Using Neural Networks
source Van Leeuwen, J.P. and H.J.P. Timmermans (eds.) Recent Advances in Design & Decision Support Systems in Architecture and Urban Planning, Dordrecht: Kluwer Academic Publishers, ISBN: 14020-2408-8, p. 33-49
summary It is widely accepted that the spatial pattern of settlements is a crucial factor affecting quality of life and environmental sustainability, but few recent studies have attempted to examine the phenomenon of sprawl by modelling the process rather than adopting a descriptive approach. The issue was partly addressed by models of land use and transportation which were mainly developed in the UK and US in the 1970s and 1980s, but the major advances were made in the area of modelling transportation, while very little was achieved in the area of spatial and temporal land use. Models of land use and transportation are well-established tools, based on explicit, exogenously formulated rules within a theoretical framework. The new approaches of artificial intelligence, and in particular systems involving parallel processing (Neural Networks, Cellular Automata and Multi-Agent Systems), defined by the expression “Neurocomputing”, allow problems to be approached in the reverse, bottom-up direction by discovering rules, relationships and scenarios from a database. In this article we examine the hypothesis that territorial micro-transformations occur according to a local logic, i.e. according to use, accessibility, the presence of services, and conditions of centrality, periphericity or isolation of each territorial “cell” relative to its surroundings. The prediction capabilities of different architectures of supervised neural networks are applied to the southern metropolitan area of Milan at two different temporal thresholds and discussed. Starting from data on land use in 1980 and 1994 and by subdividing the area into square cells on an orthogonal grid, the model produces a spatial and functional map of urbanisation in 2008. Applying SOM (Self-Organizing Map) processing to the database allows the typologies of transformation to be identified, i.e. the classes of area which are transformed in the same way and which give rise to territorial morphologies; this is an interesting by-product of the approach.
keywords Neural Networks, Self-Organizing Maps, Land-Use Dynamics, Supervised Networks
series DDSS
last changed 2004/07/03 22:13
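
A minimal sketch of the kind of supervised set-up described above, assuming scikit-learn is available: each grid cell's neighbourhood features at the earlier date are used to predict whether the cell is urbanised at the later date. The features, synthetic data, and network size below are invented placeholders, not those of the Milan study.

```python
# Hedged sketch of supervised land-use transition prediction on a cell grid (synthetic data).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 2000
# Hypothetical per-cell features at the earlier date: built-up share of the neighbourhood,
# road accessibility, distance to services, and a centrality index.
X = rng.random((n_cells, 4))
# Synthetic "urbanised at the later date" label, loosely driven by the features plus noise.
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2]
     + 0.1 * rng.standard_normal(n_cells) > 0.35).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```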

_id d22c
authors Eastman, C.M.
year 1980
title System Facilities for CAD Databases
source 17th Design Automation Conference Proceedings
summary In this paper, an attempt is made to lay out the special needs of design databases, as compared to the facilities provided in conventional database systems now commercially available. The paper starts from a point of commonality and focuses on the limitations and shortcomings commonly found in current database systems. It is impossible and unwise to make universal statements about DBMS capabilities. Instead, the goal is to identify those special features that, by their capability, provide distinctions beyond the general notions of speed and the ratio of logical size to physical size.
series journal paper
email
last changed 2003/05/15 21:22

_id 076e
authors Ennis, G. and Lindsay, M.
year 1999
title VRML Possibilities: The evolution of the Glasgow Model
source Proceedings of International Conference on Virtual Systems and MultiMedia. University of Abertay. Dundee
summary During the 1980s, ABACUS, a research unit at the University of Strathclyde, developed an interest in the ability to model and manipulate large geometrical databases of urban topography. Initially, this interest lay solely in the ability to source, capture and store the relevant data. However, once constructed, these models proved genuinely useful to a wide range of users, and there was soon a demand for more functionality relating to the manipulation not just of the graphics but also of the range of urban attributes. Although a number of improvements were implemented, there were drawbacks to the wide adoption of the software produced. The problems were almost all due to deficiencies in the then-current hardware and software systems available to the professions, and although this strand of research continued to be pursued, most of the development had to be focused on research applications and deployment. However, the recent advent of the Virtual Reality Modelling Language (VRML) standards has rekindled interest in this field, since this language enables many of the issues that have proved problematic in the past to be addressed and solved. The potential now exists to provide wide access to large-scale urban models. This paper focuses on the application of VRML as applied to the 'Glasgow Model'.
series other
email
last changed 2003/04/23 15:50

_id b190
authors Goldberg, Adele and Robson, David
year 1983
title Smalltalk-80: The language and its implementation
source New York, NY: Addison Wesley Co
summary Smalltalk-80 is the classic standard Smalltalk language as described in Smalltalk-80: The Language and Its Implementation by Goldberg and Robson. This book is commonly called "the Blue Book". Squeak implements the dialect of Smalltalk described in this book, but has a different implementation. Smalltalk is a general-purpose, high-level programming language. It was the first original "pure" object-oriented language, but not the first to use the object-oriented concept, which is credited to Simula 67. The explosive growth of object-oriented programming (OOP) technologies began in the early 1980s, with Smalltalk's introduction. Behind it was the idea that the individual human user should be the most important component of any computing system, and that programming should be a natural extension of thinking, as well as a dynamic and evolutionary process consistent with the model of human learning activity. In Smalltalk, these ideas are embodied in a framework for human-computer communication. In a sense, Smalltalk is yet another language like C and Pascal, and programs can be written in Smalltalk that have the look and feel of such conventional languages. The difference lies in the amount of code that can be reduced, the less cryptic syntax, and code that is easier to handle for application maintenance and enhancement. But Smalltalk's most powerful feature is easy code reuse. Smalltalk makes reuse of programs, routines, and subroutines (methods) far easier. Though procedural languages allow reuse too, it is harder to do, and much easier to cheat. It is no surprise that Smalltalk is relatively easy to learn, mainly due to its simple syntax and semantics, as well as its few concepts. Objects, classes, messages, and methods form the basis of programming in Smalltalk. The notion of a human-computer interface also results in Smalltalk promoting the development of safer systems. Errors in Smalltalk may be viewed as objects telling users that confusion exists as to how to perform a desired function.
series other
last changed 2003/04/23 15:14
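
A small Python analogue (not Smalltalk itself) of the object/class/message/method vocabulary the summary describes: "sending a message" is resolved dynamically by selector name, with a fallback in the spirit of Smalltalk's doesNotUnderstand:. The class and method names are invented for illustration.

```python
# Python analogue of Smalltalk-style message sending: an object receives a message
# (a selector plus arguments) and dispatches to the correspondingly named method.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

    def does_not_understand(self, selector):
        return f"#{selector} not understood"

def send(receiver, selector, *args):
    """Look the selector up on the receiver; fall back when no such method exists."""
    method = getattr(receiver, selector, None)
    if callable(method):
        return method(*args)
    return receiver.does_not_understand(selector)

acct = Account()
print(send(acct, "deposit", 100))   # 100
print(send(acct, "withdraw", 50))   # "#withdraw not understood"
```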

_id 0439
authors Kant, Elaine
year 1980
title A Knowledge-Based Approach to Using Efficiency Estimation in Program Synthesis
source 1980? pp. 457-462. includes bibliography
summary This paper describes a system for using efficiency knowledge in program synthesis. The system, called LIBRA, uses a combination of knowledge-based rules and algebraic cost estimates to compare potential program implementations. Efficiency knowledge is used to control the selection of algorithm and data structure implementations and the application of optimizing transformations. Prototypes of programming constructs and of cost estimation techniques are used to simplify the efficiency analysis process and to assist in the acquisition of efficiency knowledge associated with new coding knowledge. LIBRA has been used to guide the selection of implementations for several programs that classify, retrieve information, sort, and generate prime numbers
keywords knowledge base, systems, programming, performance, synthesis, evaluation
series CADline
last changed 1999/02/12 15:08
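
A hedged sketch of the general idea of comparing candidate implementations by algebraic cost estimates; LIBRA's actual rules, prototypes, and estimates are not reproduced here, and the candidate names, cost formulas, and constants below are invented placeholders.

```python
# Illustrative (not LIBRA's) selection of an implementation by algebraic cost estimate.
import math

# Candidate implementations of a membership-query task, each with a cost estimate
# in terms of expected input size n and expected number of queries q.
candidates = {
    "linear_scan_list":   lambda n, q: q * n,                                 # no preprocessing
    "sorted_list_bisect": lambda n, q: n * math.log2(n) + 2 * q * math.log2(n),
    "hash_set":           lambda n, q: 5 * n + 3 * q,                         # higher constants
}

def choose_implementation(n, q):
    """Pick the candidate whose estimated cost is lowest for the expected workload."""
    costs = {name: cost(n, q) for name, cost in candidates.items()}
    return min(costs, key=costs.get), costs

for n, q in [(50, 2), (100_000, 10_000)]:
    best, costs = choose_implementation(n, q)
    rounded = {name: round(value) for name, value in costs.items()}
    print(f"n={n}, q={q}: choose {best}; estimates {rounded}")
```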

_id ddss2006-hb-187
id DDSS2006-HB-187
authors Lidia Diappi and Paola Bolchi
year 2006
title Gentrification Waves in the Inner-City of Milan - A multi agent / cellular automata model based on Smith's Rent Gap theory
source Van Leeuwen, J.P. and H.J.P. Timmermans (eds.) 2006, Innovations in Design & Decision Support Systems in Architecture and Urban Planning, Dordrecht: Springer, ISBN-10: 1-4020-5059-3, ISBN-13: 978-1-4020-5059-6, p. 187-201
summary The aim of this paper is to investigate the gentrification process by applying an urban spatial model of gentrification based on Smith's (1979; 1987; 1996) Rent Gap theory. The rich sociological literature on the topic mainly assumes gentrification to be a cultural phenomenon, namely the result of a demand pressure of the suburban middle and upper class willing to return to the city (Ley, 1980; Lipton, 1977; May, 1996). Little attempt has been made to investigate and build a sound economic explanation of the causes of the process. The Rent Gap theory (RGT) of Neil Smith still represents an important contribution in this direction. At the heart of Smith's argument is the assumption that gentrification takes place because capital returns to the inner city, creating opportunities for residential relocation and profit. This paper illustrates a dynamic model of Smith's theory through a multi-agent/cellular automata system approach (Batty, 2005) developed on a NetLogo platform. A set of behavioural rules is formalised for each agent involved (homeowner, landlord, tenant and developer, and the passive 'dwelling' agent with its rent and level of decay). The simulations show the surge of neighbourhood degradation or renovation and population turnover, starting from different initial states of decay and estate rent values. Consistent with a Self-Organized Criticality approach, the model shows that non-linear interactions at the local level may produce different configurations of the system at the macro level. This paper represents a further development of a previous version of the model (Diappi, Bolchi, 2005). The model proposed here includes some more realistic factors inspired by the features of housing market dynamics in the city of Milan. It includes the shape of the potential rent according to city form and functions, the subdivision into areal submarkets according to current rents, and their maintenance levels. The model has a more realistic visualisation of the city and its form, and is able to show the different dynamics of the emergent neighbourhoods over the last ten years in Milan.
keywords Multi agent systems, Housing market, Gentrification, Emergent systems
series DDSS
last changed 2006/08/29 12:55
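
As a much-simplified, hedged sketch of the rent-gap mechanism that drives such models (not the NetLogo model of the paper, which has several agent types and Milan-specific submarkets): each grid cell carries a potential rent and a capitalized rent that decays over time; when the gap exceeds a threshold the cell is "redeveloped", resetting its capitalized rent and nudging its neighbours. All parameters below are invented.

```python
# Simplified cellular sketch of Smith's rent-gap dynamic on a grid (invented parameters).
import numpy as np

rng = np.random.default_rng(42)
size, steps, gap_threshold, decay = 40, 60, 0.5, 0.02

# Potential rent is highest near the centre of the grid; capitalized rent starts below it.
potential = 1.0 - np.fromfunction(
    lambda i, j: np.hypot(i - size / 2, j - size / 2) / size, (size, size))
capitalized = potential * rng.uniform(0.4, 1.0, (size, size))

total_redeveloped = 0
for _ in range(steps):
    capitalized *= (1.0 - decay)                   # buildings decay, capitalized rent falls
    gap = potential - capitalized
    redevelop = gap > gap_threshold                # developers act where the rent gap is widest
    capitalized[redevelop] = potential[redevelop]  # redevelopment closes the gap
    total_redeveloped += int(redevelop.sum())
    # Renovation spills over: orthogonal neighbours of redeveloped cells appreciate slightly.
    spill = np.zeros_like(capitalized)
    spill[1:, :] += redevelop[:-1, :]; spill[:-1, :] += redevelop[1:, :]
    spill[:, 1:] += redevelop[:, :-1]; spill[:, :-1] += redevelop[:, 1:]
    capitalized += 0.01 * spill * potential

print("redevelopment events over", steps, "steps:", total_redeveloped)
```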
