CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 1537

_id 092b
authors Burton, Warren
year 1977
title Representation of Many-Sided Polygons and Polygonal Lines for Rapid Processing
source Communications of the ACM. March, 1977. vol. 20: pp. 166-171 : ill. includes bibliography
summary A representation for polygons and polygonal lines is described which allows sets of consecutive sides to be collectively examined. The set of sides are arranged in a binary tree hierarchy by inclusion. A fast algorithm for testing the inclusion of a point in a many-sided polygon is given. The speed of the algorithm is discussed for both ideal and practical examples. It is shown that the points of intersection of two polygonal lines can be located by what is essentially a binary tree search. The algorithm and a practical example are discussed. The representation overcomes many of the disadvantages associated with the various fixed- grid methods for representing curves and regions
keywords representation, GIS, mapping, computer graphics, algorithms, information, intersection, curves, polygons, B-rep
series CADline
last changed 1999/02/12 15:07
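The abstract above describes a binary-tree arrangement of polygon sides used to speed up point-inclusion queries. For reference only, the following minimal sketch (illustrative names, not Burton's tree representation itself) shows the plain O(n) even-odd crossing test that such a hierarchy is designed to accelerate.

```python
def point_in_polygon(vertices, p):
    """Standard even-odd (ray crossing) test; O(n) per query.

    `vertices` is a list of (x, y) tuples describing a simple polygon.
    Points lying exactly on an edge are not handled specially here.
    """
    x, y = p
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge cross the horizontal line through p?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Unit square: one point inside, one outside.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(square, (0.5, 0.5)), point_in_polygon(square, (2, 2)))  # True False
```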

_id sigradi2006_e028c
id sigradi2006_e028c
authors Griffith, Kenfield; Sass, Larry and Michaud, Dennis
year 2006
title A strategy for complex-curved building design: Design structure with Bi-lateral contouring as integrally connected ribs
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 465-469
summary Shapes in designs created by architects such as Gehry Partners (Shelden, 2002), Foster and Partners, and Kohn Pedersen Fox rely on computational processes for rationalizing complex geometry for building construction. Rationalization is the reduction of a complete geometric shape into discrete components. Unfortunately, for many architects rationalization is limited to reducing solid models to surfaces or to data on spreadsheets for contractors to follow. Rationalized models produced by the firms listed above do not offer strategies for construction or digital fabrication. For the physical production of a CAD description, an alternative to the rationalized description is needed. This paper examines the coupling of digital rationalization and digital fabrication with physical mockups (Rich, 1989). Our aim is to explore complex relationships found in early- and mid-stage design phases when digital fabrication is used to produce design outcomes. Results of our investigation will aid architects and engineers in addressing the complications found in the translation of design models embedded with precision to constructible geometries. We present an algorithmically based approach to design rationalization that supports physical production as well as surface production of desktop models. Our approach is an alternative to conventional rapid prototyping, which builds objects by assembly of laterally sliced contours from a solid model. We explored an improved product description for rapid manufacture: bilateral contouring for structure and panelling for strength (Kolarevic, 2003). An infrastructure typically found within the aerospace, automotive, and shipbuilding industries, bilateral contouring is an organized matrix of horizontal and vertical interlocking ribs evenly distributed along a surface. These structures are monocoque and semi-monocoque assemblies composed of structural ribs and skinning attached by rivets and adhesives. The bi-lateral contouring discussed here is, by contrast, an interlocking matrix of plywood strips having integral joinery for assembly. Unlike traditional methods of building representations through malleable materials for creating tangible objects (Friedman, 2002), this approach constructs with the implication of building life-size solutions. Three algorithms are presented as examples of rationalized design production with physical results. The first algorithm [Figure 1] deconstructs an initial 2D curved form into ribbed slices to be assembled through integral connections constructed as part of the rib solution. The second algorithm [Figure 2] deconstructs curved forms of greater complexity. The algorithm walks along the surface, extracting surface information along horizontal and vertical axes and saving it, resulting in a ribbed structure of slight double curvature. The final algorithm [Figure 3] is expressed as plug-in software for Rhino that deconstructs a design into components for assembly as rib structures. The plug-in also translates geometries to a flattened position for 2D fabrication. The software demonstrates the full scope of the research exploration. Studies published by Dodgson argued that innovation technology (IvT) (Dodgson, Gann, Salter, 2004) helped in solving projects like the Guggenheim in Bilbao, the Leaning Tower of Pisa in Italy, and the Millennium Bridge in London. Similarly, the method discussed in this paper will aid in solving physical production problems with complex building forms. References: Bentley, P.J. (Ed.),
Evolutionary Design by Computers. Morgan Kaufmann Publishers Inc., San Francisco, CA, 1-73 Celani, G, (2004) “From simple to complex: using AutoCAD to build generative design systems” in: L. Caldas and J. Duarte (org.) Implementation issues in generative design systems. First Intl. Conference on Design Computing and Cognition, July 2004 Dodgson M, Gann D.M., Salter A, (2004), “Impact of Innovation Technology on Engineering Problem Solving: Lessons from High Profile Public Projects,” Industrial Dynamics, Innovation and Development, 2004 Dritsas, (2004) “Design Operators.” Thesis. Massachusetts Institute of Technology, Cambridge, MA, 2004 Friedman, M, (2002), Gehry Talks: Architecture + Practice, Universe Publishing, New York, NY, 2002 Kolarevic, B, (2003), Architecture in the Digital Age: Design and Manufacturing, Spon Press, London, UK, 2003 Opas J, Bochnick H, Tuomi J, (1994), “Manufacturability Analysis as a Part of CAD/CAM Integration”, Intelligent Systems in Design and Manufacturing, 261-292 Rudolph S, Alber R, (2002), “An Evolutionary Approach to the Inverse Problem in Rule-Based Design Representations”, Artificial Intelligence in Design ’02, 329-350 Rich M, (1989), Digital Mockup, American Institute of Aeronautics and Astronautics, Reston, VA, 1989 Schön, D., The Reflective Practitioner: How Professionals Think in Action. Basic Books. 1983 Shelden, D, (2003), “Digital Surface Representation and the Constructability of Gehry’s Architecture.” Diss. Massachusetts Institute of Technology, Cambridge, MA, 2003 Smithers T, Conkie A, Doheny J, Logan B, Millington K, (1989), “Design as Intelligent Behaviour: An AI in Design Thesis Programme”, Artificial Intelligence in Design, 293-334 Smithers T, (2002), “Synthesis in Designing”, Artificial Intelligence in Design ’02, 3-24 Stiny, G, (1977), “Ice-ray: a note on the generation of Chinese lattice designs” Environment and Planning B, volume 4, pp. 89-98
keywords Digital fabrication; bilateral contouring; integral connection; complex-curve
series SIGRADI
email
last changed 2016/03/10 09:52

_id 20a5
authors Kieburtz, Richard B.
year 1977
title Structured Programming and Problem- Solving with PASCAL
source xiii, 348 p. : ill. Englewood cliffs, New Jersey: Prentice-Hall, Inc., 1977. includes index
summary An introduction emphasizing the problem-solving approach to computing, progressing from the development of a systematic and disciplined approach to the discovery of algorithms. Includes examples and exercises
keywords PASCAL, programming, languages, problem solving, education
series CADline
last changed 2003/06/02 13:58

_id e5a1
authors Korf, R.E.
year 1977
title A Shape Independent Theory of Space Allocation
source Environment and Planning B. 1977. vol. 4: pp. 37-50 : ill. includes bibliography
summary A theory of space allocation in architectural design is presented. The theory is completely independent of the shapes of the spaces. The problem is broken down into four hierarchical levels of abstraction. The top level is the number of spaces. The second level consists of the adjacencies between the spaces, represented as abstract graphs. The third level is concerned with the different planar embeddings or geometries of the adjacency graphs. The bottom level is represented by labelled bubble diagrams. At each level, the number of design alternatives is finite and it is shown how they can be systematically enumerated
keywords space allocation, synthesis, architecture, design, graphs, layout, algorithms
series CADline
last changed 2003/06/02 13:58
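As a rough illustration (not Korf's own procedure), the sketch below enumerates the second level of abstraction described in the abstract above: every possible set of adjacencies between n labelled spaces, a finite set of alternatives. Names are illustrative, and planarity and embeddings (the third level) are ignored.

```python
from itertools import combinations

def adjacency_graphs(n_spaces):
    """Yield every subset of space pairs as a candidate adjacency graph.

    Illustrative only: for n labelled spaces there are 2 ** (n * (n - 1) // 2)
    such graphs; planar embeddings and bubble diagrams are not considered.
    """
    pairs = list(combinations(range(n_spaces), 2))
    for k in range(len(pairs) + 1):
        for edges in combinations(pairs, k):
            yield edges

# For 3 spaces there are 2**3 == 8 alternative adjacency graphs.
print(sum(1 for _ in adjacency_graphs(3)))  # 8
```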

_id b0e0
authors Martens, Bob
year 1991
title THE ERECTION OF A FULL-SCALE LABORATORY AT THE TECHNICAL UNIVERSITY OF VIENNA
source Proceedings of the 3rd European Full-Scale Modelling Conference / ISBN 91-7740044-5 / Lund (Sweden) 13-16 September 1990, pp. 44-52
summary Since 1977 the Institut für Raumgestaltung ('Architectural Styling of Space') had been trying to set up a full-scale laboratory designed for teaching and research purposes. The aim was further invigorated by the International Architecture Symposium "Man and Architectural Space" organized by our institute (1984).
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 15:17

_id ecec
authors Requicha, Aristides A.G. and Voelcker, H.B.
year 1977
title Constructive Solid Geometry
source November, 1977. [3] 36 p. : ill. includes bibliography: p. 31-33
summary The term 'constructive solid geometry' denotes a class of schemes for describing solid objects as compositions (usually 'additions' and 'subtractions') of primitive solid 'building blocks.' The notion of adding and subtracting solids has been used by mechanical designers and others for generations, but attempts to embody it in computer-based modelling systems have been hindered by the absence of a firm mathematical foundation. This paper provides such a foundation by drawing on established results in modern axiomatic geometry and point set topology. The paper also initiates a broader discussion, to be continued in subsequent papers, of three seminal topics: mathematical modelling of solids, representation of solids, and calculation of geometrical properties of solids
keywords solid modeling, computational geometry, geometric modeling, CSG, topology, mathematics, representation
series CADline
last changed 2003/06/02 13:58
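The abstract defines constructive solid geometry as compositions of primitive solids under operations such as 'addition' and 'subtraction'. The following is an illustrative point-membership sketch over a tiny CSG tree (class and variable names are ours, not the paper's); it deliberately ignores the regularization questions that the paper's mathematical foundation addresses.

```python
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float
    cy: float
    cz: float
    r: float
    def contains(self, p):
        x, y, z = p
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 + (z - self.cz) ** 2 <= self.r ** 2

@dataclass
class Union:          # 'addition' of two solids
    a: object
    b: object
    def contains(self, p):
        return self.a.contains(p) or self.b.contains(p)

@dataclass
class Intersection:
    a: object
    b: object
    def contains(self, p):
        return self.a.contains(p) and self.b.contains(p)

@dataclass
class Difference:     # 'subtraction' of b from a
    a: object
    b: object
    def contains(self, p):
        return self.a.contains(p) and not self.b.contains(p)

# A ball with a smaller ball subtracted from it.
solid = Difference(Sphere(0, 0, 0, 1.0), Sphere(0.5, 0, 0, 0.5))
print(solid.contains((-0.5, 0, 0)))  # True  (inside the big ball only)
print(solid.contains((0.6, 0, 0)))   # False (inside the subtracted ball)
```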

_id 05f0
authors Ball, A.A.
year 1977
title CONSURF Part 3 : How the Program Is Used
source Computer Aided Design. January, 1977. vol. 9: pp. 9-12 : ill. includes bibliography
summary This paper is the last of a series describing the surface lofting program CONSURF, and outlines how the program is used. The overall approach is geometrical and is modeled closely on manual lofting. The program user must have a practical understanding of shape and be able to visualize the surfaces he defines. He must also be numerate, but he does not need to understand the surface mathematics, which is confined to the software. In this paper CONSURF is considered as a production program and its contributions to the user are described
keywords mechanical engineering, curved surfaces, lofting
series CADline
last changed 2003/06/02 13:58

_id ddssar0206
id ddssar0206
authors Bax, M.F.Th. and Trum, H.M.G.J.
year 2002
title Faculties of Architecture
source Timmermans, Harry (Ed.), Sixth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings Avegoor, the Netherlands), 2002
summary In order to be inscribed in the European Architect’s register, the study program leading to the diploma ‘Architect’ has to meet the criteria of the EC Architect’s Directive (1985). The criteria are enumerated in 11 principles of Article 3 of the Directive. The Advisory Committee, established by the European Council, was given the task of examining such diplomas in cases where doubts are raised by other Member States. To carry out this task a matrix was designed as an independent interpreting framework that mediates between the principles of Article 3 and the actual study program of a faculty. Such a tool was needed because of inconsistencies in the list of principles, differences between linguistic versions of the Directive, and quantification problems with the time devoted to the principles in the study programs. The core of the matrix, its headings, is a categorisation of the principles on a higher level of abstraction in the form of a taxonomy of domains and corresponding concepts. Filling in the matrix means that each study element of the study programs is analysed according to its content in terms of domains; the summation of study time devoted to the various domains results in a so-called ‘profile of a faculty’. Judgement of that profile takes place by a committee of peers. The domains of the taxonomy are intrinsically the same as the concepts and categories needed for the description of an architectural design object: the faculties of architecture. This correspondence relates the taxonomy to the field of design theory and philosophy. The taxonomy is an application of Domain theory. This theory, developed by the authors since 1977, takes the view that the architectural object can only be described fully as an integration of all types of domains. The theory supports the idea of a participatory and interdisciplinary approach to design, which proved to be rewarding both from a scientific and a social point of view. All types of domains have in common that they are measured in three dimensions: form, function and process, connecting the material aspects of the object with its social and procedural aspects. In the taxonomy the function dimension is emphasised. It will be argued in the paper that the taxonomy is a categorisation following the pragmatistic philosophy of Charles Sanders Peirce. It will be demonstrated as well that the taxonomy is easy to handle, by giving examples of its application in various countries in the last 5 years. The taxonomy proved to be an adequate tool for judgement of study programs and their subsequent improvement, as constituted by the faculties of a Faculty of Architecture. The matrix is described as the result of theoretical reflection and practical application of a matrix already in use since 1995. The major improvement of the matrix is its direct connection with Peirce’s universal categories and the self-explanatory character of its structure. The connection with Peirce’s categories gave the matrix a more universal character, which enables application in other fields where the term ‘architecture’ is used as a metaphor for artefacts.
series DDSS
last changed 2003/11/21 15:16

_id ddssar0003
id ddssar0003
authors Bax, Th., Trum, H. and Nauta, D.jr.
year 2000
title Implications of the philosophy of Ch. S. Peirce for interdisciplinary design: developments in domain theory
source Timmermans, Harry (Ed.), Fifth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Nijkerk, the Netherlands)
summary The subject of this paper is the establishment of a connection between categorical pragmatism, developed by Charles Sanders Peirce (1839-1914) through phenomenological analysis, and Domain Theory, developed by Thijs Bax and Henk Trum since 1977. The first is a phenomenological branch of philosophy, the second a theory of interdisciplinary design. A connection seems possible because of the similarity in form (three-part divisions with an anarcho-hierarchical character), the non-absolute conception of functionality, and the interdisciplinary and procedural (participation-based action) character of both theories.
series DDSS
last changed 2003/11/21 15:16

_id ed51
authors Bergeron, Philippe
year 1986
title A General Version of Crow's Shadow Volumes
source IEEE Computer Graphics and Applications September, 1986. vol. 6: pp. 17-28 : col. ill. includes bibliography.
summary In 1977 Frank Crow introduced a new class of algorithms for the generation of shadows. His technique, based on the concept of shadow volumes, assumes a polygonal database and a constrained environment. For example, polyhedrons must be closed, and polygons must be planar. This article presents a new version of Crow's algorithm, developed at the Universite de Montreal, which attempts a less constrained environment. The method has allowed the handling of both open and closed models and nonplanar polygons with the viewpoint anywhere, including any shadow volume. It does not, however, sacrifice the essential features of Crow's original version: penetration between polygons is allowed, and any number of light sources can be defined anywhere in 3D space, including the view volume and any shadow volume. The method has been used successfully in the film Tony de Peltrie and is easily incorporated into an existing scan-line, hidden-surface algorithm
keywords algorithms, shadowing, polygons, computer graphics
series CADline
last changed 1999/02/12 15:07

_id 4489
authors Blinn, J.F.
year 1977
title Models of light reflection for computer synthesised pictures
source Computer Graphics, 11 2, 192-198
summary Bui-Tuong Phong introduced his illumination model in his 1973 dissertation and published it in the 1975 paper "Illumination for Computer Generated Pictures". Phong's model is a local illumination model, which means only direct reflections are taken into account. Light that bounces off more than one surface before reaching the eye is not accounted for. While this may not be very realistic, it allows the lighting to be computed efficiently. To handle indirect lighting properly, a global illumination method such as radiosity is required, which is much more expensive. In addition to Phong's basic lighting equation, we will look at a variation invented by Jim Blinn. Blinn changed the way the specular term is calculated, making the computations slightly cheaper. Blinn published his approach in the paper "Models of Light Reflection for Computer Synthesised Pictures" in 1977.
series journal paper
last changed 2003/04/23 15:14
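The summary notes that Blinn altered the specular calculation to make it cheaper. Below is a minimal sketch of the commonly cited difference, assuming unit-length surface normal N, light direction L and view direction V (all pointing away from the surface); it is illustrative only and omits the microfacet distribution material of the 1977 paper.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def phong_specular(N, L, V, shininess):
    # Phong: reflect L about N and compare the reflection with the view direction.
    R = tuple(2 * dot(N, L) * n - l for n, l in zip(N, L))
    return max(dot(R, V), 0.0) ** shininess

def blinn_specular(N, L, V, shininess):
    # Blinn: use the half-vector between L and V instead of the reflection vector.
    H = normalize(tuple(l + v for l, v in zip(L, V)))
    return max(dot(N, H), 0.0) ** shininess
```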

_id 2168
authors Bobrow, Daniel G. and Winograd, Terry
year 1977
title An Overview of KRL, a Knowledge Representation Language
source Cognitive Science. 1977. vol. 1: pp. 3-46. includes bibliography
summary This paper describes KRL, a Knowledge Representation Language designed for use in understander systems. It outlines both the general concepts which underlie the research and the details of KRL-O, an experimental implementation of some of these concepts. KRL is an attempt to integrate procedural knowledge with a broad base of declarative forms. These forms provide a variety of ways to express the logical structure of the knowledge, in order to give flexibility in associating procedures (for memory and reasoning) with specific pieces of knowledge, and to control the relative accessibility of different facts and descriptions. The formalism for declarative knowledge is based on structured conceptual objects with associated descriptions. These objects form a network of memory units with several different sorts of linkages, each having well-specified implications for the retrieval process. Procedures can be associated directly with the internal structure of a conceptual object. This procedural attachment allows the steps for a particular operation to be determined by characteristics of the specific entities involved. The control structure of KRL is based on the belief that the next generation of intelligent programs will integrate data-directed and goal-directed processing by using multiprocessing. It provides for a priority-ordered multiprocess agenda with explicit (user-provided) strategies for scheduling and resource allocation. It provides procedure directories which operate along with process frameworks to allow procedural parametrization of the fundamental system processes for building, comparing, and retrieving memory structures. Future development of KRL will include integrating procedure definition with the descriptive formalism
keywords knowledge, representation, languages, AI
series CADline
last changed 2003/06/02 10:24

_id aef9
id aef9
authors Brown, A., Knight, M. and Berridge, P. (Eds.)
year 1999
title Architectural Computing from Turing to 2000 [Conference Proceedings]
source eCAADe Conference Proceedings / ISBN 0-9523687-5-7 / Liverpool (UK) 15-17 September 1999, 773 p.
doi https://doi.org/10.52842/conf.ecaade.1999
summary The core theme of this book is the idea of looking forward to where research and development in Computer Aided Architectural Design might be heading. The contention is that we can do so most effectively by using the developments that have taken place over the past three or four decades in Computing and Architectural Computing as our reference point; the past informing the future. The genesis of this theme is the fact that a new millennium is about to arrive. If we are ruthlessly objective the year 2000 holds no more significance than any other year; perhaps we should, instead, be preparing for the year 2048 (2k). In fact, whatever the justification, it is now timely to review where we stand in terms of the development of Architectural Computing. This book aims to do that. It is salutary to look back at what writers and researchers have said in the past about where they thought that the developments in computing were taking us. One of the common themes picked up in the sections of this book is the developments that have been spawned by the global linkup that the worldwide web offers us. In the past decade the scale and application of this new medium of communication has grown at a remarkable rate. There are few technological developments that have become so ubiquitous, so quickly. As a consequence there are particular sections in this book on Communication and the Virtual Design Studio which reflect the prominence of this new area, but examples of its application are scattered throughout the book. In 'Computer-Aided Architectural Design' (1977), Bill Mitchell did suggest that computer network accessibility from expensive centralised locations to affordable common, decentralised computing facilities would become more commonplace. But most pundits have been taken by surprise by just how powerful the explosive cocktail of networks, email and hypertext has proven to be. Each of the ingredients is interesting in its own right but together they have presented us with genuinely new ways of working. Perhaps, with foresight we can see what the next new explosive cocktail might be.
series eCAADe
email
more http://www.ecaade.org
last changed 2022/06/07 07:49

_id 22ce
authors Cahn, Deborah U., Johnston, Nancy E. and Johnston, William E.
year 1977
title A Response to the 1977 GSPC Core Graphic System
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 57-62. includes bibliography
summary This paper responds to the 1977 Core Graphics System of SIGGRAPH's Graphics Standards Planning Committee (GSPC). The authors are interested in low-level device-independent graphics for applications doing data representation and annotation. The level structure and bias in the core system toward display list processor graphics are criticized. Specific issues discussed include display contexts, attributes, current position, 3-dimensional graphics, area filling, and graphics input
keywords computer graphics, standards
series CADline
last changed 2003/06/02 13:58

_id 490d
authors De Groot, D.J.
year 1977
title Designing Curved Surfaces with Analytical Functions
source Computer Aided Design. January, 1977. vol. 9: pp. 3-8 : ill
summary Shaping and computer-interactive design of curved surfaces of industrial objects, where artistic freedom is allowed for the outward appearance, is a time-consuming job particularly when feeding the computer program with the necessary geometrical input data. A design method is presented together with practical results of designed surfaces composed of simple analytical functions. Human input of geometrical and artistic data has been minimized. Smoothness and fairness are created by the surface composing functions
keywords curved surfaces, representation, CAD, systems
series CADline
last changed 2003/06/02 13:58

_id sigradi2009_774
id sigradi2009_774
authors de Souza, Raphael Argento; André Soares Monat
year 2009
title Visualização da Informação em meio telejornalístico: Uma abordagem sob a ótica do design [Information Visualization in the news television: An approach under the design sight]
source SIGraDi 2009 - Proceedings of the 13th Congress of the Iberoamerican Society of Digital Graphics, Sao Paulo, Brazil, November 16-18, 2009
summary This article proposes a classification, from the information visualization point of view, of infographics broadcast on Brazilian news television. To achieve this purpose, these so-called motion graphics were analysed on the basis of three main authors: Tufte (1997), Bertin (1977) and Spence (2007), whose theories are compared in this article with the digital means of motion graphics. With this theoretical foundation and the analysis of two hundred motion graphics broadcast on Brazilian news television, we arrived at a classification which covers every type of these motion graphics, in the hope that it becomes a basis for the study of such projects.
keywords Design; information visualization; television infographics, motion graphics; information design
series SIGRADI
email
last changed 2016/03/10 09:50

_id 76ce
authors Grimson, W.
year 1985
title Computational Experiments with a Feature Based Stereo Algorithm
source IEEE Trans. Pattern Anal. Machine Intell., Vol. PAMI-7, No. 1
summary Computational models of the human stereo system can provide insight into general information processing constraints that apply to any stereo system, either artificial or biological. In 1977, Marr and Poggio proposed one such computational model, that was characterized as matching certain feature points in difference-of-Gaussian filtered images, and using the information obtained by matching coarser resolution representations to restrict the search space for matching finer resolution representations. An implementation of the algorithm and its testing on a range of images was reported in 1980. Since then a number of psychophysical experiments have suggested possible refinements to the model and modifications to the algorithm. As well, recent computational experiments applying the algorithm to a variety of natural images, especially aerial photographs, have led to a number of modifications. In this article, we present a version of the Marr-Poggio-Grimson algorithm that embodies these modifications and illustrate its performance on a series of natural images.
series journal paper
last changed 2003/04/23 15:14

_id ecaade2009_177
id ecaade2009_177
authors Göttig, Roland; Braunes, Jörg
year 2009
title Building Survey in Combination with Building Information Modelling for the Architectural Planning Process
source Computation: The New Realm of Architectural Design [27th eCAADe Conference Proceedings / ISBN 978-0-9541183-8-9] Istanbul (Turkey) 16-19 September 2009, pp. 69-74
doi https://doi.org/10.52842/conf.ecaade.2009.069
wos WOS:000334282200007
summary The architectural planning process is influenced by social, cultural and technical aspects (Alexander, 1977). When focussing on computer-based planning for retrofitting or modification of buildings, it becomes clear that many different data formats are used depending on a great variety of planning methods. Moreover, if building information models are utilized, they still lack some essential criteria. It is rarely possible to attach individual data from survey systems. This paper will show both a way to add data from building survey systems, as an example of special data attachment to IFC files, and how to utilize content management systems for IFC files, derived plans, lists of building components, and other data necessary in a planning process.
keywords Planning process, building information modeling, IFC, building survey systems, content management systems
series eCAADe
email
last changed 2022/06/07 07:50

_id a4b7
authors Lee, D. T. and Preparata, Franco P.
year 1977
title Location of a Point in a Planar Subdivision and its Applications
source SIAM Journal of Computing. September, 1977. vol. 6: pp. 594-606 : ill. includes bibliography
summary Given a subdivision of the plane induced by a planar graph with n vertices, this paper considers the problem of identifying which region of the subdivision contains a given test point. A search algorithm, called a point-location algorithm, which operates on a suitably preprocessed data structure is presented. The search runs in time at most O((log n)^2), while the preprocessing task runs in time at most O(n log n) and requires O(n) storage. The methods are quite general, since an arbitrary subdivision can be transformed in time at most O(n log n) into one to which the preprocessing procedure is applicable. This solution of the point location problem yields interesting and efficient solutions of other geometric problems, such as spatial convex inclusion and inclusion in an arbitrary polygon
keywords computational geometry, algorithms, analysis, graphs, point inclusion
series CADline
last changed 2003/06/02 13:58
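The abstract lists spatial convex inclusion as one application of fast point location. The sketch below is not the paper's O((log n)^2) chain-method structure; it is the standard O(log n) binary-search test for point inclusion in a convex polygon, given only to illustrate logarithmic-time location by binary search. The example polygon and names are ours.

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive when o -> a -> b turns left.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inside_convex(poly, p):
    """Point inclusion in a convex polygon in O(log n).

    `poly` lists the vertices in counter-clockwise order; boundary points
    are treated as inside.
    """
    n = len(poly)
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
        return False
    # Binary search for the wedge (poly[0], poly[lo], poly[lo + 1]) containing p.
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo + 1], p) >= 0

hexagon = [(2, 0), (4, 1), (4, 3), (2, 4), (0, 3), (0, 1)]
print(inside_convex(hexagon, (2, 2)), inside_convex(hexagon, (5, 5)))  # True False
```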

_id ddss2006-hb-187
id DDSS2006-HB-187
authors Lidia Diappi and Paola Bolchi
year 2006
title Gentrification Waves in the Inner-City of Milan - A multi agent / cellular automata model based on Smith's Rent Gap theory
source Van Leeuwen, J.P. and H.J.P. Timmermans (eds.) 2006, Innovations in Design & Decision Support Systems in Architecture and Urban Planning, Dordrecht: Springer, ISBN-10: 1-4020-5059-3, ISBN-13: 978-1-4020-5059-6, p. 187-201
summary The aim of this paper is to investigate the gentrification process by applying an urban spatial model of gentrification, based on Smith's (1979; 1987; 1996) Rent Gap theory. The rich sociological literature on the topic mainly assumes gentrification to be a cultural phenomenon, namely the result of a demand pressure of the suburban middle and upper class willing to return to the city (Ley, 1980; Lipton, 1977; May, 1996). Little attempt has been made to investigate and build a sound economic explanation of the causes of the process. The Rent Gap theory (RGT) of Neil Smith still represents an important contribution in this direction. At the heart of Smith's argument is the assumption that gentrification takes place because capital returns to the inner city, creating opportunities for residential relocation and profit. This paper illustrates a dynamic model of Smith's theory through a multi-agent / cellular automata system approach (Batty, 2005) developed on a NetLogo platform. A set of behavioural rules for each agent involved (homeowner, landlord, tenant and developer, and the passive 'dwelling' agent with its rent and level of decay) is formalised. The simulations show the surge of neighbourhood degradation or renovation and population turnover, starting from different initial states of decay and estate rent values. Consistent with a Self-Organized Criticality approach, the model shows that non-linear interactions at the local level may produce different configurations of the system at the macro level. This paper represents a further development of a previous version of the model (Diappi, Bolchi, 2005). The model proposed here includes some more realistic factors inspired by the features of housing market dynamics in the city of Milan. It includes the shape of the potential rent according to city form and functions, the subdivision into areal submarkets according to current rents, and their maintenance levels. The model has a more realistic visualisation of the city and its form, and is able to show the different dynamics of the emergent neighbourhoods in the last ten years in Milan.
keywords Multi agent systems, Housing market, Gentrification, Emergent systems
series DDSS
last changed 2006/08/29 12:55
