CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 204

_id 6cfd
authors Harfmann, Anton C. and Majkowski, Bruce R.
year 1992
title Component-Based Spatial Reasoning
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 103-111
doi https://doi.org/10.52842/conf.acadia.1992.103
summary The design process and ordering of individual components through which architecture is realized rely on the use of abstract "models" to represent a proposed design. The emergence and use of these abstract "models" for building representation have a long history and tradition in the field of architecture. Models have been made and continue to be made for the patron, occasionally the public, and as a guide for the builders. Models have also been described as a means to reflect on the design and to allow the design to be in dialogue with its creator.

The term "model" in the above paragraph has been used in various ways and in this context is defined as any representation through which design intent is expressed. This includes accurate/ rational or abstract drawings (2- dimensional and 3-dimensional), physical models (realistic and abstract) and computer models (solid, void and virtual reality). The various models that fall within the categories above have been derived from the need to "view" the proposed design in various ways in order to support intuitive reasoning about the proposal and for evaluation purposes. For example, a 2-dimensional drawing of a floor plan is well suited to support reasoning about spatial relationships and circulation patterns while scaled 3-dimensional models facilitate reasoning about overall form, volume, light, massing etc. However, the common denominator of all architectural design projects (if the intent is to construct them in actual scale, physical form) are the discrete building elements from which the design will be constructed. It is proposed that a single computational model representing individual components supports all of the above "models" and facilitates "viewing"' the design according to the frame of reference of the viewer.

Furthermore, it is the position of the authors that all reasoning stems from this rudimentary level of modeling individual components.

The concept of component representation has been derived from the fact that a "real" building (made from individual components such as nuts, bolts and bar joists) can be "viewed" differently according to the frame of reference of the viewer. Each individual has the ability to infer and abstract from the assemblies of components a variety of different "models", ranging from a visceral, experiential understanding to a very technical, physical understanding. The component concept has already proven to be a valuable tool for reasoning about assemblies, interferences between components, tracing of load paths and numerous other component-related applications. In order to validate the component-based modeling concept, this effort will focus on the development of spatial understanding from the component-based model. The discussions will, therefore, center on the representation of individual components and the development of spatial models and spatial reasoning from the component model. In order to frame the argument that spatial modeling and reasoning can be derived from the component representation, a review of the component-based modeling concept will precede the discussions of spatial issues.

series ACADIA
email
last changed 2022/06/07 07:49

_id 8cf3
authors Müller, Volker
year 1992
title Reint-Ops: A Tool Supporting Conceptual Design
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 221-232
doi https://doi.org/10.52842/conf.acadia.1992.221
summary Reasoning is influenced by our perception of the environment. New aspects of our environment help to provoke new thoughts. Thus, changes in what is perceived can be assumed to stimulate the generation of new ideas as well. In CAD, computerized three-dimensional models of physical entities are produced. Their representation on the monitor is determined by our viewing position and by the rendering method used. Wire-frame representations of views especially lend themselves to a variety of readings, due to coincident and intersecting lines. Methods by which wire-frame views can be processed to extract the shapes that they contain have been investigated and developed. The extracted shapes can be used as a base for the generation of derived entities through various operations that are called Reinterpretation Operations. They have been implemented as a prototypical extension (named Reint-Ops) to an existing modeling shell. Reint-Ops is a highly interactive exploratory CAD tool, which allows the user to customize the criteria and factors used in the reinterpretation process. This tool can be regarded as having the potential to support conceptual design investigations.
keywords CAD, Three-dimensional Model, Wireframe Representation, Shape Extraction, Generation of Derived Entities, Reinterpretation, Conceptual Design
series ACADIA
email
last changed 2022/06/07 07:59

_id 2c22
authors O'Neill, Michael J.
year 1992
title Neural Network Simulation as a Computer-Aided Design Tool for Predicting Wayfinding Performance
source New York: John Wiley & Sons, 1992. pp. 347-366: ill., includes bibliography
summary Complex public facilities such as libraries, hospitals, and governmental buildings often present problems to users who must find their way through them. Research shows that difficulty in wayfinding has costs in terms of time, money, public safety, and stress that results from being lost. While a wide range of architectural research supports the notion that ease of wayfinding should be a criterion for good design, architects have no method for evaluating how well their building designs will support the wayfinding task. People store and retrieve information about the layout of the built environment in a knowledge representation known as the cognitive map. People depend on the information stored in the cognitive map to find their way through buildings. Although there are numerous simulations of the cognitive map, the mechanisms of these models are not constrained by what is known about the neurophysiology of the brain. Rather, these models incorporate search mechanisms that act on semantically encoded information about the environment. In this paper the author describes the evaluation and application of an artificial neural network simulation of the cognitive map as a means of predicting wayfinding behavior in buildings. This simulation is called NAPS-PC (Network Activity Processing Simulator--PC version). This physiologically plausible model represents knowledge about the layout of the environment through a network of inter-connected processing elements. The performance of NAPS-PC was evaluated against actual human wayfinding performance. The study found that the simulation generated behavior that matched the performance of human participants. After the validation, NAPS-PC was modified so that it could read environmental information directly from AutoCAD (a popular micro-computer-based CAD software package) drawing files, and perform 'wayfinding' tasks based on that environmental information. This prototype tool, called AutoNet, is conceptualized as a means of allowing designers to predict the wayfinding performance of users in a building before it is actually built.
keywords simulation, cognition, neural networks, evaluation, floor plans, applications, wayfinding, layout, building
series CADline
last changed 2003/06/02 13:58

_id 46c7
authors Ozel, Filiz
year 1992
title Data Modeling Needs of Life Safety Code (LSC) Compliance Applications
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 177-185
doi https://doi.org/10.52842/conf.acadia.1992.177
summary One of the most complex code compliance issues originates from the conformance of designs to the Life Safety Code (NFPA 101). The development of computer-based code compliance checking programs has attracted the attention of building researchers and practitioners alike. These studies represent a number of approaches, ranging from CAD-based procedural approaches to rule-based, non-graphic ones, but they do not address the interaction of the rule base of such systems with the graphic databases that define the geometry of architectural objects. Automatic extraction of the attributes and the configuration of building systems requires "architectural object - graphic entity" data models that allow access and retrieval of the necessary data for code compliance checking. This study aims to specifically focus on the development of such a data model through the use of the AutoLISP feature of the AutoCAD (Autodesk Inc.) graphic system. This data model is intended to interact with a Life Safety Code rule base created through the Level5-Object (Focus Inc.) expert system.

Assuming the availability of a more general building data model, one must define the life and fire safety features of a building before any automatic checking can be performed. Object-oriented data structures are beginning to be applied to design objects, since they allow the type versatility demanded by design applications. As one generates a functional view of the main data model, the software user must provide domain-specific information. A functional view is defined as the process of generating domain-specific data structures from a more general-purpose data model, such as defining egress routes from wall or room object data structures. Typically, in the early design phase of a project, these are related to the emergency egress design features of a building. Certain decisions, such as where to provide sprinkler protection or the location of protected egress ways, must be made early in the process.

series ACADIA
email
last changed 2022/06/07 08:00

_id 1992
authors Russell, Peter
year 2002
title Using Higher Level Programming in Interdisciplinary Teams as a Means of Training for Concurrent Engineering
source Connecting the Real and the Virtual - design e-ducation [20th eCAADe Conference Proceedings / ISBN 0-9541183-0-8] Warsaw (Poland) 18-20 September 2002, pp. 14-19
doi https://doi.org/10.52842/conf.ecaade.2002.014
summary The paper explains a didactical method for training students that has been run three times to date. The premise of the course is to combine students from different faculties into interdisciplinary teams. These teams then have a complex problem to resolve within an extremely short time span. In light of recent works from Joy and Kurzweil, the theme Robotics was chosen as an exercise that is timely, interesting and related, but not central to the studies of the various faculties. In groups of 3 to 5, students from the faculties of architecture, computer science and mechanical engineering are entrusted to design, build and program a robot which must successfully execute a prescribed set of actions in a competitive atmosphere. The entire course lasts ten days and culminates with the competitive evaluation. The robots must navigate a labyrinth, communicate with one another and be able to cover longer distances with some speed. In order to simplify the resources available to the students, the Lego Mindstorms robotic system was used. The faculties brought quite different approaches to designing, building and programming a winning robot. These differences became apparent early in the sessions and each group had to find ways to communicate their ideas and to collectively develop them by building on the strengths of each team member.
series eCAADe
type normal paper
email
last changed 2022/06/07 07:56

_id 831d
authors Seebohm, Thomas
year 1992
title Discoursing on Urban History Through Structured Typologies
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 157-175
doi https://doi.org/10.52842/conf.acadia.1992.157
summary How can urban history be studied with the aid of three-dimensional computer modeling? One way is to model known cities at various times in history, using historical records as sources of data. While such studies greatly enhance the understanding of the form and structure of specific cities at specific points in time, it is questionable whether such studies actually provide a true understanding of history. It can be argued that they do not because such studies only show a record of one of many possible courses of action at various moments in time. To gain a true understanding of urban history one has to place oneself back in historical time to consider all of the possible courses of action which were open in the light of the then current situation of the city, to act upon a possible course of action and to view the consequences in the physical form of the city. Only such an understanding of urban history can transcend the memory of the actual and hence the behavior of the possible. Moreover, only such an understanding can overcome the limitations of historical relativism, which contends that historical fact is of value only in historical context, with the realization, due to Benedetto Croce and echoed by Rudolf Bultmann, that the horizon of "deeper understanding" lies in "the actuality of decision" (Seebohm and van Pelt 1990).

One cannot conduct such studies on real cities except, perhaps, as a point of departure at some specific point in time to provide an initial layout for a city, knowing that future forms derived by the studies will diverge from that recorded in history. An entirely imaginary city is therefore chosen. Although the components of this city at the level of individual buildings are taken from known cities in history, this choice does not preclude alternative forms of the city. To some degree, building types are invariants and, as argued in the Appendix, so are the urban typologies into which they may be grouped. In this imaginary city students of urban history play the role of citizens or groups of citizens. As they defend their interests and make concessions, while interacting with each other in their respective roles, they determine the nature of the city as it evolves through the major periods of Western urban history in the form of three-dimensional computer models.

My colleague R.J. van Pelt and I presented this approach to the study of urban history previously at ACADIA (Seebohm and van Pelt 1990). Yet we did not pay sufficient attention to the manner in which such urban models should be structured and how the efforts of the participants should be coordinated. In the following sections I therefore review what the requirements are for three-dimensional modeling to support studies in urban history, as outlined both from the viewpoint of the file structure of the models and from other viewpoints which have bearing on this structure. Three alternative software schemes of progressively increasing complexity are then discussed with regard to their ability to satisfy these requirements. This comparative study of software alternatives and their corresponding file structures justifies the present choice of structure in relation to the simpler and better known generic alternatives which do not have the necessary flexibility for structuring the urban model. Such flexibility means, of course, that in the first instance the modeling software is more time-consuming to learn than a simple point-and-click package, in accord with the now established axiom that ease of learning software tools is inversely related to the functional power of the tools (Smith 1987).

series ACADIA
email
last changed 2022/06/07 07:56

_id 9feb
authors Turk, G.
year 1992
title Re-tiling polygonal surfaces
source E.E. Catmull (ed), Computer Graphics (SIGGRAPH '92 Proceedings), vol 26, pp. 55-64, July 1992
summary This paper presents an automatic method of creating surface models at several levels of detail from an original polygonal description of a given object. Representing models at various levels of detail is important for achieving high frame rates in interactive graphics applications and also for speeding up the off-line rendering of complex scenes. Unfortunately, generating these levels of detail is a time-consuming task usually left to a human modeler. This paper shows how a new set of vertices can be distributed over the surface of a model and connected to one another to create a re-tiling of a surface that is faithful to both the geometry and the topology of the original surface. The main contributions of this paper are: 1) a robust method of connecting together new vertices over a surface, 2) a way of using an estimate of surface curvature to distribute more new vertices at regions of higher curvature and 3) a method of smoothly interpolating between models that represent the same object at different levels of detail. The key notion in the re-tiling procedure is the creation of an intermediate model called the mutual tessellation of a surface that contains both the vertices from the original model and the new points that are to become vertices in the re-tiled surface. The new model is then created by removing each original vertex and locally re-triangulating the surface in a way that matches the local connectedness of the initial surface. This technique for surface retessellation has been successfully applied to iso-surface models derived from volume data, Connolly surface molecular models and a tessellation of a minimal surface of interest to mathematicians.
series other
last changed 2003/04/23 15:50
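
The abstract above names a concrete algorithmic step: scattering new vertices over a mesh so that regions of higher curvature receive more points. The sketch below is a minimal illustration of that sampling step only, not Turk's implementation; it assumes the mesh is given as NumPy arrays and that per-vertex curvature estimates are already available, and it samples triangles with probability proportional to area times mean curvature magnitude before picking uniform barycentric points.

    import numpy as np

    def distribute_points(verts, tris, curvature, n_points, rng=None):
        """Scatter n_points over a triangle mesh, biased toward high curvature.

        verts: (V, 3) vertex positions; tris: (T, 3) vertex index triples;
        curvature: (V,) per-vertex curvature estimates (assumed precomputed).
        """
        rng = np.random.default_rng() if rng is None else rng
        a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
        area = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
        # Weight each triangle by its area times its mean vertex curvature
        # magnitude, so highly curved regions attract more samples.
        weight = area * np.abs(curvature[tris]).mean(axis=1)
        chosen = rng.choice(len(tris), size=n_points, p=weight / weight.sum())
        # Uniform barycentric sampling inside each chosen triangle.
        r1 = np.sqrt(rng.random(n_points))
        r2 = rng.random(n_points)
        u, v, w = 1.0 - r1, r1 * (1.0 - r2), r1 * r2
        return (u[:, None] * a[chosen] + v[:, None] * b[chosen]
                + w[:, None] * c[chosen])

The re-tiling procedure then inserts such candidate points into the mutual tessellation and removes the original vertices, as the abstract describes; those stages are omitted here.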

_id 3b2a
authors Westin, S., Arvo, J. and Torrance, K.
year 1992
title Predicting reflectance functions from complex surfaces
source Computer Graphics, 26(2):255-264, July 1992
summary We describe a physically-based Monte Carlo technique for approximating bidirectional reflectance distribution functions (BRDFs) for a large class of geometries by directly simulating optical scattering. The technique is more general than previous analytical models: it removes most restrictions on surface microgeometry. Three main points are described: a new representation of the BRDF, a Monte Carlo technique to estimate the coefficients of the representation, and the means of creating a milliscale BRDF from microscale scattering events. These allow the prediction of scattering from essentially arbitrary roughness geometries. The BRDF is concisely represented by a matrix of spherical harmonic coefficients; the matrix is directly estimated from a geometric optics simulation, enforcing exact reciprocity. The method applies to roughness scales that are large with respect to the wavelength of light and small with respect to the spatial density at which the BRDF is sampled across the surface; examples include brushed metal and textiles. The method is validated by comparing with an existing scattering model and sample images are generated with a physically-based global illumination algorithm.
series journal paper
last changed 2003/04/23 15:50
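
The abstract specifies both the representation (a matrix of spherical harmonic coefficients) and the estimator (Monte Carlo simulation of geometric optics). A minimal sketch of that estimation idea for a single incident direction follows; it is not the authors' code, and scatter is a hypothetical stand-in for their microgeometry simulator, returning the (azimuth, polar) angles of one scattered ray.

    import numpy as np
    from scipy.special import sph_harm

    def estimate_sh_coeffs(scatter, wi, lmax=4, n_samples=20000, rng=None):
        """Monte Carlo estimate of the spherical harmonic coefficients of
        the outgoing-direction density for one incident direction wi."""
        rng = np.random.default_rng() if rng is None else rng
        samples = [scatter(wi, rng) for _ in range(n_samples)]
        theta = np.array([s[0] for s in samples])  # azimuth of each ray
        phi = np.array([s[1] for s in samples])    # polar angle of each ray
        coeffs = {}
        for l in range(lmax + 1):
            for m in range(-l, l + 1):
                # The mean of conj(Y_lm) over scattered directions estimates
                # the projection of the direction density onto Y_lm.
                coeffs[(l, m)] = np.conj(sph_harm(m, l, theta, phi)).mean()
        return coeffs

Repeating this over a grid of incident directions would yield the coefficient matrix; the exact reciprocity enforcement mentioned in the abstract is omitted from this sketch.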

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge. While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of the cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking.
The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine. Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"-- which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically-mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse. These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing, learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added).
On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit." Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictature were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring. Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest.
What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually exclusive. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make it so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge. This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id c5d7
authors Kuffer, Monika
year 2003
title Monitoring the Dynamics of Informal Settlements in Dar Es Salaam by Remote Sensing: Exploring the Use of SPOT, ERS and Small Format Aerial Photography
source CORP 2003, Vienna University of Technology, 25.2.-28.2.2003 [Proceedings on CD-Rom]
summary Dar es Salaam is exemplary of cities in the developing world facing enormous population growth. In recent decades, unplanned settlements have expanded tremendously, with the result that around 70 percent of urban dwellers now live in these areas. Tools for monitoring such growth are relatively weak in developing countries; thus an effective satellite-based monitoring system can provide a useful instrument for monitoring the dynamics of urban development. An investigation to assess the ability to extract reliable information on the expansion and consolidation levels (density) of urban development of the city of Dar es Salaam from SPOT-HRV and ERS-SAR images is described. The use of SPOT and ERS should provide data that is complementary to data derived from the most recent aerial photography and from digital topographic maps. In a series of experiments, various classification and fusion techniques are applied to the SPOT-HRV and ERS-SAR data to extract information on building density that is comparable to that obtained from the 1992 data. Ultimately, building density is estimated by linear and non-linear regression models on the basis of a one-ha kernel, and a further aggregation is made to the level of informal settlements for a final analysis. In order to assess the reliability, use is made of several sample areas that are relatively stable over the study period, as well as of data derived from small-format aerial photography. The experiments show a high correlation between the density data derived from the satellite images and the test areas.
series other
email
last changed 2003/03/11 20:39

_id 88ca
authors Kane, Andy and Szalapaj, Peter
year 1992
title Teaching Design By Analysis of Precedents
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 477-496
doi https://doi.org/10.52842/conf.ecaade.1992.477
summary Designers, using their intuitive understanding of the decomposition of particular design objects, whether in terms of structural, functional, or some other analytical framework, should be able to interact with computational environments such that the understanding they achieve in turn invokes changes or transformations to the spatial properties of design proposals. Decompositions and transformations of design precedents can be a very useful method of enabling design students to develop analytical strategies. The benefit of an analytical approach is that it can lead to a structured understanding of design precedents. This in turn allows students to develop their own insights and ideas which are central to the activity of designing. The creation of a 3-D library of user-defined models of precedents in a computational environment permits an under-exploited method of undertaking analysis, since by modelling design precedents through the construction of 3-D Computer-Aided Architectural Design (CAAD) models, and then analytically decomposing them in terms of relevant features, significant insights into the nature of designs can be achieved. Using CAAD systems in this way, therefore, runs counter to the more common approach of detailed modelling, rendering and animation; which produces realistic pictures that do not reflect the design thinking that went into their production. The significance of the analytical approach to design teaching is that it encourages students to represent design ideas, but not necessarily the final form of design objects. The analytical approach therefore, allows students to depict features and execute tasks that are meaningful with respect to design students' own knowledge of particular domains. Such computational interaction can also be useful in helping students explore the consequences of proposed actions in actual design contexts.
series eCAADe
last changed 2022/06/07 07:52

_id cef3
authors Bridges, Alan H.
year 1992
title Computing and Problem Based Learning at Delft University of Technology Faculty of Architecture
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 289-294
doi https://doi.org/10.52842/conf.ecaade.1992.289
summary Delft University of Technology, founded in 1842, is the oldest and largest technical university in the Netherlands. It provides education for more than 13,000 students in fifteen main subject areas. The Faculty of Architecture, Housing, Urban Design and Planning is one of the largest faculties of the DUT with some 2000 students and over 500 staff members. The course of study takes four academic years: a first year (Propaedeuse) and a further three years (Doctoraal) leading to the "ingenieur" qualification. The basic course material is delivered in the first two years and is taken by all students. The third and fourth years consist of a smaller number of compulsory subjects in each of the department's specialist areas together with a wide range of option choices. The five main subject areas the students may choose from for their specialisation are Architecture, Building and Project Management, Building Technology, Urban Design and Planning, and Housing.

The curriculum of the Faculty has been radically revised over the last two years and is now based on the concept of "Problem-Based Learning". The subject matter taught is divided thematically into specific issues that are taught in six-week blocks. The vehicles for these blocks are specially selected and adapted case studies prepared by teams of staff members. These provide a focus for integrating specialist subjects around a studio-based design theme. In the case of the second year, this studio is largely computer-based: many drawings are produced by computer and several specially written computer applications are used in association with the specialist inputs.

This paper describes the "block structure" used in the second year, giving examples of the special computer programs used, but also raises a number of broader educational issues. Introduction of the block system arose as a method of curriculum integration in response to difficulties emerging from the independent functioning of strong discipline areas in the traditional work groups. The need for a greater level of self-directed learning was recognised, as opposed to the "passive information model" of student learning in which the students are seen as empty vessels to be filled with knowledge - which they are then usually unable to apply in design-related contexts in the studio. Furthermore, the value of electives had been questioned: whilst enabling some diversity of choice, they may also be seen as diverting attention and resources from the real problems of teaching architecture.

series eCAADe
email
last changed 2022/06/07 07:54

_id 6ef4
authors Carrara, Gianfranco and Kalay, Yehuda E.
year 1992
title Multi-Model Representation of Design Knowledge
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 77-88
doi https://doi.org/10.52842/conf.acadia.1992.077
summary Explicit representation of design knowledge is needed if scientific methods are to be applied in design research, and if computers are to be used in the aid of design education and practice. The representation of knowledge in general, and design knowledge in particular, has been the subject matter of computer science, design methods, and computer-aided design research for quite some time. Several models of design knowledge representation have been developed over the last 30 years, addressing specific aspects of the problem. This paper describes a different approach to design knowledge representation that recognizes the multi-modal nature of design knowledge. It uses a variety of computational tools to encode different kinds of design knowledge, including the descriptive (objects), the prescriptive (goals) and the operational (methods) kinds. The representation is intended to form a parsimonious, communicable and presentable knowledge base that can be used as a tool for design research and education as well as for CAAD.
keywords Design Methods, Design Process, Goals, Knowledge Representation, Semantic Networks
series ACADIA
email
last changed 2022/06/07 07:55

_id 56de
authors Handa, M., Hasegawa, Y., Matsuda, H., Tamaki, K., Kojima, S., Matsueda, K., Takakuwa, T. and Onoda, T.
year 1996
title Development of interior finishing unit assembly system with robot: WASCOR IV research project report
source Automation in Construction 5 (1) (1996) pp. 31-38
summary The WASCOR (WASeda Construction Robot) research project was organized in 1982 by Waseda University, Tokyo, Japan, aiming at automating building construction with a robot. The project is a collaboration of nine general contractors and a construction machinery manufacturer. The WASCOR research project has been divided into four phases as the study has developed, called WASCOR I, II, III, and IV respectively. WASCOR I, II, and III ran consecutively from 1982 to 1992, with 3-4 years for each phase, and WASCOR IV has been continued since 1993. WASCOR IV has been working on an automated building interior finishing system. This system consists of the following three parts: (1) development of a building system and construction method for the automated interior finishing system; (2) design of the hardware system applied to the automated interior finishing system; (3) design of the information management system for automated construction. As the research project is still developing, this paper presents an interim report on parts (1) and (2): the building system and construction method, and the hardware system.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id ab4d
authors Huang, Tao-Kuang, Degelman, Larry O., and Larsen, Terry R.
year 1992
title A Visualization Model for Computerized Energy Evaluation During the Conceptual Design Stage (ENERGRAPH)
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 195-206
doi https://doi.org/10.52842/conf.acadia.1992.195
summary Evaluating energy performance is a crucial step toward responsible design. Currently there are many tools that can be applied to reach this goal with reasonable accuracy. Oftentimes, however, major flaws are not discovered until the final stage of design, when it is too late to change. Not only are existing simulation models complicated to apply at the conceptual design stage, but energy principles and their applications are also abstract and hard to visualize. Because of the lack of suitable tools to visualize energy analysis output, energy conservation concepts fail to be integrated into the building design. For these reasons, designers tend not to apply energy conservation concepts at the early design stage. However, since computer graphics is a new phase of visual communication in the design process, the above problems might be solved properly through a computerized graphical interface in the conceptual design stage.

The research described in this paper is the result of exploring the concept of using computer graphics to support energy-efficient building designs. It focuses on the visualization of building energy through a highly interactive graphical interface in the early design stage.

series ACADIA
email
last changed 2022/06/07 07:50

_id ed78
authors Jog, Bharati
year 1993
title Integration of Computer Applications in the Practice of Architecture
source Education and Practice: The Critical Interface [ACADIA Conference Proceedings / ISBN 1-880250-02-0] Texas (Texas / USA) 1993, pp. 89-97
doi https://doi.org/10.52842/conf.acadia.1993.089
summary Computer Applications in Architecture is emerging as an important aspect of our profession. The field, which is often referred to as Computer-Aided Architectural Design (CAAD), has had a notable impact on the profession and academia in recent years. A few professionals have predicted that, as slide rules were replaced by calculators, in the coming years drafting boards and parallel bars will be replaced by computers. On the other hand, many architects do not anticipate such a drastic change in the coming decade, as present CAD systems support only a few integral aspects of architectural design. However, all agree that architecture curricula should be modified to integrate CAAD education.

In 1992-93, in the Department of Architecture of the 'School of Architecture and Interior Design' at the University of Cincinnati, a curriculum committee was formed to review and modify the entire architecture curriculum. Since our profession and academia relate directly to each other, the author felt that while revising the curriculum, the committee should have factual information about CAD usage in the industry. Three ways to obtain such information were thought of, namely (1) conducting person-to-person or telephone interviews with practitioners, (2) requesting firms to give open-ended feedback, and (3) surveying firms by sending a questionnaire. Of these three, the most effective, efficient and suitable method to obtain such information was an organized survey through a questionnaire. In mid-December 1992, a survey was organized which was sponsored by the School of Architecture and Interior Design, the Center for the Study of the Practice of Architecture (CSPA) and the University Division of Professional Practice, all from the University of Cincinnati.

This chapter focuses on the results of this survey. A brief description of the survey design is also given. In the next section a few surveys organized in recent years are listed. In the third section the design of this survey is presented. The survey questions and their responses are given in the fourth section. The last section presents the conclusions and brief recommendations regarding computer curriculum in architecture.

series ACADIA
last changed 2022/06/07 07:52

_id ca47
authors Lee, Shu Wan
year 1996
title A Cognitive Approach to Architectural Style: Several Characteristics of Design Thinking in Architecture
source CAADRIA '96 [Proceedings of The First Conference on Computer Aided Architectural Design Research in Asia / ISBN 9627-75-703-9] Hong Kong (Hong Kong) 25-27 April 1996, pp. 223-226
doi https://doi.org/10.52842/conf.caadria.1996.223
summary Designing is a complicated human behaviour and method, and is often treated as a mysterious "black box" operation of the human mind. In the early period of theoretical study of design thinking, researchers relied mostly on descriptive discussion. Although those studies provided significant exploration of design thinking, they therefore lacked direct and empirical evidence (Wang, 1995). In recent years, studies in cognitive science have tried to make design a "glass box": to make public the thinking processes embedded in designers, and to externalize the design procedure, giving design studies another, more accurate and deeply researched theoretical basis (Jones, 1992). Hence the study of design thinking has become more important and the method of designing has also progressed considerably. Examples include the classification of the nature of design problems as ill-defined or well-defined (Newell, Shaw, and Simon, 1967), and different theoretical procedure modes for different disciplines, such as viewing architectural models as conjecture-analysis models and engineering models as analysis-synthesis models (Cross, 1991).
series CAADRIA
last changed 2022/06/07 07:52

_id 244d
authors Monedero, J., Casaus, A. and Coll, J.
year 1992
title From Barcelona. Chronicle and Provisional Evaluation of a New Course on Architectural Solid Modelling by Computerized Means
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 351-362
doi https://doi.org/10.52842/conf.ecaade.1992.351
summary The first step made at the ETSAB in the computer field goes back to 1965, when professors Margarit and Buxade acquired an IBM computer, an electromechanical machine which used perforated cards and which was used to produce an innovative method of structural calculation. This method was incorporated in the academic courses and, at that time, the repeated question "should students learn programming?" was readily answered: the exercises required some knowledge of Fortran and every student needed this knowledge to do them. This method, well known in Europe at that time, also provided a service for professional practice and marked the beginning of what is now the CC (Centro de Calculo) of our school. In 1980 the School bought a PDP11/34, a computer which had 256 Kb of RAM, two disks of 5 Mb and one of 10 Mb, and a multiplexor of 8 lines. Some time later the general policy of the UPC changed course, and this was related to the purchase of a VAX which is still the base of the CC and carries most of the administrative burden of the school. 1985 was probably the first year in which we can talk of a general policy of the school directed towards computers. A report was made that year, which included an inquiry addressed to the six Departments of the School (Graphic Expression, Projects, Structures, Construction, Composition and Urbanism) and contained interesting data. According to the report, there were four departments which used computers in their current courses, while the two others (Projects and Composition) did not use them at all. The main user was the Department of Structures, while the incidence of the remaining three was rather sporadic. The kinds of problems detected in this report are very typical: lack of resources for hardware and software and for maintenance of the few computers that the school had at that moment, and a demand (posed by the students) greatly exceeding the supply (computers and teachers). The main problem appeared to be the lack of computer graphic devices and proper software.

series eCAADe
email
last changed 2022/06/07 07:58

_id c93a
authors Saggio, Antonino
year 1992
title Object Based Modeling and Concept-Testing: A Framework for Studio Teaching
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 49-63
doi https://doi.org/10.52842/conf.acadia.1992.049
summary This chapter concludes with a proposal for a studio structure that incorporates computers as a creative stimulus in the design process. Three related experiences support this hypothesis: the role played in concrete designs by an Object Based Modeling environment; teaching with Computer Aided Architectural Design and OBM in the realm of documentation and analysis of architecture; and previous applications of the Concept-Testing methodology in design studios. Examples from these three areas provide the framework for mutual support between OBM and a C-T approach for studio teaching. The central sections of the chapter focus on the analysis of these experiences, while the last section provides a 15-week, semester-based studio structure that incorporates OBM in the overall calendar and in key assignments.

series ACADIA
email
last changed 2022/06/07 07:56

_id a3f5
authors Zandi-Nia, Abolfazl
year 1992
title Topgene: An Artificial Intelligence Approach to a Design Process
source Delft University of Technology
summary This work deals with two architectural design (AD) problems at the topological level, in the presence of the social norms of community, privacy, circulation cost, and intervening opportunity. The first problem concerns generating a design with respect to the set of above-mentioned norms, and the second problem requires evaluation of existing designs with respect to the same set of norms. Both problems are based on the structural-behavioral relationship in buildings. This work has challenged the above problems in the following senses: (1) A working system, called TOPGENE (The TOpological Pattern GENErator), has been developed. (2) Both problems may be vague and may lack enough information in their statement. For example, an AD in the presence of the social norms requires the degrees of interaction between the location pairs in the building. This information is not always explicitly available, and must be explicated from the design data. (3) An AD problem at the topological level is intractable, with no fast and efficient algorithm for its solution. To reduce the search effort in the process of design generation, TOPGENE uses a heuristic hill-climbing strategy that takes advantage of domain-specific rules of thumb to choose a path in the search space of a design. (4) TOPGENE uses the Q-analysis method for explication of hidden information, and hierarchical clustering of location pairs with respect to their flow generation potential, as prerequisite information for the heuristic reasoning process. (5) To deal with the design of a building at the topological level, TOPGENE takes advantage of existing graph algorithms such as path-finding and planarity testing during its reasoning process. This work also presents a new efficient algorithm for keeping track of distances in a growing graph. (6) This work also presents a neural net implementation of a special case of the design generation problem. This approach is based on the Hopfield model of neural networks. The result of this approach has been used to test the TOPGENE approach in generating designs. A comparison of these two approaches shows that the neural network provides mathematically more optimal designs, while TOPGENE produces more realistic designs. These two systems may be integrated to create a hybrid system.
series thesis:PhD
last changed 2003/02/12 22:37
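
Point (5) of the abstract mentions a new efficient algorithm for keeping track of distances in a growing graph, without detailing it. For reference only (this is not the thesis's algorithm), the sketch below shows the standard incremental scheme such work builds on, assuming an undirected graph held as a dense all-pairs distance matrix: when a vertex is added, its distances are seeded from its edges and every old pair is relaxed through it, at O(n^2) cost per insertion.

    import numpy as np

    def add_vertex(dist, edges):
        """Extend an all-pairs shortest-path matrix by one new vertex.

        dist: (n, n) matrix of current shortest-path distances
              (np.inf where unreachable).
        edges: {old_vertex_index: weight} edges from the new vertex.
        Returns the (n+1, n+1) matrix; the new vertex gets index n.
        """
        n = dist.shape[0]
        new = np.full((n + 1, n + 1), np.inf)
        new[:n, :n] = dist
        new[n, n] = 0.0
        # Seed the new row: one edge out, then old shortest paths onward.
        for u, w in edges.items():
            new[n, :n] = np.minimum(new[n, :n], w + dist[u, :])
        new[:n, n] = new[n, :n]  # undirected, so distances are symmetric
        # Relax every old pair of vertices through the new vertex.
        via = new[:n, n:n + 1] + new[n:n + 1, :n]
        new[:n, :n] = np.minimum(dist, via)
        return new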
