CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


_id 6208
authors Abou-Jaoude, Georges
year 1992
title To Master a Tool
source Proceedings of the 4th European Full-Scale Modelling Conference / Lausanne (Switzerland) 9-12 September 1992, Part B, p. 15
summary The tool here is the computer or, to be precise, a unit that includes the computer, the peripherals and the software needed to fulfill a task. These tools are getting very sophisticated and user interfaces extremely friendly; it is therefore very easy to become the slave of such electronic tools and to reach self-satisfaction with straightforward results and attractive images. In order to master sophisticated tools, and not become their slave, a very solid knowledge of the related fields or domains of application becomes necessary. In the case of this seminar, full-scale modelling is a way to understand the relation between a mental model and its full-scale realization; it is a way of communicating what is in a designer's mind. Computers and design programs can have the same goal. Rather than choosing one method or the other, let us try to show how important it is today to complement designing with the computer with other means and media such as full-scale modelling, and what computer modelling and simulation can bring to full-scale modelling and other means.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
more http://info.tuwien.ac.at/efa
last changed 2003/08/25 10:12

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge. While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of the cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning the cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking.
The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine. Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"--which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically-mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse. These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies, which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing, learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added).
On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit." Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictatorship were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring. Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest.
What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make them so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge. This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id cf5c
authors Carpenter, B.
year 1992
title The logic of typed feature structures with applications to unification grammars, logic programs and constraint resolution
source Cambridge Tracts in Theoretical Computer Science, Cambridge University Press
summary This book develops the theory of typed feature structures, a new form of data structure that generalizes both the first-order terms of logic programs and the feature structures of unification-based grammars to include inheritance, typing, inequality, cycles and intensionality. It presents a synthesis of many existing ideas into a uniform framework, which serves as a logical foundation for grammars, logic programming and constraint-based reasoning systems. Throughout the text, a logical perspective is adopted that employs an attribute-value description language along with complete equational axiomatizations of the various systems of feature structures. Efficiency concerns are discussed, and complexity and representability results are provided. The application of feature structures to phrase structure grammars is described, and completeness results are shown for standard evaluation strategies. Definite clause logic programs are treated as a special case of phrase structure grammars. Constraint systems are introduced, and an enumeration technique is given for solving arbitrary attribute-value logic constraints. This book, with its innovative approach to data structures, will be essential reading for researchers in computational linguistics, logic programming and knowledge representation. Its self-contained presentation makes it flexible enough to serve as both a research tool and a textbook.
series other
last changed 2003/04/23 15:14
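
The unification operation at the core of this abstract can be illustrated in miniature. The sketch below implements plain (untyped, acyclic) feature-structure unification over nested dictionaries; it shows the general idea only, since Carpenter's formalism adds types, inheritance, inequations and cycles, and all names here are illustrative:

```python
def unify(a, b):
    """Unify two feature structures represented as nested dicts.

    Atomic values unify only if equal; dicts unify feature-wise.
    Returns the unified structure, or None on failure. This untyped
    sketch omits the book's type hierarchy, cycles and inequations.
    """
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for feature, value in b.items():
            if feature in result:
                sub = unify(result[feature], value)
                if sub is None:
                    return None  # clash on a shared feature
                result[feature] = sub
            else:
                result[feature] = value
        return result
    return a if a == b else None

# Example: agreement features of a verb and its subject unify.
print(unify({"agr": {"num": "sg"}}, {"agr": {"num": "sg", "per": 3}}))
# {'agr': {'num': 'sg', 'per': 3}}
print(unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}}))  # None
```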

_id 3105
authors Novak, T.P., Hoffman, D.L., and Yung, Y.-F.
year 1996
title Modeling the structure of the flow experience
source INFORMS Marketing Science and the Internet Mini-Conference, MIT
summary The flow construct (Csikszentmihalyi 1977) has recently been proposed by Hoffman and Novak (1996) as essential to understanding consumer navigation behavior in online environments such as the World Wide Web. Previous researchers (e.g. Csikszentmihalyi 1990; Ghani, Supnick and Rooney 1991; Trevino and Webster 1992; Webster, Trevino and Ryan 1993) have noted that flow is a useful construct for describing more general human-computer interactions. Hoffman and Novak define flow as "the state occurring during network navigation which is: 1) characterized by a seamless sequence of responses facilitated by machine interactivity, 2) intrinsically enjoyable, 3) accompanied by a loss of self-consciousness, and 4) self-reinforcing." To experience flow while engaged in an activity, consumers must perceive a balance between their skills and the challenges of the activity, and both their skills and challenges must be above a critical threshold. Hoffman and Novak (1996) propose that flow has a number of positive consequences from a marketing perspective, including increased consumer learning, exploratory behavior, and positive affect.
series other
last changed 2003/04/23 15:50
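
The flow condition summarized above (skills and challenges in balance, and both above a critical threshold) reduces to a simple predicate. A minimal sketch; the numeric scale and the `threshold` and `tolerance` values are invented for illustration and are not from the paper:

```python
def in_flow(skill: float, challenge: float,
            threshold: float = 0.5, tolerance: float = 0.15) -> bool:
    """True when skill and challenge are balanced and both exceed the
    critical threshold, per the flow condition sketched in the abstract."""
    balanced = abs(skill - challenge) <= tolerance
    above_threshold = skill > threshold and challenge > threshold
    return balanced and above_threshold

print(in_flow(0.8, 0.75))  # True: balanced, both high
print(in_flow(0.9, 0.3))   # False: challenge too low (boredom)
print(in_flow(0.2, 0.25))  # False: below threshold (apathy)
```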

_id 1992
authors Russell, Peter
year 2002
title Using Higher Level Programming in Interdisciplinary teams as a means of training for Concurrent Engineering
source Connecting the Real and the Virtual - design e-ducation [20th eCAADe Conference Proceedings / ISBN 0-9541183-0-8] Warsaw (Poland) 18-20 September 2002, pp. 14-19
doi https://doi.org/10.52842/conf.ecaade.2002.014
summary The paper explains a didactical method for training students that has been run three times to date. The premise of the course is to combine students from different faculties into interdisciplinary teams. These teams then have a complex problem to resolve within an extremely short time span. In light of recent works from Joy and Kurzweil, the theme Robotics was chosen as an exercise that is timely, interesting and related, but not central to the studies of the various faculties. In groups of 3 to 5, students from the faculties of architecture, computer science and mechanical engineering are entrusted to design, build and program a robot which must successfully execute a prescribed set of actions in a competitive atmosphere. The entire course lasts ten days and culminates with the competitive evaluation. The robots must navigate a labyrinth, communicate with one another and be able to cover longer distances with some speed. In order to simplify the resources available to the students, the Lego Mindstorms robotics system was used. The students' differing disciplinary backgrounds gave them differing approaches to designing, building and programming a winning robot. These differences became apparent early in the sessions and each group had to find ways to communicate their ideas and to collectively develop them by building on the strengths of each team member.
series eCAADe
type normal paper
last changed 2022/06/07 07:56

_id d9fa
authors Salomon, Gavriel
year 1990
title Effects with and of Computers and the Study of Computer-based Learning Environments
source Chapter in Computer-Based Learning Environments and Problem Solving, ed. E. De Corte, M. C. Linn, H. Mandl, and L. Verschaffel. New York: Springer-Verlag
summary Several factors have contributed to the developments in computer-based learning environments. Improvements and advances in hardware capabilities have afforded greater computing power. Advances in cognitive and instructional science have moved thinking beyond the limits of behavioural psychology. The new systems of computer-based learning environments are being designed with a view to facilitating complex problem-solving through integrating wholes of knowledge (Dijkstra, Krammer & Merriënboer, 1992). Thus, many see in the computer a means to enhance students' cognitive skills and general problem-solving ability. This is in spite of the fact that studies have failed to conclusively confirm the hypothesis that computer-based learning environments facilitate the acquisition and transfer of higher-order thinking and learning skills (Dijkstra, Krammer & Merriënboer, 1992). Salomon (1992) argues that computers make possible student involvement in higher-order thinking skills by performing many of the lower-level cognitive tasks, by providing memory support and by juggling interrelated variables. Through a partnership with the computer, the user may also benefit from the effect of cognitive residue, resulting in improvement or mastery of a skill or strategy. Salomon explains: "The intellectual partnership with computer tools creates a zone of proximal development whereby learners are capable of carrying out tasks they could not possibly carry out without the help and support provided by the computer. This partnership can both offer guidance that might be internalized to become self-guidance and stimulate the development of yet underdeveloped skills, resulting in a higher level of skill mastery" (p. 252).
series other
last changed 2003/04/23 15:14

_id 6d34
authors Kensek, Karen and Noble, Doug (Eds.)
year 1992
title Mission - Method - Madness [Conference Proceedings]
source ACADIA Conference Proceedings [ISBN 1-880250-01-2] 1992, 232 p.
doi https://doi.org/10.52842/conf.acadia.1992
summary The papers represent a wide variety of exploration into the uses of computers in architecture. We have tried to impose order onto the collection by organizing them into six sessions: Metaphor, Mission, Method, Modeling for Visualization, Modeling, and Generative Systems. As with any ordering system for such a diverse selection, some session papers are strongly related while others are loosely grouped. Madness, an additional session not in the proceedings, will include short presentations of work in progress. Regarding the individual papers, it is particularly exciting to see research being conducted that is founded on previous work done by others. It is also interesting to note that half of the papers have been submitted by teams of authors. Whether this represents "computer supported cooperative work" remains to be seen. Certainly the work in this book represents an interesting and wide variety of explorations into computer supported design in architecture.
series ACADIA
more http://www.acadia.org
last changed 2022/06/07 07:49

_id c5d7
authors Kuffer, Monika
year 2003
title Monitoring the Dynamics of Informal Settlements in Dar Es Salaam by Remote Sensing: Exploring the Use of Spot, Ers and Small Format Aerial Photography
source CORP 2003, Vienna University of Technology, 25.2.-28.2.2003 [Proceedings on CD-Rom]
summary Dar es Salaam is exemplary for cities in the developing world facing enormous population growth. In recent decades, unplanned settlements have expanded tremendously, with the result that around 70 percent of urban dwellers now live in these areas. Tools for monitoring such growth are relatively weak in developing countries, so an effective satellite-based monitoring system can provide a useful instrument for monitoring the dynamics of urban development. An investigation to assess the ability to extract reliable information on the expansion and consolidation levels (density) of urban development of the city of Dar es Salaam from SPOT-HRV and ERS-SAR images is described. The use of SPOT and ERS should provide data that is complementary to data derived from the most recent aerial photography and from digital topographic maps. In a series of experiments, various classification and fusion techniques are applied to the SPOT-HRV and ERS-SAR data to extract information on building density that is comparable to that obtained from the 1992 data. Ultimately, building density is estimated by linear and non-linear regression models on the basis of a one-hectare kernel, and further aggregation is made to the level of informal settlements for a final analysis. To assess reliability, use is made of several sample areas that are relatively stable over the study period, as well as of data derived from small-format aerial photography. The experiments show a high correlation between the density data derived from the satellite images and the test areas.
series other
last changed 2003/03/11 20:39
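
The density-estimation step described above (linear regression over a one-hectare kernel, validated by correlation against reference areas) can be sketched as an ordinary least-squares fit. The data below are synthetic; the study's actual predictors were features derived from SPOT-HRV and ERS-SAR imagery:

```python
import numpy as np

# Hypothetical predictors: per-hectare image features (e.g. mean band
# values); response: reference building density from aerial photography.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 3))             # 100 one-ha kernels, 3 features
true_w = np.array([0.5, -0.2, 0.8])
y = X @ true_w + 0.1 + rng.normal(0, 0.05, 100)  # noisy densities

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# Correlation between predicted and reference density, as the study reports.
r = np.corrcoef(pred, y)[0, 1]
print(f"intercept/weights: {coef.round(3)}, r = {r:.3f}")
```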

_id avocaad_2001_20
authors Shen-Kai Tang
year 2001
title Toward a procedure of computer simulation in the restoration of historical architecture
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In the field of architectural design, "visualization" generally refers to media that communicate and represent the ideas of designers, such as ordinary drafts, maps, perspectives, photos and physical models, etc. (Rahman, 1992; Susan, 2000). The main reason why we adopt visualization is that it enables us to understand clearly and to control complicated procedures (Gombrich, 1990). Secondly, we get design knowledge more from published visualized images and less from personal experience (Evans, 1989). Thus the importance of visual representation is manifest. Due to the developments of computer technology in recent years, various computer aided design systems have been invented and are used in great numbers, such as image processing, computer graphics, computer modeling/rendering, animation, multimedia, virtual reality and collaboration, etc. (Lawson, 1995; Liu, 1996). The conventional media have been largely replaced by computer media, and visualization has been brought into the computerized stage. The procedure of visual impact analysis and assessment (VIAA), addressed by Rahman (1992), is renewed and amended for the intervention of the computer (Liu, 2000). Based on the procedures above, a great number of applied research projects have been carried out. It is therefore evident that computer visualization is helpful to the discussion and evaluation during the design process (Hall, 1988, 1990, 1992, 1995, 1996, 1997, 1998; Liu, 1997; Sasada, 1986, 1988, 1990, 1993, 1997, 1998). In addition to the process of architectural design, computer visualization is also applied to construction, which is repeatedly amended and corrected by the images of computer simulation (Liu, 2000). Potier (2000) probes into the contextual research and restoration of historical architecture by the technology of computer simulation before the practical restoration is carried out. In this way he established a communicative mode among archaeologists and architects via computer media. In the research on restoration and preservation of historical architecture in Taiwan, many scholars have devoted themselves to studies of historical contextual criticism (Shi, 1988, 1990, 1991, 1992, 1995; Fu, 1995, 1997; Chiu, 2000). Clues that accompany the historical contextual criticism (such as oral information, writings, photographs, pictures, etc.) help to explore the construction and the procedure of restoration (Hung, 1995), and serve as an aid to studies of the usage and durability of the materials in the restoration of historical architecture (Dasser, 1990; Wang, 1998). Many clues are lost, because historical architecture is often age-old (Hung, 1995). Under these circumstances, restoration of historical architecture can only proceed from limited pictures, written data and oral information (Shi, 1989). Therefore, computer simulation is employed by scholars to simulate the post-restoration condition of historical architecture with such restricted information (Potier, 2000). Yet this is only the early stage of computer-aided restoration.
The focus of the paper is to explore whether computer visual simulation can help to investigate the practice of restoration and the estimation and evaluation after restoration. By exploring the restoration of historical architecture (taking the Gigi Train Station destroyed by the earthquake last September as the operating example), this study aims to establish a complete procedure of computer visualization, including the concept of restoration, the practice of restoration, and the estimation and evaluation of restoration. The research simulates the process of restoration by computer simulation based on visualized media (restricted pictures, restricted written data and restricted oral information) and the specialized experience of historical architects (Potier, 2000). During the process, the team communicates with craftsmen repeatedly over simulated alternatives, and uses the results as the foundation for evaluating and adjusting the simulation process and outcome. In this way we address a suitable and complete process of computer visualization for historical architecture. The significance of this paper is that we are able to control every detail more exactly, and thus prevent possible problems during the process of restoration of historical architecture.
series AVOCAAD
last changed 2005/09/09 10:48

_id eabb
authors Boeykens, St. Geebelen, B. and Neuckermans, H.
year 2002
title Design phase transitions in object-oriented modeling of architecture
source Connecting the Real and the Virtual - design e-ducation [20th eCAADe Conference Proceedings / ISBN 0-9541183-0-8] Warsaw (Poland) 18-20 September 2002, pp. 310-313
doi https://doi.org/10.52842/conf.ecaade.2002.310
summary The project IDEA+ aims to develop an "Integrated Design Environment for Architecture". Its goal is to provide a tool for the designer-architect that can be of assistance in the early design phases. It should provide the possibility to perform tests (like heat or cost calculations) and simple simulations in the different (early) design phases, without the need for a fully detailed design or remodeling in a different application. The test for daylighting is already in development (Geebelen, to be published). The conceptual foundation for this design environment has been laid out in a scheme in which different design phases and scales are defined, together with appropriate tests at the different levels (Neuckermans, 1992). It is a translation of the "designerly" way of thinking of the architect (Cross, 1982). This conceptual model has been translated into a "Core Object Model" (Hendricx, 2000), which defines a structured object model to describe the necessary building model. These developments form the theoretical basis for the implementation of IDEA+ (both the data structure and prototype software), which is currently in progress. The research project addresses some issues which are at the forefront of the architect's interest while designing with CAAD. These are treated from the point of view of a practicing architect.
series eCAADe
last changed 2022/06/07 07:52
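
The idea of running tests in early phases without a fully detailed model can be sketched as a phase-aware object model. Everything below (the phase names, attributes and heat-loss stand-in) is hypothetical and only illustrates the kind of structure the abstract describes, not the actual Core Object Model:

```python
from dataclasses import dataclass, field

# Hypothetical design phases, coarsest first.
PHASES = ["concept", "preliminary", "detailed"]

@dataclass
class CoreObject:
    """Sketch of a phase-aware building object: attributes accumulate as
    the design progresses, and a test declares the earliest phase whose
    data suffices, instead of requiring a fully detailed model."""
    name: str
    data: dict = field(default_factory=dict)   # phase -> attributes

    def attrs(self, phase):
        """Merge attributes of all phases up to and including `phase`."""
        merged = {}
        for p in PHASES[: PHASES.index(phase) + 1]:
            merged.update(self.data.get(p, {}))
        return merged

def heat_loss_test(obj, phase="preliminary"):
    """Early test: needs only envelope area and a U-value guess (rough kWh)."""
    a = obj.attrs(phase)
    return a["envelope_m2"] * a.get("u_value", 0.6) * 70

wall = CoreObject("envelope", data={
    "concept": {"envelope_m2": 400},
    "preliminary": {"u_value": 0.35},
})
print(heat_loss_test(wall))   # runs without a detailed model: 9800.0
```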

_id ea96
authors Hacfoort, Eek J. and Veldhuisen, Jan K.
year 1992
title A Building Design and Evaluation System
source New York: John Wiley & Sons, 1992. pp. 195-211 : ill. table. includes bibliography
summary Within the field of architectural design there is a growing awareness of an imbalance among the professionalism, the experience, and the creativity of the designers' response to the up-to-date requirements of all parties interested in the design process. The building design and evaluation system COSMOS makes it possible for various participants to work within their own domain, so that separate but coordinated work can be done. The system is meant to organize the initial stage of the design process, where user-defined functions, geometry, type of construction, and building materials are decided. It offers a tool to design a building, to calculate a number of effects, and to manage the information necessary to evaluate the design decisions. The system is provided with data and sets of parameters describing the conditions, along with their properties, of the main building functions of a selection of well-known building types. The architectural design is conceptualized as a hierarchy of spatial units, ranging from building blocks down to specific rooms or spaces. The concept of zoning is used as a means of calculating and directly evaluating the structure of the design without working out the details. A distinction is made between internal and external calculations and evaluations during the initial design process. During design on screen, an estimate can be recorded of building costs, energy costs, acoustics, lighting, construction, and utility. Furthermore, the design can be exported to a design application program, in this case AutoCAD, to make and show drawings in more detail. Through the medium of a database, external calculation and evaluation of building costs, life-cycle costs, energy costs, interior climate, acoustics, lighting, construction, and utility are possible in much more advanced application programs.
keywords evaluation, applications, integration, architecture, design, construction, building, energy, cost, lighting, acoustics, performance
series CADline
last changed 2003/06/02 13:58
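
The zoning concept in this abstract, estimating effects from a hierarchy of spatial units without working out details, can be sketched as a cost roll-up over that hierarchy. The structure and unit costs below are invented for illustration; COSMOS itself also covers energy, acoustics, lighting and more:

```python
from dataclasses import dataclass, field

@dataclass
class SpatialUnit:
    """A node in the spatial hierarchy: building block, zone or room."""
    name: str
    area_m2: float = 0.0           # own floor area (leaf units)
    cost_per_m2: float = 0.0       # hypothetical unit building cost
    children: list = field(default_factory=list)

    def building_cost(self) -> float:
        """Roll estimated cost up the hierarchy without detailing."""
        own = self.area_m2 * self.cost_per_m2
        return own + sum(c.building_cost() for c in self.children)

block = SpatialUnit("block", children=[
    SpatialUnit("office zone", children=[
        SpatialUnit("room 1", area_m2=20, cost_per_m2=900),
        SpatialUnit("room 2", area_m2=35, cost_per_m2=900),
    ]),
    SpatialUnit("service zone", children=[
        SpatialUnit("plant room", area_m2=15, cost_per_m2=1200),
    ]),
])
print(f"estimated cost: {block.building_cost():,.0f}")  # estimated cost: 67,500
```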

_id d919
authors Heckbert, P.S.
year 1992
title Discontinuity Meshing for Radiosity
source Eurographics Workshop on Rendering. May 1992, pp. 203-216
summary The radiosity method is the most popular algorithm for simulating interreflection of light between diffuse surfaces. Most existing radiosity algorithms employ simple meshes and piecewise constant approximations, thereby constraining the radiosity function to be constant across each polygonal element. Much more accurate simulations are possible if linear, quadratic, or higher degree approximations are used. In order to realize the potential accuracy of higher-degree approximations, however, it is necessary for the radiosity mesh to resolve discontinuities such as shadow edges in the radiosity function. A discontinuity meshing algorithm is presented that places mesh boundaries directly along discontinuities. Such algorithms offer the potential of faster, more accurate simulations. Results are shown for three-dimensional scenes.
series other
last changed 2003/04/23 15:14
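
The core step of discontinuity meshing, placing a mesh boundary exactly where a shadow edge crosses a receiver, can be shown in a 2D "flatland" sketch: project an occluder endpoint from a point light onto the floor and split the receiver mesh there. The geometry and names are invented for illustration; the paper handles full 3D scenes and higher-order discontinuities:

```python
def project_to_floor(light, vertex, floor_y=0.0):
    """Intersect the ray from the light through an occluder vertex with
    the floor line y = floor_y; the hit is a radiosity discontinuity."""
    (lx, ly), (vx, vy) = light, vertex
    t = (floor_y - ly) / (vy - ly)      # ray parameter at the floor
    return lx + t * (vx - lx)

def split_mesh(nodes, x):
    """Insert x as an element boundary so no element spans the shadow edge."""
    if all(abs(x - n) > 1e-9 for n in nodes):
        nodes = sorted(nodes + [x])
    return nodes

light = (0.0, 4.0)                      # point light source
occluder_tip = (1.2, 2.5)               # endpoint of a blocking segment
nodes = [0.0, 1.0, 2.0, 3.0, 4.0]       # uniform receiver mesh in x
x_edge = project_to_floor(light, occluder_tip)
print(x_edge, split_mesh(nodes, x_edge))  # 3.2 splits the element [3.0, 4.0]
```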

_id 11b6
authors Kalmychkov, Vitaly A. and Smolyaninov, Alexander V.
year 1992
title Design of Object-Oriented Data Visualization System
source East-West International Conference on Human-Computer Interaction: Proceedings of the EWHCI'92 1992 pp. 463-470
summary The report is devoted to the design and implementation of a data visualization system that provides the means for designing images of the user's numeric information on a personal computer. The problems of design, architecture and operation of a data visualization system that gives the user convenient means for constructing numeric-information images of the required type are considered. Image construction is executed by placing fields of the required sizes and filling them with the necessary content (coordinate systems, graphs, inscriptions). The user's interface with the instrument system is object-oriented: after choosing an object (a field or its content), the user can manipulate it, executing only those operations that are defined for it as an object of the appointed function. Ergonomic and comfortable construction is ensured by a carefully coordinated system of possible actions at each stage of image construction, supported by an icon menu and a textual menu.
series other
last changed 2002/07/07 16:01

_id ddss9215
authors Mortola, E. and Giangrande, A.
year 1993
title A trichotomic segmentation procedure to evaluate projects in architecture
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary This paper illustrates a model used to construct the evaluation module for An Interface for Designing (AID), a system to aid architectural design. The model can be used at the end of every cycle of analysis-synthesis-evaluation in the intermediate phases of design development. With the aid of the model it is possible to evaluate the quality of a project in overall terms to establish whether the project is acceptable, whether it should be elaborated ex-novo, or whether it is necessary to begin a new cycle to improve it. In this last case, it is also possible to evaluate the effectiveness of the possible actions and strategies for improvement. The model is based on a procedure of trichotomic segmentation, developed with MCDA (Multi-Criteria Decision Aid), which uses the outranking relation to compare the project with some evaluation profiles taken as projects of reference. An application of the model in the teaching field will also be described.
series DDSS
last changed 2003/08/07 16:36
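
The trichotomic segmentation described above can be sketched as an outranking comparison against two reference profiles, in the spirit of ELECTRE-style MCDA methods. The criteria, weights, profiles and concordance cutoff below are all invented for illustration, not taken from the paper:

```python
import numpy as np

def outranks(a, b, weights, cut=0.7):
    """a outranks b if the weighted share of criteria on which a is at
    least as good as b reaches the concordance cutoff."""
    concordance = np.sum(weights * (a >= b)) / np.sum(weights)
    return concordance >= cut

def classify(project, good_profile, fair_profile, weights):
    """Trichotomic segmentation: accept / improve in a new cycle / redo."""
    if outranks(project, good_profile, weights):
        return "acceptable"
    if outranks(project, fair_profile, weights):
        return "begin a new cycle to improve it"
    return "elaborate ex novo"

weights = np.array([0.4, 0.3, 0.3])     # criterion weights
good = np.array([0.8, 0.7, 0.8])        # upper reference profile
fair = np.array([0.5, 0.4, 0.5])        # lower reference profile
print(classify(np.array([0.9, 0.8, 0.7]), good, fair, weights))  # acceptable
print(classify(np.array([0.6, 0.5, 0.4]), good, fair, weights))  # new cycle
print(classify(np.array([0.3, 0.2, 0.4]), good, fair, weights))  # ex novo
```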

_id cb5a
authors Oxman, Rivka E.
year 1992
title Multiple Operative and Interactive Modes in Knowledge-Based Design Systems
source New York: John Wiley & Sons, 1992. pp. 125-143 : ill. includes bibliography
summary A conceptual basis for the development of an expert system capable of integrating various modes of generation and evaluation in design is presented. This approach is based upon two sets of reasoning processes in the design system. The first enables a mapping between design requirements and solution descriptions in a generative mode of design; the second enables a mapping between solution descriptions and performance evaluation in an evaluative and predictive mode. This concept supports a formal framework necessary for a knowledge-based design system to operate in a design partnership relation with the designer. Another fundamental concept in expert systems for design, dual-direction interpretation between graphic and textual modes, is presented and elaborated. This encoding of knowledge behind the geometrical representation can be achieved in knowledge-based design systems by the development of a 'semantic interpreter' which supports a dual-direction mapping process employing geometrical knowledge, typological knowledge and evaluative knowledge. An implemented expert system for design, PREDIKT, demonstrates these concepts in the domain of kitchen design. It provides the user with a choice of alternative modes of interaction, such as: a 'design critic' for the evaluation of a design, a 'design generator' for the generation of a design, or a 'design critic-generator' for the completion of partial solutions.
keywords architecture, knowledge base, design, systems, expert systems
series CADline
last changed 2003/06/02 10:24

_id 46c7
authors Ozel, Filiz
year 1992
title Data Modeling Needs of Life Safety Code (LSC) Compliance Applications
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 177-185
doi https://doi.org/10.52842/conf.acadia.1992.177
summary One of the most complex code compliance issues originates from the conformance of designs to the Life Safety Code (NFPA 101). The development of computer based code compliance checking programs attracted the attention of building researchers and practitioners alike. These studies represent a number of approaches ranging from CAD based procedural approaches to rule based, non graphic ones, but they do not address the interaction of the rule base of such systems with graphic data bases that define the geometry of architectural objects. Automatic extraction of the attributes and the configuration of building systems requires "architectural object - graphic entity" data models that allow access and retrieval of the necessary data for code compliance checking. This study aims to specifically focus on the development of such a data model through the use of the AutoLISP feature of the AutoCAD (Autodesk Inc.) graphic system. This data model is intended to interact with a Life Safety Code rule base created through the Level5-Object (Focus Inc.) expert system.

Assuming the availability of a more general building data model, one must define life and fire safety features of a building before any automatic checking can be performed. Object oriented data structures are beginning to be applied to design objects, since they allow the type versatility demanded by design applications. As one generates a functional view of the main data model, the software user must provide domain specific information. A functional view is defined as the process of generating domain specific data structures from a more general purpose data model, such as defining egress routes from wall or room object data structure. Typically in the early design phase of a project, these are related to the emergency egress design features of a building. Certain decisions such as where to provide sprinkler protection or the location of protected egress ways must be made early in the process.

series ACADIA
last changed 2022/06/07 08:00
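
The "functional view" idea in this abstract, deriving domain-specific structures such as egress routes from a general building data model, can be sketched in a few lines. The original work used AutoLISP inside AutoCAD and a Level5-Object rule base; the Python below, with hypothetical names, only illustrates the data-model side:

```python
from dataclasses import dataclass

@dataclass
class Room:
    """General-purpose design object, as a CAD data model might store it."""
    name: str
    area_m2: float
    doors_to: list          # names of rooms/spaces reachable by a door

def egress_view(rooms, exit_name="exit"):
    """Functional view: derive egress routes (room -> exit paths) from the
    general model, the kind of data an LSC rule base needs to check."""
    graph = {r.name: r.doors_to for r in rooms}
    graph.setdefault(exit_name, [])
    def paths(node, seen):
        if node == exit_name:
            return [[node]]
        return [[node] + p for nxt in graph.get(node, [])
                if nxt not in seen
                for p in paths(nxt, seen | {nxt})]
    return {r.name: paths(r.name, {r.name}) for r in rooms}

rooms = [Room("office", 25, ["corridor"]),
         Room("corridor", 12, ["office", "exit"])]
print(egress_view(rooms)["office"])  # [['office', 'corridor', 'exit']]
```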

_id 0a34
authors Ronchi, Alfredo M.
year 1992
title Education in Computing - Computing in Education
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 387-398
doi https://doi.org/10.52842/conf.ecaade.1992.387
summary The theme of this presentation, entitled 'Education in Computing & Computing in Education', is certainly of great importance in the present climate, characterized by the availability of highly efficient hardware and by low-cost procedures and environments which are of great interest as far as education is concerned. Within this topic it is of primary importance to ask oneself the question 'To learn architecture with computers, must students learn computers?', and should the answer be 'yes', to ask 'To what extent? What level of complexity needs to be attained in order to realize this aim? What resources need to be dedicated to the learning of computer science? Should deep involvement be necessary, at what point should we refer to a computer scientist?' In an attempt to answer these questions, it is useful to examine the state of the art within computer science vs. engineering and computer science vs. education.
series eCAADe
last changed 2022/06/07 07:56

_id a5cc
authors Sabater, Txatxo and Gassull, Albert
year 1992
title From Notion to Motion
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 543-551
doi https://doi.org/10.52842/conf.ecaade.1992.543
summary Going from notion to motion is a way, or a working system. It means the illustration in motion of critical written topics. It is also an indirect channel to normalize the use of CAD and other kinds of software and peripherals in a School of Architecture held only by a user technology. We deal with texts, and the choice of these is absolutely determinant. First of all because of the volition of using those which time has allowed to clearly decant and which are now seen together with the answers or continuities that they have generated. That is to say, we do not write on the subjects we talk about; we illustrate, in motion, the arguments that authors have already written about them. We refer to notion in the sense that we always set off from a seminal argument, but also because we collect, if necessary, its revisions or extensions. This is to say we try to track the notion, helping ourselves with the motion.
series eCAADe
last changed 2022/06/07 07:56

_id 831d
authors Seebohm, Thomas
year 1992
title Discoursing on Urban History Through Structured Typologies
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 157-175
doi https://doi.org/10.52842/conf.acadia.1992.157
summary How can urban history be studied with the aid of three-dimensional computer modeling? One way is to model known cities at various times in history, using historical records as sources of data. While such studies greatly enhance the understanding of the form and structure of specific cities at specific points in time, it is questionable whether such studies actually provide a true understanding of history. It can be argued that they do not because such studies only show a record of one of many possible courses of action at various moments in time. To gain a true understanding of urban history one has to place oneself back in historical time to consider all of the possible courses of action which were open in the light of the then current situation of the city, to act upon a possible course of action and to view the consequences in the physical form of the city. Only such an understanding of urban history can transcend the memory of the actual and hence the behavior of the possible. Moreover, only such an understanding can overcome the limitations of historical relativism, which contends that historical fact is of value only in historical context, with the realization, due to Benedetto Croce and echoed by Rudolf Bultmann, that the horizon of "deeper understanding" lies in "the actuality of decision" (Seebohm and van Pelt 1990).

One cannot conduct such studies on real cities except, perhaps, as a point of departure at some specific point in time to provide an initial layout for a city, knowing that future forms derived by the studies will diverge from that recorded in history. An entirely imaginary city is therefore chosen. Although the components of this city at the level of individual buildings are taken from known cities in history, this choice does not preclude alternative forms of the city. To some degree, building types are invariants and, as argued in the Appendix, so are the urban typologies into which they may be grouped. In this imaginary city students of urban history play the role of citizens or groups of citizens. As they defend their interests and make concessions, while interacting with each other in their respective roles, they determine the nature of the city as it evolves through the major periods of Western urban history in the form of three-dimensional computer models.

My colleague R.J. van Pelt and I presented this approach to the study of urban history previously at ACADIA (Seebohm and van Pelt 1990). Yet we did not pay sufficient attention to the manner in which such urban models should be structured and how the efforts of the participants should be coordinated. In the following sections I therefore review the requirements for three-dimensional modeling to support studies in urban history, both from the viewpoint of the file structure of the models and from other viewpoints which have bearing on this structure. Three alternative software schemes of progressively increasing complexity are then discussed with regard to their ability to satisfy these requirements. This comparative study of software alternatives and their corresponding file structures justifies the present choice of structure in relation to the simpler and better known generic alternatives, which do not have the necessary flexibility for structuring the urban model. Such flexibility means, of course, that in the first instance the modeling software is more time-consuming to learn than a simple point-and-click package, in accord with the now established axiom that ease of learning software tools is inversely related to the functional power of the tools (Smith 1987).

series ACADIA
last changed 2022/06/07 07:56

_id eaff
authors Shaviv, Edna and Kalay, Yehuda E.
year 1992
title Combined Procedural and Heuristic Method to Energy Conscious Building Design and Evaluation
source New York: John Wiley & Sons, 1992. pp. 305-325 : ill. includes bibliography
summary This paper describes a methodology that combines both procedural and heuristic methods by means of integrating a simulation model with a knowledge based system (KBS) for supporting all phases of energy conscious design and evaluation. The methodology is based on partitioning the design process into discrete phases and identifying the informational characteristics of each phase, as far as energy conscious design is concerned. These informational characteristics are expressed in the form of design variables (parameters) and the relationships between them. The expected energy performance of a design alternative is evaluated by a combination of heuristic and procedural methods, and the context-sensitive application of default values, when necessary. By virtue of combining knowledge based evaluations with procedural ones, this methodology allows for testing the applicability of heuristic rules in non-standard cases, thereby improving the predictive powers of the evaluation.
keywords design process, evaluation, energy, analysis, synthesis, integration, architecture, knowledge base, heuristics, simulation
series CADline
last changed 2003/06/02 10:24
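
The combined method described above, heuristic rules with context-sensitive defaults feeding a procedural evaluation, can be sketched as a small control structure. The rule, the default and the load formula below are invented placeholders, not the paper's actual knowledge base or simulation model:

```python
def heuristic_glazing_rule(design):
    """Heuristic: rule of thumb for glazing fraction; returns None when
    the case is non-standard and the rule does not apply."""
    if design.get("climate") == "temperate":
        return 0.3                     # illustrative rule-of-thumb value
    return None

def procedural_heating_load(design, glazing_fraction):
    """Procedural stand-in for a simulation model (placeholder formula)."""
    u_wall, u_glass = 0.4, 2.8         # W/m2K, illustrative defaults
    u_mean = (1 - glazing_fraction) * u_wall + glazing_fraction * u_glass
    return u_mean * design["envelope_m2"] * design["degree_hours_kKh"]  # ~kWh

def evaluate(design):
    g = heuristic_glazing_rule(design)
    if g is None:                      # context-sensitive default value
        g = design.get("glazing_default", 0.2)
    return procedural_heating_load(design, g)

design = {"climate": "temperate", "envelope_m2": 400, "degree_hours_kKh": 70}
print(f"estimated heating load: {evaluate(design):,.0f} kWh")  # 31,360 kWh
```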
