CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 247

_id e87d
authors Schierle, G. Goetz
year 1992
title Computer Aided Design for Wind and Seismic Forces
doi https://doi.org/10.52842/conf.acadia.1992.187
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 187-194
summary This paper presents Lateral Design Graphs (LDG), a computer program for considering lateral wind and seismic forces in the early design stages. LDG provides numeric data and graphs to visualize the effect of building height, shape, and framing system on lateral forces. Many critical decisions affecting lateral forces and the elements that resist them are made at early design stages; costly changes or reduced safety may result if they are not considered. For example, building height, shape and configuration impact lateral forces and building safety, as does the placement of shear walls in line with space needs. But the complex and time-consuming nature of lateral force design by hand often makes early consideration impractical. The objectives of LDG are therefore to: 1) visualize the cause and effect of lateral forces; 2) make the design process more transparent; 3) develop informed intuition; 4) facilitate trade-off studies at an early stage; and 5) help to teach design for lateral forces.
series ACADIA
email
last changed 2022/06/07 07:57

_id 3ff5
authors Abbo, I.A., La Scalea, L., Otero, E. and Castaneda, L.
year 1992
title Full-Scale Simulations as Tool for Developing Spatial Design Ability
source Proceedings of the 4th European Full-Scale Modelling Conference / Lausanne (Switzerland) 9-12 September 1992, Part C, pp. 7-10
summary Spatial Design Ability has been defined as the capability to anticipate the effects (psychological impressions on potential observers or users) produced by mental manipulation of elements of architectural or urban spaces. This ability, of great importance in choosing the appropriate option during the design process, is not specifically developed in schools of architecture and is only partially obtained as a by-product of drawing, designing or architectural criticism. We use our laboratory as a tool to present spaces to people so that they can evaluate them. By means of a series of exercises, students compare their anticipations with the psychological impressions produced in other people. Here we present an experience in which students had to propose a space for an exhibition hall in which architectural projects (student theses) were to be shown. Following the Spatial Design Ability Development Model, which we have been using for several years, students first get acquainted with the use of evaluation instruments for psychological impressions as well as with research methodology. In this case, due to the short period available, we limited the investigation to the effects produced by the manipulation of only two independent variables: students first manipulated the form of the roof, walls and interior elements, and secondly the color and texture of those elements. They evaluated spatial quality, character and the other psychological impressions that the manipulations produced in people. They used three-dimensional scale models at 1/10 and 1/1.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
email
more http://info.tuwien.ac.at/efa
last changed 2003/08/25 10:12

_id 735a
authors Anh, Tran Hoai
year 1992
title FULL-SCALE EXPERIMENT ON KITCHEN FUNCTION IN HANOI
source Proceedings of the 4th European Full-Scale Modelling Conference / Lausanne (Switzerland) 9-12 September 1992, Part A, pp. 19-30
summary This study is part of a licentiate thesis on "Functional kitchen for the Vietnamese cooking way" at the Department of Architecture and Development Studies, Lund University. The issues it deals with are: (1) the inadequacy of kitchen design in the apartment buildings of Hanoi, where the kitchen is often designed as a mere cooking place and other parts of the food-making process are not given any attention; and (2) the lack of standard dimensional and planning criteria for the functional kitchen which can serve as a basis for kitchen design. // The thesis aims at finding indicators of the functional spatial requirements for kitchens, which can serve as guidelines for designing functional kitchens for Hanoi. One of the main propositions in the thesis is that functional kitchens for Hanoi should be organised to permit culinary activities to be done according to Vietnamese urban culinary practice. This is based on the concept that culinary activity is an expression of culture; thus the practice of preparing meals in the present context of the urban households in Hanoi has an established pattern and method which demand a suitable area and arrangement in the kitchen. This pattern and cooking method should make up the functional requirements for kitchens in Hanoi, and be taken into account if functional kitchen design is to be achieved. In the context of the space-limited apartment buildings of Hanoi, special focus is given to finding indicators of the minimum functional spatial requirements of kitchen work.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 15:29

_id aa78
authors Bayazit, Nigan
year 1992
title Requirements of an Expert System for Design Studios
doi https://doi.org/10.52842/conf.ecaade.1992.187
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 187-194
summary The goal of this paper is to study the problems of the transition from traditional architectural studio teaching to CAAD studio teaching, which requires a CAAD expert system as studio tutor, and to study the behavior of the student in this new environment. The differences between traditional and computerized studio teaching, and experiences in this field, are explained with reference to the design time required in relation to the student's expertise in the application of a CAD program. Learning styles and the process of design in computerized and non-computerized studio teaching are discussed. The design studio requirements of students in the traditional studio environment are clarified on the basis of an empirical study that examined the relations between tutor and student during studio critiques. The main complaints of the students raised in the empirical study were the lack of data in the specific design problem area, difficulties in realizing ideas and thoughts, not knowing the starting point of the design, having no information about the references to be used for the specific design task, and difficulties in the application of presentation techniques. The concluding parts of the paper discuss the different styles of teaching and their relation to the CAAD environment, the transformation of instructional programs for the new design environment, future expectations of CAAD programs, the properties of the new teaching environment, and the roles of expert systems in design studio education.
keywords CAAD Education, Expert System, Architectural Design Studio, Knowledge Acquisition
series eCAADe
email
last changed 2022/06/07 07:54

_id 065b
authors Beitia, S.S., Zulueta, A. and Barrallo, J.
year 1995
title The Virtual Cathedral - An Essay about CAAD, History and Structure
doi https://doi.org/10.52842/conf.ecaade.1995.355
source Multimedia and Architectural Disciplines [Proceedings of the 13th European Conference on Education in Computer Aided Architectural Design in Europe / ISBN 0-9523687-1-4] Palermo (Italy) 16-18 November 1995, pp. 355-360
summary The Old Cathedral of Santa Maria in Vitoria is the most representative Gothic building in the Basque Country. Built during the XIV century, it was closed to worship in 1994 because of the high risk of collapse presented by its structure. The closure followed the structural analysis that was entrusted to the University of the Basque Country in 1992. The topographic work carried out in the Cathedral to produce the planimetry of the temple revealed that many structural elements of great importance, such as arches, buttresses and flying buttresses, had been removed, modified or added over the history of Santa Maria. The first structural analysis of the church suggested that the huge deformations exhibited by the resistant elements, especially the piers, originated in interventions made in the past. A thorough historical investigation allowed us to learn how the Cathedral was built and what changes were made up to the present day. With this information, we began building a virtual model of the Cathedral of Santa Maria. The model was introduced into a Finite Element Method system to study the deformations suffered by the church during its construction in the XIV century and during the interventions made later in the XV, XVI and XX centuries. The efficiency of the virtual model in simulating the geometry of the Cathedral throughout its history allowed us to detect the cause of the structural damage, which was finally traced to a number of unfortunate interventions over time.
series eCAADe
more http://dpce.ing.unipa.it/Webshare/Wwwroot/ecaade95/Pag_43.htm
last changed 2022/06/07 07:54

_id 9b34
authors Butterworth, J. (et al.)
year 1992
title 3DM: A three-dimensional modeler using a head-mounted display
source Proceedings of the 1992 Symposium on Interactive 3D Graphics (Cambridge, Mass., March 29- April 1, 1992.), 135-138
summary 3dm is a three-dimensional (3D) surface modeling program that draws techniques of model manipulation from both CAD and drawing programs and applies them to modeling in an intuitive way. 3dm uses a head-mounted display (HMD) to simplify the problem of 3D model manipulation and understanding. An HMD places the user in the modeling space, making three-dimensional relationships more understandable. As a result, 3dm is easy to learn to use and encourages experimentation with model shapes.
series other
last changed 2003/04/23 15:50

_id sigradi2015_11.166
authors Calixto, Victor; Celani, Gabriela
year 2015
title A literature review for space planning optimization using an evolutionary algorithm approach: 1992-2014
source SIGRADI 2015 [Proceedings of the 19th Conference of the Iberoamerican Society of Digital Graphics - vol. 2 - ISBN: 978-85-8039-133-6] Florianópolis, SC, Brasil 23-27 November 2015, pp. 662-671.
summary Space planning in architecture is a field of research in which the process of arranging a set of space elements is the main concern. This paper presents a survey of 31 papers, comprising applications and reviews of space planning methods using evolutionary algorithms. The objective of this work was to organize, classify and discuss twenty-two years of space planning research based on an evolutionary approach, in order to orient future research in the field.
keywords Space Planning, Evolutionary algorithms, Generative System
series SIGRADI
email
last changed 2016/03/10 09:47

_id 91c4
authors Checkland, P.
year 1981
title Systems Thinking, Systems Practice
source John Wiley & Sons, Chichester
summary "Whether by design, accident or merely synchronicity, Checkland appears to have developed a habit of writing seminal publications near the start of each decade which establish the basis and framework for systems methodology research for that decade." (Hamish Rennie, Journal of the Operational Research Society, 1992). Thirty years ago Peter Checkland set out to test whether the Systems Engineering (SE) approach, highly successful in technical problems, could be used by managers coping with the unfolding complexities of organizational life. The straightforward transfer of SE to the broader situations of management was not possible, but by insisting on a combination of systems thinking strongly linked to real-world practice, Checkland and his collaborators developed an alternative approach - Soft Systems Methodology (SSM) - which enables managers of all kinds and at any level to deal with the subtleties and confusions of the situations they face. This work established the now accepted distinction between hard systems thinking, in which parts of the world are taken to be systems which can be engineered, and soft systems thinking, in which the focus is on making sure the process of inquiry into real-world complexity is itself a system for learning. Systems Thinking, Systems Practice (1981) and Soft Systems Methodology in Action (1990), together with an earlier paper, Towards a Systems-based Methodology for Real-World Problem Solving (1972), have long been recognized as classics in the field. Now Peter Checkland has looked back over the three decades of SSM development, brought the account of it up to date, and reflected on the whole evolutionary process which has produced a mature SSM. SSM: A 30-Year Retrospective, here included with Systems Thinking, Systems Practice, closes a chapter on what is undoubtedly the most significant single research programme on the use of systems ideas in problem solving.
Now retired from full-time university work, Peter Checkland continues his research as a Leverhulme Emeritus Fellow.
series other
last changed 2003/04/23 15:14

_id 9f8a
authors Davidow, William H.
year 1992
title The Virtual Corporation: Structuring and Revitalizing the Corporation for the 21St Century
source New York: Harper Collins Publishers
summary The great value of this timely, important book is that it provides an integrated picture of the customer-driven company of the future. We have begun to learn about lean production technology, stripped-down management, worker empowerment, flexible customized manufacturing, and other modern strategies, but Davidow and Malone show for the first time how these ideas are fitting together to create a new kind of corporation and a worldwide business revolution. Their research is fascinating. The authors provide illuminating case studies of American, Japanese, and European companies that have discovered the keys to improved competitiveness, redesigned their businesses and their business relationships, and made extraordinary gains. They also write bluntly and critically about a number of American corporations that are losing market share by clinging to outmoded thinking. Business success in the global marketplace of the future is going to depend upon corporations producing "virtual" products high in added value, rich in variety, and available instantly in response to customer needs. At the heart of this revolution will be fast new information technologies; increased emphasis on quality; accelerated product development; changing management practices, including new alignments between management and labor; and new linkages between company, supplier, and consumer, and between industry and government. The Virtual Corporation is an important cutting-edge book that offers a creative synthesis of the most influential ideas in modern business theory. It has already fired excitement and debate in industry, academia, and government, and it is essential reading for anyone involved in the leadership of America's business and the shaping of America's economic future.
series other
last changed 2003/04/23 15:14

_id cf73
authors Dosti, P., Martens, B. and Voigt, A.
year 1992
title Spatial Simulation In Architecture, City Development and Regional Planning
doi https://doi.org/10.52842/conf.ecaade.1992.195
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 195-200
summary The appropriate use of spatial simulation techniques tends to increase considerably the depth of evidence and the realistic content of the designs and plans to be described, and moreover may encourage experimentation, trial attempts and planning variants. This also means more frequent use of combinations of different techniques, bearing in mind that they are not equivalent, and making use of the respective advantages each offers. Until now the main attention of the EDP-Lab has been directed at achieving quantity; for the time to come, it will be the formation of quality. The challenge in the educational system at the Vienna University of Technology is to obtain appropriate results within the framework of low-cost simulation. This aspect also seems meaningful in order to promote the eventual implementation of these techniques in architectural practice.
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/
last changed 2022/06/07 07:55

_id e51d
authors Fazio, P., Bedard, C. and Gowri, K.
year 1992
title Constraints for Generating Building Envelope Design Alternatives
source New York: John Wiley & Sons, 1992. pp. 145-155 : charts. includes bibliography
summary The building envelope design process involves selecting materials and constructional types for envelope components. Many different materials need to be combined for wall and roof assemblies to meet various performance requirements such as thermal efficiency, cost, and acoustic and fire resistance. The number of performance attributes to be considered in the design process is large. Lack of information, time limitations and the large number of feasible design alternatives generally force the designer to rely on past experience and practical judgement to make rapid design decisions. Current work at the Centre for Building Studies focuses on the development of knowledge-based synthesis and evaluation techniques for reducing the problems of information handling and decision making in building envelope design. The generation of design alternatives is viewed as a search process that identifies feasible combinations of building envelope components satisfying a set of performance requirements, material compatibility, practicality of design, etc. This paper discusses the knowledge acquisition and representation issues involved in the definition of constraints to guide the generation of feasible combinations of envelope components.
keywords envelope, knowledge base, knowledge acquisition, representation, performance, design, structures, architecture, evaluation
series CADline
last changed 2003/06/02 14:41

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge. 
While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking. The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine. 
Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"--which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase in productive and cooperative learning in his electronically-mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse. 
These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added). On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit." 
Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictatorship were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring. 
Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest. What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. 
Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make it so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge. 
This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id 067f
authors Gantt, Michelle and Nardi, Bonnie A.
year 1992
title Gardeners and Gurus: Patterns of Cooperation among CAD Users. Perspectives on the Design of Collaborative Systems
source Proceedings of ACM CHI'92 Conference on Human Factors in Computing Systems 1992 pp. 107-117
summary We studied CAD system users to find out how they use the sophisticated customization and extension facilities offered by many CAD products. We found that users of varying levels of expertise collaborate to customize their CAD environments and to create programmatic extensions to their applications. Within a group of users, there is at least one local expert who provides support for other users. We call this person a local developer. The local developer is a fellow domain expert, not a professional programmer, outside technical consultant or MIS staff member. We found that in some CAD environments the support role has been formalized so that local developers are given official recognition, and time and resources to pursue local developer activities. In general, this formalization of the local developer role appears successful. We discuss the implications of our findings for work practices and for software design.
keywords Cooperative Work; End User Programming
series other
last changed 2002/07/07 16:01

_id a081
authors Greenberg S., Roseman M. and Webster, D.
year 1992
title Issues and Experiences Designing and Implementing Two Group Drawing Tools
source Readings in Groupware, 609-620
summary Groupware designers are now developing multi-user equivalents of popular paint and draw applications. Their job is not an easy one. First, human factors issues peculiar to group interaction appear that, if ignored, seriously limit the usability of the group tool. Second, implementation is fraught with considerable hurdles. This paper describes the issues and experiences we have met and handled in the design of two systems supporting remote real time group interaction: GroupSketch, a multi-user sketchpad; and GroupDraw, an object-based multi-user draw package. On the human factors side, we summarize empirically-derived design principles that we believe are critical to building useful and usable collaborative drawing tools. On the implementation side, we describe our experiences with replicated versus centralized architectures, schemes for participant registration, multiple cursors, network requirements, and the structure of the drawing primitives.
series other
last changed 2003/04/23 15:50

_id 56de
authors Handa, M., Hasegawa, Y., Matsuda, H., Tamaki, K., Kojima, S., Matsueda, K., Takakuwa, T. and Onoda, T.
year 1996
title Development of interior finishing unit assembly system with robot: WASCOR IV research project report
source Automation in Construction 5 (1) (1996) pp. 31-38
summary The WASCOR (WASeda Construction Robot) research project was organized in 1982 by Waseda University, Tokyo, Japan, with the aim of automating building construction with a robot. The project is a collaboration of nine general contractors and a construction machinery manufacturer. As the study developed, the WASCOR research project was divided into four phases, called WASCOR I, II, III, and IV respectively. WASCOR I, II, and III ran consecutively from 1982 to 1992, each phase lasting three to four years; WASCOR IV has continued since 1993. WASCOR IV has been working on an automated building interior finishing system. This system consists of the following three parts: (1) development of a building system and construction method for the automated interior finishing system; (2) design of the hardware system applied to the automated interior finishing system; (3) design of the information management system for automated construction. As the research project develops, this paper gives an interim report on parts (1) and (2).
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 5c74
authors HCIL
year 1997
title Spatial Perception in Perspective Displays
source Report Human-Computer Interaction Lab, Virginia
summary Increasingly, computer displays are being used as the interface "window" between complex systems and their users. In addition, it is becoming more common to see computer interfaces represented by spatial metaphors, allowing users to apply their vast prior knowledge and experience in dealing with the three-dimensional (3D) world (Wickens, 1992). Desktop VR or window on a world (WoW), as it is sometimes called, uses a conventional computer monitor to display the virtual environment (VE). The 3D display applies perspective geometry to provide the illusion of 3D space.
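The perspective geometry the report refers to can be illustrated with a minimal sketch: a 3D point is mapped onto the 2D display plane by scaling its screen coordinates in proportion to its distance from the viewer. The function name and parameters below are illustrative assumptions, not taken from the report.

```python
def project(point, viewer_distance):
    """Perspective-project a 3D point (x, y, z) onto a 2D screen plane.

    The viewer sits `viewer_distance` units in front of the screen; z
    measures depth behind the screen. Points farther away (larger z) are
    scaled down toward the center, producing the illusion of 3D space.
    """
    x, y, z = point
    scale = viewer_distance / (viewer_distance + z)
    return (x * scale, y * scale)

# A point on the screen plane (z = 0) projects to itself:
print(project((10.0, 10.0, 0.0), 5.0))   # (10.0, 10.0)
# A point one viewer-distance behind the screen is halved in size:
print(project((10.0, 0.0, 10.0), 10.0))  # (5.0, 0.0)
```

This shrinking-with-depth is the basic cue that desktop VR (window-on-a-world) displays rely on to convey spatial layout on a flat monitor.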
series report
last changed 2003/04/23 15:50

_id 130d
authors Hoinkes, R. and Mitchell, R.
year 1994
title Playing with Time - Continuous Temporal Mapping Strategies for Interactive Environments
source 6th Canadian GIS Conference, (Ottawa: Natural Resources Canada), pp. 318-329
summary The growing acceptance of GIS technology has had far-reaching effects on many fields of research. The recent developments in the area of dynamic and temporal GIS open new possibilities within the realm of historical research, where temporal relationship analysis is as important as spatial relationship analysis. While topological structures have had wide use in spatial GIS and have been the subject of most temporal GIS endeavours, the different demands of many of these temporally-oriented analytic processes call the choice of the topological direction into question. In the fall of 1992 the Montreal Research Group (MRG) of the Canadian Centre for Architecture mounted an exhibition dealing with the development of the built environment in 18th-century Montreal. To aid in presenting the interpretive messages of their data, the MRG worked with the Centre for Landscape Research (CLR) to incorporate the interactive capabilities of the CLR's PolyTRIM research software with the MRG's data base to produce a research tool as well as a public-access interactive display. The interactive capabilities stemming from a real-time object-oriented structure provided an excellent environment for both researchers and the public to investigate the nature of temporal changes in such aspects as land use, ethnicity, and fortifications of the 18th-century city. This paper describes the need for interactive real-time GIS in such temporal analysis projects and the underlying need for object-oriented vs. topologically structured data access strategies to support them.
series other
last changed 2003/04/23 15:14

_id abce
authors Ishii, Hiroshi and Kobayashi, Minoru
year 1992
title ClearBoard: A Seamless Medium for Shared Drawing and Conversation with Eye Contact Systems for Media-Supported Collaboration
source Proceedings of ACM CHI'92 Conference on Human Factors in Computing Systems 1992 pp. 525-532
summary This paper introduces a novel shared drawing medium called ClearBoard. It realizes (1) a seamless shared drawing space and (2) eye contact to support realtime and remote collaboration by two users. We devised the key metaphor: "talking through and drawing on a transparent glass window" to design ClearBoard. A prototype of ClearBoard is implemented based on the "Drafter-Mirror" architecture. This paper first reviews previous work on shared drawing support to clarify the design goals. We then examine three metaphors that fulfill these goals. The design requirements and the two possible system architectures of ClearBoard are described. Finally, some findings gained through the experimental use of the prototype, including the feature of "gaze awareness", are discussed.
series other
last changed 2002/07/07 16:01

_id caadria2004_k-1
id caadria2004_k-1
authors Kalay, Yehuda E.
year 2004
title CONTEXTUALIZATION AND EMBODIMENT IN CYBERSPACE
doi https://doi.org/10.52842/conf.caadria.2004.005
source CAADRIA 2004 [Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] Seoul Korea 28-30 April 2004, pp. 5-14
summary The introduction of VRML (Virtual Reality Modeling Language) in 1994, and other similar web-enabled dynamic modeling software (such as SGI’s Open Inventor and WebSpace), have created a rush to develop on-line 3D virtual environments, with purposes ranging from art, to entertainment, to shopping, to culture and education. Some developers took their cues from the science fiction literature of Gibson (1984), Stephenson (1992), and others. Many were web-extensions to single-player video games. But most were created as a direct extension to our new-found ability to digitally model 3D spaces and to endow them with interactive control and pseudo-inhabitation. Surprisingly, this technologically-driven stampede paid little attention to the core principles of place-making and presence, derived from architecture and cognitive science, respectively: two principles that could and should inform the essence of the virtual place experience and help steer its development. Why are the principles of place-making and presence important for the development of virtual environments? Why not simply be content with our ability to create realistically-looking 3D worlds that we can visit remotely? What could we possibly learn about making these worlds better, had we understood the essence of place and presence? To answer these questions we cannot look at place-making (both physical and virtual) from a 3D space-making point of view alone, because places are not an end unto themselves. Rather, places must be considered a locus of contextualization and embodiment that ground human activities and give them meaning. In doing so, places acquire a meaning of their own, which facilitates, improves, and enriches many aspects of our lives. They provide us with a means to interpret the activities of others and to direct our own actions. 
Such meaning comprises the social and cultural conceptions and behaviors imprinted on the environment by the presence and activities of its inhabitants, who, in turn, ‘read’ them through their own corporeal embodiment of the same environment. This transactional relationship between the physical aspects of an environment, its social/cultural context, and our own embodiment of it, combines to create what is known as a sense of place: the psychological, physical, social, and cultural framework that helps us interpret the world around us, and directs our own behavior in it. In turn, it is our own (as well as others’) presence in that environment that gives it meaning, and shapes its social/cultural character. By understanding the essence of place-ness in general, and in cyberspace in particular, we can create virtual places that can better support Internet-based activities, and make them equal to, and in some cases even better than, their physical counterparts. One of the activities that stands to benefit most from understanding the concept of cyber-places is learning—an interpersonal activity that requires the co-presence of others (a teacher and/or fellow learners), who can point out the difference between what matters and what does not, and produce an emotional involvement that helps students learn. Thus, while many administrators and educators rush to develop web-based remote learning sites, to leverage the economic advantages of one-to-many learning modalities, these sites deprive learners of the contextualization and embodiment inherent in brick-and-mortar learning institutions, which are needed to support the activity of learning. Can these qualities be achieved in virtual learning environments? If so, how? 
These are some of the questions this talk will try to answer by presenting a virtual place-making methodology and its experimental implementation, intended to create a sense of place through contextualization and embodiment in virtual learning environments.
series CAADRIA
type normal paper
last changed 2022/06/07 07:52

_id e7c8
authors Kalisperis, Loukas N., Steinman, Mitch and Summers, Luis H.
year 1992
title Design Knowledge, Environmental Complexity in Nonorthogonal Space
source New York: John Wiley & Sons, 1992. pp. 273-291 : ill. includes bibliography
summary Mechanization and industrialization of society has resulted in most people spending the greater part of their lives in enclosed environments. Optimal design of indoor artificial climates is therefore of increasing importance. Wherever artificial climates are created for human occupation, the aim is that the environment be designed so that individuals are in thermal comfort. Current design methodologies for radiant panel heating systems do not adequately account for the complexities of human thermal comfort, because they monitor air temperature alone and do not account for thermal neutrality in complex enclosures. Thermal comfort for a person is defined as that condition of mind which expresses satisfaction with the thermal environment. Thermal comfort is dependent on Mean Radiant Temperature and Operative Temperature, among other factors. In designing artificial climates for human occupancy, the interaction of the human with the heated surfaces as well as the surface-to-surface heat exchange must be accounted for. Early work in the area provided an elaborate and difficult method for calculating radiant heat exchange for simplistic and orthogonal enclosures. A new, improved method developed by the authors for designing radiant panel heating systems based on human thermal comfort and mean radiant temperature is presented. Through automation and elaboration this method overcomes the limitations of the early work. The design procedure accounts for human thermal comfort in nonorthogonal as well as orthogonal spaces based on mean radiant temperature prediction. The limitation of simplistic orthogonal geometries has been overcome with the introduction of the MRT-Correction method and inclined surface-to-person shape factor methodology. 
The new design method increases the accuracy of calculation and prediction of human thermal comfort and will allow designers to simulate complex enclosures utilizing the latest design knowledge of radiant heat exchange to increase human thermal comfort.
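The Mean Radiant Temperature that the abstract's method builds on is conventionally the fourth-power (Stefan-Boltzmann) weighting of surrounding surface temperatures by person-to-surface view factors. A minimal sketch of that standard relation, assuming precomputed view factors (names are my own, not the authors' code):

```python
def mean_radiant_temperature(surface_temps_c, view_factors):
    """Estimate MRT (deg C) from enclosure surface temperatures (deg C)
    and the person-to-surface view factors, which must sum to ~1.

    Uses the fourth-power weighting T_mrt^4 = sum(F_i * T_i^4) with
    temperatures converted to Kelvin for the radiant exchange.
    """
    if abs(sum(view_factors) - 1.0) > 1e-6:
        raise ValueError("view factors must sum to 1")
    t4 = sum(f * (t + 273.15) ** 4
             for t, f in zip(surface_temps_c, view_factors))
    return t4 ** 0.25 - 273.15

# Uniform enclosure: MRT equals the common surface temperature.
print(mean_radiant_temperature([20.0] * 4, [0.25] * 4))  # 20.0
```

Computing the view factors themselves for inclined, nonorthogonal surfaces is exactly the hard part the authors' shape-factor methodology addresses; this sketch only shows the weighting step.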
keywords applications, architecture, building, energy, systems, design, knowledge
series CADline
last changed 2003/06/02 10:24
