CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 239

_id ab4d
authors Huang, Tao-Kuang, Degelman, Larry O., and Larsen, Terry R.
year 1992
title A Visualization Model for Computerized Energy Evaluation During the Conceptual Design Stage (ENERGRAPH)
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 195-206
doi https://doi.org/10.52842/conf.acadia.1992.195
summary Evaluating energy performance is a crucial step toward responsible design. Currently there are many tools that can be applied to this goal with reasonable accuracy. Often, however, major flaws are not discovered until the final stage of design, when it is too late to change them. Not only are existing simulation models complicated to apply at the conceptual design stage, but energy principles and their applications are also abstract and hard to visualize. Because suitable tools to visualize energy analysis output are lacking, energy conservation concepts fail to be integrated into the building design. For these reasons, designers tend not to apply energy conservation concepts at the early design stage. However, since computer graphics represents a new phase of visual communication in the design process, the above problems might be addressed through a computerized graphical interface at the conceptual design stage.

The research described in this paper is the result of exploring the concept of using computer graphics to support energy efficient building designs. It focuses on the visualization of building energy through a highly interactive graphical interface in the early design stage.

series ACADIA
email l-degelman@neo.tamu.edu
last changed 2022/06/07 07:50
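
A minimal Python sketch of the kind of quick, visual energy feedback the abstract above argues for at the conceptual stage (illustrative only; this is not ENERGRAPH's actual model, and the U-values and degree-day figure are assumed placeholders):

# Envelope heat-loss estimate with crude graphical feedback (a sketch,
# not ENERGRAPH's method). Q = U * A * HDD * 24 gives annual loss in Wh.
HDD = 1500.0  # heating degree-days (K*day), hypothetical climate

surfaces = [  # (name, area in m^2, U-value in W/m^2K) -- assumed values
    ("roof",    120.0, 0.25),
    ("walls",   200.0, 0.35),
    ("glazing",  40.0, 1.80),
]

total = sum(a * u * HDD * 24 / 1000 for _, a, u in surfaces)  # kWh/yr
for name, a, u in surfaces:
    q = a * u * HDD * 24 / 1000
    bar = "#" * int(40 * q / total)   # simple visual ranking of losses
    print(f"{name:<8}{q:>8.0f} kWh/yr  {bar}")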

_id bdbb
authors Pugh, D.
year 1992
title Designing solid objects using interactive sketch interpretation
source Computer Graphics (1992 Symposium on Interactive 3D Graphics), 25(2):117-126, Mar. 1992
summary Before the introduction of Computer Aided Design and solid modeling systems, designers had developed a set of techniques for designing solid objects by sketching their ideas with pencil and paper and refining them into workable designs. Unfortunately, these techniques are different from those for designing objects using a solid modeler. Not only does this waste a vast reserve of talent and experience (people typically start drawing from the moment they can hold a crayon), but it also has a more fundamental problem: designers can use their intuition more effectively when sketching than they can when using a solid modeler. Viking is a solid modeling system whose user interface is based on interactive sketch interpretation. Interactive sketch interpretation lets the designer create a line-drawing of a desired object while Viking generates a three-dimensional object description. This description is consistent with both the designer's line-drawing and a set of geometric constraints, either derived from the line-drawing or placed by the designer. Viking's object descriptions are fully compatible with the object descriptions used by traditional solid modelers. As a result, interactive sketch interpretation can be used with traditional solid modeling techniques, combining the advantages of both sketching and solid modeling.
series journal paper
last changed 2003/04/23 15:50
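
Viking's interpretation algorithm is not reproduced in the abstract; the Python fragment below only illustrates the general idea of deriving a geometric constraint from a drawing, using a common axis-snapping heuristic (the function and tolerance are my own assumptions, not Viking's method):

import math

def snap_direction(p0, p1, tol_deg=10.0):
    """Snap a sketched 2D stroke to horizontal or vertical if it lies
    within tol_deg of an axis -- one simple constraint-inference
    heuristic of the kind sketch interpreters use (illustrative)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    angle = math.degrees(math.atan2(dy, dx)) % 180.0
    if min(angle, 180.0 - angle) < tol_deg:   # nearly horizontal
        return p0, (p1[0], p0[1])
    if abs(angle - 90.0) < tol_deg:           # nearly vertical
        return p0, (p0[0], p1[1])
    return p0, p1                             # leave free-form

print(snap_direction((0, 0), (10, 1)))   # -> ((0, 0), (10, 0))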

_id 9feb
authors Turk, G.
year 1992
title Re-tiling polygonal surfaces
source E.E. Catmull (ed), Computer Graphics (SIGGRAPH '92 Proceedings), vol. 26, pp. 55-64, July 1992
summary This paper presents an automatic method of creating surface models at several levels of detail from an original polygonal description of a given object. Representing models at various levels of detail is important for achieving high frame rates in interactive graphics applications and also for speeding up the off-line rendering of complex scenes. Unfortunately, generating these levels of detail is a time-consuming task usually left to a human modeler. This paper shows how a new set of vertices can be distributed over the surface of a model and connected to one another to create a re-tiling of a surface that is faithful to both the geometry and the topology of the original surface. The main contributions of this paper are: 1) a robust method of connecting together new vertices over a surface, 2) a way of using an estimate of surface curvature to distribute more new vertices at regions of higher curvature, and 3) a method of smoothly interpolating between models that represent the same object at different levels of detail. The key notion in the re-tiling procedure is the creation of an intermediate model called the mutual tessellation of a surface that contains both the vertices from the original model and the new points that are to become vertices in the re-tiled surface. The new model is then created by removing each original vertex and locally re-triangulating the surface in a way that matches the local connectedness of the initial surface. This technique for surface retessellation has been successfully applied to iso-surface models derived from volume data, Connolly surface molecular models and a tessellation of a minimal surface of interest to mathematicians.
series other
last changed 2003/04/23 15:50
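
As a hedged illustration of one ingredient of the method described above, curvature-weighted placement of the new vertices (the mutual-tessellation and re-triangulation steps are omitted), a Python sketch:

import random

# Triangles are sampled with probability proportional to
# area * curvature estimate, so high-curvature regions receive
# more new vertices -- the sampling idea from Turk's abstract,
# with invented data structures.

def area(tri):
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx
    return 0.5 * (nx*nx + ny*ny + nz*nz) ** 0.5

def sample_vertices(triangles, curvature, n):
    """triangles: list of 3-tuples of (x,y,z); curvature: per-triangle weight."""
    weights = [area(t) * max(curvature[i], 1e-9)
               for i, t in enumerate(triangles)]
    picks = random.choices(range(len(triangles)), weights=weights, k=n)
    pts = []
    for i in picks:
        a, b, c = triangles[i]
        r1, r2 = random.random(), random.random()
        if r1 + r2 > 1.0:             # fold back into the triangle
            r1, r2 = 1.0 - r1, 1.0 - r2
        pts.append(tuple(a[k] + r1*(b[k]-a[k]) + r2*(c[k]-a[k])
                         for k in range(3)))
    return pts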

_id acadia06_455
id acadia06_455
authors Ambach, Barbara
year 2006
title Eve’s Four Faces: Interactive Surface Configurations
source Synthetic Landscapes [Proceedings of the 25th Annual Conference of the Association for Computer-Aided Design in Architecture] pp. 455-460
doi https://doi.org/10.52842/conf.acadia.2006.455
summary Eve’s Four Faces consists of a series of digitally animated and interactive surfaces. Their content and structure are derived from a collection of sources outside the conventional boundaries of architectural research, namely psychology and the broader spectrum of arts and culture. The investigation stems from a psychological study documenting the attributes and social relationships of four distinct personality prototypes: the Individuated, the Traditional, the Conflicted, and the Assured (York and John 1992). For the purposes of this investigation, all four prototypes are assumed to be inherent, to certain degrees, in each individual. However, the propensity towards one of the prototypes forms the basis for each individual’s “personality structure.” The attributes, social implications and prospects for habitation have been translated into animations and surfaces operating within A House for Eve’s Four Faces. The presentation illustrates the potential for constructed surfaces to be configured and transformed interactively, responding to the needs and qualities associated with each prototype. The intention is to study the effects of each configuration and how each configuration may be therapeutic in supporting, challenging or altering one’s personality as it oscillates and shifts through the four prototypical conditions.
series ACADIA
email Ambachb@aol.com
last changed 2022/06/07 07:54

_id 8d37
authors Bradford, J.W., Ng, F.F. and Will, B.F.
year 1992
title Models and Hypermedia for Architectural Education
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 19-42
doi https://doi.org/10.52842/conf.ecaade.1992.019
summary Hypermedia uses the hypertext style of interactive navigation through computer-based multimedia materials to provide access to a wealth of information for use by teachers and students. Academic disciplines concerned about the enlightenment of future designers of the built environment require an additional medium not yet available in hypermedia - interactive 3-D computer models. This paper discusses a hypermedia CAI system currently being developed at the University of Hong Kong for use in architectural education. The system uses interactive 3-D computer models as another medium for instructional information, and as user orientation and database access devices. An object-oriented 3-D model hierarchy is used as the organizational structure for the database. A prototype which uses the system to teach undergraduate architecture students about a traditional Chinese temple is also illustrated. The prototype demonstrates the use of a computer as the medium for bilingual English and Chinese instruction.

keywords 3-D Modelling, Architectural Education, Computer Aided Instruction, Hypermedia, Multimedia
series eCAADe
email bradford@hkucc.hku.hk, hrrbnff@hkucc.hku.hk
last changed 2022/06/07 07:54
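
A minimal Python sketch of the organizational idea in the abstract above, an object-oriented 3-D model hierarchy whose nodes double as database access devices (the class, node names and file names are invented for illustration, not taken from the Hong Kong system):

class ModelNode:
    def __init__(self, name, media=None):
        self.name = name              # e.g. "temple", "roof", "bracket set"
        self.media = media or []      # linked hypermedia documents (paths)
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, name):
        """Depth-first lookup: selecting a part of the 3-D model
        retrieves the instructional media attached to that part."""
        if self.name == name:
            return self
        for c in self.children:
            hit = c.find(name)
            if hit:
                return hit
        return None

temple = ModelNode("temple", ["temple_intro_en.html", "temple_intro_zh.html"])
roof = temple.add(ModelNode("roof", ["dougong_notes.html"]))
print(temple.find("roof").media)   # -> ['dougong_notes.html']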

_id 9b34
authors Butterworth, J. (et al.)
year 1992
title 3DM: A three-dimensional modeler using a head-mounted display
source Proceedings of the 1992 Symposium on Interactive 3D Graphics (Cambridge, Mass., March 29- April 1, 1992.), 135-138
summary 3dm is a three dimensional (3D) surface modeling program that draws techniques of model manipulation from both CAD and drawing programs and applies them to modeling in an intuitive way. 3dm uses a head-mounted display (HMD) to simplify the problem of 3D model manipulation and understanding. A HMD places the user in the modeling space, making three dimensional relationships more understandable. As a result, 3dm is easy to learn how to use and encourages experimentation with model shapes.
series other
last changed 2003/04/23 15:50

_id 4129
authors Fargas, Josep and Papazian, Pegor
year 1992
title Metaphors in Design: An Experiment with a Frame, Two Lines and Two Rectangles
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 13-22
doi https://doi.org/10.52842/conf.acadia.1992.013
summary The research we will discuss below originated from an attempt to examine the capacity of designers to evaluate an artifact, and to study the feasibility of replicating a designer's moves intended to make an artifact more expressive of a given quality. We will present the results of an interactive computer experiment, first developed at the MIT Design Research Seminar, which is meant to capture the subject’s actions in a simple design task as a series of successive "moves". We will propose that designers use metaphors in their interaction with design artifacts and we will argue that the concept of metaphors can lead to a powerful theory of design activity. Finally, we will show how such a theory can drive the project of building a design system.

When trying to understand how designers work, it is tempting to examine design products in order to come up with the principles or norms behind them. The problem with such an approach is that it may lead to a purely syntactical analysis of design artifacts, failing to capture the knowledge of the designer in an explicit way, and ignoring the interaction between the designer and the evolving design. We will present a theory about design activity based on the observation that knowledge is brought into play during a design task by a process of interpretation of the design document. By treating an evolving design in terms of the meanings and rules proper to a given way of seeing, a designer can reduce the complexity of a task by focusing on certain of its aspects, and can manipulate abstract elements in a meaningful way.

series ACADIA
email fargas@dtec.es
last changed 2022/06/07 07:55

_id 2b7a
authors Ferguson, H., Rockwood, A. and Cox, J.
year 1992
title Topological Design of Sculptured Surfaces
source Computer Graphics, no. 26, pp.149-156
summary Topology is primal geometry. Our design philosophy embodies this principle. We report on a new surface design perspective based on a "marked" polygon for each object. The marked polygon captures the topology of the object surface. We construct multiply periodic mappings from polygon to sculptured surface. The mappings arise naturally from the topology and other design considerations. Hence we give a single-domain global parameterization for surfaces with handles. Examples demonstrate the design of sculptured objects and their manufacture.
series journal paper
last changed 2003/04/23 15:50
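
The simplest instance of the polygon-to-surface idea above, sketched in Python: a square with opposite edges identified maps periodically onto a torus (the paper's general marked polygons and surfaces with handles are beyond this sketch):

import math

def torus(u, v, R=2.0, r=0.5):
    """(u, v) in [0,1)^2 -> point on a torus; periodic in both axes,
    so the square's opposite edges map to the same curve."""
    theta, phi = 2 * math.pi * u, 2 * math.pi * v
    return ((R + r * math.cos(phi)) * math.cos(theta),
            (R + r * math.cos(phi)) * math.sin(theta),
            r * math.sin(phi))

# Opposite edges of the domain square land on the same surface points:
assert all(abs(a - b) < 1e-9 for a, b in zip(torus(0.0, 0.3), torus(1.0, 0.3)))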

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge. While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of the cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking.
The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine. Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"--which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically-mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse. These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing, learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added).
On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit." Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictature were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring. Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest.
What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make it so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge. This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id ascaad2006_paper18
id ascaad2006_paper18
authors Huang, Chie-Chieh
year 2006
title An Approach to 3D Conceptual Modelling
source Computing in Architecture / Re-Thinking the Discourse: The Second International Conference of the Arab Society for Computer Aided Architectural Design (ASCAAD 2006), 25-27 April 2006, Sharjah, United Arab Emirates
summary This article presents a 3D user interface developed to support conceptual modelling. The interface offers a new structure that resolves the difficult operations and complicated commands which arise when a 2D CAD interface is used to control a 3D environment. It integrates the controlling actions of “seeing – moving – seeing” that designers perform while operating CAD (Schön and Wiggins, 1992), using simple gestures to control the operations instead. The interface also provides a spatial positioning method which frees designers from commands for converting a coordinate axis. The study aims to provide more intuitive interactive control in CAD so as to meet the needs of designers. In our practices and experiments, a pair of LED gloves, captured by two CCD cameras, is used to sense hand motions and positions in 3D. In addition, circuit design is applied to map hand motions (selecting, browsing, zooming in and out, and rotating) to LED switches in different colours so that the camera images can be identified.
series ASCAAD
email scottie@arch.nctu.edu.tw
last changed 2007/04/08 19:47

_id 56e9
authors Huang, Tao-Kuang
year 1992
title A Graphical Feedback Model for Computerized Energy Analysis during the Conceptual Design Stage
source Texas A&M University
summary During the last two decades, considerable effort has been placed on the development of building design analysis tools. Architects and designers have begun to take advantage of computers to generate and examine design alternatives. However, because it has been difficult to adapt computer technologies to the visual orientation of the building designer, the majority of computer applications have been limited to numerical analysis and office automation tasks. Only recently, because of advances in hardware and software techniques, have computers entered into a new phase in the development of architectural design. Computers are now able to interactively display graphics solutions to architecturally related problems, which is fundamental to the design process. The majority of research programs in energy efficient design have sharpened people's understanding of energy principles and their application of those principles. Energy conservation concepts, however, have not been widely used. A major problem in the implementation of these principles is that energy principles and their applications are abstract, hard to visualize and separated from the architectural design process. Furthermore, one aspect of energy analysis may contain thousands of pieces of numerical information, which often leads to confusion on the part of designers. If these difficulties can be overcome, it would bring a great benefit to the advancement of energy conservation concepts. This research explores the concept of an integrated computer graphics program to support energy efficient design. It focuses on (1) the integration of energy efficiency and architectural design, and (2) the visualization of building energy use through graphical interfaces during the conceptual design stage. It involves (1) the discussion of frameworks for computer-aided architectural design and computer-aided energy efficient building design, and (2) the development of an integrated computer prototype program with a graphical interface that helps the designer create building layouts, analyze building energy interactively and receive visual feedback dynamically. The goal is to apply computer graphics as an aid to visualize the effects of energy related decisions and therefore permit the designer to visualize and understand energy conservation concepts in the conceptual phase of architectural design.
series thesis:PhD
last changed 2003/02/12 22:37

_id caadria2004_k-1
id caadria2004_k-1
authors Kalay, Yehuda E.
year 2004
title CONTEXTUALIZATION AND EMBODIMENT IN CYBERSPACE
source CAADRIA 2004 [Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] Seoul Korea 28-30 April 2004, pp. 5-14
doi https://doi.org/10.52842/conf.caadria.2004.005
summary The introduction of VRML (Virtual Reality Modeling Language) in 1994, and other similar web-enabled dynamic modeling software (such as SGI’s Open Inventor and WebSpace), have created a rush to develop on-line 3D virtual environments, with purposes ranging from art, to entertainment, to shopping, to culture and education. Some developers took their cues from the science fiction literature of Gibson (1984), Stephenson (1992), and others. Many were web-extensions to single-player video games. But most were created as a direct extension to our new-found ability to digitally model 3D spaces and to endow them with interactive control and pseudo-inhabitation. Surprisingly, this technologically-driven stampede paid little attention to the core principles of place-making and presence, derived from architecture and cognitive science, respectively: two principles that could and should inform the essence of the virtual place experience and help steer its development. Why are the principles of place-making and presence important for the development of virtual environments? Why not simply be content with our ability to create realistic-looking 3D worlds that we can visit remotely? What could we possibly learn about making these worlds better, had we understood the essence of place and presence? To answer these questions we cannot look at place-making (both physical and virtual) from a 3D space-making point of view alone, because places are not an end unto themselves. Rather, places must be considered a locus of contextualization and embodiment that ground human activities and give them meaning. In doing so, places acquire a meaning of their own, which facilitates, improves, and enriches many aspects of our lives. They provide us with a means to interpret the activities of others and to direct our own actions. Such meaning is comprised of the social and cultural conceptions and behaviors imprinted on the environment by the presence and activities of its inhabitants, which are, in turn, ‘read’ by them through their own corporeal embodiment of the same environment. This transactional relationship between the physical aspects of an environment, its social/cultural context, and our own embodiment of it, combine to create what is known as a sense of place: the psychological, physical, social, and cultural framework that helps us interpret the world around us, and directs our own behavior in it. In turn, it is our own (as well as others’) presence in that environment that gives it meaning, and shapes its social/cultural character. By understanding the essence of place-ness in general, and in cyberspace in particular, we can create virtual places that can better support Internet-based activities, and make them equal to, and in some cases even better than, their physical counterparts. One of the activities that stands to benefit most from understanding the concept of cyber-places is learning—an interpersonal activity that requires the co-presence of others (a teacher and/or fellow learners), who can point out the difference between what matters and what does not, and produce an emotional involvement that helps students learn. Thus, while many administrators and educators rush to develop web-based remote learning sites, to leverage the economic advantages of one-to-many learning modalities, these sites deprive learners of the contextualization and embodiment inherent in brick-and-mortar learning institutions and needed to support the activity of learning.
Can these qualities be achieved in virtual learning environments? If so, how? These are some of the questions this talk will try to answer by presenting a virtual place-making methodology and its experimental implementation, intended to create a sense of place through contextualization and embodiment in virtual learning environments.
series CAADRIA
type normal paper
last changed 2022/06/07 07:52

_id 8488
authors Liggett, Robin S.
year 1992
title A Designer-Automated Algorithm Partnership: An Interactive Graphic Approach to Facility Layout
source New York: John Wiley & Sons, 1992. pp. 101-123 : ill. includes bibliography
summary Automated solution techniques for spatial allocation problems have long been of interest to researchers in computer-aided design. This paper describes research focusing on the use of an interactive graphic interface for the solution of facility layout problems which have quantifiable but sometimes competing criteria. The ideas presented in the paper have been implemented in a personal computer system.
keywords algorithms, user interface, layout, synthesis, floor plans, architecture, facilities planning, automation, space allocation, optimization
series CADline
email rliggett@ucla.edu
last changed 2003/06/02 13:58
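
As a hedged sketch of the automated side of such a designer-algorithm partnership (not Liggett's actual procedure), a greedy pairwise-exchange improvement for an activity-to-location assignment in Python; the flow and distance matrices are invented:

import itertools

def layout_cost(assign, flow, dist):
    """Sum of flow[i][j] * distance between the locations of i and j."""
    n = len(assign)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

def improve(assign, flow, dist):
    """Apply pairwise swaps until no swap lowers the cost."""
    assign = list(assign)
    best = layout_cost(assign, flow, dist)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(assign)), 2):
            assign[i], assign[j] = assign[j], assign[i]
            c = layout_cost(assign, flow, dist)
            if c < best:
                best, improved = c, True
            else:
                assign[i], assign[j] = assign[j], assign[i]  # undo swap
    return assign, best

flow = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]   # interaction between activities
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]   # distance between locations
print(improve([0, 1, 2], flow, dist))

In an interactive system of the kind the abstract describes, the designer would seed, lock, or veto assignments between improvement passes rather than accept the algorithm's result wholesale.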

_id aba4
authors Lischinski, D. Tampieri, F. and Greenberg, D.P.
year 1992
title Discontinuity Meshing for Accurate Radiosity
source IEEE Computer Graphics & Applications, November 1992, pp.25-38
summary We discuss the problem of accurately computing the illumination of a diffuse polyhedral environment due to an area light source. We show how umbra and penumbra boundaries and other illumination details correspond to discontinuities in the radiance function and its derivatives. The shape, location, and order of these discontinuities are determined by the geometry of the light sources and obstacles in the environment. We describe an object-space algorithm that accurately reproduces the radiance across a surface by constructing a discontinuity mesh that explicitly represents various discontinuities in the radiance function as boundaries between mesh elements. A piecewise quadratic interpolant is used to approximate the radiance function, preserving the discontinuities associated with the edges in the mesh. This algorithm can be used in the framework of a progressive refinement radiosity system to solve the diffuse global illumination problem. Results produced by the new method are compared with ones obtained using a standard radiosity system.
series journal paper
last changed 2003/04/23 15:50
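
A 2D Python sketch of where the discontinuities discussed above come from (the paper works in 3D and builds a full mesh; the geometry below is invented): lines through an endpoint of a linear source and an endpoint of an occluder mark umbra/penumbra boundaries on a receiver plane.

def project_to_floor(src, occ):
    """Line through src=(x1,y1) and occ=(x2,y2), intersected with y=0."""
    (x1, y1), (x2, y2) = src, occ
    t = y1 / (y1 - y2)          # parameter where the line reaches y = 0
    return x1 + t * (x2 - x1)

light = [(-1.0, 4.0), (1.0, 4.0)]     # endpoints of a linear area source
blocker = [(-1.0, 2.0), (1.0, 2.0)]   # endpoints of the occluder

xs = sorted(project_to_floor(s, o) for s in light for o in blocker)
print("discontinuity points on the floor:", xs)   # -> [-3.0, -1.0, 1.0, 3.0]
# Between xs[1] and xs[2] the blocker hides the whole source (umbra);
# between xs[0]-xs[1] and xs[2]-xs[3] it hides only part of it (penumbra).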

_id a582
authors Marshall, Tony B.
year 1992
title The Computer as a Graphic Medium in Conceptual Design
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 39-47
doi https://doi.org/10.52842/conf.acadia.1992.039
summary The success CAD has experienced in the architectural profession demonstrates that architects have been willing to replace traditional drafting media with computers and electronic plotters for the production of working drawings. Its expanded use in the design development phase for 3D modeling and rendering further justifies CAD's usefulness as a presentation medium. The schematic design phase, however, has hardly been influenced by the evolution of CAD. Most architects simply have not come to view the computer as a viable design medium. One reason for this might be the strong correspondence between architectural CAD and plan view graphics, as used in working drawings, compared to the weak correspondence between architectural CAD and plan view graphics, as used in schematic design. The role of the actual graphic medium during schematic design should not be overlooked in the development of CAD applications.

In order to produce practical CAD applications for schematic design we must explore the computer’s potential as a form of expression and its role as a graphic medium. An examination of the use of traditional graphic media during schematic design will provide some clues regarding what capabilities CAD must provide and how a system should operate in order to be useful during conceptual design.

series ACADIA
last changed 2022/06/07 07:59

_id 054b
authors Peitgen, H.-O., Jürgens, H. and Saupe, D.
year 1992
title Fractals for the Classroom. Part 1: Introduction to Fractals and Chaos
source Springer Verlag, New York
summary Fractals for the Classroom breaks new ground as it brings an exciting branch of mathematics into the classroom. The book is a collection of independent chapters on the major concepts related to the science and mathematics of fractals. Written at the mathematical level of an advanced secondary student, Fractals for the Classroom includes many fascinating insights for the classroom teacher and integrates illustrations from a wide variety of applications with an enjoyable text to help bring the concepts alive and make them understandable to the average reader. This book will have a tremendous impact upon teachers, students, and the mathematics education of the general public. With the forthcoming companion materials, including four books on strategic classroom activities and lessons with interactive computer software, this package will be unparalleled.
series other
last changed 2003/04/23 15:14
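
The flavor of the book's subject in a few lines of Python (a standard classroom example of the period-doubling route to chaos, not reproduced from the text): iterating the logistic map x -> r*x*(1-x) and recording the long-run behaviour as the parameter r grows.

def attractor(r, x=0.5, warmup=500, keep=8):
    """Iterate the logistic map past its transient, then sample it."""
    for _ in range(warmup):
        x = r * x * (1 - x)
    seen = []
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.append(round(x, 4))
    return sorted(set(seen))

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r={r}: {attractor(r)}")
# r=2.8: one fixed point; r=3.2: 2-cycle; r=3.5: 4-cycle; r=3.9: chaos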

_id a5fc
authors Shinners, Neil, D’Cruz, Neville and Marriott, Andrew
year 1992
title Multi-Faceted Architectural Visualization
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 141-153
doi https://doi.org/10.52842/conf.acadia.1992.141
summary As well as learning traditional design techniques, students in architecture courses learn how to use powerful workstations with CAD systems, color scanners and laser printers and software for the rendering, compositing and animating of their designs.

They learn to use raytracing and radiosity rendering systems to provide visual realism, alpha-channel compositing systems to put a client in the picture (literally) or the design in situ, and keyframe animation systems to allow realistic walkthroughs.

Student presentations are now based on videos, photographic slides, slide shows or real-time animation. Images (as data files) are imported into full color publishing systems for final year thesis presentation.

The architectural graphics environment at Curtin University facilitates the integration of slide and video examples of raytraced and chroma-keyed images with computer aided design techniques for architectural student presentations.

series ACADIA
email raytrace@cs.curtin.edu.au
last changed 2022/06/07 07:56

_id 592a
authors Takemura, H. and Kishino, F.
year 1992
title Cooperative work environment using virtual workspace
source Proceedings of the Conference on Computer-Supported Cooperative Work: 226-232. New York: The Association for Computing Machinery
summary A virtual environment, which is created by computer graphics and an appropriate user interface, can be used in many application fields, such as teleoperation, telecommunication and real-time simulation. Furthermore, if this environment could be shared by multiple users, there would be more potential applications. Discussed in this paper is a case study of building a prototype of a cooperative work environment using a virtual environment, where two or more people can solve problems cooperatively, including design strategies and implementation issues. An environment where two operators can directly grasp, move or release stereoscopic computer graphics images by hand is implemented. The system is built by combining head-position-tracking stereoscopic displays, hand gesture input devices and graphics workstations. Our design goal is to utilize this type of interface for a future teleconferencing system. In order to provide good interactivity for users, we discuss potential bottlenecks and their solutions. The system allows two users to share a virtual environment and to organize 3-D objects cooperatively.
series other
last changed 2003/04/23 15:50

_id c54a
authors Welch, W. and Witkin, A.
year 1992
title Variational surface modeling
source Computer Graphics, 26, Proceedings, SIGGRAPH 92
summary We present a new approach to interactive modeling of freeform surfaces. Instead of a fixed mesh of control points, the model presented to the user is that of an infinitely malleable surface, with no fixed controls. The user is free to apply control points and curves which are then available as handles for direct manipulation. The complexity of the surface's shape may be increased by adding more control points and curves, without apparent limit. Within the constraints imposed by the controls, the shape of the surface is fully determined by one or more simple criteria, such as smoothness. Our method for solving the resulting constrained variational optimization problems rests on a surface representation scheme allowing nonuniform subdivision of B-spline surfaces. Automatic subdivision is used to ensure that constraints are met, and to enforce error bounds. Efficient numerical solutions are obtained by exploiting linearities in the problem formulation and the representation.
series journal paper
last changed 2003/04/23 15:50
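
A one-dimensional analogue of the variational idea above, sketched in Python with NumPy (the paper itself works on B-spline surfaces with adaptive subdivision): among all curves passing through the user's control points, pick the one minimizing a discrete bending energy, which reduces to a linear least-squares problem.

import numpy as np

n = 21
constraints = {0: 0.0, 10: 1.0, 20: 0.0}   # sample index -> required height

# Rows of the system encode second differences (smoothness energy);
# heavily weighted rows enforce the interpolation constraints softly.
rows, rhs = [], []
for i in range(1, n - 1):
    r = np.zeros(n); r[i-1], r[i], r[i+1] = 1.0, -2.0, 1.0
    rows.append(r); rhs.append(0.0)
for i, v in constraints.items():
    r = np.zeros(n); r[i] = 1e6            # large weight ~ hard constraint
    rows.append(r); rhs.append(1e6 * v)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.round(x, 3))   # a smooth arch peaking at the middle constraint

Adding a control point just appends one weighted row, which mirrors the paper's premise that controls are handles applied at will rather than a fixed mesh.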

_id 3b2a
authors Westin, S., Arvo, J. and Torrance, K.
year 1992
title Predicting reflectance functions from complex surfaces
source Computer Graphics, 26(2):255-264, July 1992
summary We describe a physically-based Monte Carlo technique for approximating bidirectional reflectance distribution functions (BRDFs) for a large class of geometries by directly simulating optical scattering. The technique is more general than previous analytical models: it removes most restrictions on surface microgeometry. Three main points are described: a new representation of the BRDF, a Monte Carlo technique to estimate the coefficients of the representation, and the means of creating a milliscale BRDF from microscale scattering events. These allow the prediction of scattering from essentially arbitrary roughness geometries. The BRDF is concisely represented by a matrix of spherical harmonic coefficients; the matrix is directly estimated from a geometric optics simulation, enforcing exact reciprocity. The method applies to roughness scales that are large with respect to the wavelength of light and small with respect to the spatial density at which the BRDF is sampled across the surface; examples include brushed metal and textiles. The method is validated by comparing with an existing scattering model and sample images are generated with a physically-based global illumination algorithm.
series journal paper
last changed 2003/04/23 15:50
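
One ingredient of the technique above, sketched in Python with NumPy and SciPy: Monte Carlo estimation of the spherical-harmonic coefficients of a directional function. A simple analytic cosine lobe stands in here for the paper's simulated microgeometry scattering, and the sample count is arbitrary.

import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(1)
N = 200_000
z = rng.uniform(-1.0, 1.0, N)      # uniform random directions on the sphere
az = rng.uniform(0.0, 2 * np.pi, N)
polar = np.arccos(z)

f = np.maximum(z, 0.0)             # stand-in directional function: cosine lobe

def coeff(n, m):
    """Monte Carlo estimate of c_nm = integral of f * conj(Y_nm) d(omega)."""
    Y = sph_harm(m, n, az, polar)  # scipy order: (m, n, azimuth, polar angle)
    return 4 * np.pi * np.mean(f * np.conj(Y))

for n in range(3):
    print(n, [np.round(coeff(n, m), 3) for m in range(-n, n + 1)])
# Expected: c_00 ~ sqrt(pi)/2 ~ 0.886, c_10 ~ sqrt(pi/3) ~ 1.023,
# and every m != 0 coefficient ~ 0 by azimuthal symmetry.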
