CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 91

_id 9feb
authors Turk, G.
year 1992
title Re-tiling polygonal surfaces
source E.E. Catmull (ed.), Computer Graphics (SIGGRAPH '92 Proceedings), vol. 26, pp. 55-64, July 1992
summary This paper presents an automatic method of creating surface models at several levels of detail from an original polygonal description of a given object. Representing models at various levels of detail is important for achieving high frame rates in interactive graphics applications and also for speeding up the off-line rendering of complex scenes. Unfortunately, generating these levels of detail is a time-consuming task usually left to a human modeler. This paper shows how a new set of vertices can be distributed over the surface of a model and connected to one another to create a re-tiling of a surface that is faithful to both the geometry and the topology of the original surface. The main contributions of this paper are: 1) a robust method of connecting together new vertices over a surface, 2) a way of using an estimate of surface curvature to distribute more new vertices at regions of higher curvature and 3) a method of smoothly interpolating between models that represent the same object at different levels of detail. The key notion in the re-tiling procedure is the creation of an intermediate model called the mutual tessellation of a surface that contains both the vertices from the original model and the new points that are to become vertices in the re-tiled surface. The new model is then created by removing each original vertex and locally re-triangulating the surface in a way that matches the local connectedness of the initial surface. This technique for surface retessellation has been successfully applied to iso-surface models derived from volume data, Connolly surface molecular models and a tessellation of a minimal surface of interest to mathematicians.
series other
last changed 2003/04/23 15:50
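A much-simplified sketch of the paper's second contribution, curvature-weighted vertex placement: sample seed locations in proportion to a per-vertex curvature estimate, so high-curvature regions receive more new vertices. The function name and toy data below are hypothetical illustrations, not Turk's implementation.

```python
import random

def curvature_weighted_sample(curvatures, n_new, seed=0):
    """Pick original-vertex indices near which to seed new vertices,
    favoring regions of higher estimated curvature (toy sketch)."""
    rng = random.Random(seed)
    total = sum(curvatures)
    # build a cumulative distribution over the vertices
    cdf, acc = [], 0.0
    for c in curvatures:
        acc += c / total
        cdf.append(acc)
    picks = []
    for _ in range(n_new):
        r = rng.random()
        for i, t in enumerate(cdf):   # linear scan is fine for a sketch
            if r <= t:
                picks.append(i)
                break
        else:
            picks.append(len(cdf) - 1)  # guard against rounding at the top
    return picks

# vertex 2 is a sharp ridge; it should attract most of the new vertices
picks = curvature_weighted_sample([0.1, 0.1, 2.0], 1000)
```

Turk's method follows this biased seeding with point repulsion, a mutual tessellation, and local re-triangulation; only the placement bias is illustrated here.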

_id 3b2a
authors Westin, S., Arvo, J. and Torrance, K.
year 1992
title Predicting reflectance functions from complex surfaces
source Computer Graphics, 26(2):255-264, July 1992
summary We describe a physically-based Monte Carlo technique for approximating bidirectional reflectance distribution functions (BRDFs) for a large class of geometries by directly simulating optical scattering. The technique is more general than previous analytical models: it removes most restrictions on surface microgeometry. Three main points are described: a new representation of the BRDF, a Monte Carlo technique to estimate the coefficients of the representation, and the means of creating a milliscale BRDF from microscale scattering events. These allow the prediction of scattering from essentially arbitrary roughness geometries. The BRDF is concisely represented by a matrix of spherical harmonic coefficients; the matrix is directly estimated from a geometric optics simulation, enforcing exact reciprocity. The method applies to roughness scales that are large with respect to the wavelength of light and small with respect to the spatial density at which the BRDF is sampled across the surface; examples include brushed metal and textiles. The method is validated by comparing with an existing scattering model and sample images are generated with a physically-based global illumination algorithm.
series journal paper
last changed 2003/04/23 15:50
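The simulation-based estimation style can be illustrated with a toy example far simpler than the paper's spherical-harmonic fit: Monte Carlo integration of a known Lambertian BRDF against the cosine term over the hemisphere, which should recover the albedo. This sanity check is my own illustration, not Westin et al.'s method.

```python
import math
import random

def lambertian_albedo_mc(rho, n_samples=200_000, seed=1):
    """Estimate the directional-hemispherical reflectance of a
    Lambertian BRDF f_r = rho/pi by uniform hemisphere sampling;
    the integral of f_r * cos(theta) over solid angle equals rho."""
    rng = random.Random(seed)
    f_r = rho / math.pi
    pdf = 1.0 / (2.0 * math.pi)       # uniform density over the hemisphere
    acc = 0.0
    for _ in range(n_samples):
        cos_t = rng.random()          # uniform hemisphere: cos(theta) ~ U[0,1]
        acc += f_r * cos_t / pdf      # standard importance-sampling estimator
    return acc / n_samples

est = lambertian_albedo_mc(0.5)       # should be close to 0.5
```

The actual paper estimates a matrix of spherical harmonic coefficients from geometric-optics simulations over rough microgeometry; only the Monte Carlo estimator shape is shown here.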

_id acadia06_455
id acadia06_455
authors Ambach, Barbara
year 2006
title Eve’s Four Faces interactive surface configurations
doi https://doi.org/10.52842/conf.acadia.2006.455
source Synthetic Landscapes [Proceedings of the 25th Annual Conference of the Association for Computer-Aided Design in Architecture] pp. 455-460
summary Eve’s Four Faces consists of a series of digitally animated and interactive surfaces. Their content and structure are derived from a collection of sources outside the conventional boundaries of architectural research, namely psychology and the broader spectrum of arts and culture. The investigation stems from a psychological study documenting the attributes and social relationships of four distinct personality prototypes: the Individuated, the Traditional, the Conflicted, and the Assured (York and John 1992). For the purposes of this investigation, all four prototypes are assumed to be inherent, to certain degrees, in each individual. However, the propensity towards one of the prototypes forms the basis for each individual’s “personality structure.” The attributes, social implications and prospects for habitation have been translated into animations and surfaces operating within A House for Eve’s Four Faces. The presentation illustrates the potential for constructed surfaces to be configured and transformed interactively, responding to the needs and qualities associated with each prototype. The intention is to study the effects of each configuration and how each configuration may be therapeutic in supporting, challenging or altering one’s personality as it oscillates and shifts through the four prototypical conditions.
series ACADIA
email
last changed 2022/06/07 07:54

_id 2006_040
id 2006_040
authors Ambach, Barbara
year 2006
title Eve’s Four Faces: Interactive surface configurations
doi https://doi.org/10.52842/conf.ecaade.2006.040
source Communicating Space(s) [24th eCAADe Conference Proceedings / ISBN 0-9541183-5-9] Volos (Greece) 6-9 September 2006, pp. 40-44
summary Eve’s Four Faces consists of a series of digitally animated and interactive surfaces. Their content and structure are derived from a collection of sources outside the conventional boundaries of architectural research, namely psychology and the broader spectrum of arts and culture. The investigation stems from a psychological study documenting the attributes and social relationships of four distinct personality prototypes: the “Individuated”, the “Traditional”, the “Conflicted” and the “Assured” (York and John, 1992). For the purposes of this investigation, all four prototypes are assumed to be inherent, to certain degrees, in each individual; however, the propensity towards one of the prototypes forms the basis for each individual’s “personality structure”. The attributes, social implications and prospects for habitation have been translated into animations and surfaces operating within A House for Eve’s Four Faces. The presentation illustrates the potential for constructed surfaces to be configured and transformed interactively, responding to the needs and qualities associated with each prototype. The intention is to study the effects of each configuration and how it may be therapeutic in supporting, challenging or altering one’s personality as it oscillates and shifts through the four prototypical conditions.
keywords interaction; digital; environments; psychology; prototypes
series eCAADe
type normal paper
last changed 2022/06/07 07:54

_id 9b34
authors Butterworth, J. (et al.)
year 1992
title 3DM: A three-dimensional modeler using a head-mounted display
source Proceedings of the 1992 Symposium on Interactive 3D Graphics (Cambridge, Mass., March 29- April 1, 1992.), 135-138
summary 3dm is a three-dimensional (3D) surface modeling program that draws techniques of model manipulation from both CAD and drawing programs and applies them to modeling in an intuitive way. 3dm uses a head-mounted display (HMD) to simplify the problem of 3D model manipulation and understanding. An HMD places the user in the modeling space, making three-dimensional relationships more understandable. As a result, 3dm is easy to learn how to use and encourages experimentation with model shapes.
series other
last changed 2003/04/23 15:50

_id 2b7a
authors Ferguson, H., Rockwood, A. and Cox, J.
year 1992
title Topological Design of Sculptured Surfaces
source Computer Graphics, no. 26, pp.149-156
summary Topology is primal geometry. Our design philosophy embodies this principle. We report on a new surface design perspective based on a "marked" polygon for each object. The marked polygon captures the topology of the object surface. We construct multiply periodic mappings from polygon to sculptured surface. The mappings arise naturally from the topology and other design considerations. Hence we give a single-domain global parameterization for surfaces with handles. Examples demonstrate the design of sculptured objects and their manufacture.
series journal paper
last changed 2003/04/23 15:50

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge.
While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of the cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking. The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine.
Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"--which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically-mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse.
These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing, learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added). On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit."
Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor are their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictature were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence, always shifting and reconfiguring.
Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest. What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. 
Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make it so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge.
This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id c54a
authors Welch, W. and Witkin, A.
year 1992
title Variational surface modeling
source Computer Graphics, 26, Proceedings, SIGGRAPH 92
summary We present a new approach to interactive modeling of freeform surfaces. Instead of a fixed mesh of control points, the model presented to the user is that of an infinitely malleable surface, with no fixed controls. The user is free to apply control points and curves which are then available as handles for direct manipulation. The complexity of the surface's shape may be increased by adding more control points and curves, without apparent limit. Within the constraints imposed by the controls, the shape of the surface is fully determined by one or more simple criteria, such as smoothness. Our method for solving the resulting constrained variational optimization problems rests on a surface representation scheme allowing nonuniform subdivision of B-spline surfaces. Automatic subdivision is used to ensure that constraints are met, and to enforce error bounds. Efficient numerical solutions are obtained by exploiting linearities in the problem formulation and the representation.
series journal paper
last changed 2003/04/23 15:50
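The constrained variational idea can be caricatured in one dimension: pin a few "control" heights and relax the free samples until a discrete membrane energy (the sum of squared first differences) is minimized. This toy relaxation is my own simplification, not the authors' nonuniform B-spline subdivision scheme.

```python
def fair_polyline(n, constraints, iters=2000):
    """Relax the heights of an n-point polyline toward minimal
    roughness while pinning user-specified control points.
    Free points settle at the average of their neighbors, the
    discrete condition for minimal membrane energy (toy sketch)."""
    y = [0.0] * n
    for i, v in constraints.items():
        y[i] = v
    for _ in range(iters):
        for i in range(1, n - 1):
            if i not in constraints:
                y[i] = 0.5 * (y[i - 1] + y[i + 1])  # Gauss-Seidel sweep
    return y

# pin the two ends; the interior relaxes to the straight line between them
heights = fair_polyline(11, {0: 0.0, 10: 1.0})
```

Adding more pinned points adds detail locally, mirroring the paper's point that shape complexity grows with the controls while smoothness governs everything in between.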

_id ddss9208
id ddss9208
authors Lucardie, G.L.
year 1993
title A functional approach to realizing decision support systems in technical regulation management for design and construction
source Timmermans, Harry (Ed.), Design and Decision Support Systems in Architecture (Proceedings of a conference held in Mierlo, the Netherlands in July 1992), ISBN 0-7923-2444-7
summary Technical building standards defining the quality of buildings, building products, building materials and building processes aim to provide acceptable levels of safety, health, usefulness and energy consumption. However, the logical consistency between these goals and the set of regulations produced to achieve them is often hard to identify. Not only the large quantities of highly complex and frequently changing building regulations to be met, but also the variety of user demands and the steadily increasing technical information on (new) materials, products and buildings have produced a very complex set of knowledge and data that should be taken into account when handling technical building regulations. Integrating knowledge technology and database technology is an important step towards managing the complexity of technical regulations. Generally, two strategies can be followed to integrate knowledge and database technology. The main emphasis of the first strategy is on transferring data structures and processing techniques from one field of research to another. The second approach is concerned exclusively with the semantic structure of what is contained in the data-based or knowledge-based system. The aim of this paper is to show that the second or knowledge-level approach, in particular the theory of functional classifications, is more fundamental and more fruitful. It permits a goal-directed rationalized strategy towards analysis, use and application of regulations. Therefore, it enables the reconstruction of (deep) models of regulations, objects and of users accounting for the flexibility and dynamics that are responsible for the complexity of technical regulations. 
Finally, at the systems level, the theory supports an effective development of a new class of rational Decision Support Systems (DSS), which should reduce the complexity of technical regulations and restore the logical consistency between the goals of technical regulations and the technical regulations themselves.
series DDSS
last changed 2003/08/07 16:36

_id 4857
authors Escola Tecnica Superior D'arquitectura de Barcelona (Ed.)
year 1992
title CAAD Instruction: The New Teaching of an Architect?
doi https://doi.org/10.52842/conf.ecaade.1992
source eCAADe Conference Proceedings / Barcelona (Spain) 12-14 November 1992, 551 p.
summary The involvement of computer graphic systems in the transmission of knowledge in the areas of urban planning and architectural design will bring a significant change to the didactic programs and methods of those schools which have decided to adopt these new instruments. Workshops of urban planning and architectural design will have to modify their structures, and teaching teams will have to revise their current programs. Some european schools and faculties of architecture have taken steps in this direction. Others are willing to join them.

This process is only delayed by the scarcity of material resources, and by the slowness with which a sufficient number of teachers are adopting these methods.

ECAADE has set out to analyze the state of this issue during its next conference, and it will be discussed from various points of view. From this confrontation of ideas will come, surely, the guidelines for progress in the years to come.

The different sessions will be grouped together following these four themes:

(A.) Multimedia and Course Work / State of the art of the synthesis of graphical and textual information favored by new available multimedia computer programs. Their repercussions on academic programs.
(B.) The New Design Studio / Physical characteristics, data concentration and accessibility of a computerized studio can be better approached in a computerized workshop.
(C.) How to manage the new education system / Problems and possibilities raised, from the practical and organizational points of view, of architectural education by the introduction of computers in the classrooms.
(D.) CAAI. Formal versus informal structure / How will the traditional teaching structure be affected by the incidence of these new systems in which the access to knowledge and information can be obtained in a random way and guided by personal and subjective criteria.

series eCAADe
email
last changed 2022/06/07 07:49

_id 831d
authors Seebohm, Thomas
year 1992
title Discoursing on Urban History Through Structured Typologies
doi https://doi.org/10.52842/conf.acadia.1992.157
source Mission - Method - Madness [ACADIA Conference Proceedings / ISBN 1-880250-01-2] 1992, pp. 157-175
summary How can urban history be studied with the aid of three-dimensional computer modeling? One way is to model known cities at various times in history, using historical records as sources of data. While such studies greatly enhance the understanding of the form and structure of specific cities at specific points in time, it is questionable whether such studies actually provide a true understanding of history. It can be argued that they do not because such studies only show a record of one of many possible courses of action at various moments in time. To gain a true understanding of urban history one has to place oneself back in historical time to consider all of the possible courses of action which were open in the light of the then current situation of the city, to act upon a possible course of action and to view the consequences in the physical form of the city. Only such an understanding of urban history can transcend the memory of the actual and hence the behavior of the possible. Moreover, only such an understanding can overcome the limitations of historical relativism, which contends that historical fact is of value only in historical context, with the realization, due to Benedetto Croce and echoed by Rudolf Bultmann, that the horizon of "deeper understanding" lies in "the actuality of decision" (Seebohm and van Pelt 1990).

One cannot conduct such studies on real cities except, perhaps, as a point of departure at some specific point in time to provide an initial layout for a city knowing that future forms derived by the studies will diverge from that recorded in history. An entirely imaginary city is therefore chosen. Although the components of this city at the level of individual buildings are taken from known cities in history, this choice does not preclude alternative forms of the city. To some degree, building types are invariants and, as argued in the Appendix, so are the urban typologies into which they may be grouped. In this imaginary city students of urban history play the role of citizens or groups of citizens. As they defend their interests and make concessions, while interacting with each other in their respective roles, they determine the nature of the city as it evolves through the major periods of Western urban history in the form of three-dimensional computer models.

My colleague R.J. van Pelt and I presented this approach to the study of urban history previously at ACADIA (Seebohm and van Pelt 1990). Yet we did not pay sufficient attention to the manner in which such urban models should be structured and how the efforts of the participants should be coordinated. In the following sections I therefore review what the requirements are for three-dimensional modeling to support studies in urban history as outlined both from the viewpoint of file structure of the models and other viewpoints which have bearing on this structure. Three alternative software schemes of progressively increasing complexity are then discussed with regard to their ability to satisfy these requirements. This comparative study of software alternatives and their corresponding file structures justifies the present choice of structure in relation to the simpler and better known generic alternatives which do not have the necessary flexibility for structuring the urban model. Such flexibility means, of course, that in the first instance the modeling software is more time-consuming to learn than a simple point-and-click package, in accord with the now established axiom that ease of learning software tools is inversely related to the functional power of the tools (Smith 1987).

series ACADIA
email
last changed 2022/06/07 07:56

_id e7c8
authors Kalisperis, Loukas N., Steinman, Mitch and Summers, Luis H.
year 1992
title Design Knowledge, Environmental Complexity in Nonorthogonal Space
source New York: John Wiley & Sons, 1992. pp. 273-291 : ill. includes bibliography
summary Mechanization and industrialization of society have resulted in most people spending the greater part of their lives in enclosed environments. Optimal design of indoor artificial climates is therefore of increasing importance. Wherever artificial climates are created for human occupation, the aim is that the environment be designed so that individuals are in thermal comfort. Current design methodologies for radiant panel heating systems do not adequately account for the complexities of human thermal comfort, because they monitor air temperature alone and do not account for thermal neutrality in complex enclosures. Thermal comfort for a person is defined as that condition of mind which expresses satisfaction with the thermal environment. Thermal comfort depends on Mean Radiant Temperature and Operative Temperature, among other factors. In designing artificial climates for human occupancy, the interaction of the human with the heated surfaces as well as the surface-to-surface heat exchange must be accounted for. Early work in the area provided an elaborate and difficult method for calculating radiant heat exchange for simplistic and orthogonal enclosures. A new, improved method developed by the authors for designing radiant panel heating systems based on human thermal comfort and mean radiant temperature is presented. Through automation and elaboration this method overcomes the limitations of the early work. The design procedure accounts for human thermal comfort in nonorthogonal as well as orthogonal spaces based on mean radiant temperature prediction. The limitation of simplistic orthogonal geometries has been overcome with the introduction of the MRT-Correction method and inclined surface-to-person shape factor methodology. The new design method increases the accuracy of calculation and prediction of human thermal comfort and will allow designers to simulate complex enclosures utilizing the latest design knowledge of radiant heat exchange to increase human thermal comfort.
keywords applications, architecture, building, energy, systems, design, knowledge
series CADline
last changed 2003/06/02 10:24
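The MRT-Correction method itself is not detailed in the abstract above. As a point of reference only, the standard angle-factor-weighted relation for mean radiant temperature (MRT⁴ = Σ Fᵢ·Tᵢ⁴, with temperatures in Kelvin and person-to-surface view factors Fᵢ summing to 1) can be sketched as follows; the surface temperatures and view factors in the example are illustrative assumptions, not values from the paper.

```python
def mean_radiant_temperature(surface_temps_c, view_factors):
    """Angle-factor-weighted mean radiant temperature (standard relation;
    the paper's MRT-Correction method is not reproduced here).

    surface_temps_c : enclosure surface temperatures in degrees Celsius
    view_factors    : person-to-surface view factors, summing to 1
    """
    if abs(sum(view_factors) - 1.0) > 1e-6:
        raise ValueError("view factors must sum to 1")
    # Work in Kelvin: MRT^4 = sum(F_i * T_i^4)
    mrt4 = sum(f * (t + 273.15) ** 4
               for t, f in zip(surface_temps_c, view_factors))
    return mrt4 ** 0.25 - 273.15  # back to Celsius

# Illustrative room: five surfaces at 21 C plus one cold window wall at 5 C.
# The cold surface pulls MRT below the 21 C wall temperature.
print(round(mean_radiant_temperature([21, 21, 21, 21, 21, 5],
                                     [0.18] * 5 + [0.1]), 1))  # → 19.5
```

Because the relation is fourth-power in absolute temperature, a single cold surface depresses MRT more than a linear average would suggest — the effect the radiant-panel design method must capture.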

_id caadria2024_365
id caadria2024_365
authors Lahtinen, Aaro, Gardner, Nicole, Ramos Jaime, Cristina and Yu, Kuai
year 2024
title Visualising Sydney's Urban Green: A Web Interface for Monitoring Vegetation Coverage between 1992 and 2022 using Google Earth Engine
doi https://doi.org/10.52842/conf.caadria.2024.2.515
source Nicole Gardner, Christiane M. Herr, Likai Wang, Hirano Toshiki, Sumbul Ahmad Khan (eds.), ACCELERATED DESIGN - Proceedings of the 29th CAADRIA Conference, Singapore, 20-26 April 2024, Volume 2, pp. 515–524
summary With continued population growth and urban expansion, the severity of environmental concerns within cities is likely to increase without proper urban ecosystem monitoring and management. Despite this, limited efforts have been made to effectively communicate the ecological value of urban vegetation to Architecture, Engineering and Construction (AEC) professionals concerned with mitigating these effects and improving urban liveability. In response, this research project proposes a novel framework for identifying and conveying historical changes to vegetation coverage within the Greater Sydney area between 1992 and 2022. The cloud-based geo-spatial analysis platform, Google Earth Engine (GEE), was used to construct an accurate land cover classification of Landsat imagery, allowing the magnitude, spatial configuration, and period of vegetation loss to be promptly identified. The outcomes of this analysis are represented through an intuitive web platform that facilitates a thorough understanding of the complex relationships between anthropogenic activities and vegetation coverage. A key finding indicated that recent developments in the Blacktown area had directly contributed to heightened land surface temperature, suggesting a reformed approach to urban planning is required to address climatic concerns appropriately. The developed web interface provides a unique method for AEC professionals to assess the effectiveness of past planning strategies, encouraging a multi-disciplinary approach to urban ecosystem management.
keywords Urban Vegetation, Web Interface, Landsat Imagery, Land Cover Classification, Google Earth Engine
series CAADRIA
email
last changed 2024/11/17 22:05
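The abstract above does not specify the paper's GEE land-cover classifier. As an illustrative sketch of the underlying idea, vegetation coverage can be estimated by thresholding NDVI computed from near-infrared and red reflectance bands; the 0.3 threshold and the synthetic pixel values below are common rules of thumb and assumptions, not values from the paper.

```python
import numpy as np

def vegetation_fraction(nir, red, threshold=0.3):
    """Fraction of pixels classified as vegetation by NDVI thresholding.

    NDVI = (NIR - Red) / (NIR + Red); values above the threshold are
    treated as vegetated. This stands in for the paper's classifier.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    ndvi = (nir - red) / np.clip(nir + red, 1e-9, None)  # avoid divide-by-zero
    return float(np.mean(ndvi > threshold))

# Synthetic 2-pixel scene: one vegetated pixel (high NIR relative to red)
# and one bare pixel (NIR roughly equal to red).
print(vegetation_fraction([0.6, 0.3], [0.1, 0.28]))  # → 0.5
```

Running the same classification on imagery from two dates (e.g. 1992 and 2022) and differencing the per-area fractions is the basic mechanism behind a vegetation-change map of the kind the web interface visualises.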

_id aba4
authors Lischinski, D. Tampieri, F. and Greenberg, D.P.
year 1992
title Discontinuity Meshing for Accurate Radiosity
source IEEE Computer Graphics & Applications, November 1992, pp.25-38
summary We discuss the problem of accurately computing the illumination of a diffuse polyhedral environment due to an area light source. We show how umbra and penumbra boundaries and other illumination details correspond to discontinuities in the radiance function and its derivatives. The shape, location, and order of these discontinuities is determined by the geometry of the light sources and obstacles in the environment. We describe an object-space algorithm that accurately reproduces the radiance across a surface by constructing a discontinuity mesh that explicitly represents various discontinuities in the radiance function as boundaries between mesh elements. A piecewise quadratic interpolant is used to approximate the radiance function, preserving the discontinuities associated with the edges in the mesh. This algorithm can be used in the framework of a progressive refinement radiosity system to solve the diffuse global illumination problem. Results produced by the new method are compared with ones obtained using a standard radiosity system.
series journal paper
last changed 2003/04/23 15:50

_id b0f7
authors Martens, Bob
year 1992
title A FINISHING TOUCH TO THE FULL-SCALE LABORATORY AT THE UNIVERSITY OF TECHNOLOGY IN VIENNA
source Proceedings of the 4th European Full-Scale Modelling Conference / Lausanne (Switzerland) 9-12 September 1992, Part A, pp. 7-14
summary The development planning of the full-scale laboratory at the Vienna University of Technology was presented at the third E.F.A. Conference in Lund (1990). The exchange of experience there greatly encouraged us to take all measures necessary for immediate provisional operation. Working experience was of considerable significance, given that reconstruction work had been repeatedly postponed since 1988. This paper deals with the Vienna full-scale laboratory in its ultimate form and all the equipment designed for it. In summary, the further measures for operation are considered.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 15:30

_id 7291
authors Arvesen, Liv
year 1992
title Measures and the Unmeasurable
source Proceedings of the 4th European Full-Scale Modelling Conference / Lausanne (Switzerland) 9-12 September 1992, Part C, pp. 11-16
summary Nowhere do we find an environment similar to that of the tea ceremony. We may learn from the tea masters as we may learn from our masters of architecture. Directly and indirectly we are influenced by our surroundings, as has been proved by research and as we ourselves experience in our daily life. Full-scale experiments have been made on this subject. Addressing the nervous mind, the experiments concentrated on form expressing safety and peace.
keywords Full-scale Modeling, Model Simulation, Real Environments
series other
more http://info.tuwien.ac.at/efa
last changed 2003/08/25 10:12

_id 6270
authors Atac, Ibrahim
year 1992
title CAAD Education and Post-Graduate Opportunities (At Mimar Sinan University)
doi https://doi.org/10.52842/conf.ecaade.1992.273
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 273-278
summary This paper addresses new design teaching strategies at an important and traditional university in Istanbul, founded as the Academy of Fine Arts 110 years ago. It includes a short review of design education before the Academy changed into a university, and a description of the present situation with regard to computers. Nearly two years ago, CAAD education was introduced as an elective subject. The students show great interest in CAD; most Turkish architects now work with computers and CAAD graphics, although automated architecture has not yet become firmly established. The aim of the CAD studio is also to establish an institute which will allow university staff to develop their own programs and to pursue scientific research in this field. In response to rising requests from researchers and students, rapid and sound developments should be made to keep up with new technologies. As deeper specialized involvement with CAD is the future target, MSU is attempting to broaden its horizon by including the design methodologies of the last decades.

series eCAADe
last changed 2022/06/07 07:54

_id 10b7
authors Aukstakalnis, Steve and Blatner, David
year 1992
title Silicon Mirage: The Art and Science of Virtual Reality
source Peachpit Press
summary An introduction to virtual reality that covers every aspect of the technology and its many possible applications, from computer games to air traffic control.
series other
last changed 2003/04/23 15:14

_id ascaad2022_043
id ascaad2022_043
authors Awan, Abeeha; Prokop, Simon; Vele, Jiri; Dounas, Theodor; Lombardi, Davide; Agkathidis, Asterios; Kurilla, Lukas
year 2022
title Qualitative Knowledge Graph for the Evaluation of Metaverse(s) - Is the Metaverse Hype or a Promising New Field for Architects?
source Hybrid Spaces of the Metaverse - Architecture in the Age of the Metaverse: Opportunities and Potentials [10th ASCAAD Conference Proceedings] Debbieh (Lebanon) [Virtual Conference] 12-13 October 2022, pp. 99-116
summary With the advancement of augmented and virtual reality technologies both in scale as well as accessibility, the Metaverse (Stephenson, 1992, Hughes, 2022) has emerged as a new digital space with potential for the application of architectural creativity and design. With blockchain integration, the concept of the Metaverse shows promise in creating a “decentralised” space for design and creativity with rewards for its participants. As a platform that incorporates these technological components, does the Metaverse have utility for architectural design? Is there something truly novel in what the Metaverse brings to architectural computing, and architectural design? The paper constructs a qualitative knowledge graph that can be used for the evaluation of various kinds of Metaverses in and for architectural design. We use Design Science Research methods to develop the knowledge graph and its evaluative capacity, stemming from our experience with two Metaverses, Decentraland and Cryptovoxels. The paper concludes with a discussion of knowledge and practice gaps that are evident, framing the opportunities that architects might have in the future in terms of developing Metaverse(s).
series ASCAAD
email
last changed 2024/02/16 13:24

_id a6d8
authors Baletic, Bojan
year 1992
title Information Codes of Mutant Forms
doi https://doi.org/10.52842/conf.ecaade.1992.173
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 173-186
summary If we assume that the statements from this quote are true, then we have to ask ourselves the question: "Should we teach architecture as we do?" This paper describes our experience in developing a knowledge base using a neural network system to serve as an "intelligent assistant" to students and practicing architects in the conceptual phase of their work on housing design. Our approach concentrated on raising the designer's awareness of the problem, not by building rules to guide him to a solution, but by questioning the categories and typologies by which he classifies and understands a problem. This we achieve through examples containing mutant forms, imperfect rules, and gray zones between black and white that carry the seeds of new solutions.
series eCAADe
email
last changed 2022/06/07 07:54
