CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 499

_id c1ad
authors Cheng, Nancy Yen-wen
year 1997
title Teaching CAD with Language Learning Methods
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 173-188
doi https://doi.org/10.52842/conf.acadia.1997.173
summary By looking at computer aided design as design communication we can use pedagogical methods from the well-developed discipline of language learning. Language learning breaks down a complex field into attainable steps, showing how learning strategies and attitudes can enhance mastery. Balancing the linguistic emphases of organizational analysis, communicative intent and contextual application can address different learning styles. Guiding students in learning approaches from language study will equip them to deal with constantly changing technology.

From overall curriculum planning to specific exercises, language study provides a model for building a learner-centered education. Educating students about the learning process, such as the variety of metacognitive, cognitive and social/affective strategies, can improve learning. At an introductory level, providing a conceptual framework and enhancing resource-finding, brainstorming and coping abilities can lead to threshold competence. Using kit-of-parts problems helps students to focus on technique and content in successive steps, with mimetic and generative work appealing to different learning styles.

Practicing learning strategies on realistic projects hones the ability to connect concepts to actual situations, drawing on resource-usage, task management, and problem management skills. Including collaborative aspects in these projects provides the motivation of a real audience while linking academic study to practical concerns. Examples from architectural education illustrate how the approach can be implemented.

series ACADIA
email
last changed 2022/06/07 07:55

_id 6a37
authors Fowler, Thomas and Muller, Brook
year 2002
title Physical and Digital Media Strategies For Exploring ‘Imagined’ Realities of Space, Skin and Light
source Thresholds - Design, Research, Education and Practice, in the Space Between the Physical and the Virtual [Proceedings of the 2002 Annual Conference of the Association for Computer Aided Design In Architecture / ISBN 1-880250-11-X] Pomona (California) 24-27 October 2002, pp. 13-23
doi https://doi.org/10.52842/conf.acadia.2002.013
summary This paper will discuss an unconventional methodology for using physical and digital media strategies in a tightly structured framework for the integration of Environmental Control Systems (ECS) principles into a third year design studio. An interchangeable use of digital media and physical material enabled architectural explorations of rich tactile and luminous engagement. The principles that provide the foundation for integrative strategies between a design studio and building technology course spring from the Bauhaus tradition, where a systematic approach to craftsmanship and visual perception is emphasized. Focusing particularly on color, light, texture and materials, Josef Albers explored the assemblage of found objects, transforming these materials into unexpected dynamic compositions. Moholy-Nagy developed a technique called the photogram, or camera-less photograph, to record the temporal movements of light. Wassily Kandinsky developed a method of analytical drawing that breaks a still life composition into diagrammatic forces to express tension and geometry. These schematic diagrams provide a method for students to examine and analyze the implications of element placements in space (Bermudez, Neiman 1997). Gyorgy Kepes's Language of Vision provides a primer for learning basic design principles. Kepes argued that the perception of a visual image needs a process of organization. According to Kepes, the experience of an image is "a creative act of integration". 
All of these principles provide the framework for the studio investigation. The quarter started with a series of intense short workshops that used an interchangeable mix of digital and physical media to focus on ECS topics such as day lighting, electric lighting, and skin vocabulary, to lead students to consider these components as part of their form-making inspiration. In integrating ECS components with the design studio, a nine-step methodology was established to provide students with a compelling and tangible framework for design. Examples of student work will be presented for the two times this course was offered (2001/02) to show how exercises were linked to allow for a clear design progression.
series ACADIA
email
last changed 2022/06/07 07:51

_id 673a
authors Fukuda, T., Nagahama, R. and Sasada, T.
year 1997
title Networked Interactive 3-D design System for Collaboration
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 429-437
doi https://doi.org/10.52842/conf.caadria.1997.429
summary The concept of ODE (Open Design Environment) and a corresponding system were presented in 1991. The new concept of NODE, a networked version of ODE, was then generated in 1994 to enable wide-area collaboration. The aim of our research is to facilitate collaboration among the various people involved in the design process of an urban or architectural project. This includes various designers and engineers, the client, and the citizens who may be affected by such a project. With the new technologies of hypermedia, networks, and component architecture, we have developed the NODE system and applied it in the practical collaboration of these various participants. This study emphasizes the interactive 3-D design tool of NODE, which is able to make realistic, real-time presentations with an interactive interface. In recent years, the ProjectFolder of the NODE system, which is a case including the documents, plans, and tools needed to carry out a project, has been created on the World Wide Web (WWW) and makes hyperlinks between a 3-D object and texts, images, and other digital data.
series CAADRIA
email
last changed 2022/06/07 07:50

_id 02e4
authors Groh, Paul H.
year 1997
title Computer Visualization as a Tool for the Conceptual Understanding of Architecture
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 243-248
doi https://doi.org/10.52842/conf.acadia.1997.243
summary A good piece of architecture contains many levels of interrelated complexity. Understanding these levels and their interrelationship is critical to the understanding of a building to both architects and non-architects alike. A building's form, function, structure, materials, and details all relate to and impact one another. By selectively dissecting and taking apart buildings through their representations, one can carefully examine and understand the interrelationship of these building components.

With the recent introduction of computer graphics, much attention has been given to the representation of architecture. Floor plans and elevations have remained relatively unchanged, while digital animation and photorealistic renderings have become exciting new means of representation. A problem with the majority of this work, and especially photorealistic rendering, is that it represents the building as an image and concentrates on how a building looks as opposed to how it works. Oftentimes this "look" is artificial, expressing the incapacity of programs (or their users) to represent the complexities of materials, lighting, and perspective. By using digital representation in a descriptive, less realistic way, one can explore the rich complexities and interrelationships of architecture. Instead of representing architecture as a finished product, it is possible to represent the ideas and concepts of the project.

series ACADIA
email
last changed 2022/06/07 07:51

_id 041e
authors Hall, Theodore W.
year 1997
title Hand-Eye Coordination in Virtual Reality, Using a Desktop Display, Stereo Glasses and a 3-D Mouse
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 73-82
doi https://doi.org/10.52842/conf.caadria.1997.073
summary Many virtual-reality displays augment the user’s view of the real world but do not completely mask it out or replace it. Intuitive control and realistic interaction with these displays depend on accurate hand-eye coordination: the projected image of a 3-D cursor in virtual space should align visually with the real position of the 3-D input device that controls it. This paper discusses some of the considerations and presents algorithms for coordinating the physical and virtual worlds.
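
The hand-eye coordination problem described above can be sketched generically (this is not Hall's algorithm; the transform values and function names below are illustrative assumptions): a calibrated rigid transform maps the 3-D mouse position from tracker coordinates into virtual-world coordinates, so that the rendered 3-D cursor visually coincides with the physical device.

```python
# Hypothetical sketch: map a tracked 3-D input device into the virtual scene
# via a calibrated 4x4 homogeneous transform (row-major nested lists).

def mat_vec(m, v):
    """Apply a 4x4 homogeneous matrix to a 3-D point, returning a 3-D point."""
    x, y, z = v
    out = [m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3] for i in range(4)]
    return [out[0] / out[3], out[1] / out[3], out[2] / out[3]]

# Example calibration (made-up values): a pure translation placing the
# tracker origin at (0.1, -0.2, 0.5) in world coordinates.
TRACKER_TO_WORLD = [
    [1.0, 0.0, 0.0, 0.1],
    [0.0, 1.0, 0.0, -0.2],
    [0.0, 0.0, 1.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
]

def cursor_world_position(tracker_pos):
    """World-space position at which to draw the 3-D cursor."""
    return mat_vec(TRACKER_TO_WORLD, tracker_pos)
```

In a real system the calibration matrix would be recovered by sampling known device positions against reference points in the display, which is the kind of physical/virtual coordination the paper's algorithms address.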
series CAADRIA
email
last changed 2022/06/07 07:50

_id d910
authors Kieferle, Joachim B. and Herzberger, Erwin
year 2002
title The “Digital year for Architects” Experiences with an integrated teaching concept
source Connecting the Real and the Virtual - design e-ducation [20th eCAADe Conference Proceedings / ISBN 0-9541183-0-8] Warsaw (Poland) 18-20 September 2002, pp. 88-95
doi https://doi.org/10.52842/conf.ecaade.2002.088
summary The “digital year for architects” is an integrated course for graduate architecture students that has been held since 1997 at Stuttgart University. Its concept is to link traditional design teaching with working on computers. Three seminars and one design project form the framework of the course, in which the students are taught topics such as image and space composition, typography, video, the use of virtual reality, and theoretical basics for the final design project like information management or working environments, learn approximately a dozen software packages, and finally complete a visionary design project. It has been shown that the advantage of an integrated course compared to separate courses is the more intensive engagement with the project as well as the better skills achieved when learning the new media. Because the project topics are different from usual architecture and more abstract, the main effect is to widen the students’ way of thinking and designing.
series eCAADe
email
last changed 2022/06/07 07:52

_id ijac20031105
id ijac20031105
authors Kieferle, Joachim B.; Herzberger, Erwin
year 2003
title The "Digital year for Architects" - Experiences with an Integrated Teaching Concept
source International Journal of Architectural Computing vol. 1 - no. 1
summary The "digital year for architects" is an integrated course for graduate architecture students that has been running since 1997, at Stuttgart University. Its concept is to link together traditional design teaching and working with computers. Three seminar classes and one design project form the framework of the course. In it the students are taught the design of, for example, image and space composition, typography, video, and using virtual reality. Additionally we cover theoretical basics for the final design project, such as information management or working environments. The course takes in approximately a dozen software packages and ends with a visionary design project. The results have shown the advantage of an integrated course compared to separate courses. The course proves to be more intensive in dealing with the project as well as achieving better skills when learning the associated new digital media. An important feature is that because the project topics are different from conventional architectural schemes, and tend to be more abstract, a key effect is to widen the students' way of thinking about designing.
series journal
email
more http://www.multi-science.co.uk/ijac.htm
last changed 2007/03/04 07:08

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 
3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. 
coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator however the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques Figure 3 Trellis interpreted with "graphic ivy" Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. 
Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds as it has an option to enable "tiling" of the generated images. 
There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, three possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. 
Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. 
It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
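
The shape-breeding scheme sketched in section 1.2 of the summary (shapes as closed point lists, children interpolated from parents) can be illustrated minimally. This is not Ransen's code; the arc-length resampling step is an assumption about how two polygons with different point counts might be put into correspondence before their coordinate "genes" are blended.

```python
import math

def resample(poly, n):
    """Resample a closed polygon (list of (x, y) points) to n points,
    spaced evenly by arc length around the loop."""
    pts = poly + [poly[0]]  # close the loop
    seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(poly))]
    total = sum(seg)
    out = []
    for k in range(n):
        target = total * k / n  # distance along the perimeter
        i = 0
        while target > seg[i]:
            target -= seg[i]
            i += 1
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        t = target / seg[i] if seg[i] else 0.0
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def breed(parent_a, parent_b, n=100, w=0.5):
    """Cross two closed shapes: resample both to n corresponding points
    and blend the coordinate "genes" with weight w."""
    ra, rb = resample(parent_a, n), resample(parent_b, n)
    return [(w * xa + (1 - w) * xb, w * ya + (1 - w) * yb)
            for (xa, ya), (xb, yb) in zip(ra, rb)]
```

As the summary notes, such coordinate blending tends toward amorphous blobs over repeated generations, which is exactly the difficulty that led the author to abandon the genetic model.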
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id e43a
authors Richens, P.
year 1997
title Beyond Photorealism
source Architects’ Journal, 12/6/97
summary Computer rendering has come a long way in the last twenty years. But is it going in the right direction? Is the glossy photo-realistic image the only goal worth pursuing? And does the process of making it contribute enough to the design, or the ongoing dialogue with the client? There certainly are alternative modes of image-making. Frank Lloyd Wright, according to legend, could conceive a whole building in his head, and set it down rapidly, in plan and section. He would leave these drawings overnight to his assistant, who would set up a perspective. In the morning, FLW would spend an hour or two completing the rendering, ready for a lunch-time meeting with his clients. Today, many architects use their computers in the same way as FLW used his night-staff, to set-up an outline perspective, over which a rendering is produced by hand. Students, we observe, will often attempt to complete the rendering using a paint program such as Photoshop to apply textures and entourage in a kind of electronic collage.
series journal paper
email
more http://www.arct.cam.ac.uk/research/pubs/html/rich97c/
last changed 2003/05/15 21:45

_id diss_ruhl
id diss_ruhl
authors Ruhl, Volker R.
year 1997
title Computer-Aided Design and Manufacturing of Complex Shaped Concrete Formwork
source Doctor of Design Thesis, Graduate School of Design, Harvard University, Cambridge, MA
summary The research presented in this thesis challenges the appropriateness of existing, conventional forming practices in the building construction industry--both in situ and in prefabrication--for building concrete "freeforms," as they are characterized by impracticality and limitations in achieved geometric/formal quality. The author's theory proposes the application of alternative, non-traditional construction methods derived from the integration of information technology, in the form of Computer-Aided Design (CAD), Engineering (CAE) and Manufacturing (CAM), into the concrete tooling and placing process. This concept relies on a descriptive shape model of a physically non-existent building element which serves as a central database containing all the geometric data necessary to completely and accurately inform design development activities as well as the construction process. For this purpose, the thesis orients itself on existing, functioning models in manufacturing engineering and explores the broad spectrum of computer-aided manufacturing techniques applied in this industry. A two-phase, combined method study is applied to support the theory. Part I introduces the phenomenon of "complexity" in the architectural field, defines the goal of the thesis research and gives examples of complex shape. It also presents the two analyzed technologies: concrete tooling and automation technology. For both, it establishes terminology and classifications, gives insight into the state of the art, and describes limitations. For concrete tooling it develops a set of quality criteria. Part II develops a theory in the form of a series of proposed "non-traditional" forming processes and concepts that are derived through a synthesis of state-of-the-art automation with current concrete forming and placing techniques, and describes them in varying depth, in both text and graphics, on the basis of their geometric versatility and their appropriateness for the proposed task. 
Emphasis is given to the newly emerging and most promising Solid Freeform Fabrication processes, and within this area, to laser-curing technology. The feasibility of using computer-aided formwork design and computer-aided formwork fabrication in today's standard building practices is evaluated for this particular technology on the basis of case studies. Performance in the categories of process, material, product, lead time and economy is analyzed over the complete tooling cycle and is compared to the performance of existing, conventional forming systems for steel, wood, plywood veneer and glassfiber reinforced plastic; values added to the construction process and/or to the formwork product through information technology are pointed out and become part of the evaluation. For this purpose, an analytical framework was developed for testing the performance of various Solid Freeform Fabrication processes as well as the "sensitivity," or the impact of various influencing process and/or product parameters on lead time and economy. This tool allows us to make various suggestions for optimization as well as to formulate recommendations and guidelines for the implementation of this technology. The primary objective of this research is to offer architects and engineers unprecedented independence from planar, orthogonal building geometry in the realization of design ideas and/or design requirements for concrete structures and/or their components. The interplay between process-oriented design and innovative implementation technology may ultimately lead to an architecture conceived on a different level of complexity, with an extended form-vocabulary and of high quality.
series thesis:PhD
last changed 2005/09/09 12:58

_id e76c
authors Sato, Y., Wheeler, M.D. and Ikeuchi, K.
year 1997
title Object shape and reflectance modeling from observation
source Proceedings of SIGGRAPH 97, pp. 379-387, August, 1997
summary An object model for computer graphics applications should contain two aspects of information: the shape and the reflectance properties of the object. A number of techniques have been developed for modeling object shapes by observing real objects. In contrast, attempts to model reflectance properties of real objects have been rather limited. In most cases, modeled reflectance properties are either too simple or too complicated to be used for synthesizing realistic images of the object. In this paper, we propose a new method for modeling object reflectance properties, as well as object shapes, by observing real objects. First, an object surface shape is reconstructed by merging multiple range images of the object. By using the reconstructed object shape and a sequence of color images of the object, parameters of a reflection model are estimated in a robust manner. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then the reflectance parameters of each reflection component are estimated separately. This approach enables estimation of the reflectance properties of real objects whose surfaces show specular as well as diffuse reflection. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
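
The separation idea in this abstract can be caricatured in a few lines. This is emphatically not the authors' method, which fits a full reflection model to registered color image sequences; the single-channel Lambertian setup below is an illustrative assumption. Because specular highlights appear in only a few views, a robust fit of the diffuse term treats them as outliers, and the positive residual is attributed to the specular component.

```python
import statistics

def separate(observations):
    """observations: (cos_theta_i, intensity) samples for one surface point
    over a sequence of views.  Returns (diffuse_albedo, specular_residuals)
    under the toy model  I = albedo * cos_theta_i + specular."""
    # Median of per-sample ratios resists the sparse specular spikes that
    # would bias a least-squares fit of the Lambertian term.
    ratios = [i / c for c, i in observations if c > 1e-6]
    albedo = statistics.median(ratios)
    # Whatever exceeds the fitted diffuse prediction is treated as specular.
    residuals = [max(0.0, i - albedo * c) for c, i in observations]
    return albedo, residuals
```

A sample whose intensity far exceeds the Lambertian prediction (a highlight near the mirror direction) shows up in the residuals while leaving the albedo estimate essentially untouched.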
series other
last changed 2003/04/23 15:50

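The two-component estimation described in the Sato, Wheeler and Ikeuchi abstract above can be illustrated with a toy fit. The paper fits a Torrance-Sparrow model to the separated components; the sketch below instead uses a simplified Lambertian-plus-Phong-style model and ordinary least squares, purely as a hypothetical illustration of recovering diffuse and specular coefficients from an intensity sequence — not the paper's actual method.

```python
import math

def fit_reflectance(theta_i, alpha, intensity, n=10):
    """Least-squares fit of I = kd*cos(theta_i) + ks*cos(alpha)**n
    for a fixed specular exponent n (normal equations, 2 unknowns).
    This simplified model is an illustrative stand-in for the paper's
    Torrance-Sparrow fit."""
    a = [math.cos(t) for t in theta_i]        # diffuse basis term
    b = [math.cos(al) ** n for al in alpha]   # specular basis term
    saa = sum(x * x for x in a)
    sbb = sum(y * y for y in b)
    sab = sum(x * y for x, y in zip(a, b))
    sai = sum(x * i for x, i in zip(a, intensity))
    sbi = sum(y * i for y, i in zip(b, intensity))
    det = saa * sbb - sab * sab
    kd = (sai * sbb - sbi * sab) / det
    ks = (sbi * saa - sai * sab) / det
    return kd, ks

# Synthetic "image sequence": angles sampled over 50 frames.
theta = [0.024 * k for k in range(50)]        # incidence angle (radians)
alpha = [1.2 - 0.024 * k for k in range(50)]  # angle off the mirror direction
I = [0.7 * math.cos(t) + 0.3 * math.cos(al) ** 10
     for t, al in zip(theta, alpha)]
kd, ks = fit_reflectance(theta, alpha, I)     # recovers kd ~ 0.7, ks ~ 0.3
```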
_id dfaf
authors Ataman, Osman
year 2000
title Some Experimental Results in the Assessment of Architectural Media
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 163-171
doi https://doi.org/10.52842/conf.acadia.2000.163
summary The relationship between the media and architectural design can be an important factor and can influence the design outcome. However, the nature, direction and magnitude of this relationship are unknown. Consequently, there have been many speculative claims about this relationship and almost none of them are supported with empirical research studies. In order to investigate these claims and to provide a testable framework for their potential contributions to architectural education, this study aims to explore the effects of media on architectural design. During 1995-1997, a total of 90 students enrolled in First Year Design Studio and Introduction to Computing classes at Georgia Tech participated in the study. A set of quantitative measures was developed to assess the differences between the two media and their effects on architectural design. The results suggested that media influenced certain aspects of students’ designs. It is concluded that there is a strong relationship between the media and architectural design. The type of media not only changes some quantifiable design parameters but also affects the quality of design.
series ACADIA
email
last changed 2022/06/07 07:54

_id 536e
authors Bouman, Ole
year 1997
title RealSpace in QuickTimes: architecture and digitization
source Rotterdam: Nai Publishers
summary Time and space, drastically compressed by the computer, have become interchangeable. Time is compressed in that once everything has been reduced to 'bits' of information, it becomes simultaneously accessible. Space is compressed in that once everything has been reduced to 'bits' of information, it can be conveyed from A to B with the speed of light. As a result of digitization, everything is in the here and now. Before very long, the whole world will be on disk. Salvation is but a modem away. The digitization process is often seen in terms of (information) technology. That is to say, one hears a lot of talk about the digital media, about computer hardware, about the modem, mobile phone, dictaphone, remote control, buzzer, data glove and the cable or satellite links in between. Besides, our heads are spinning from the progress made in the field of software, in which multimedia applications, with their integration of text, image and sound, especially attract our attention. But digitization is not just a question of technology, it also involves a cultural reorganization. The question is not just what the cultural implications of digitization will be, but also why our culture should give rise to digitization in the first place. Culture is not simply a function of technology; the reverse is surely also true. Anyone who thinks about cultural implications, is interested in the effects of the computer. And indeed, those effects are overwhelming, providing enough material for endless speculation. The digital paradigm will entail a new image of humankind and a further dilution of the notion of social perfectibility; it will create new notions of time and space, a new concept of cause and effect and of hierarchy, a different sort of public sphere, a new view of matter, and so on. In the process it will indubitably alter our environment. 
Offices, shopping centres, dockyards, schools, hospitals, prisons, cultural institutions, even the private domain of the home: all the familiar design types will be up for review. Fascinated, we watch how the new wave accelerates the process of social change. The most popular sport nowadays is 'surfing' - because everyone is keen to display their grasp of dirty realism. But there is another way of looking at it: under what sort of circumstances is the process of digitization actually taking place? What conditions do we provide that enable technology to exert the influence it does? This is a perspective that leaves room for individual and collective responsibility. Technology is not some inevitable process sweeping history along with a dynamic of its own. Rather, it is the result of choices we ourselves make, and these choices can be debated in a way that is rarely done at present: digitization thanks to or in spite of human culture, that is the question. In addition to the distinction between culture as the cause or the effect of digitization, there are a number of other distinctions that are accentuated by the computer. The best known and most widely reported is the generation gap. It is certainly stretching things a bit to write off everybody over the age of 35, as sometimes happens, but there is no getting around the fact that for a large group of people digitization simply does not exist. Anyone who has been in the bit business for a few years can't help noticing that mum and dad are living in a different place altogether. (But they, at least, still have a sense of place!) In addition to this, it is gradually becoming clear that the age-old distinction between market and individual interests is still relevant in the digital era. On the one hand, the advance of cybernetics is determined by the laws of the marketplace which this capital-intensive industry must satisfy. Increased efficiency, labour productivity and cost-effectiveness play a leading role.
The consumer market is chiefly interested in what is 'marketable': info- and edutainment. On the other hand, an increasing number of people are not prepared to wait for what the market has to offer them. They set to work on their own, appropriate networks and software programs, create their own domains in cyberspace, domains that are free from the principle whereby the computer simply reproduces the old world, only faster and better. Here it is possible to create a different world, one that has never existed before. One in which the Other finds a place. The computer works out a new paradigm for these creative spirits. In all these distinctions, architecture plays a key role. Owing to its many-sidedness, it excludes nothing and no one in advance. It is faced with the prospect of historic changes yet it has also created the preconditions for a digital culture. It is geared to the future, but has had plenty of experience with eternity. Owing to its status as the most expensive of arts, it is bound hand and foot to the laws of the marketplace. Yet it retains its capacity to provide scope for creativity and innovation, a margin of action that is free from standardization and regulation. The aim of RealSpace in QuickTimes is to show that the discipline of designing buildings, cities and landscapes is not only an exemplary illustration of the digital era but that it also provides scope for both collective and individual activity. It is not just architecture's charter that has been changed by the computer, but also its mandate. RealSpace in QuickTimes consists of an exhibition and an essay.
series other
email
last changed 2003/04/23 15:14

_id cabb
authors Broughton, T., Tan, A. and Coates, P.S.
year 1997
title The Use of Genetic Programming In Exploring 3D Design Worlds - A Report of Two Projects by Msc Students at CECA UEL
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 885-915
summary Genetic algorithms are used to evolve rule systems for a generative process, in one case a shape grammar, which uses the "Dawkins Biomorph" paradigm of user-driven choices to perform artificial selection, in the other a CA/Lindenmayer system using the Hausdorff dimension of the resultant configuration to drive natural selection. (1) Using Genetic Programming in an interactive 3D shape grammar. A report of a generative system combining genetic programming (GP) and 3D shape grammars. The reasoning that underpins this work depends on the interpretation of design as search. In this system, a 3D form is a computer program made up of functions (transformations) & terminals (building blocks). Each program evaluates into a structure. Hence, in this instance a program is synonymous with form. Building blocks of form are platonic solids (box, cylinder, etc.). A variety of combinations of the simple affine transformations of translation, scaling and rotation, together with the Boolean operations of union, subtraction and intersection, performed on the building blocks generates different configurations of 3D forms. Following the methodology of genetic programming, an initial population of such programs is randomly generated and subjected to a test for fitness (the eyeball test). Individual programs that have passed the test are selected to be parents for reproducing the next generation of programs via the process of recombination. (2) Using a GA to evolve rule sets to achieve a goal configuration. The aim of these experiments was to build a framework in which a structure's form could be defined by a set of instructions encoded into its genetic make-up. This was achieved by combining a generative rule system commonly used to model biological growth with a genetic algorithm simulating the evolutionary process of selection to evolve an adaptive rule system capable of replicating any preselected 3D shape.
The generative modelling technique used is a string-rewriting Lindenmayer system: the genes of the emergent structures are the production rules of the L-system, and the spatial representation of the structures uses the geometry of iso-spatially dense-packed spheres.
series CAAD Futures
email
last changed 2003/11/21 15:16

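The string-rewriting half of the Broughton, Tan and Coates work above can be sketched in a few lines: an L-system's genome is simply its set of production rules, and growth is parallel rewriting. The sketch below shows only that generative step; the fitness measure (the Hausdorff dimension of the resulting configuration) and the iso-spatial sphere geometry are omitted, and the example rule set is invented for illustration.

```python
def rewrite(axiom, rules, iterations):
    """Apply L-system production rules to every symbol in parallel."""
    s = axiom
    for _ in range(iterations):
        # Symbols without a production rule copy through unchanged.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Here the rule set is the structure's "genome": a GA would mutate and
# recombine such rule strings, then score the decoded geometry.
genome = {"F": "F+F-F"}
grown = rewrite("F", genome, 2)   # "F+F-F+F+F-F-F+F-F"
```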
_id b4c4
authors Carrara, G., Fioravanti, A. and Novembri, G.
year 2000
title A framework for an Architectural Collaborative Design
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 57-60
doi https://doi.org/10.52842/conf.ecaade.2000.057
summary The building industry involves a larger number of disciplines, operators and professionals than other industrial processes. Its peculiarity is that the products (building objects) have a number of parts (building elements) that does not differ much from the number of classes into which building objects can be conceptually subdivided. Another important characteristic is that the building industry produces unique products (de Vries and van Zutphen, 1992). This is not an isolated situation but indeed one that is also spreading in other industrial fields. For example, production niches have proved successful in the automotive and computer industries (Carrara, Fioravanti, & Novembri, 1989). Building design is a complex multi-disciplinary process, which demands a high degree of co-ordination and co-operation among separate teams, each having its own specific knowledge and its own set of specific design tools. Establishing an environment for design tool integration is a prerequisite for network-based distributed work. Attempts have been made to solve the problem of efficient, user-friendly, and fast information exchange among operators by treating it simply as an exchange of data. But the failure of IGES, CGM and PHIGS confirms that data have different meanings and importance in different contexts. The STandard for Exchange of Product data, ISO 10303 Part 106 BCCM, relating to the AEC field (Wix, 1997), seems to be too complex to be applied to professional studios. Moreover, its structure is too deep and the conceptual classifications based on it do not allow multi-inheritance (Ekholm, 1996). From now on we shall adopt the BCCM semantic that defines the actor as "a functional participant in building construction"; and we shall define designer as "every member of the class formed by designers" (architects, engineers, town-planners, construction managers, etc.).
keywords Architectural Design Process, Collaborative Design, Knowledge Engineering, Dynamic Object Oriented Programming
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:55

_id 6496
authors Chen, Y.Z. and Maver, T.W.
year 1997
title Integrating Design Tools within a Human Collaborative Working
source The Int. Journal of Construction IT 5(2), pp. 47-73
summary This paper stresses the importance of establishing a collaborative working context as the basis for design integration. Within a virtual studio environment framework, a hybrid architecture for design tool integration is presented. Each design tool is wrapped as an autonomous service provider with its own data store; thus the project design data is physically distributed with the design tools. A global product model, which is augmented with meta-data description, is employed to provide a common vocabulary for communications and to assist the management of the distributed resources and activities. Collaboration-aware information is modelled and structured through the meta-data model and a tool model. Based on this, mechanisms for tool service co-ordination in varying modes are developed. It is then illustrated, through an implemented prototype system, how the integrated design tools might be used in human design work.
series journal paper
last changed 2003/05/15 21:45

_id maver_107
id maver_107
authors Chen, Yan and Maver, Tom W.
year 1997
title Integrating Design Tools within a Human Collaborative Working Context
source International Journal of Construction IT, Vol5, No 2, pp 35-53
summary Integrating design tools has been an important research subject. The work to be reported in this paper differs from many previous efforts in that it not only tackles tool-tool interoperation, but also does so within a human collaborative working context. We suggest that design integration support should include not only tool interoperability, but also mechanisms to co-ordinate and control tool use. We also argue that the higher-level management support should include not only formalised and automated mechanisms, but also semi-automated and even informal mechanisms for human designers to directly interact with each other. Within a collaborative working framework, we'll present a hybrid architecture for tool integration, in which the human designers and the design tools are assumed to be distributed while the management is centralised. In this approach, each design tool is wrapped as an autonomous service provider with its own data store; thus the project design data is physically distributed with the design tools. A meta-data augmented product model, which populates a central meta-data repository serving as a "map" for locating the distributed design objects, is devised to provide a common vocabulary for communications and to assist the management of the distributed resources and activities. A design object broker is used to mediate between the distributed tools and the central meta-data repository. The reported work has been part of a collaborative design system called the virtual studio environment. We'll illustrate how the integrated design tools might be used in human design work within the virtual studio environment.
series other
email
last changed 2003/09/03 15:36

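The meta-data "map" and design object broker described in the Chen and Maver abstracts above can be caricatured as a registry that records which tool owns which design object and routes lookups accordingly. This is a hypothetical sketch of the coordination idea only, not the actual virtual studio environment implementation; all names are invented.

```python
class DesignObjectBroker:
    """Central meta-data repository: records which tool owns which
    design object, so requests can be routed to the owning data store."""

    def __init__(self):
        self._owner = {}   # object id -> owning tool name

    def register(self, tool, object_ids):
        """A tool announces the design objects held in its local store."""
        for oid in object_ids:
            self._owner[oid] = tool

    def locate(self, oid):
        """Return the tool holding this object, or None if unregistered."""
        return self._owner.get(oid)

# Usage: tools stay autonomous; only the "map" is centralised.
broker = DesignObjectBroker()
broker.register("energy-analysis", ["zone-01", "zone-02"])
broker.register("cad-modeller", ["wall-17"])
broker.locate("wall-17")   # -> "cad-modeller"
```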
_id 71ad
authors Cicognani, Anna and Maher Mary Lou
year 1997
title Models of Collaboration for Designers in a Computer Supported Environment
source Formal Aspects of Collaborative CAD, IFIP, pp. 99-108
summary The development of models for Computer Mediated Collaborative Design (CMCD) provides guidelines for the continuing development of technology and tools for CMCD. In order to develop models for CMCD, a range of experiments and research objectives needs to be developed. The current literature around models for CMCD is still quite informal and descriptive. In this paper, we define the roles and types of models for CMCD. We propose a framework for understanding the contribution such models can make that considers two phenomena in CMCD: communicating and designing. We present some descriptive models from design research, CSCW research, and CMCD research and show how these models address communicating and designing.
series other
last changed 2003/04/23 15:50

_id 2354
authors Clayden, A. and Szalapaj, P.
year 1997
title Architecture in Landscape: Integrated CAD Environments for Contextually Situated Design
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.q6p
summary This paper explores the future role of a more holistic and integrated approach to the design of architecture in landscape. Many of the design exploration and presentation techniques presently used by particular design professions do not lend themselves to an inherently collaborative design strategy.

Within contemporary digital environments, there are increasing opportunities to explore and evaluate design proposals which integrate both architectural and landscape aspects. The production of integrated design solutions exploring buildings and their surrounding context is now possible through the design development of shared 3-D and 4-D virtual environments, in which buildings no longer float in space.

The scope of landscape design has expanded through the application of techniques such as GIS allowing interpretations that include social, economic and environmental dimensions. In architecture, for example, object-oriented CAD environments now make it feasible to integrate conventional modelling techniques with analytical evaluations such as energy calculations and lighting simulations. These were all ambitions of architects and landscape designers in the 70s when computer power restricted the successful implementation of these ideas. Instead, the commercial trend at that time moved towards isolated specialist design tools in particular areas. Prior to recent innovations in computing, the closely related disciplines of architecture and landscape have been separated through the unnecessary development, in our view, of their own symbolic representations, and the subsequent computer applications. This has led to an unnatural separation between what were once closely related disciplines.

Significant increases in the performance of computers are now making it possible to move on from symbolic representations towards more contextual and meaningful representations. For example, the application of realistic material textures to CAD-generated building models can then be linked to energy calculations using the chosen materials. It is now possible for a tree to look like a tree, to have leaves and even to be botanically identifiable. The building and landscape can be rendered from a common database of digital samples taken from the real world. The complete model may be viewed in a more meaningful way either through stills or animation, or better still, through a total simulation of the lifecycle of the design proposal. The model may also be used to explore environmental/energy considerations and changes in the balance between the building and its context, most immediately through the growth simulation of vegetation, but also as part of a larger planning model.

The Internet has a key role to play in facilitating this emerging collaborative design process. Design professionals are now able via the net to work on a shared model and to explore and test designs through the development of VRML, JAVA, whiteboarding and video conferencing. The end product may potentially be something that can be more easily viewed by the client/user. The ideas presented in this paper form the basis for the development of a dual course in landscape and architecture. This will create new teaching opportunities for exploring the design of buildings and sites through the shared development of a common computer model.

keywords Integrated Design Process, Landscape and Architecture, Shared Environments
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/szalapaj/szalapaj.htm
last changed 2022/06/07 07:50

_id 47fc
authors Costanzo, E., De Vecchi, A., Di Miceli, C. and Giacchino, V.
year 1997
title A Software for Automatically Verifying Compatibility in Complicated Building Assemblies
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.q4q
summary The research we are carrying out is intended to develop a tool to aid the design of building mechanical assembly systems, which are often characterised by high complexity levels. In fact, when designing complicated building assemblies by making use of common graphical representations, it might be impossible for the operator to choose the proper shape and installation sequence of components so that they do not interfere during the assembly, and to check, in the meantime, the most favorable setting-up modalities according to execution problems. Our software, running within CAD, by starting from the definition of the node features, will allow the operator to automatically get three types of representation that can simulate the assembly according to the assigned installation sequence: - instant images of the phases for setting up each component into the node; - 3D views showing the position of each component disassembled from the node and indicating the movements required for connection; - the components moving while the node is being constructed. All the representations can be updated step by step each time modifications to the node are made. Through this digital iterative design process - that takes advantage of various simultaneous and realistic prefigurations - the shape and function compatibility between the elements during assembly can be verified. Furthermore, the software can quickly check whether any change or addition to the node is effective, raising the approximation levels in the design phase. At the moment we have developed the part of the tool that simulates the assembly by moving the components into the nodes according to the installation sequence.
series eCAADe
more http://info.tuwien.ac.at/ecaade/proc/costanzo/costanzo.htm
last changed 2022/06/07 07:50

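The kind of compatibility check the Costanzo et al. tool automates can be hinted at with a toy sweep test: a component inserted along a straight direction is blocked if the volume it sweeps intersects an already-placed component. The axis-aligned boxes and single top-down insertion direction below are invented simplifications for illustration, not the paper's actual geometry engine.

```python
def aabb_overlap(a, b):
    """True if two axis-aligned boxes ((min_xyz), (max_xyz)) intersect."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def insertion_clear(part, placed, axis=2):
    """Check a straight top-down insertion along +axis: extend the part's
    box to infinity along that axis and test the swept volume against
    every already-placed part."""
    lo, hi = part
    swept_hi = tuple(float("inf") if i == axis else hi[i] for i in range(3))
    swept = (lo, swept_hi)
    return not any(aabb_overlap(swept, p) for p in placed)

# Hypothetical node: a beam dropped vertically into place.
beam = ((0, 0, 0), (4, 1, 1))
column_above = ((1, 0, 2), (2, 1, 5))    # sits in the drop path: blocked
column_beside = ((6, 0, 0), (7, 1, 5))   # clear of the drop path
insertion_clear(beam, [column_beside])   # -> True
insertion_clear(beam, [column_above])    # -> False
```

A real tool would sweep actual component geometry along arbitrary installation movements and replay the whole assigned sequence; the box sweep above only shows the interference test at the core of such a check.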