CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 479

_id 010d
authors Kokosalakis, Jen
year 1996
title The Role and Status of Computing and Participation of Design Clients in the Curriculum
doi https://doi.org/10.52842/conf.ecaade.1996.227
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 227-238
summary This paper is not intended as a fully researched exploration of architecture course coverage, but as an attempt to open debate on some concerns about the role and status of computing and consumer participation, in the hope that CAAD peers will discuss and reflect with other specialists. A number of commentaries on serious deficiencies in the education of architects point to poor take-up of computing in the curriculum and an almost complete dissociation of the eventual building user from decisions on the design. By comparison, it seems easier today to find architects who involve clients throughout the design process, and competency and continuity of CAAD usage in practices are increasing. The few brief references to schools' curricula are not formalised random studies; certainly many excellent features will have been omitted. The intention is to start the debate. Finally, a few directions are noted and some conclusions proffered. An argument is made for 3D CAAD models as the backbone and direct negotiating focus for design arbitration between consumer, designer (or students) and other professional collaborators in designing buildings, particularly where complex forms and spatial relationships are involved.

series eCAADe
last changed 2022/06/07 07:51

_id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an "image idea generator". I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was "out of nothing". What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an "easy to use graphical effects program", but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques Figure 3 Trellis interpreted with "graphic ivy" Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: three possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
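
The shape-breeding scheme described in the abstract above (genes as the list of points defining a polygon, crossed by combining coordinates) can be made concrete. Below is a minimal Python sketch; the perimeter resampling and pointwise blending are one assumed combination strategy among the "various ways" the author says he tried, not his actual code. Iterating such pointwise blends over generations illustrates why the results tend toward amorphous blobs.

    import math

    def resample(poly, n=100):
        """Resample a closed polygon to n points evenly spaced along its perimeter."""
        pts = poly + [poly[0]]                      # close the loop
        seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(poly))]
        total = sum(seg)
        out, acc, i = [], 0.0, 0
        for k in range(n):
            target = total * k / n
            while acc + seg[i] < target:            # walk to the segment holding target
                acc += seg[i]
                i += 1
            t = (target - acc) / seg[i] if seg[i] else 0.0
            (x0, y0), (x1, y1) = pts[i], pts[i + 1]
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        return out

    def breed(parent_a, parent_b, bias=0.5, n=100):
        """Cross two shapes by blending corresponding perimeter points."""
        a, b = resample(parent_a, n), resample(parent_b, n)
        return [(ax + bias * (bx - ax), ay + bias * (by - ay))
                for (ax, ay), (bx, by) in zip(a, b)]

    # A circle as a regular 100-gon (as in the abstract) crossed with a square:
    circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
              for k in range(100)]
    square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
    child = breed(circle, square)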

_id 149d
authors Rosenman, M.A.
year 1996
title The generation of form using an evolutionary approach
source J.S. Gero and F. Sudweeks (eds), Artificial Intelligence in Design '96, 643-662
summary Design is a purposeful knowledge-based human activity whose aim is to create form which, when realized, satisfies the given intended purposes. Design may be categorized as routine or non-routine, with the latter further categorized as innovative or creative. The less knowledge there is about the relationships between the requirements and the form to satisfy those requirements, the more a design problem tends towards creative design. Thus, for non-routine design, a knowledge-lean methodology is necessary. Natural evolution has produced a large variety of forms well-suited to their environment, suggesting that the use of an evolutionary approach could provide meaningful design solutions in a non-routine design environment. This work investigates the possibilities of using an evolutionary approach based on a genotype which represents design grammar rules for instructions on locating appropriate building blocks. A decomposition/aggregation hierarchical organization of the design object is used to overcome combinatorial problems and to maximize parallelism in implementation.
series other
last changed 2003/04/23 15:50
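
A minimal Python sketch of the evolutionary loop the abstract above describes, assuming a genotype is a fixed-length sequence of grammar-rule indices; the rule catalogue, truncation selection and parameters here are illustrative placeholders, not Rosenman's formulation.

    import random

    RULES = range(8)          # hypothetical catalogue of form-generation rules

    def random_genotype(length=20):
        return [random.choice(RULES) for _ in range(length)]

    def crossover(a, b):
        cut = random.randrange(1, len(a))           # one-point crossover
        return a[:cut] + b[cut:]

    def mutate(g, rate=0.05):
        return [random.choice(RULES) if random.random() < rate else r for r in g]

    def evolve(fitness, pop_size=50, generations=100):
        pop = [random_genotype() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]            # truncation selection
            pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                             for _ in range(pop_size - len(parents))]
        return max(pop, key=fitness)

    # Toy usage with a placeholder fitness (prefer genotypes rich in rule 3):
    best = evolve(lambda g: g.count(3))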

_id cf57
authors Anumba, C.J.
year 1996
title Functional Integration in CAD Systems
source Advances in Engineering Software, 25, 103-109
summary This paper examines the issue of integration in CAD systems and argues that for integration to be effective, it must address the functional aspects of a CAD system. It discusses the need for integrated systems and, within a structural engineering context, identifies several facets of integration that should be targeted. These include 2-D drafting and 3-D modelling, graphical and non-graphical design information, the CAD data structure and its user interface, as well as integration of the drafting function with other engineering applications. Means of achieving these levels of integration are briefly discussed and a prognosis for the future development of integrated systems explored. Particular attention is paid to the emergence (and potential role) of `product models' which seek to encapsulate the full range of data elements required to define completely an engineering artefact.
series journal paper
last changed 2003/04/23 15:14

_id d7eb
authors Bharwani, Seraj
year 1996
title The MIT Design Studio of the Future: Virtual Design Review Video Program
source Proceedings of ACM CSCW'96 Conference on Computer-Supported Cooperative Work 1996 p.10
summary The MIT Design Studio of the Future is an interdisciplinary effort to focus on geographically distributed electronic design and work group collaboration issues. The physical elements of this virtual studio comprise networked computer and videoconferencing connections among electronic design studios at MIT in Civil and Environmental Engineering, Architecture and Planning, Mechanical Engineering, the Lab for Computer Science, and the Rapid Prototyping Lab, with WAN and other electronic connections to industry partners and sponsors to take advantage of non-local expertise and to introduce real design and construction and manufacturing problems into the equation. This prototype collaborative design network is known as StudioNet. The project is looking at aspects of the design process to determine how advanced technologies impact the process. The first experiment within the electronic studio setting was the "virtual design review", wherein jurors for the final design review were located in geographically distributed sites. The video captures the results of that project, as does a paper recently published in the journal Architectural Research Quarterly (Cambridge, UK; Vol. 1, No. 2; Dec. 1995).
series other
last changed 2002/07/07 16:01

_id ebd6
authors Dobson, Adrian
year 1996
title Teaching Architectural Composition Through the Medium of Virtual Reality Modelling
source Approaches to Computer Aided Architectural Composition [ISBN 83-905377-1-0] 1996, pp. 91-102
summary This paper describes an experimental teaching programme to enable architectural students in the early years of their undergraduate study to explore their understanding of the principles of architectural composition, by the creation and experience of architectural form and space in simple virtual reality environments. Principles of architectural composition, based upon the ordering and organisation of typological architectural elements according to established rules of composition, are introduced to the students, through the study of recognised works of architectural design theory. Virtual reality modelling is then used as a tool by the students for the testing and exploration of these theoretical concepts. Compositional exercises involving the creation and manipulation of a family of architectural elements to create form and space within a three dimensional virtual reality environment are carried out using Superscape VRT, a PC based virtual reality modelling system. The project seeks to bring intuitive and immersive computer based design techniques directly into the context of design theory teaching and studio practice, at an early stage in the architectural education process.
series other
last changed 1999/04/08 17:16

_id 5fc4
authors Fruchter, R.
year 1996
title Conceptual Collaborative Building Design Through Shared Graphics
source IEEE Expert, special issue on AI in Civil Engineering, June, pp. 33-41
summary The Interdisciplinary Communication Medium computer environment integrates a shared graphic modeling environment with network-based services to accommodate many perspectives in an architecture/engineering/construction team. Communication is critical for achieving better cooperation and coordination among professionals in a multidisciplinary building team. The complexity of large construction projects, the specialization of the project participants, and the different forms of synchronous and asynchronous collaborative work increase the need for intensive information sharing and exchange. Architecture/engineering/construction (A/E/C) professionals use computers to perform a specific discipline's tasks, but they still exchange design decisions and data using paper drawings and documents. Each project participant investigates and communicates alternative solutions through representational idioms that are private to that member's profession. Other project participants must then interpret, extract, and reenter the relevant information using the conventional idioms of their disciplines and in the format required by their tools. The resulting communication difficulties often affect the quality of the final building and the time required to achieve design consensus. This article describes a computer environment, the Interdisciplinary Communication Medium (ICM), that supports conceptual, collaborative building design. The objective is to help improve communication among professionals in a multidisciplinary team. Collaborative teamwork is an iterative process of reaching a shared understanding of the design and construction domains, the requirements, the building to be built, and the necessary commitments. The understanding emerges over time, as team members begin to grasp their own part of the project, and as they provide information that lets others progress. The fundamental concepts incorporated in ICM include: a communication cycle for collaborative teamwork that comprises propose-interpret-critique-explain-change notifications; an open system-integration architecture; a shared graphic modeling environment for design exploration and communication; a Semantic Modeling Extension (SME), which introduces a structured way to capture design intent; and a change-notification mechanism that documents notes on design changes linked to the graphic models and routes change notifications. Thus, the process involves communication, negotiation, and team learning.
series journal paper
last changed 2003/04/23 15:14

_id 0a80
authors Gero, J.S.
year 1996
title Creativity, emergence and evolution in design: concepts and framework
source Knowledge-Based Systems 9(7): 435-448
summary This paper commences by outlining notions of creativity before examining the role of emergence in creative design. Various process models of emergence are presented; these are based on notions of additive and substitutive variables resulting in additive and substitutive schemas. Frameworks for both representation and process for a computational model of creative design are presented. The representational framework is based on design prototypes whilst the process framework is based on an evolutionary model. The computational model brings both representation and process together.
series other
last changed 2003/04/06 07:32

_id 6e0f
authors Goldstein, Laurence
year 1996
title Teaching Creativity with Computers
doi https://doi.org/10.52842/conf.caadria.1996.307
source CAADRIA ‘96 [Proceedings of The First Conference on Computer Aided Architectural Design Research in Asia / ISBN 9627-75-703-9] Hong Kong (Hong Kong) 25-27 April 1996, pp. 307-316
summary Using computers as an aid to architectural design promotes efficiency – of that there is no doubt – but its real merit must surely lie in provoking inventiveness. The medium makes possible the speedy creation and manipulation of images, a holistic, integrational approach to design, the exploration of virtual environments, the real time collaboration in design by individuals at remote sites and so on – these all fall under my heading of ‘efficiency’, since more or less the same ends can be achieved, albeit much more slowly and tediously, by traditional methods. But inventiveness, that’s something different. For comparison, think of the advent of reinforced concrete. In the early years, the new medium was used, roughly speaking, as a substitute for timber beams; but the genius of Le Corbusier was required to appreciate that concrete had fluid qualities which afforded completely different kinds of design opportunities. Can computers likewise revolutionise design? Will new kinds of building get constructed as a result of the advent of computers into the design arena?
series CAADRIA
last changed 2022/06/07 07:51

_id 3451
authors Harrison, Beverly L.
year 1996
title The Design and Evaluation of Transparent User Interfaces. From Theory to Practice
source University of Toronto, Toronto
summary The central research issue addressed by this dissertation is how we can design systems where information on user interface tools is overlaid on the work product being developed with these tools. The interface tools typically appear in the display foreground while the data or work space being manipulated typically appear in the perceptual background. This represents a trade-off in focused foreground attention versus focused background attention. By better supporting human attention we hope to improve the fluency of work, where fluency is reflected in a more seamless integration between task goals, user interface tool manipulations to achieve these goals, and feedback from the data or work space being manipulated. This research specifically focuses on the design and evaluation of transparent user interface 'layers' applied to graphical user interfaces. By allowing users to see through windows, menus, and tool palettes appearing in the perceptual foreground, an improved awareness of the underlying workspace and preservation of context are possible. However, transparent overlapping objects introduce visual interference which may degrade task performance, through reduced legibility. This dissertation explores a new interface technique (i.e., transparent layering) and, more importantly, undertakes a deeper investigation into the underlying issues that have implications for the design and use of this new technique. We have conducted a series of experiments, progressively more representative of the complex stimuli from real task domains. This enables us to systematically evaluate a variety of transparent user interfaces, while remaining confident of the applicability of the results to actual task contexts. We also describe prototypes and a case study evaluation of a working system using transparency based on our design parameters and experimental findings. Our findings indicate that similarity in both image color and in image content affect the levels of visual interference. Solid imagery in either the user interface tools (e.g., icons) or in the work space content (e.g., video, rendered models) are highly interference resistant and work well up to 75% transparent (i.e., 25% of foreground image and 75% of background content). Text and wire frame images (or line drawings) perform equally poorly but are highly usable up to 50% transparent, with no apparent performance penalty. Introducing contrasting outlining techniques improves the usability of transparent text menu interfaces up to 90% transparency. These results suggest that transparency is a usable and promising interface alternative. We suggest several methods of overcoming today's technical challenges in order to integrate transparency into existing applications.  
series thesis:PhD
last changed 2003/02/12 22:37
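
The abstract's own parenthetical defines "75% transparent" as 25% foreground image and 75% background content, i.e. ordinary linear alpha blending. A minimal Python sketch under that reading; the function name and pixel format are assumptions for illustration.

    def composite(fg, bg, transparency):
        """Blend a foreground pixel over a background pixel; per the abstract,
        75% transparent means 25% foreground weight. Pixels are (r, g, b) in 0..255."""
        w = 1.0 - transparency                    # foreground weight
        return tuple(round(w * f + transparency * b) for f, b in zip(fg, bg))

    # A white menu pixel over a mid-grey workspace at 75% transparency:
    print(composite((255, 255, 255), (128, 128, 128), 0.75))  # -> (160, 160, 160)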

_id f251
authors Hui-Ping, T., Veeramani, D., Kunigahalli, R. and Russell, J.S.
year 1996
title OPSALC: A computer-integrated operations planning system for autonomous landfill compaction
source Automation in Construction 5 (1) (1996) pp. 39-50
summary Construction workers and operators associated with sanitary waste landfilling operations face significant health risks because of high levels of exposure to harmful solids and gases. Automation of the spreading and compacting processes of a landfilling operation using an autonomous compactor can reduce exposure of workers to the harmful environment, and thereby lead to improved safety of workers. This paper describes a computer-integrated operations planning system that facilitates (1) the design of landfill cells and (2) the generation of area-covering path plans for spreading and compaction processes by the autonomous compactor. The partitioning of a given landfill site into three-dimensional cells is accomplished by a recursive spatial decomposition technique in which the cell sizes are determined using a probabilistic model for waste generation. A recursive sub-division of each cell into monominoes enables the system to deal automatically with any differences between the actual amount of waste generated on a particular day and the amount predicted by the probabilistic model. The partitioned configuration of the landfill site is used to generate the path plan for the autonomous compactor using three motion models, namely straight-up, straight-down, and zig-zag. The computer-integrated system is implemented using the PHIGS graphics standard and the MOTIF toolkit with C-program binding.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
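
Of the three motion models the abstract names, the zig-zag plan is easy to make concrete. A minimal Python sketch, assuming a cell partitioned into a rectangular grid of sub-cells; the grid indexing is an illustrative simplification, not the OPSALC data model.

    def zigzag_path(rows, cols):
        """Area-covering path over a rows x cols grid of sub-cells in the
        zig-zag motion model: traverse each row, alternating direction,
        so every sub-cell is visited exactly once."""
        path = []
        for r in range(rows):
            cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
            path.extend((r, c) for c in cs)
        return path

    # Coverage plan for a cell partitioned into a 3 x 4 grid:
    print(zigzag_path(3, 4))
    # [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (1, 2), (1, 1), (1, 0), (2, 0), ...]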

_id 2a99
authors Keul, A. and Martens, B.
year 1996
title SIMULATION - HOW DOES IT SHAPE THE MESSAGE?
source The Future of Endoscopy [Proceedings of the 2nd European Architectural Endoscopy Association Conference / ISBN 3-85437-114-4], pp. 47-54
summary Architectural simulation techniques - CAD, video montage, endoscopy, full-scale or smaller models, stereoscopy, holography etc. - are common visualizations in planning. A subjective theory of planners says "experts are able to distinguish between 'pure design' in their heads and visualized design details and contexts like color, texture, material, brightness, eye level or perspective." If this is right, simulation details should be compensated mentally by trained people, but act as distractors to the lay mind.

Environmental psychologists specializing in architectural psychology offer "user needs' assessments" and "post occupancy evaluations" to facilitate communication between users and experts. To compare the efficiency of building descriptions, building walkthroughs, regular plans, simulation, and direct, long-time exposition, evaluation has to be evaluated.

Computer visualizations and virtual realities grow more important, but studies on the effects of simulation techniques upon experts and users are rare. As a contribution to the field of architectural simulation, an expert-user comparison of CAD versus endoscopy/model simulations of a Vienna city project was carried out in 1995. The Department for Spatial Simulation at the Vienna University of Technology provided slides of the planned city development at Aspern showing a) CAD and b) endoscopy photos of small-scale polystyrol models. In an experimental design, they were presented uncommented as images of "PROJECT A" versus "PROJECT B" to student groups of architects and non-architects at Vienna and Salzburg (n = 95) and assessed by semantic differentials. Two contradictory hypotheses were tested: 1. The "selective framing hypothesis" (SFH), the subjective theory of planners, postulating different judgement effects (measured by item means of the semantic differential) through selective attention of the planners versus material- and context-bound perception of the untrained users. 2. The "general framing hypothesis" (GFH), postulating typical framing and distraction effects of all simulation techniques, affecting experts as well as non-experts.

The experiment showed that, counter to expert opinion, framing and distraction were prominent both for experts and lay people (= GFH). A position effect (an assessment interaction of CAD and endoscopy) was present with experts and non-experts, too. With empirical evidence for "the medium is the message", a more cautious attitude has to be adopted towards simulation products as powerful framing (i.e. perception- and opinion-shaping) devices.

keywords Architectural Endoscopy, Real Environments
series EAEA
type normal paper
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43

_id 6237
authors Kiechle, Horst
year 1996
title CONSTRUCTING THE AMORPHOUS
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary Constructing the Amorphous entails ongoing research into a concept which aims to develop a new understanding of Art, Design and Architecture within society. Rigid, reductivist and confrontational methods based on static geometry, prejudice and competition are to be replaced by dynamic, interdisciplinary and integrative models. In his current art practice the author simulates existing architectural spaces whose interiors are re-designed into sculpted environments, based on creative irregularity rather than idealised geometry. All the computer-simulated "soft" environments can be realised on an architectural scale as temporary installations, with the curved surfaces approximated through planar polygons cut from sheet materials. Within this framework the Darren Knight Gallery Project represents the most recent example.

The paper furthermore discusses various 3D modeling options, such as standard CAD representations, high-quality rendered video walk-throughs, VRML models and physically produced, full-scale models made of corrugated cardboard. The cost and equipment requirements necessary for full-scale modeling in cardboard are outlined.

keywords VRML, CAD, 3D Modeling, Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:40

_id 8ee5
authors Koutamanis, A., Mitossi, V.
year 1996
title SIMULATION FOR ANALYSIS: REQUIREMENTS FROM ARCHITECTURAL DESIGN
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary Computerization has been a positive factor in the evolution of both kinds of analysis with respect to cost, availability and efficiency. Knowledge-based systems offer an appropriate implementation environment for normative analysis which can be more reliable and economical than evaluation by human experts. Perhaps more significant is the potential of interactive computer simulation where designs can be examined intuitively in full detail and at the same time by quantitative models. The advantages of this coupling are evident in the achievements of scientific visualization. Another advantage of computational systems is that the analysis can be linked to the design representation, thereby adding feedback to the conventional visualization of designs in drawing and modeling systems. Such connections are essential for the development of design guidance systems capable of reflecting consequences of partial inadequacies or changes to other aspects in a transparent and meaningful network of design constraints.

The possibilities of computer simulation also extend to issues inadequately covered by normative analysis and in particular to dynamic aspects of design such as human movement and circulation. The paper reports on a framework for addressing two related problems, (a) the simulation of fire escape from buildings and (b) the simulation of human movement on stairs. In both cases we propose that current evaluation techniques and the underlying design norms are too abstract to offer a measure of design success, as testified by the number of fatal accidents in fires and on stairs. In addition, fire escape and stair climbing are characterized by great variability with respect to both the form of the possible designs and the profiles of potential users. This suggests that testing prototypical forms by typical users and publishing the results as new, improved norms is not a realistic proposition for ensuring a global solution. Instead, we should test every design individually, within its own context. The development of an affordable, readily available system for the analysis and evaluation of aspects such as fire escape and stair safety can be based on the combination of the technologies of virtual reality and motion capture. Testing of a design by a number of test people in an immersion space provides not only intuitive evaluations by actual users but also quantitative data on the cognitive and proprioceptive behaviour of the test people. These data can be compiled into profiles of virtual humans for further testing of the same or related designs.

keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:40

_id 8bea
authors Lipson, H. and Shpitalni, M.
year 1996
title Optimization-based reconstruction of a 3D object from a single freehand line drawing
source Computer-Aided Design, Vol. 28 (8) (1996) pp. 651-663
summary This paper describes an optimization-based algorithm for reconstructing a 3D model from a single, inaccurate, 2D edge-vertex graph. The graph, which serves as input for the reconstruction process, is obtained from an inaccurate freehand sketch of a 3D wireframe object. Compared with traditional reconstruction methods based on line labelling, the proposed approach is more tolerant of faults in handling both inaccurate vertex positioning and sketches with missing entities. Furthermore, the proposed reconstruction method supports a wide scope of general (manifold and non-manifold) objects containing flat and cylindrical faces. Sketches of wireframe models usually include enough information to reconstruct the complete body. The optimization algorithm is discussed, and examples from a working implementation are given.
keywords Drawing To Model, Optimization, Robustness
series journal paper
last changed 2003/05/15 21:33
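
The core idea, lifting each sketch vertex to a z-coordinate chosen by optimization, can be sketched in a few lines of Python with numpy/scipy. The paper optimizes a weighted sum of geometric regularities; the single edge-length-uniformity term below is only an illustrative stand-in for that cost, not the authors' formulation, and the example graph is hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    def reconstruct(vertices_2d, edges):
        """Lift a 2D edge-vertex graph into 3D by choosing one z per vertex,
        minimizing a simple regularity cost (variance of 3D edge lengths)."""
        xy = np.asarray(vertices_2d, dtype=float)

        def cost(z):
            p = np.column_stack([xy, z])
            lengths = np.array([np.linalg.norm(p[i] - p[j]) for i, j in edges])
            return lengths.var() + z[0] ** 2      # pin vertex 0 near z = 0

        z0 = np.random.default_rng(0).normal(scale=0.1, size=len(xy))
        res = minimize(cost, z0)
        return np.column_stack([xy, res.x])       # recovered 3D vertices

    # An inaccurate sketch: a unit square (front face) plus one receding edge.
    pts = [(0, 0), (1, 0), (1, 1), (0, 1), (1.4, 0.3)]
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4)]
    print(reconstruct(pts, edges))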

_id 2e5a
authors Matsumoto, N. and Seta, S.
year 1997
title A history and application of visual simulation in which perceptual behaviour movement is measured.
source Architectural and Urban Simulation Techniques in Research and Education [3rd EAEA-Conference Proceedings]
summary For our research on perception and judgment, we have developed a new visual simulation system based on our previous system. Here, we report on the development history of our system and on the current research employing it. In 1975, the first visual simulation system was introduced, which comprised a fiberscope and small-scale models. By manipulating the fiberscope's handles, the subject was able to view the models at eye level. When the pen-size CCD TV camera came out, we immediately embraced it, incorporating it into a computer-controlled visual simulation system in 1988. It comprises four elements: operation input, drive control, model shooting, and presentation. This system was easy to operate, and the subject gained an omnidirectional, eye-level image as though walking through the model. In 1995, we began developing a new visual system. We wanted to relate the scale-model image directly to perceptual behavior, to make natural background images, and to record human feelings by a non-verbal method. Restructuring the above four elements to meet our requirements and adding two more (background shooting and emotion spectrum analysis), we finally completed the new simulation system in 1996. We are employing this system in streetscape research. Using the emotion spectrum system, we are able to record brain waves. Quantifying the visual effects through these waves, we are analyzing the relation between visual effects and physical elements. Thus, we are presented with a new aspect to study: the relationship between brain waves and changes in the physical environment. We will be studying the relation of brain waves in our sequential analysis of the streetscape.
keywords Architectural Endoscopy, Endoscopy, Simulation, Visualisation, Visualization, Real Environments
series EAEA
more http://www.bk.tudelft.nl/media/eaea/eaea97.html
last changed 2005/09/09 10:43

_id 2b9f
authors Nasar, Jack
year 1996
title DESIGN BY COMPETITION: LOOKING AT COMPETITION ARCHITECTURE THROUGH TIME
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary We have seen an increase in design competitions for the delivery of public buildings. Architectural groups such as the AIA or RIBA often call for a jury dominated by architects. A series of studies of a highly publicized design competition (Peter Eisenman's Wexner Center for the Visual Arts) show the building as a functional and "aesthetic" failure for the public. Some may argue that this is only a short-term appraisal, and that eventually the aesthetic statement will come into favor. To the question of whether architects (the experts) lead public tastes over time, we only have anecdotal evidence. Otherwise, there have been consistent findings of differences between what architects like and what the public likes. How can we look at long-term trends? This paper discusses two historiographic studies of competition architecture through history. One looks at the record of "masterpiece" buildings derived from frequency of reference in books and encyclopedias, and then tallies how many of those "masterpieces" result from competitions. Because of potential flaws in generalizing from these numbers, a second study has architects and non-architects judge photos of competition-winning and competition-losing designs from a 100-year period. The results show that both groups preferred more losers to winners. This suggests a need for an alternative model for design competition juries.
keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:41

_id aa13
authors Oxman, Rivka
year 1996
title Shared Design-Web-Space in Internet-Based Design
doi https://doi.org/10.52842/conf.ecaade.1996.301
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 301-312
summary The introduction of the computer into architectural studies has resulted in innovative pedagogical approaches to design education. In recent years we have employed a teaching approach in which the student models the formalization of design knowledge in a computerized environment and experiments with the formal processing of this knowledge in the generation of designs. Interacting with the computer in the generation of designs requires making design knowledge explicit and formalized. Knowledge modeling is an approach to design and education in which the designer models the design thinking involved in the making of the object. In this process appropriate computational technology is essential to support and enhance certain phenomena of reasoning. From the pedagogical point of view, such computational design environments appear also to enhance design learning and performance through the capability gained in computer modelling. In this respect, there is an analogous impact on the potential of design knowledge environments which can support design performance in practice. In this paper we consider the Internet as a potential design knowledge environment. The nature of the Net as a medium for the representation, storage and accessing of design knowledge is presented and various research issues are introduced. The potential of this new medium as a resource for design learning, design practice and design collaboration derives from the attributes of the technology. We elaborate on the appropriateness of certain attributes of the medium as a potential design environment. Future possibilities of the Net as a shared design resource are proposed. Considerations of the Net as a collaboratively constructed design resource as well as a medium for collaborative design are introduced.
series eCAADe
more http://arrivka@technion.ac.il/~rivka
last changed 2022/06/07 08:00

_id 82d3
authors Park, Hoon
year 1996
title Digital and Manual Media in Design
doi https://doi.org/10.52842/conf.ecaade.1996.325
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 325-334
summary Although there is an important commitment to the use of Computer Aided Design (CAD) systems in the design studio, there are still technologies that are not broadly accepted as useful to the designer, especially in the early design stage. This is because CAD systems use the monitor and mouse, which differ from the sketch paper and pen of manual media. This presentation explores how CAD systems can be applied and integrated into this early design stage by treating paper as a digital medium. With this exploration, I look at some ways of bridging manual media and digital media. For accommodating this approach, this article includes the evaluation of a prototype CAD system, discussing how to enhance the role of CAD systems in the early design stage and link the realms of the two currently distinct media: manual and digital. This system allows the designer to work with computer-based and paper-based tools in the same conventional environment. The method provides interesting insights into the relationship between digital and manual media.

series eCAADe
last changed 2022/06/07 08:00

_id a4a4
authors Pellegrino, Anna and Caneparo, Luca
year 1996
title Lighting Simulation for Architectural Design: a Case Study
doi https://doi.org/10.52842/conf.ecaade.1996.335
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 335-346
summary The paper considers some of the lighting simulation instruments at present available to architects for lighting design. We study the usability and accuracy of various systems: scale models, numerical simulations and rendering programs. An already-built environment serves as the reference for comparing the accuracy of the simulation systems. The accuracy of the systems is evaluated for quantitative simulation and qualitative visualisation respectively. Quantitatively, the programs compute photometric values in physical units at a discrete number of points in the environment. Qualitatively, the programs generate images of visible radiation comparable to photographs of the real environment. They combine calculations with computer graphics, that is, they translate numerical values into images.

series eCAADe
last changed 2022/06/07 07:59
