CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 483

_id 2a99
authors Keul, A. and Martens, B.
year 1996
title SIMULATION - HOW DOES IT SHAPE THE MESSAGE?
source The Future of Endoscopy [Proceedings of the 2nd European Architectural Endoscopy Association Conference / ISBN 3-85437-114-4], pp. 47-54
summary Architectural simulation techniques - CAD, video montage, endoscopy, full-scale or smaller models, stereoscopy, holography etc. - are common visualizations in planning. A subjective theory of planners says "experts are able to distinguish between 'pure design' in their heads and visualized design details and contexts like color, texture, material, brightness, eye level or perspective." If this is right, simulation details should be mentally compensated for by trained people, but act as distractors on the lay mind.

Environmental psychologists specializing in architectural psychology offer "user needs assessments" and "post-occupancy evaluations" to facilitate communication between users and experts. To compare the efficiency of building descriptions, building walkthroughs, regular plans, simulations, and direct long-term exposure, the evaluation itself has to be evaluated.

Computer visualizations and virtual realities are growing more important, but studies on the effects of simulation techniques upon experts and users are rare. As a contribution to the field of architectural simulation, an expert-user comparison of CAD versus endoscopy/model simulations of a Vienna city project was realized in 1995. The Department for Spatial Simulation at the Vienna University of Technology provided slides of the planned city development at Aspern showing a) CAD images and b) endoscopy photos of small-scale polystyrene models. In an experimental design, they were presented uncommented, as images of "PROJECT A" versus "PROJECT B", to student groups of architects and non-architects at Vienna and Salzburg (n = 95) and assessed by semantic differentials. Two contradictory hypotheses were tested: 1. The "selective framing hypothesis" (SFH), the subjective theory of planners, postulating different judgement effects (measured by item means of the semantic differential) through selective attention of the planners versus material- and context-bound perception of the untrained users. 2. The "general framing hypothesis" (GFH), postulating typical framing and distraction effects of all simulation techniques affecting experts as well as non-experts.

The experiment showed that, contrary to expert opinion, framing and distraction were prominent both for experts and lay people (= GFH). A position effect (an assessment interaction of CAD and endoscopy) was present with experts and non-experts, too. With empirical evidence that "the medium is the message", a more cautious attitude has to be adopted towards simulation products as powerful framing (i.e. perception- and opinion-shaping) devices.
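The design above turns on comparing item means of a semantic differential between presentation media and between expert and lay groups. As a purely illustrative aid (not part of the original study; the group labels and toy ratings below are invented), the following sketch shows how such item means and the per-group media shift could be computed, which is the quantity on which the SFH and GFH make opposite predictions:

```python
# Illustrative sketch only: comparing semantic-differential item means for
# CAD vs. endoscopy presentations across expert and lay groups.
# All numbers below are invented toy data, not the 1995 study's results.
from statistics import mean

# ratings[group][medium] = per-subject scores on one semantic-differential item
ratings = {
    "experts":     {"CAD": [3.1, 3.4, 2.9], "endoscopy": [4.0, 3.8, 4.2]},
    "non_experts": {"CAD": [2.8, 3.0, 3.2], "endoscopy": [4.1, 4.3, 3.9]},
}

def medium_shift(group):
    """Difference of item means between the two media for one group."""
    g = ratings[group]
    return mean(g["endoscopy"]) - mean(g["CAD"])

expert_shift = medium_shift("experts")
lay_shift = medium_shift("non_experts")

# SFH predicts a clearly smaller shift for experts than for lay viewers;
# GFH predicts shifts of similar size in both groups.
print(f"expert shift: {expert_shift:.2f}, lay shift: {lay_shift:.2f}")
```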

keywords Architectural Endoscopy, Real Environments
series EAEA
type normal paper
email
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43

_id 4931
authors Breen, Jack
year 1996
title Learning from the (In)Visible City
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 65-78
doi https://doi.org/10.52842/conf.ecaade.1996.065
summary This paper focuses on results and findings of an educational project, in which the participating students had to develop a design strategy for an urban plan by using and combining endoscopic and computational design visualisation techniques. This educational experiment attempted to create a link between the Media research programme titled 'Dynamic Perspective' and an educational exercise in design composition. It was conceived as a pilot study, aimed at the investigation of emerging applications and possible combinations of different imaging techniques which might be of benefit in architectural and urban design education and potentially for the (future) design practice. The aim of this study was also to explore the relationship between spatial perception and design simulation. The point of departure for the student exercise was an urban masterplan which the Dynamic Perspective research team prepared for the workshop 'the (in)visible city' as part of the 1995 European Architectural Endoscopy Association Conference in Vienna, Austria. The students taking part in the exercise were asked to develop, discuss and evaluate proposals for a given part of this masterplan by creating images through different model configurations using optical and computer aided visualisation techniques besides more traditional design media. The results of this project indicate that an active and combined use of visualisation media at a design level may facilitate communication and lead to a greater understanding of design choices, thus creating insights and contributing to design decision-making both for the designers and for the other participants in the design process.
series eCAADe
email
more http://www.bk.tudelft.nl/Media/
last changed 2022/06/07 07:54

_id e2c4
authors Comair, C., Kaga, A. and Sasada, T.
year 1996
title Collaborative Design System with Network Technologies in Design Projects
source CAADRIA '96 [Proceedings of The First Conference on Computer Aided Architectural Design Research in Asia / ISBN 9627-75-703-9] Hong Kong (Hong Kong) 25-27 April 1996, pp. 269-286
doi https://doi.org/10.52842/conf.caadria.1996.269
summary This paper depicts the work of the team of researchers at the Sasada Laboratory in the area of collaborative design and the integration of global area networks such as the Internet in order to extend the architectural studio into cyber-space. The Sasada Laboratory is located at the University of Osaka, Faculty of Engineering, Department of Environmental Engineering, Japan. The portfolio of the Laboratory is extensive and impressive. The projects produced by the men and women of the Laboratory range from the production of databases and computer simulations of several segments of different cities throughout the world to specific studies of architectural monuments. The work performed on the databases was varied and included simulation of past, present, and future events. These databases were often huge and very complex to build. They presented challenges that sometimes seemed impossible to overcome. Often, specialised software, and in some cases hardware, had to be designed on the "fly" for the task. In this paper, we describe the advances of our research and how our work led us to the development of hardware and software. Most importantly, it depicts the methodology of work which our lab undertook. This research led to the birth of what we call the "Open Development Environment" (ODE) and later to the networked version of ODE (NODE). The main purpose of NODE is to allow various people, usually separated by great distances, to work together on a given project and to introduce computer simulation into the working environment. Today, our laboratory is no longer limited to the physical location of our lab. Thanks to global area networks, such as the Internet, our office has been extended into the virtual space of the web. Today, we exchange ideas and collaborate on projects using the network with people spread across the five continents.
series CAADRIA
email
last changed 2022/06/07 07:56

_id 413e
authors Dalholm-Hornyansky, Elisabeth and Rydberg-Mitchell, Birgitta
year 1996
title SPATIAL NAVIGATION IN VIRTUAL REALITY
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary For the past decade, we have carried out a number of participation projects using full-scale modeling as an aid for communication and design. We are currently participating in an interdisciplinary research project which aims to combine and compare various visualization methods and techniques, among others full-scale modeling and virtual reality, in design processes with users. In this paper, we will discuss virtual reality as a design tool in light of previous experience with full-scale modeling and the literature on cognitive psychology. We describe a minor explorative study, which was carried out to elucidate the answers to several crucial questions: Is realism in movement a condition for the perception of space, or can it be achieved while moving through walls, floors and so forth? Do the velocity of movement and a reduced visual field have an impact on the perception of space? Are landmarks vital clues for spatial navigation, and how do we reproduce them in virtual environments? Can “daylight”, color, material and texture facilitate navigation, and are details, furnishings and people important objects of reference? How could contextual information clues, like views and surroundings, be added to facilitate orientation? Do we need our other senses to supplement the visual experience in virtual reality, and what is the role of mental maps in spatial navigation?
keywords Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:49

_id ddssup9614
id ddssup9614
authors Loughreit, Fouad
year 1996
title Methods to assist the design of road surfaces with a reservoir structure: To improve flood risk management
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part two: Urban Planning Proceedings (Spa, Belgium), August 18-21, 1996
summary Reservoir road surfaces can be seen as equipment of the future, in that they combine two functions in the same structure (circulation and hydraulic functions). They can thus be laid without immobilising land, which is very expensive and prized in urban areas. Furthermore, they enable the limitation of the flow or volumes of running water, and thus help control rainwater, resulting in better flood risk management. The questions asked by drainage designers are: how can we design these structures in the best way? How are they going to work for different types of rain (storm rain, prolonged winter rain, ...)? As for the public administrators, they wonder how a series of areas equipped with this type of technique (total flow management) would work. By solving this latter problem, we could really arouse interest in flood risk management. Given the diversity of structures possible for reservoir road surfaces (regulated, non-regulated, draining surface, dispersion surface, ...), we suggest comparing design and simulation methods, taking into account the measurement and total flow management problems mentioned above. So as to validate these comparisons and to give some directions concerning the use of one or the other method, we use flow-meter measurements from two different sites in Lyons. One of these sites is a car-park in a tertiary activity zone on the La Doua campus in Villeurbanne, the other a refuse dump of the Greater Lyons area in the town of Craponne. They are both interesting as they have different features. The first is non-regulated downstream and is used as a car-park for light motor vehicles. The other is regulated and the traffic on it is made up of lorries. These sites will be described in this article.
series DDSS
email
last changed 2003/08/07 16:36

_id 4710
authors Senyapili, Burcu
year 1996
title THE TRUE MODEL CONCEPT IN COMPUTER GENERATED SIMULATIONS
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary Each design product depends on a design model originated in the designer's mind. From the initial design decisions to the final product, each design step is a representation of this design model. Designers create and communicate using the design models in their minds. They solve design problems by recreating and transforming the design model and utilize various means to display the final form of the model. One of these means, the traditional paper-based media of design representation (drawings, mock-up models), alienates the representation from the design model, largely due to the lack of display of the 4th dimension. Architecture is essentially a four-dimensional issue, incorporating the life of the edifice and the dynamic perception of the space by people. However, computer generated simulations (walkthrough, flythrough, virtual reality applications) of architectural design give us the chance to represent the design model in 4D, which is not possible in the traditional media. Thus, they introduce a potential field of use and study in architectural design.

Most of the studies done for the effective use of this potential of computer aid in architectural design assert that the way architects design without the computer is not "familiar" to the way architects are led to design with the computer. In other words, they complain that the architectural design software does not work in the same way as the architects think and design the models in their brains. Within the above framework, this study initially discusses architectural design as a modeling process and defines computer generated simulations (walkthrough, flythrough, virtual reality) as models. Based on this discussion, the "familiarity" of architectural design and computer aided design is displayed. And then, it is asserted that the issue of familiarity should be discussed not from the point of the modeling procedure, but from the "trueness" of the model displayed.

Therefore, it is relevant to ask to what extent the simulation should simulate the design model. The simulation, actually, simulates not what is real, but what is unreal. In other words, the simulation tells lies in order to display the truth. Consequently, the study proposes measures as to how true a simulation model should be in order to represent the design model best.

keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:45

_id 0ef8
authors Völker, H., Sariyildiz, S., Schwenck, M. and Durmisevic, S.
year 1996
title THE NEXT GENERATION OF ARCHITECTURE WITHIN COMPUTER SCIENCES
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary Considering architecture as a mixture of exact science and art, we can state that, as in all other sciences, every technical invention and development has resulted in advantages and disadvantages for the well-being and prosperity of mankind. Think about the developments in the fields of nuclear energy or space travel: besides bringing improvements in many fields, they also pose dangers to the well-being of mankind. The development of advanced computer techniques inevitably influences architecture as well. How has computer science influenced architecture so far, and what will the future of architecture be as computer science continues to develop? Future developments will be both in the field of conceptual design (the form aspect) and in the area of the materialization of the design process.

All of these deal with the material world, for which the tools of computer science are highly appropriate. But what will happen to the immaterial world? How can we put these immaterial values into a computer model? Or can the computer be as creative as a human being? Early developments of computer science in the field of architecture involved two-dimensional applications, and subsequently the significance of the third dimension became manifest. Nowadays, however, people are already speaking of a fourth dimension, interpreting it as time or as dynamics. And what, for instance, would a fifth, sixth or X-dimension represent?

In the future we will perhaps speak of the fifth dimension, comprising the tangible qualities of the building materials around us. And one day a sixth dimension might be created, when it becomes possible to establish direct communication with computers, because direct exchange between the computer and the human brain has been realised. The ideas of designers could then be processed by the computer directly, and we would no longer be hampered by obstacles such as screen and keyboard. There are scientists working to realise bio-chips; if they succeed, perhaps we can realise all these speculations. It is almost certain that the emergence of new technologies will also affect our subject area, architecture, and this will create fresh challenges, fresh concepts, and new buildings in the 21st century. The responsibility of architects must be to bear in mind that we are dealing with the well-being and the prosperity of mankind.

keywords Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:43

_id aff6
authors Ferrar, Steve
year 1996
title Back to the Drawing Board?
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 155-162
doi https://doi.org/10.52842/conf.ecaade.1996.155
summary I am starting my presentation with some slides of architecture as a reminder that above all else we are involved in the education of future architects. Such is the enthusiasm of many of us for our specialist subject that computers dominate any discussion of architecture. We must not lose sight of the fact that we are using computers to assist in the manipulation of space, form, light, texture and colour, and in communicating our ideas. They should also be helping us and our students to understand and deal with the relationship of built form to its environment, its users and other buildings. The use of computers should not get in the way of this. In the final analysis the image on a computer screen is only that - an image, a representation of a building. It is not the building itself. It is a means to an end and not an end in itself. The image must not be a substitute for the physical building. We must remember that we use most of our other senses when experiencing a building and it is just as important to be able to touch, hear and smell a piece of architecture as well as being able to see it. Who knows, perhaps even taste is important. How much does the use of computers affect the design process and the final appearance of the building? Would these buildings have been substantially different if a system of working in three dimensions, similar to computer aided design, had been available to these architects? To what degree has the design process and method of working shaped the architecture of designers like Frank Lloyd Wright, Carlo Scarpa, Louis Sullivan, Charles Rennie Mackintosh or Alvar Aalto?

series eCAADe
email
last changed 2022/06/07 07:50

_id c7e9
authors Maver, T.W.
year 2002
title Predicting the Past, Remembering the Future
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 2-3
summary Charlas Magistrales 2

There never has been such an exciting moment in time in the extraordinary 30 year history of our subject area as NOW, when the philosophical, theoretical and practical issues of virtuality are taking centre stage.

The Past
There have, of course, been other defining moments during these exciting 30 years:
• the first algorithms for generating building layouts (circa 1965)
• the first use of computer graphics for building appraisal (circa 1966)
• the first integrated package for building performance appraisal (circa 1972)
• the first computer generated perspective drawings (circa 1973)
• the first robust drafting systems (circa 1975)
• the first dynamic energy models (circa 1982)
• the first photorealistic colour imaging (circa 1986)
• the first animations (circa 1988)
• the first multimedia systems (circa 1995), and
• the first convincing demonstrations of virtual reality (circa 1996).

Whereas the CAAD community has been hugely inventive in the development of ICT applications to building design, it has been woefully remiss in its attempts to evaluate the contribution of those developments to the quality of the built environment or to the efficiency of the design process. In the absence of any real evidence, one can only conjecture regarding the real benefits, which fall, it is suggested, under the following headings:
• Verisimilitude: the extraordinary quality of still and animated images of the formal qualities of the interiors and exteriors of individual buildings and of whole neighborhoods must surely give great comfort to practitioners and their clients that what is intended, formally, is what will be delivered, i.e. WYSIWYG - what you see is what you get.
• Sustainability: the power of «first-principle» models of the dynamic energetic behaviour of buildings in response to changing diurnal and seasonal conditions has the potential to save millions of dollars and to dramatically reduce the damaging environmental pollution created by badly designed and managed buildings.
• Productivity: CAD is now a multi-billion dollar business which offers design decision support systems that operate, effectively, across continents, time-zones, professions and companies.
• Communication: multi-media technology - cheap to deliver but high in value - is changing the way in which we can explain and understand the past and envisage and anticipate the future; virtual past and virtual future!

Macromyopia
The late John Lansdown offered the view, in his wonderfully prophetic way, that "the future will be just like the past, only more so". So what can we expect the extraordinary trajectory of our subject area to be? To have any chance of being accurate we have to have an understanding of the phenomenon of macromyopia: the phenomenon exhibited by society of greatly exaggerating the immediate short-term impact of new technologies (particularly the information technologies) but, more importantly, seriously underestimating their sustained long-term impacts - socially, economically and intellectually.

Examples of flawed predictions regarding the future application of information technologies include:
• The British Government in 1880 declined to support the idea of a national telephonic system, backed by the argument that there were sufficient small boys in the countryside to run with messages.
• Alexander Bell was modest enough to say: «I am not boasting or exaggerating but I believe, one day, there will be a telephone in every American city».
• Tom Watson, in 1943, said: «I think there is a world market for about 5 computers».
• In 1977, Ken Olsen of Digital said: «There is no reason for any individuals to have a computer in their home».

The Future
Just as the ascent of woman/man-kind can be attributed to her/his capacity to discover amplifiers of the modest human capability, so we shall discover how best to exploit our most important amplifier - that of the intellect. The more we know the more we can figure; the more we can figure the more we understand; the more we understand the more we can appraise; the more we can appraise the more we can decide; the more we can decide the more we can act; the more we can act the more we can shape; and the more we can shape, the better the chance that we can leave for future generations a truly sustainable built environment which is fit-for-purpose, cost-beneficial, environmentally friendly and culturally significant.

Central to this aspiration will be our understanding of the relationship between real and virtual worlds and how to move effortlessly between them. We need to be able to design, from within the virtual world, environments which may be real or may remain virtual or, perhaps, be part real and part virtual. What is certain is that the next 30 years will be every bit as exciting and challenging as the first 30 years.
series SIGRADI
email
last changed 2016/03/10 09:55

_id 4b22
authors Moorhouse, J.
year 1996
title Teach a Man to Catch a Fish
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 281-286
doi https://doi.org/10.52842/conf.ecaade.1996.281
summary An international charity outlined the following principle recently in an advertisement. “Give a man a fish and he will feed himself for a day, teach a man how to catch a fish and he will feed himself for a lifetime.” In education, the same principle may be applied to learning.

To the student of architecture, skills in the use of commercial software may be advantageous in the search for future employment and can prove to be a useful springboard for exploring the potential of CAAD in a broader sense. However, software (and hardware) is continually being upgraded and developed, and it is apparent that such software does not fully meet the needs of the designer.

Exploring the possibilities of CAADesigning as an integral part of learning to design will equip the student with the CAAD literacy necessary for working in practice, but more importantly will provide the student with a rich and diverse understanding of design approaches.

Traditionally design tutors have taught (by example) how individual architects design. A library of architects CAADesigning in different ways can be used to establish precedents and examples, to demystify the activities for both students and tutors, and to provide a rich set of methodologies as a working context from which students can draw inspiration.

As part of an ongoing research study, a new direction has been taken gathering, comparing, contrasting and grouping live records of architects CAADesigning. This paper will outline the benefits of recording and creating such a library and will describe examples of recent findings.

series eCAADe
email
last changed 2022/06/07 07:58

_id a8b6
authors Oliver, S. and Betts, M.
year 1996
title An information technology forecast for the architectural profession
source Automation in Construction 4 (4) (1996) pp. 263-279
summary Much of our research in IT in construction is concerned with developing technologies and prescribing how they can be applied to construction problems. Our rationale for our choice of technologies to push is often unstated and the relative significance of a range of technologies is rarely considered. The impact of emerging technologies on the strategic health of companies and professions is also rarely discussed. Few professions appear to be explicitly in control of how IT will impact their future. This paper addresses both of these issues through the example of an IT forecast for the architectural profession. It does this by examining issues of technology forecasting and development, by reviewing currently emerging ITs and by conducting an opinion survey of which are of greatest significance to the architectural profession. The result is a relative assessment of the importance to architects of 10 technological mini-scenarios from which an overall architectural IT scenario is constructed.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:23

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen.

1. The history of Repligator and Gliftic

1.1 Repligator
In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) ease of use, and 2) ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful.

1.2 Getting to Gliftic
Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example, if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes simply as closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And many of the methods I used, applied over several "generations", simply resulted in amorphous blobs with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with six regular polygons [Figure 1: Mandala bred with an array of regular polygons]. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation).

1.3 Gliftic today
Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic [Figure 2: Mandala interpreted with arabesques. Figure 3: Trellis interpreted with "graphic ivy". Figure 4: Regular dots interpreted as "sparks"].

1.4 Forms in Gliftic V1
Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons.

1.5 Color Schemes in Gliftic V1
When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image.

1.6 Interpretations in Gliftic V1
Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag.

1.7 Applications of Gliftic
Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later.

2. The future of Gliftic: three possibilities
Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in the future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them.

2.1 Continue the current development "linearly"
Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations.

2.2 Allow the artist to program Gliftic
It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic
This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric."

3. References
1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art.
2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999.
3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
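The abstract above represents a form's "genes" as the closed list of points defining its polygon and asks what it means to cross, say, a circle with the outline of the UK. The sketch below is one naive way to breed two such point-list genotypes (resample both outlines to the same number of points, then blend coordinates pointwise); it is offered only as an illustration of the idea, not as Gliftic's actual method.

```python
# Illustrative sketch: breeding two closed polygon outlines whose "genes"
# are their vertex lists, as described in the abstract. Not Gliftic's code.
import math

def resample(points, n):
    """Resample a closed polygon to n points evenly spaced along its perimeter."""
    edges = [(points[i], points[(i + 1) % len(points)]) for i in range(len(points))]
    lengths = [math.dist(a, b) for a, b in edges]
    total = sum(lengths)
    out, acc, i = [], 0.0, 0
    for k in range(n):
        target = total * k / n          # arc-length position of the k-th sample
        while acc + lengths[i] < target:
            acc += lengths[i]
            i += 1
        a, b = edges[i]
        t = (target - acc) / lengths[i] if lengths[i] else 0.0
        out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

def crossover(parent_a, parent_b, n=100, weight=0.5):
    """Child genotype: pointwise blend of the two resampled parent outlines."""
    pa, pb = resample(parent_a, n), resample(parent_b, n)
    return [(ax * (1 - weight) + bx * weight, ay * (1 - weight) + by * weight)
            for (ax, ay), (bx, by) in zip(pa, pb)]

# e.g. cross a "circle" (regular 100-gon) with a square outline
circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
          for k in range(100)]
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
child = crossover(circle, square)
```

As the abstract notes, simple coordinate blends of this kind tend to drift towards amorphous intermediate shapes over several generations, which is exactly the difficulty that led the author to abandon the breeding model.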
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id e29d
authors Arvesen, Liv
year 1996
title LIGHT AS LANGUAGE
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary With the unlimited supply of electric light, our surroundings may very easily be illuminated too strongly. Too much light is unpleasant for our eyes, and a high level of light in many cases disturbs the conception of form. Just as in a forest, we need shadows, contrasts and variation when we compose with light. If we focus on the term compose, it is natural to conceive of our environment as a wholeness. In fact, this is not only aesthetically important, it is also true in a physical context. Inspired by old windows, several similar examples have been built in the Trondheim Full-scale Laboratory, where depth is obtained by constructing shelves on each side of the opening. When daylight is fading, indirect artificial light from above gradually lightens the window. The opening is perceived as a space of light both during the day and when it is dark outside.

Another of the built examples at Trondheim University which will be presented is a doctor's waiting room. It is a case study of special interest because it often appears to be a neglected area. Let us start by asking: what do we have in common when we are waiting to see a doctor? We are nervous and sometimes we feel miserable. Analysing the situation, we understand the need for an interior that cares for our state of mind. The level of light is important in this situation. Light has to speak softly. Instead of the ordinary strong light in the middle of the ceiling, several spots are selected to light the small tables separating the seats. The separation is supposed to give a feeling of privacy. Through the low row of reflected planes we experience an intimate and warming atmosphere in the room. A special place for children contributes to the total impression of calm. In this corner the inside of some shelves is lit by indirect light, an effect which puts emphasis on the small scale suitable for a child. And it also demonstrates the good results of variation. The light setting in this room shows how light is “caught” in two different ways.

keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:34

_id b6a7
authors Jensen, K.
year 1996
title Coloured Petri Nets: Basic Concepts
source 2nd ed., Springer Verlag, Berlin
summary This book presents a coherent description of the theoretical and practical aspects of Coloured Petri Nets (CP-nets or CPN). It shows how CP-nets have been developed - from being a promising theoretical model to being a full-fledged language for the design, specification, simulation, validation and implementation of large software systems (and other systems in which human beings and/or computers communicate by means of some more or less formal rules). The book contains the formal definition of CP-nets and the mathematical theory behind their analysis methods. However, it has been the intention to write the book in such a way that it also becomes attractive to readers who are more interested in applications than the underlying mathematics. This means that a large part of the book is written in a style which is closer to an engineering textbook (or a users' manual) than it is to a typical textbook in theoretical computer science. The book consists of three separate volumes. The first volume defines the net model (i.e., hierarchical CP-nets) and the basic concepts (e.g., the different behavioural properties such as deadlocks, fairness and home markings). It gives a detailed presentation of many small examples and a brief overview of some industrial applications. It introduces the formal analysis methods. Finally, it contains a description of a set of CPN tools which support the practical use of CP-nets. Most of the material in this volume is application oriented. The purpose of the volume is to teach the reader how to construct CPN models and how to analyse these by means of simulation. The second volume contains a detailed presentation of the theory behind the formal analysis methods - in particular occurrence graphs with equivalence classes and place/transition invariants. It also describes how these analysis methods are supported by computer tools. Parts of this volume are rather theoretical while other parts are application oriented. The purpose of the volume is to teach the reader how to use the formal analysis methods. This will not necessarily require a deep understanding of the underlying mathematical theory (although such knowledge will of course be a help). The third volume contains a detailed description of a selection of industrial applications. The purpose is to document the most important ideas and experiences from the projects - in a way which is useful for readers who do not yet have personal experience with the construction and analysis of large CPN diagrams. Another purpose is to demonstrate the feasibility of using CP-nets and the CPN tools for such projects. Together the three volumes present the theory behind CP-nets, the supporting CPN tools and some of the practical experiences with CP-nets and the tools. In our opinion it is extremely important that these three research areas have been developed simultaneously. The three areas influence each other and none of them could be adequately developed without the other two. As an example, we think it would have been totally impossible to develop the hierarchy concepts of CP-nets without simultaneously having a solid background in the theory of CP-nets, a good idea for a tool to support the hierarchy concepts, and a thorough knowledge of the typical application areas.
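As a rough illustration of the kind of model the book formalises, a coloured Petri net holds typed ("coloured") tokens in places and lets a transition fire only for bindings that satisfy its guard, consuming and producing tokens. The toy net below (its places, colours and guard are invented for this note, not taken from Jensen's text) shows the idea in a few lines:

```python
# Toy coloured Petri net: places hold multisets of coloured tokens; a
# transition fires for a binding that passes its guard, moving tokens.
# Invented example for illustration; see Jensen's book for the real formalism.
from collections import Counter

places = {
    "waiting": Counter({"red": 2, "green": 1}),  # multiset of coloured tokens
    "served":  Counter(),
}

def serve(colour):
    """Transition 'serve': guard allows only 'red' tokens to be served."""
    if colour != "red":                  # guard on the binding
        return False
    if places["waiting"][colour] == 0:   # input arc needs one token of that colour
        return False
    places["waiting"][colour] -= 1       # consume from the input place
    places["served"][colour] += 1        # produce on the output place
    return True

serve("red")    # fires: one red token moves from 'waiting' to 'served'
serve("green")  # blocked by the guard
print(places)
```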
series other
last changed 2003/04/23 15:14

_id 06e1
authors Keul, Alexander
year 1996
title LOST IN SPACE? ARCHITECTURAL PSYCHOLOGY - PAST, PRESENT, FUTURE
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary A methodological review by Kaminski (1995) summed up five perspectives in environmental psychology - patterns of spatial distribution, everyday “jigsaw puzzles”, functional everyday action systems, sociocultural change and evolution of competence. Architectural psychology (named so at the Strathclyde conference in 1969; Canter, 1973), as the psychology of built environments, is one leg of environmental psychology, the second being the psychology of environmental protection. Architectural psychology has come of age and passed its 25th birthday. Thus, a triangulation of its position, especially in Central Europe, seems interesting and necessary. A recent survey, mainly of university projects in German-speaking countries (Kruse & Trimpin, 1995), found a marked decrease of studies in the psychology of built environments. In 1994, 25% of all projects were reported in this category, which in 1975 had made up 40% (Kruse, 1975). Guenther, in an unpublished survey of BDP (association of professional German psychologists) members, encountered only a handful active in architectural psychology - mostly part-time, not full-time. In 1996, Austria has two full-time university specialists. The discrepancy between the general interest displayed by planners and a still low institutionalization is noticeable.

How is the research situation? Using several standard research data banks, the author collected articles and book(chapter)s on architectural psychology in German- and English-language countries from 1990 to 1996. Studies on main architecture-psychology interface problems such as user needs, housing quality evaluations, participatory planning and spatial simulation / virtual reality did not outline an “old, settled” discipline, but rather the sketchy, random surface of a field “always starting anew”. For example, discussions at the 1995 EAEA Conference showed that several architectural simulation studies since 1973 caused no major impact on planners' opinions (Keul & Martens, 1996). “Re-inventions of the wheel” are caused by a lack of meetings (except this one!) and of interdisciplinary infrastructure in German-language countries (contrary to Sweden or the United States). Social pressures building up on architecture nowadays - inter-European competition, budget cuts and citizen activities demanding informed consent in most urban projects - are a new challenge for planners to cooperate efficiently with social scientists. At Salzburg, the author currently manages the Corporate Design process for the Chamber of Architecture, Division for Upper Austria and Salzburg. A “working group for architectural psychology” (Keul-Martens-Maderthaner) has been active since 1994.

keywords Model Simulation, Real Environments
series EAEA
type normal paper
email
more http://info.tuwien.ac.at/efa/
last changed 2005/09/09 10:43

_id 2e5a
authors Matsumoto, N. and Seta, S.
year 1997
title A history and application of visual simulation in which perceptual behaviour movement is measured.
source Architectural and Urban Simulation Techniques in Research and Education [3rd EAEA-Conference Proceedings]
summary For our research on perception and judgment, we have developed a new visual simulation system based on the previous system. Here, we report on the development history of our system and on the current research employing it. In 1975, the first visual simulation system was introduced, which comprised a fiberscope and small-scale models. By manipulating the fiberscope's handles, the subject was able to view the models at eye level. When the pen-size CCD TV camera came out, we immediately embraced it, incorporating it into a computer-controlled visual simulation system in 1988. It comprises four elements: operation input, drive control, model shooting, and presentation. This system was easy to operate, and the subject gained an omnidirectional, eye-level image as though walking through the model. In 1995, we began developing a new visual system. We wanted to relate the scale model image directly to perceptual behavior, to make natural background images, and to record human feelings by a non-verbal method. Restructuring the above four elements to meet our requirements and adding two more (background shooting and emotion spectrum analysis), we finally completed the new simulation system in 1996. We are employing this system in streetscape research. Using the emotion spectrum system, we are able to record brain waves. Quantifying the visual effects through these waves, we are analyzing the relation between visual effects and physical elements. Thus, we are presented with a new aspect to study: the relationship between brain waves and changes in the physical environment. We will be studying the relation of brain waves in our sequential analysis of the streetscape.
keywords Architectural Endoscopy, Endoscopy, Simulation, Visualisation, Visualization, Real Environments
series EAEA
email
more http://www.bk.tudelft.nl/media/eaea/eaea97.html
last changed 2005/09/09 10:43

_id a447
authors Ng, E., Lam, K.P., Wu, W. and Nagakura, T.
year 1996
title Advanced lighting Visualisation in Architectural Design
source Research Report RP960019, National University of Singapore, Singapore
summary To visually simulate a building interior before it is built has always been the wish of the designer and his client. The visualised image serves to help the designer interact with the client on improving the design. Recently, advanced lighting simulation techniques have become available thanks to advancements in software design and hardware speed. This paper reports how these advanced techniques could be harnessed to serve local design professionals. The author argues that to serve the professionals well, it is important to look beyond technology and to synergise technology with design method and work-flow in a typical architectural design office.
series report
last changed 2003/04/23 15:14

_id e1a1
authors Rodriguez, G.
year 1996
title REAL SCALE MODEL VS. COMPUTER GENERATED MODEL
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary Advances in electronic design and communication are already reshaping the way architecture is done. The development of more sophisticated and user-friendly Computer Aided Design (CAD) software and of cheaper and more powerful hardware is making computers more and more accessible to architects, planners and designers. These professionals are not only using them as a drafting tool but also as an instrument for visualization. Designers are "building" digital models of their designs and producing photo-like renderings of spaces that do not yet exist in the three-dimensional world.

The problem resides in how realistic these Computer Generated Models (CGM) are. Moss & Banks (1958) considered realism “the capacity to reproduce as exactly as possible the object of study without actually using it”. They consider that realism depends on: 1) the number of elements that are reproduced; 2) the quality of those elements; 3) the similarity of replication; and 4) replication of the situation. CGM respond well to these considerations; they can be very realistic. But are they capable of reproducing the same impressions on people as a real space?

Research has debated the problems of the mode of representation and its influence on the judgement which is made. Wools (1970), Lau (1970) and Canter, Benyon & West (1973) have demonstrated that the perception of a space is influenced by the mode of presentation. CGM are two-dimensional representations of three-dimensional space. Canter (1973) considers the three-dimensionality of the stimuli as crucial for their perception. So, can a CGM afford as much as a three-dimensional model?

The “Laboratorio de Experimentacion Espacial” (LEE) has been concerned with the problem of the reality of the models used by architects. We have studied the degree to which models can be used as reliable and representative of real situations by analyzing the Ecological Validity of several of them, especially the Real-Scale Model (Abadi & Cavallin, 1994). This kind of model has been found to be ecologically valid to represent real space. This research has two objectives: 1) to study the Ecological Validity of a Computer Generated Model; and 2) to compare it with the Ecological Validity of a Real Scale Model in representing a real space.

keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:42

_id avocaad_2001_19
id avocaad_2001_19
authors Shen-Kai Tang, Yu-Tung Liu, Yu-Sheng Chung, Chi-Seng Chung
year 2001
title The visual harmony between new and old materials in the restoration of historical architecture: A study of computer simulation
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In research on the restoration of historical architecture, scholars focus either on the field of architectural context and architectural archaeology (Shi, 1988, 1990, 1991, 1992, 1995; Fu, 1995, 1997; Chiu, 2000) or on building construction and the procedure of restoration (Shi, 1988, 1989; Chiu, 1990). How to choose materials and cope with their durability becomes an important issue in the restoration of historical architecture (Dasser, 1990; Wang, 1998).

In the related research on the usage and durability of materials, some scholars deem that, instead of continuing the traditional practice that has lasted for hundreds of years (that is, to replace old materials with new ones), it might be better to keep the original materials (Dasser, 1990). However, unavoidably, some of the originals are much worn. Thus we first have to establish a standard for eliminating components, and secondly to replace the eliminated components with identical or similar materials (Lee, 1990). After accomplishing the restoration, we often unexpectedly find that the renewed historical building is so new that the sense of history is eliminated (Dasser, 1990; Fu, 1997). Actually this is an important factor that determines the success of a restoration. In the past, some scholars found that the contrast and conflict between new and old materials are attributable to the different times of manufacture and different coatings, such as antiseptic, pattern, etc., which result in a discrepancy in visual perception (Lee, 1990; Fu, 1997; Dasser, 1990).

In recent years, a good deal of research and practice in computer technology has been done in the field of architectural design. We are able to carry out design communication more exactly through the application of systematic software, such as image processing, computer graphics, computer modeling/rendering, animation, multimedia, virtual reality and so on (Lawson, 1995; Liu, 1996). The application of computer technology to research on the preservation of historical architecture is comparatively recent. Some researchers have explored the procedure of restoration by computer simulation technology (Potier, 2000), or established digital databases of investigations of historical architecture (Sasada, 2000; Wang, 1998). How materials are chosen with the aid of computer simulation influences visual perception. Liu (2000) has a more complete result on visual impact analysis and assessment (VIAA) in research on urban design projects. The main subject of this paper is whether the technology of computer simulation can extenuate the conflict between new and old materials imposed on visual perception.

The objective of this paper is to propose a standard method of visual harmony effects for materials in historical architecture (taking the Gigi Train Station destroyed by the earthquake last September as the operating example). There are five steps in this research: 1. Categorize the materials of the historical architecture and establish the information in a digital database. 2. Get new materials for the historical architecture and establish the information in a digital database. 3. According to the mixing amount of new and old materials, determine their proportions in the building, mixing new and old materials in a certain way. 4. Assign the mixed materials to the computer model and carry out the lighting simulation.
5. Have experts and citizens evaluate the completed computer model in order to propose the expected standard method. According to the experiment mentioned above, we first propose a procedure for material simulation in the restoration of historical architecture and then offer some suggestions on how to mix new and old materials. By this simulation procedure, we obtain a better means of controlling the restoration of historical architecture, and the discrepancy and discordance caused by new and old materials can be relieved. Moreover, we thus avoid reconstructing "too new" historical architecture.
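Steps 1 to 4 of the procedure above amount to keeping old and new material records in a digital database, blending them in a chosen proportion, and assigning the result to the computer model before the lighting simulation. The sketch below illustrates only that bookkeeping; the field names and numbers are invented for this note and are not the authors' actual database or renderer.

```python
# Illustrative sketch of steps 1-4: a toy material database for old and new
# components and a proportional blend handed to a (hypothetical) renderer.
# All field names and values are invented, not taken from the paper.

materials = {
    "old_timber": {"rgb": (0.42, 0.33, 0.24), "reflectance": 0.18},
    "new_timber": {"rgb": (0.65, 0.52, 0.36), "reflectance": 0.35},
}

def blend(old, new, new_ratio):
    """Mix old and new material parameters by the proportion of new material."""
    mix = lambda a, b: a * (1 - new_ratio) + b * new_ratio
    return {
        "rgb": tuple(mix(a, b) for a, b in zip(old["rgb"], new["rgb"])),
        "reflectance": mix(old["reflectance"], new["reflectance"]),
    }

# e.g. a wall panel restored with 30% new timber; the result would be assigned
# to the computer model for the lighting simulation and then shown to the
# evaluators (steps 4 and 5).
panel_material = blend(materials["old_timber"], materials["new_timber"], 0.30)
print(panel_material)
```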
series AVOCAAD
email
last changed 2005/09/09 10:48

_id e7e0
authors Watanabe, Shun
year 1996
title Computer Literacy in Design Education
source CAADRIA '96 [Proceedings of The First Conference on Computer Aided Architectural Design Research in Asia / ISBN 9627-75-703-9] Hong Kong (Hong Kong) 25-27 April 1996, pp. 1-10
doi https://doi.org/10.52842/conf.caadria.1996.001
summary Many schools of architecture in Japan have installed computers in their classrooms and have already begun courses in CAAD skills. But in many cases only a few teachers personally put effort into this kind of education. Having limited staff prevents one from creating a global program of design education using computers.

On the other hand, merely teaching how to use individual CAD/CG software in architectural and urban design is already out of date in education. Students will be expected to adapt themselves to the coming multi-media society. For example, many World Wide Web services have started commercially and the Internet has become very familiar within the last year. But I dare say that only a few people in schools of architecture and construction companies can actually enjoy Internet services.

Students should be brought up to improve their ability in analysing, planning and designing by linking various software technologies efficiently in the world-wide network environment and using them at will. In future design education, we should teach that computers can be used not only as a presentation medium for architectural form, but also as a simulation medium for architectural and urban design from various points of view.

The University of Tsukuba was established about 25 years ago, and its system is different from that of other universities in Japan. In comparison with other faculties of Architecture and Urban Planning, our Faculty is very multi-disciplinary, and the ability to use computers has been regarded as an essential foundation skill. In this paper, I will introduce how CAAD education is situated in our global program, and discuss the importance of computer literacy in architectural and urban design education.

keywords Computer Literacy, Design Education, CAD, Internet
series CAADRIA
last changed 2022/06/07 07:58
