CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 626

_id c21a
authors Fitzsimons, J. Kent
year 1999
title Net-Based History of Architecture
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 319-325
summary History sequences in professional architecture programs must meet broad educational objectives. Inherent in an architect’s education is a tension between the rigorous consideration of important ideas in the history of architecture and the inspired implementation of these ideas in the design studio. A digital history course can bridge the education/training divide by making the study of history emulate the methods and strategies used in the architecture studio. Using a relational database and navigation software, we have developed a course in which students move through a digital environment of text, image, audio and video resources pertaining to broad historical categories in architecture. Charged with producing historical genealogies, students must incorporate current architectural and cultural concerns in their distillation of the history presented by the articles, surveys, manifestoes, photographs, drawings and interviews encountered online. The immersive multimedia environment uses hyperlinks as a structure, placing emphasis on the student’s role in navigation while increasing the possibilities for chance encounters in the material. The delivery of basic material having been accomplished independently by the student, class meetings are used for higher-level discussions of the issues that surface. The project is currently being implemented as a half-semester course in 20th century architecture for a small group of sophomore students in the professional Bachelor of Architecture program. The project’s pedagogical and technical aspects will be discussed with respect to this stage of its development.
series SIGRADI
email
last changed 2016/03/10 09:51

_id a70b
authors Jung, Th., Do, E.Y.-L. and Gross, M.D.
year 1999
title Immersive Redlining and Annotation of 3D Design Models on the Web
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, pp. 81-98
summary The Web now enables people in different places to view three-dimensional models of buildings and places in a collaborative design discussion. Already design firms with offices around the world are exploiting this capability. In a typical application, design drawings and models are posted by one party for review by others, and a dialogue is carried out either synchronously, using online streamed video and audio, or asynchronously, using email, chat room, and bulletin board software. However, most of these systems do not allow designers to embed annotations and proposed design changes in the three-dimensional design model under discussion. We present a working prototype of a system that has these capabilities and describe the configuration of Web technologies we used to construct it.
keywords VRML, Immersive Environment, Virtual Annotation, Computer-aided Design, Building Models
series CAAD Futures
email
last changed 2006/11/07 07:22
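
The abstract above emphasizes embedding annotations directly in the shared 3D model rather than in a separate discussion thread. As a loose illustration of that idea (not the paper's actual implementation or data model; all names here are invented), an annotation can simply be a record anchored to a coordinate in the model:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Annotation:
    """A remark pinned to a point in the shared 3D design model (hypothetical structure)."""
    author: str
    text: str
    anchor: Tuple[float, float, float]        # world coordinates in the model
    replies: List["Annotation"] = field(default_factory=list)

@dataclass
class SharedModel:
    name: str
    annotations: List[Annotation] = field(default_factory=list)

    def redline(self, author, text, anchor):
        """Attach a comment to a location in the model instead of to an external thread."""
        note = Annotation(author, text, anchor)
        self.annotations.append(note)
        return note

model = SharedModel("atrium_option_b")
note = model.redline("reviewer_1", "Raise this beam 300 mm", anchor=(4.2, 0.0, 3.1))
note.replies.append(Annotation("designer_2", "Checked: clashes with duct", note.anchor))
print(len(model.annotations), len(model.annotations[0].replies))
```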

_id cd0b
authors Meloni, Wanda
year 2001
title The Slow Rise of 3D on the Web
source Computer Graphics World - July 2001, p. 22
summary Consumer, commercial, and educational applications on the Web have been slow to take advantage of 3D, although for years it has been viewed as a boon for the graphics industry. Over the last 18 months, the situation has begun to look more favorable for the graphics industry, reports M2 Research's Wanda Meloni. Meloni says changes in the market and in technology have fueled the rise of 3D on the Web. The increase in broadband connections from 2.7 million users in 1999 to 8 million users in 2000 means that the market of consumers who have Internet connections fast enough to view and interact with 3D content has grown considerably. Also, 3D players are no longer limited to a proprietary format now that new game consoles from Nintendo and Microsoft will offer Web-based real-time 3D multiplayer gaming; in addition, 3D graphics technology will now be embedded into applications for Internet appliances and handheld devices. M2 Research estimates that the number of Web media players that are 3D-enabled will rise from 17 percent currently to 32 percent by the end of the year, as 3D player vendors offer more direct support to 2D players such as RealPlayer and Shockwave. Still, content production will remain a major hurdle because millions of Web authors are not using 3D. Meloni says creative professionals and digital designers will need a new set of 3D tools that will work seamlessly with current Web content in video, 2D graphics, and audio.
series journal paper
last changed 2003/04/23 15:50

_id 9ce0
authors Ozcan, Oguzhan
year 1999
title Education of Interactive Panorama-design in Architecture
doi https://doi.org/10.52842/conf.ecaade.1999.223
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 223-229
summary This paper mainly discusses the importance of the interactive panorama in design, and its education in the MDes program that will run at Yildiz Technical University in the year 2000. The first part of the paper summarizes the potential of the current interactive panorama technique, which was "a popular form of the public entertainment" in the 19th century. It then compares real-world experiences with observations in an interactive panorama. This comparison covers technical aspects, i.e. limitations, audio-visual effects, composite techniques and live video input, as well as conceptual aspects, i.e. camera actions and natural phenomena. The technical discussion in the paper concentrates on examples from newly developed tools such as Nodemedia, Electrifier, Wasabi Software and Skypaint, as well as the Apple QuickTime VR Authoring Tool. The second part underlines the role of the interactive panorama technique in design. Here the paper also summarizes how to use the technique at the beginning of a design, during its creation and in its presentation, taking advantage of sound, vision, text and transition effects. The third part concentrates on interactive panorama design as an individual project offered in the MDes program. It then explains how the preliminary courses were planned for this individual project and summarizes the content of the course, formulated around the linear and non-linear structures of the media. Finally, considering the future development of the interactive panorama technique, the last part of the paper discusses the possible results of this education method.
keywords Interactive Media, Panoramic Image, Design Education
series eCAADe
email
last changed 2022/06/07 08:00

_id 65b4
authors Kos, Jose Ripper
year 1999
title Architecture and Hyperdocument: Data Shaping Space
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 462-465
summary The computer interface can't convey the whole experience of walking through a city or a building. Nevertheless, the complexity of all the aspects involved in those three-dimensional spaces can be better understood through the non-linearity of the hyperdocument. Each dweller of a city and a building has many layers of relationship with both. The sequence and extent to which each observer explores the space is unique; it is not fully apprehended on a first visit. As the observer comes to know that space better, his experience changes. A similar situation takes place in a multimedia application. Hence, it is possible to build an analogy between the architectonic or urban structure and a hyperdocument navigation structure. We can also state that the computer is critical for creating paths of architectural information through space and time. The 3D model of a city is a powerful basis for structuring hyperdocument navigation. The city can be viewed in separate parts or layers of information. One investigates the city through different aspects of its configuration and explores it at different scales and levels of detail. The images generated from this 3D model can be combined with video, photo, sound and text, organizing the information which gives form to the city. The navigation through this information addresses the city through its economy, housing, religion, politics, leisure, projects, symbolic buildings, and other aspects. This paper will discuss these issues through the experiments of the research done at the School of Architecture and Urbanism of the Universidade Federal do Rio de Janeiro. The research group at the "Laboratory of Urban Analysis and Digital Representation" in PROURB (Graduate Program of Urbanism) analyses the city and its buildings using CD-ROMs and websites.
keywords 3D City Modeling, Hyperdocument, Multimedia, Architecture, Urbanism
series SIGRADI
email
last changed 2016/03/10 09:54

_id 29f3
authors Ohno, Ryuzo and Aoki, Hirofumi
year 1999
title Development of an Interactive Simulation System for Environment-Behavior Study
source Simulation of Architectural Space - Color and Light, Methods and Effects [Proceedings of the 4th European Architectural Endoscopy Association Conference / ISBN 3-86005-267-5] Dresden (Germany), 29 September - 1 October 1999, pp. 36-49
summary An important recent development in simulation techniques has been the change in the mode of presentation from a passive mode to an active one. It is now possible to present an image according to the observer's voluntary movements of body and head by means of a head-mounted display. Such an interactive simulation system, which allows people to observe what they wish to see, is well suited to the study of environmental perception, because active attention is essential for handling the enormous amount of information in the environment. The present paper reports two case studies in which an interactive simulation system was developed to test the psychological impact of interior and exterior spaces: case study 1 aimed to clarify the effect of the disposition of transparent and opaque surfaces of a room on the occupants' "sense of enclosure", while case study 2 aimed to identify physical features along a street that influence its changing atmosphere. In addition to the empirical research, an attempt to develop a new simulation system which uses both analogue and digital images is briefly reported, together with a preliminary experiment conducted to test the performance of a system in which movable elements such as pedestrians and cars, generated by real-time CG, were overlaid on the video image of a scale-model street.
series EAEA
email
more http://info.tuwien.ac.at/eaea
last changed 2005/09/09 10:43

_id 0b84
authors De Silva Garza, Andrés Gómez and Maher, Mary Lou
year 1999
title Evolving Design Layout Cases to Satisfy Feng Shui Constraints
doi https://doi.org/10.52842/conf.caadria.1999.115
source CAADRIA '99 [Proceedings of The Fourth Conference on Computer Aided Architectural Design Research in Asia / ISBN 7-5439-1233-3] Shanghai (China) 5-7 May 1999, pp. 115-124
summary We present a computational process model for design that combines the functionalities of case-based reasoning (CBR) and genetic algorithms (GAs). CBR provides a precedent-based framework in which prior design cases are retrieved and adapted in order to meet the requirements of a new design problem. GAs provide a general-purpose mechanism for randomly combining and modifying potential solutions to a new problem repeatedly until an adequate solution is found. In our model we use a GA to perform the case-adaptation subtask of CBR. In this manner, a gradual improvement in the overall quality of the proposed designs is obtained as more and more adaptations of the design cases originally retrieved from memory are evolved. We describe how these ideas can be used to perform layout design of residences such that the final designs satisfy the requirements imposed by feng shui, the Chinese art of placement.
series CAADRIA
email
last changed 2022/06/07 07:55
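
The abstract above combines case-based reasoning with a genetic algorithm that adapts a retrieved layout case until it satisfies placement constraints. The following is only a toy sketch of that idea: the room grid, the mutation scheme and the stand-in "feng shui" fitness checks are invented for illustration and are not from the paper.

```python
import random

# Hypothetical retrieved case: room name -> (x, y) position on a 3x3 grid.
retrieved_case = {"entry": (0, 0), "kitchen": (1, 0), "bedroom": (2, 2), "bath": (2, 1)}

def fitness(layout):
    """Toy stand-in for feng shui checks: reward an entry on the south edge (y == 0)
    and a bedroom that is not directly adjacent to the entry."""
    score = 0
    if layout["entry"][1] == 0:
        score += 1
    ex, ey = layout["entry"]
    bx, by = layout["bedroom"]
    if abs(ex - bx) + abs(ey - by) > 1:
        score += 1
    return score

def mutate(layout):
    """Adapt the case by nudging one randomly chosen room on the grid."""
    child = dict(layout)
    room = random.choice(list(child))
    x, y = child[room]
    child[room] = ((x + random.choice([-1, 0, 1])) % 3, (y + random.choice([-1, 0, 1])) % 3)
    return child

# Evolve a small population seeded with the retrieved case.
population = [retrieved_case] + [mutate(retrieved_case) for _ in range(19)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(10)]

print(max(population, key=fitness))
```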

_id 39cb
authors Kelleners, Richard H.M.C.
year 1999
title Constraints in object-oriented graphics
source Eindhoven University of Technology
summary In the area of interactive computer graphics, two important approaches to dealing with the complexity of designing and implementing graphics systems are object-oriented programming and constraint-based programming. From the literature, it appears that combining the two has clear advantages but has also proven difficult. One of the main problems is that constraint programming infringes the information hiding principle of object-oriented programming. The goal of the research project is to combine these two approaches to benefit from the strengths of both. Two research groups at the Eindhoven University of Technology investigate the use of constraints on graphics objects. At the Architecture department, constraints are applied in a virtual reality design environment. At the Computer Science department, constraints aid in modeling 3D animations. For these two groups, a constraint system for 3D graphical objects was developed. A conceptual model, called CODE (Constraints on Objects via Data flows and Events), is presented that enables integration of constraints and objects by separating the object world from the constraint world. In the design of this model, the main consideration is that the information hiding principle among objects must not be violated. Constraint solvers, however, should have direct access to an object’s internal data structure. Communication between the two worlds is done via a protocol orthogonal to the message passing mechanism of objects, namely, via events and data flows. This protocol ensures that the information hiding principle at the object-oriented programming level is not violated while constraints can directly access “hidden” data. Furthermore, CODE is built up of distinct elements, or entity types, like constraint, solver, event, data flow. This structure enables several special-purpose constraint solvers to be defined and made to cooperate to solve complex constraint problems. A prototype implementation was built to study the feasibility of CODE. Therefore, the implementation should correspond directly to the conceptual model. To this end, every entity (object, constraint, solver) of the conceptual model is represented by a separate process in the language MANIFOLD. The (concurrent) processes communicate by events and data flows. The implementation serves to validate the conceptual model and to demonstrate that it is a viable way of combining constraints and objects. After the feasibility study, the prototype was discarded. The gained experience was used to build an implementation of the conceptual model for the two research groups. This implementation encompassed a constraint system with multiple solvers and constraint types. The constraint system was built as an object-oriented library that can be linked to the applications in the respective research groups. Special constructs were designed to ensure information hiding among application objects while constraints and solvers have direct access to the object data. CODE manages the complexity of object-oriented constraint solving by defining a communication protocol to allow the two paradigms to cooperate. The prototype implementation demonstrates that CODE can be implemented into a working system. Finally, the implementation of an actual application shows that the model is suitable for the development of object-oriented software.
keywords Computer Graphics; Object Oriented Programming; Constraint Programming
series thesis:PhD
last changed 2003/02/12 22:37
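
The thesis summarized above hinges on letting solvers reach an object's hidden data only through an explicit protocol of events and data flows, so information hiding is preserved at the object-oriented programming level. A minimal, hypothetical sketch of that separation (class and method names are invented; the original work used concurrent MANIFOLD processes rather than the single-threaded calls shown here):

```python
class Box:
    """Application object: its position is hidden behind normal accessors."""
    def __init__(self, x):
        self._x = x          # hidden data, not exposed to other objects
    def move_to(self, x):
        self._x = x

class DataFlow:
    """Explicit channel that exposes one hidden attribute to the constraint world."""
    def __init__(self, obj, attr):
        self._obj, self._attr = obj, attr
    def read(self):
        return getattr(self._obj, self._attr)
    def write(self, value):
        setattr(self._obj, self._attr, value)

class KeepApartSolver:
    """Tiny one-way solver: keeps b at least `gap` to the right of a."""
    def __init__(self, flow_a, flow_b, gap):
        self.a, self.b, self.gap = flow_a, flow_b, gap
    def on_change_event(self):
        if self.b.read() < self.a.read() + self.gap:
            self.b.write(self.a.read() + self.gap)

a, b = Box(0), Box(1)
solver = KeepApartSolver(DataFlow(a, "_x"), DataFlow(b, "_x"), gap=5)
a.move_to(10)
solver.on_change_event()   # in the CODE model an event would trigger the solver automatically
print(b._x)                # -> 15
```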

_id f813
authors Martens, Bob (Ed.)
year 1999
title Full-scale Modeling and the Simulation of Light
source Proceedings of the 7th European Full-scale Modeling Association Conference / ISBN 3-85437-167-5 / Florence (Italy) 18-20 February 1999, 100 p.
summary EFA '99 covered the use of light in 1:1 simulation. As a rule, the field of lighting design has a close relation to simulation at true scale. It is therefore surprising that a conference dealing with this field did not take place earlier, which might be due to the differing approaches to implementation and working focus at the various laboratories. The remarkable research achievements of individual lighting companies on the market seem very promising and should be duly acknowledged by academic circles as well. Furthermore, a productive exchange of information might develop between these seemingly incompatible interest groups. More interaction would surely prove wise, as the stage for successful research in the field of light design and light impact can only be set by combining all strengths.
keywords Model Simulation, Real Environments
series other
email
more http://info.tuwien.ac.at/efa
last changed 2003/08/25 10:12

_id ab92
authors Pal, S.K. and Mitra, S.
year 1999
title Neuro-Fuzzy Pattern Recognition
source John Wiley & Sons, New York
summary Neural networks and fuzzy techniques are among the most promising approaches to pattern recognition. Neuro-fuzzy systems aim at combining the advantages of the two paradigms. This book is a collection of papers describing state-of-the-art work in this emerging field. It covers topics such as feature selection, classification, classifier training, and clustering. Also included are applications of neuro-fuzzy systems in speech recognition, land mine detection, medical image analysis, and autonomous vehicle control. The intended audience includes graduate students in computer science and related fields, as well as researchers at academic institutions and in industry.
series other
last changed 2003/04/23 15:14

_id ecaade2015_161
id ecaade2015_161
authors Papasarantou, Chrissa; Kalaouzis, Giorgos, Pentazou, Ioulia and Bourdakis, Vassilis
year 2015
title A Spatio-Temporal 3D Representation of a Historic Dataset
doi https://doi.org/10.52842/conf.ecaade.2015.1.701
source Martens, B, Wurzer, G, Grasl T, Lorenz, WE and Schaffranek, R (eds.), Real Time - Proceedings of the 33rd eCAADe Conference - Volume 1, Vienna University of Technology, Vienna, Austria, 16-18 September 2015, pp. 701-708
summary Previous research (Bourdakis et al, 2012; Papasarantou et al, 2013) dealt with the problem of creating information visualisation systems capable of combining historical data from MUCIV's database and developing strategies that embed the non-spatial data in spatial models. The database was primarily designed as an experimental, flexible spatio-temporal configuration of dynamic visual structures generating a variety of narrations through interaction. The attempt to produce a legible configuration driven by a number of criteria led to the proposition of two different arrangements, namely the linear and the radial array. The aim of this paper is to present the next step in the visualization, after redefining both the way thematic axes and data are visualized and the way they are arranged/scattered. Alternative configurations are investigated, based also on theoretical analysis of the conceptualization and perception of information visualization systems (Card et al 1999; Ware, 2004).
wos WOS:000372317300076
series eCAADe
email
more https://mh-engage.ltcc.tuwien.ac.at/engage/ui/watch.html?id=74178dba-702a-11e5-aa5b-67bfe1e6502f
last changed 2022/06/07 08:00

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques. Figure 3 Trellis interpreted with "graphic ivy". Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
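
The Ransen paper above represents a shape's "genes" as the list of points of a closed polygon and reports that naive ways of crossing two such point lists gave disappointing offspring. A minimal sketch of that representation with one such naive blending crossover (illustrative only, not the author's code):

```python
import math

def regular_polygon(n, radius=1.0):
    """A closed shape as a list of (x, y) points -- the 'genes' described above."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def resample(shape, n):
    """Crude resampling so both parents have the same number of genes."""
    return [shape[int(i * len(shape) / n)] for i in range(n)]

def crossover(parent_a, parent_b, weight=0.5):
    """Blend corresponding points of the two parent outlines."""
    n = 100
    a, b = resample(parent_a, n), resample(parent_b, n)
    return [(weight * ax + (1 - weight) * bx, weight * ay + (1 - weight) * by)
            for (ax, ay), (bx, by) in zip(a, b)]

circle = regular_polygon(100)          # a circle approximated by 100 points
hexagon = regular_polygon(6)           # a coarse regular polygon
child = crossover(circle, hexagon)     # offspring outline, 100 points
print(child[:3])
```

As the paper notes, schemes of this kind tend to average the parents into amorphous outlines rather than preserve distinct family characteristics.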

_id ga9911
id ga9911
authors Riley, Howard
year 1999
title Semiotics and Generative Art
source International Conference on Generative Art
summary The paper begins with a brief explanation of David Marr’s computational theory of visual perception, and his key terms. Marr argued that vision consists in the algorithmic transformation of retinal images so as to produce output of viewer-centred and object-centred representations from an input at the retinae. Those two kinds of output, the viewer-centred and the object-centred representations, enable us to negotiate the physical world. The paper goes on to suggest that the activity of Drawing is comparable as a process of transformation: a picture is a transformation from either viewer-centred, or object-centred descriptions, or a combination of both types of representation, to a two-dimensional drawn representation. These pictures may be described as resulting from algorithmic transformations since picture-making utilises specific geometric procedures for transforming input (our perceptions) into output (our drawings). However, a key point is made about such algorithms: they are culturally-determined. They may be defined in terms of the procedure of selecting and combining choices from the matrix of semiotic systems available within a particular social context. These systems are presented in the paper as a Chart, and are further correlated with the social functions of a communication system such as Drawing. Thus, the paper proposes a systemic-functional semiotics of Drawing, within which algorithms operate to realise specific cultural values in material form. Familiar algorithms are illustrated, such as those governing the transformation of the physics of an array of light at the eye into the set of representations known as perspective projection systems; and also illustrated in the paper are less familiar algorithms devised by artists such as Kenneth Martin and Sol LeWitt.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
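
Among the "familiar algorithms" the abstract mentions for transforming perception into drawing, perspective projection is the simplest to state; a minimal illustrative sketch (not from the paper):

```python
def perspective_project(point, focal_length=1.0):
    """Map a viewer-centred 3D point (x, y, z), with z > 0 in front of the eye,
    onto the 2D picture plane at distance focal_length from the eye."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# A horizontal edge receding from the viewer converges toward the vanishing point.
edge = [(1.0, -0.5, z) for z in (2.0, 4.0, 8.0, 16.0)]
print([perspective_project(p) for p in edge])
```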

_id 0647
authors Rosenberg, D.
year 1999
title Use Case Driven Object Modeling with UML: A Practical Approach
source Reading, MA: Addison-Wesley
summary Combining some of today's best ideas about customer-driven object-oriented design, Use Case Driven Object Modeling with UML: A Practical Approach shows you how to use the Unified Modeling Language (UML) in the real world, in keeping with the author's proprietary software design process. The book begins with the genesis of the author's ICONIX Unified Object Modeling Approach, borrowing ideas and strategies from the "three amigos" who invented UML: Grady Booch, James Rumbaugh, and Ivar Jacobson. Throughout the text, the ICONIX method is used to model a stock trading system, with all the relevant UML diagrams, beginning with class definitions and use cases. The author's approach to software relies heavily on customer requirements and use case scenarios, for which he offers a good deal of practical advice. He provides numerous hints for keeping diagrams from becoming bogged down. After preliminary design, he advocates drilling down into specifics with robustness diagrams, which trace how classes interact with one another. The most detailed design work comes next with sequence diagrams. Subsequent chapters offer tips on project management, implementation, and testing. Throughout this lively and intelligently organized book, the author presents numerous real-world tips (and Top 10 lists) that lend wisdom to his perspective on effective software design.
series other
last changed 2003/04/23 15:14

_id 53df
authors Uddin, M.S.
year 1999
title Hybrid Drawing Techniques by Contemporary Architects and Designers
source John Wiley, New York
summary The complete hybrid drawing sourcebook Hybrid drawings offer limitless possibilities for the fusion and superimposition of ideas, media, and techniques-powerful creative tools for effective and innovative architectural graphic presentation. This unique guide offers a dynamic introduction to these drawings and how they are created, with a stunning color portfolio of presentation-quality examples that give full visual expression to the power and potential of hybrid drawing techniques. Featuring the work of dozens of internationally recognized architects and firms, including Takefumi Aida, Helmut Jahn of Murphy/Jahn Architects, Morphosis, Eric Owen Moss, NBBJ Sports & Entertainment, Smith-Miller & Hawkinson, and Bernard Tschumi Architects, the book's visual examples are accompanied by descriptive and analytical commentary that gives valuable practical insight into the background of each project, along with essential information on the design concept and the drawing process. Combining all of the best features of an idea resource and a how-to guide, Hybrid Drawing Techniques by Contemporary Architects and Designers is an important creative tool for students and professionals in architecture, design, illustration, and related areas
series other
last changed 2003/04/23 15:14

_id 5f23
authors Pal, Vineeta
year 1999
title Integrated Computational Analysis of the Visual Environment in Buildings
source Carnegie Mellon University, Pittsburgh
summary Despite significant advances in the area of computational support for lighting design, lighting simulation tools have not been sufficiently integrated into the lighting design process. There is a significant body of designers who rely solely on their individual experience and do not use predictive simulation tools. Even when simulation tools are utilized, it is for design verification or presentation rather than for design support. A number of factors are thought to contribute to this lack of integration of simulation tools into the design process: a) Most existing tools rely on the problematic assumption that simplified models are appropriate for the less complex early stages of design and detailed simulation for the more complex later stages; b) They do not support an active exploration of design variables to satisfy desired performance criteria; c) They are not integrated with other building performance simulation models. This thesis addresses the above shortcomings by contributing to the field of visual analysis in the following areas, pertaining to the development of active, integrated design and performance simulation environments: - Implementation of a consistent and coherent, physically-based modeling approach, combining radiosity and ray-tracing methods for the simulation of light propagation. - Provision of design support both in terms of evaluation support for interpreting large amounts of computed data with diverse performance indices, and in terms of active design support to explore the relationships between the design variables and performance indices. - Integration of the lighting simulation module within a larger software environment (SEMPER) for the prediction and evaluation of multiple performance indicators (for energy, light, acoustics, etc.) in buildings.
series thesis:PhD
last changed 2003/02/12 22:37

_id 24f0
authors Kram, Reed and Maeda, John
year 1999
title Transducer: 3D Audio-Visual Form-Making as Performance
source AVOCAAD Second International Conference [AVOCAAD Conference Proceedings / ISBN 90-76101-02-07] Brussels (Belgium) 8-10 April 1999, pp. 285-291
summary This paper describes Transducer, a prototype digital system for live, audio-visual performance. Currently, editing sounds or crafting three-dimensional structures on a computer remains a frustratingly rigid process. Current tools for real-time audio or visual construction using computers involve obtuse controls, either heavily GUI'ed or overstylized. Transducer asks one to envision a space where the process of editing and creating on a computer becomes a dynamic performance. The content of this performance may be sufficiently complex to elicit multiple interpretations, but Transducer enforces the notion that the process of creation should itself be a fluid and transparent expression. The system allows a performer to build constructions of sampled audio and computational three-dimensional form simultaneously. Each sound clip is visualized as a "playable" cylinder of sound that can be manipulated both visually and aurally in real time. The Transducer system demonstrates a creative space with equal design detailing at both the construction and performance phases.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 5222
authors Moloney, Jules
year 1999
title Bike-R: Virtual Reality for the Financially Challenged
doi https://doi.org/10.52842/conf.ecaade.1999.410
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 410-413
summary This paper describes a 'low tech' approach to producing interactive virtual environments for the evaluation of design proposals. The aim was to produce a low-cost alternative to such expensive installations as CAVE virtual reality systems. The system utilises a library of pre-rendered animation, video and audio files and hence is not reliant on powerful hardware to produce real-time simulation. The participant sits astride a bicycle exercise machine and animation is triggered by the pedal revolution. Navigation is achieved by steering along and around the streets of the animated design. This project builds on the work of Desmond Hii (Hii, 1997). The innovations are the bicycle interface and the application to urban-scale simulation.
keywords Virtual, Design, Interface, Urban
series eCAADe
email
last changed 2022/06/07 07:58
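
The Bike-R abstract above describes playback of a library of pre-rendered files triggered by pedal revolutions, with steering choosing the route. A small hypothetical sketch of that control logic (file names, frame step and the event hooks are assumptions, not the authors' system):

```python
# Hypothetical frame libraries: one pre-rendered image sequence per street segment.
frame_library = {
    "main_street": [f"main_{i:03d}.jpg" for i in range(240)],
    "side_street": [f"side_{i:03d}.jpg" for i in range(240)],
}

class BikeSimulator:
    def __init__(self):
        self.segment = "main_street"
        self.frame = 0

    def on_pedal_revolution(self):
        """Each pedal revolution advances playback by a fixed number of pre-rendered frames."""
        self.frame = min(self.frame + 8, len(frame_library[self.segment]) - 1)
        return frame_library[self.segment][self.frame]

    def on_turn(self, new_segment):
        """Steering at a junction switches to another segment's sequence."""
        self.segment, self.frame = new_segment, 0

sim = BikeSimulator()
print(sim.on_pedal_revolution())   # -> main_008.jpg
sim.on_turn("side_street")
print(sim.on_pedal_revolution())   # -> side_008.jpg
```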

_id 1419
authors Spitz, Rejane
year 1999
title Dirty Hands on the Keyboard: In Search of Less Aseptic Computer Graphics Teaching for Art & Design
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 13-18
summary In recent decades our society has witnessed a level of technological development that has not been matched by that of educational development. Far from the forefront in the process of social change, education has been trailing behind transformations occurring in industrial sectors, passively and sluggishly assimilating their technological innovations. Worse yet, educators have taken the technology and logic of innovations deriving predominantly from industry and attempted to transpose them directly into the classroom, without either analyzing them in terms of demands from the educational context or adjusting them to the specificities of the teaching/learning process. In the 1970s - marked by the effervescence of Educational Technology - society witnessed the extensive proliferation of audio-visual resources for use in education, yet with limited development in teaching theories and educational methods and procedures. In the 1980s, when Computers in Education emerged as a new area, the discussion focused predominantly on the issue of how the available computer technology could be used in the school, rather than tackling the question of how it could be developed in such a way as to meet the needs of the educational proposal. What, then, will the educational legacy of the 1990s be? In this article we focus on the issue from the perspective of undergraduate and graduate courses in Arts and Design. Computer Graphics has slowly but surely gained ground and become consolidated as part of Art & Design curricula in recent years, but in most cases as a subject in the curriculum that is not linked to the others. Computers are usually allocated to special laboratories, inside and outside Departments, but invariably isolated from the dust, clay, varnish, paint and other wastes, materials, and odors impregnating - and characterizing - the other labs in Arts and Design courses. In spite of its isolation, computer technology coexists with centuries-old practices and traditions in Art & Design courses. This interesting meeting of tradition and innovation has led to daring educational ideas and experiments in the Arts and Design which have had a ripple effect in other fields of knowledge. We analyze these issues focusing on the pioneering experience of the Núcleo de Arte Eletrônica – a multidisciplinary space at the Arts Department at PUC-Rio, where undergraduate and graduate students of technological and human areas meet to think, discuss, create and produce Art & Design projects, and which constitutes a locus for the oxygenation of learning and for preparing students to face the challenges of an interdisciplinary and interconnected society.
series SIGRADI
email
last changed 2016/03/10 10:01

_id bacd
authors Abadí Abbo, Isaac
year 1999
title Application of Spatial Design Ability in a Postgraduate Course
source Full-scale Modeling and the Simulation of Light [Proceedings of the 7th European Full-scale Modeling Association Conference / ISBN 3-85437-167-5] Florence (Italy) 18-20 February 1999, pp. 75-82
summary Spatial Design Ability (SDA) has been defined by the author (1983) as the capacity to anticipate the effects (psychological impressions) that architectural spaces or their components produce in observers or users. This concept, which requires the evaluation of spaces by the people who use them, was proposed as a guideline for a Masters Degree Course in Architectural Design at the Universidad Autonoma de Aguascalientes in Mexico. The theory and the exercises required for the experience needed a model that could simulate spaces in terms of all the variables involved. Full-scale modeling, as tested in previous research, offered the most effective means of experimenting with space. A simple, primitive model was designed and built: an articulated ceiling that allows variation in height and shape, and a series of wooden panels for the walls and structure. Several exercises were carried out, mainly to experience cause-effect relationships between spaces and the psychological impressions they produce. Students researched spatial taxonomy, intentional sequences of space and spatial character. Results showed that students achieved the expected anticipation of space and that full-scale modeling, even with a simple model, proved to be an effective tool for this purpose. The low cost of the model and the short time it took to build open an important possibility for institutions involved in architectural studies, both as a research and as a learning tool.
keywords Spatial Design Ability, Architectural Space, User Evaluation, Learning, Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 11:27
