CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures

Hits 1 to 20 of 628

_id 7ccd
authors Augenbroe, Godfried and Eastman, Chuck
year 1999
title Computers in Building: Proceedings of the CAADfutures '99 Conference
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, 398 p.
summary This is the eighth CAADfutures Conference. Each of these biennial conferences identifies the state of the art in computer application in architecture. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. Early conferences, for example, addressed project work, either for real construction or done in academic studios, that approached the teaching or use of CAD tools in innovative ways. By the early 1990s, such project-based examples of CAD use disappeared from the conferences, as this area was no longer considered a research contribution. Computer-based design has become a basic way of doing business. This conference is marked by a similar evolutionary change. More papers were submitted about Web-based applications than about any other area. Rather than having multiple sessions on Web-based applications and communications, we instead came to the conclusion that the Web now is an integral part of digital computing, as are CAD applications. Using the conference as a sample, Web-based projects have been integrated into most research areas. This does not mean that the application of the Web is not a research area, but rather that the Web itself is an integral tool in almost all areas of CAAD research.
series CAAD Futures
email
last changed 2006/11/07 07:22

_id 4805
authors Bentley, P.
year 1999
title Evolutionary Design by Computers
source Morgan Kaufmann, San Francisco, CA
summary Computers can only do what we tell them to do. They are our blind, unconscious digital slaves, bound to us by the unbreakable chains of our programs. These programs instruct computers what to do, when to do it, and how it should be done. But what happens when we loosen these chains? What happens when we tell a computer to use a process that we do not fully understand, in order to achieve something we do not fully understand? What happens when we tell a computer to evolve designs? As this book will show, what happens is that the computer gains almost human-like qualities of autonomy, innovative flair, and even creativity. These 'skills' which evolution so mysteriously endows upon our computers open up a whole new way of using computers in design. Today our former 'glorified typewriters' or 'overcomplicated drawing boards' can do everything from generating new ideas and concepts in design, to improving the performance of designs well beyond the abilities of even the most skilled human designer. Evolving designs on computers now enables us to employ computers in every stage of the design process. This is no longer computer aided design - this is becoming computer design. The pages of this book testify to the ability of today's evolutionary computer techniques in design. Flick through them and you will see designs of satellite booms, load cells, flywheels, computer networks, artistic images, sculptures, virtual creatures, house and hospital architectural plans, bridges, cranes, analogue circuits and even coffee tables. Out of all of the designs in the world, the collection you see in this book has a unique history: they were all evolved by computer, not designed by humans.
series other
last changed 2003/04/23 15:14

_id b4d2
authors Caldas, Luisa G. and Norford, Leslie K.
year 1999
title A Genetic Algorithm Tool for Design Optimization
source Media and Design Process [ACADIA ‘99 / ISBN 1-880250-08-X] Salt Lake City 29-31 October 1999, pp. 260-271
doi https://doi.org/10.52842/conf.acadia.1999.260
summary Much interest has been recently devoted to generative processes in design. Advances in computational tools for design applications, coupled with techniques from the field of artificial intelligence, have led to new possibilities in the way computers can inform and actively interact with the design process. In this paper we use the concepts of generative and goal-oriented design to propose a computer tool that can help the designer to generate and evaluate certain aspects of a solution towards an optimized behavior of the final configuration. This work focuses mostly on those aspects related to the environmental performance of the building. Genetic Algorithms are applied as a generative and search procedure to look for optimized design solutions in terms of thermal and lighting performance in a building. The Genetic Algorithm (GA) is first used to generate possible design solutions, which are then evaluated in terms of lighting and thermal behavior using a detailed thermal analysis program (DOE2.1E). The results from the simulations are subsequently used to further guide the GA search towards finding low-energy solutions to the problem under study. Solutions can be visualized using an AutoLisp routine. The specific problem addressed in this study is the placing and sizing of windows in an office building. The same method is applicable to a wide range of design problems like the choice of construction materials, design of shading elements, or sizing of lighting and mechanical systems for buildings.
series ACADIA
email
last changed 2022/06/07 07:54
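The generate-evaluate-guide loop that the Caldas and Norford abstract describes can be sketched in miniature. This is an illustrative toy, not the authors' system: their work evaluated each candidate with the DOE2.1E simulator, whereas the stand-in fitness function below simply rewards a hypothetical target total window area; the gene count, ranges and parameters are all assumptions for illustration.

```python
import random

GENES = 8          # number of windows (an assumption for illustration)
TARGET_AREA = 12.0 # hypothetical target total glazed area, m^2

def fitness(genome):
    # Stand-in for the paper's DOE2.1E thermal/lighting simulation:
    # reward genomes whose total window area is close to the target.
    return -abs(sum(genome) - TARGET_AREA)

def crossover(a, b):
    # Single-point crossover over the list of window areas.
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    # Perturb each gene with a small probability.
    return [g + random.uniform(-0.5, 0.5) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=30, generations=50, seed=0):
    random.seed(seed)
    pop = [[random.uniform(0.5, 3.0) for _ in range(GENES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # elitist truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(sum(best), 2))  # total window area, near TARGET_AREA
```

In the real system the fitness call is by far the expensive step (a full building-energy simulation), which is why the GA's ability to guide the search with relatively few evaluations matters.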

_id 37d1
authors Corona Martínez, Alfonso and Vigo, Libertad
year 1999
title Before the Digital Design Studio
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 247-252
summary This paper contains some observations which derive from our work as Studio Professors. In recent years, studios have been in a transition phase, with the progressive introduction of computers in later stages of the design process. The initiative generally belongs to students rather than to studio masters, since the former are aware that a knowledge of CAD systems will help them find work in architects' offices. It is the first few Studios that guide the student in forming a conception of what architecture is. Therefore, we have observed more attentively the way in which he establishes his first competence as a designer. We believe it is useful to clarify design training before we can integrate computers into it. The ways we all learn to design and which we transmit in the Studio were obviously created a long time ago, when Architecture became a subject taught in Schools, no longer a craft to be acquired under a master. The conception of architecture that the student forms in his mind is largely dependent on a long tradition of Beaux-Arts training which survives (under different forms) in Modern Architecture. The methods he or she acquires will become the basis of his creative design process in professional life as well. Computer programmes are designed to fit into the stages of this design process simply as time-saving tools. We are interested in finding out how they can become an active part in the creative process and how to control this integration in teaching. Therefore, our work deals mainly with the tradition of the Studio and the conditioning it produces. The next step will be to explore the possibilities and restrictions that will inevitably issue from the introduction of new media.
series SIGRADI
email
last changed 2016/03/10 09:49

_id 26e4
authors Da Rosa Sampaio, Andrea
year 1999
title Design Thinking Process and New Paradigms of Graphic Expression
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 68-73
summary It is undeniable that infotechnology has brought significant changes into architectural representation. Whether these changes have altered the design conception process or are only a matter of media is a discussion concerned with the role of graphic expression in architects' designs. Is it just a language, or a design thinking tool, fully engaged with the formal solution? Thus, the investigation of the role of representational systems in the design thinking process and the analysis of their intrinsic relationship will approach traditional methods facing the widespread use of Computer Aided Design. There are polemics about the issue: on the one hand, seductive simulations and a plethora of rendering choices available; on the other hand, impersonal expression, to name a few arguments for and against CAD use. Computers have not replaced the straight reciprocity between the acts of conceiving and drawing, between mind and image, which results in manual sketches, quite effective in embodying a design idea. Yet we have to admit that quickly manipulating complex forms such as Gehry's Guggenheim Museum would not have been feasible before the advent of CAD. We have been faced with new paradigms challenging the graphic expression of architects and urban designers. Besides the consequences of this new reality for design thinking, a crucial point to be stressed in this discussion is the possibility of achieving a balance between the cherished mind-hand intimacy and the available technological resources.
keywords Traditional Representation, Design Thinking, CAD
series SIGRADI
email
last changed 2016/03/10 09:50

_id f51a
authors Del Pup, Claudio
year 1999
title Carbon Pencil, Brush and Mouse, Three Tools in the Learning Process of New University Art Designers
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 420-425
summary This article describes the introduction of computer technologies into the fine arts environment. The use of these new tools, sharing the process of creation and interacting at the same level with older techniques, breaks the myth of technology and tries to find its rightful place among current advances. As an introduction, it explains the insertion of a "computer languages area" into the current courses of study: its implementation, present situation and future stages. An important point we have developed is the teaching methodology, to ease the transition of those who, pursuing their investigations in different areas, like fine arts, graphic arts, film or video, need the support of computers. The first steps consist in designing sample courses, which allow the measurement of results and the definition of concepts like extension, capacities and teaching hours, and, most important, a methodology to reconcile the enthusiasm of creation with the difficulties of learning a new technique: it is necessary to discover limits and to avoid easy results as a creative tool. One of the most important problems we have faced is the necessity of coordinating the process of creation with the individual pace of a plastic artist, finding the right way to integrate the whole group while minimizing desertion and loss of motivation. Two years later, we have the first results in the field of digital image investigations and assistance in form design: volume as a challenge, with solutions supported by techniques of modeling in 3D (experiences of modeling a virtual volume from a revolution profile, its particular features and the parallelism with the potter's lathe); the handling of the image as the most important element, as a work of art in itself, but also as a support in the transmission of knowledge (design of a CD as a tool for the department of embryology of the medical school, with the participation of people from the medical school, engineering school and school of fine arts); and time as a variable: movement, animation and its techniques, multimedia (design of short videos for the 150th anniversary of the Republic University). The article closes with conclusions: successes, adjustments, new areas to include, problems to solve, and ways of facing a constantly evolving technology.
series SIGRADI
email
last changed 2016/03/10 09:50

_id 161c
authors Juroszek, Steven P.
year 1999
title Access, Instruction, Application: Towards a Universal Lab
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 141-150
doi https://doi.org/10.52842/conf.ecaade.1999.141
summary In January 1998, the Montana State University School of Architecture embarked upon an initiative to successfully integrate computer technology into its design curriculum. At that time only a handful of student computers could be found in the design studio. By January 1999, over 95 students had and were using computers in their courses. The increase in computer access and use is occurring through a five-phase initiative called the Universal Lab, a school-wide commitment to the full integration of computer technology into all design studios, support courses and architectural electives. The Universal Lab uses the areas of Access, Instruction and Application as the vehicles for appropriate placement and usage of digital concepts within the curriculum. The three-pronged approach allows each instructor to integrate technology using one, two or all three areas with varying degrees of intensity. This paper presents the current status of the Universal Lab (Phase I and Phase II) and describes the effect of this program on student work, course design and faculty instruction.
keywords Design, Access, Instruction, Application, Integration
series eCAADe
email
last changed 2022/06/07 07:52

_id 24f0
authors Kram, Reed and Maeda, John
year 1999
title Transducer: 3D Audio-Visual Form-Making as Performance
source AVOCAAD Second International Conference [AVOCAAD Conference Proceedings / ISBN 90-76101-02-07] Brussels (Belgium) 8-10 April 1999, pp. 285-291
summary This paper describes Transducer, a prototype digital system for live audio-visual performance. Currently, editing sounds or crafting three-dimensional structures on a computer remains a frustratingly rigid process. Current tools for real-time audio or visual construction using computers involve obtuse controls, either heavily GUI'ed or overstylized. Transducer asks one to envision a space where the process of editing and creating on a computer becomes a dynamic performance. The content of this performance may be sufficiently complex to elicit multiple interpretations, but Transducer enforces the notion that the process of creation should itself be a fluid and transparent expression. The system allows a performer to build constructions of sampled audio and computational three-dimensional form simultaneously. Each sound clip is visualized as a "playable" cylinder of sound that can be manipulated both visually and aurally in real time. The Transducer system demonstrates a creative space with equal design detailing at both the construction and performance phases.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 4a1a
authors Laird, J.E.
year 2001
title Using a Computer Game to Develop Advanced AI
source Computer, 34 (7), July pp. 70-75
summary Although computer and video games have existed for fewer than 40 years, they are already serious business. Entertainment software, the entertainment industry's fastest growing segment, currently generates sales surpassing the film industry's gross revenues. Computer games have significantly affected personal computer sales, providing the initial application for CD-ROMs, driving advancements in graphics technology, and motivating the purchase of ever faster machines. Next-generation computer game consoles are extending this trend, with Sony and Toshiba spending $2 billion to develop the Playstation 2 and Microsoft planning to spend more than $500 million just to market its Xbox console [1]. These investments have paid off. In the past five years, the quality and complexity of computer games have advanced significantly. Computer graphics have shown the most noticeable improvement, with the number of polygons rendered in a scene increasing almost exponentially each year, significantly enhancing the games' realism. For example, the original Playstation, released in 1995, renders 300,000 polygons per second, while Sega's Dreamcast, released in 1999, renders 3 million polygons per second. The Playstation 2 sets the current standard, rendering 66 million polygons per second, while projections indicate the Xbox will render more than 100 million polygons per second. Thus, the images on today's $300 game consoles rival or surpass those available on the previous decade's $50,000 computers. The impact of these improvements is evident in the complexity and realism of the environments underlying today's games, from detailed indoor rooms and corridors to vast outdoor landscapes. These games populate the environments with both human and computer controlled characters, making them a rich laboratory for artificial intelligence research into developing intelligent and social autonomous agents.
Indeed, computer games offer a fitting subject for serious academic study, undergraduate education, and graduate student and faculty research. Creating and efficiently rendering these environments touches on every topic in a computer science curriculum. The "Teaching Game Design" sidebar describes the benefits and challenges of developing computer game design courses, an increasingly popular field of study.
series journal paper
last changed 2003/04/23 15:50

_id 9eb6
authors Peng C. and Blundell Jones, P.
year 1999
title Hypermedia Authoring and Contextual Modeling in Architecture and Urban Design: Collaboratively Reconstructing Historical Sheffield
source Media and Design Process [ACADIA ‘99 / ISBN 1-880250-08-X] Salt Lake City 29-31 October 1999, pp. 114-124
doi https://doi.org/10.52842/conf.acadia.1999.114
summary Studies of historical architecture and urban contexts in preparation for contemporary design interventions are inherently rich in information, demanding versatile and efficient methods of documentation and retrieval. We report on a developing program to establish a hypermedia authoring approach to collaborative contextual modeling in architecture and urban design. The paper begins with a description of a large-scale urban history study project in which 95 students jointly built a physical model of the city center of Sheffield as it stood in 1900, at a scale of 1:500. As work on the Sheffield urban study project continues, it appears desirable to us to adopt a digital approach to archiving the material, making it both indexable and accessible via multiple routes. In our review of digital models of cities, some interesting yet unexplored issues were identified. Given the issues and tasks elicited, we investigated hypermedia authoring in HTML and VRML as a designer-centered modeling methodology. Conceptual clarity of the methodology was considered, intending that an individual or members of design groups with reasonable computing skills could learn to operate it quickly. The methodology shows that it is practicable for a group of architecture/urban designers, rather than specialized modeling teams, to build a digital contextual databank. Contextual modeling with or without computers can be a research activity on its own. However, we intend to investigate further how hypermedia-based contextual models can be interrelated with design development and communication. We discuss three aspects that can be explored in a design education setting.
series ACADIA
email
last changed 2022/06/07 07:59

_id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an "image idea generator". I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was "out of nothing". What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an "easy to use graphical effects program", but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree.
3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e.
coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques   Figure 3 Trellis interpreted with "graphic ivy"   Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions.
Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images.
There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines.
Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks.
It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric" 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geoff. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
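The "style generator" idea in the abstract above can be made concrete with a small sketch (a hypothetical illustration, not Gliftic's actual routine): a style is a function fixed by its parameters, and each seed yields a different image in that style.

```python
import random

def make_style(palette, density):
    """Return a 'style generator': fixed character, endless variations.
    Hypothetical sketch of a Gliftic-like interpretation function."""
    def generate(seed, size=8):
        # Each seed deterministically yields one image in the style.
        rng = random.Random(seed)
        return [[rng.choice(palette) if rng.random() < density else " "
                 for _ in range(size)] for _ in range(size)]
    return generate

# One "style", many images: same palette and density, different seeds.
dots = make_style(palette="*o.", density=0.4)
img1 = dots(seed=1)
img2 = dots(seed=2)  # same style, a different image
```

An interior designer could then call the same generator with fresh seeds to produce matching patterns for carpets, wallpaper and coverings within one project.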

_id ga9908
id ga9908
authors Senagala, Mahesh
year 1999
title Artistic Process, Cybernetics of Self and the Epistemology of Digital Technology
source International Conference on Generative Art
summary From the viewpoint of Batesonian cybernetics, ‘conscious purpose’ and artistic process are distinct ends of a spectrum of the functioning of self. Artistic activities— by which I mean art, poetry, play, design, etc.— involve processes that are beneath the stratum of consciousness. By definition, consciousness is selective awareness and is linear in execution and limited in its capability to synthesize complex parameters. As Heidegger pointed out, technology is a special form of knowledge (episteme). A machine is a manifestation of such a knowledge. A machine is a result of conscious purpose and is normally task-driven to accomplish a specific purpose(s). The questions this paper raises are to do with the connections between conscious purpose, artistic process and digital technology. One of the central questions of the paper is "if artistic process requires an abandonment or relinquishment of conscious purpose at the time of the generation of the work of art, and if the artistic process is a result of vast number of ‘unconscious’ forces and impulses, then could we say that the computer would ever be able to ‘generate’ or ‘create’ a work of art?" In what capacity and what role would the computer be a part of the generative process of art? Would a computer be able to ‘generate’ and ‘know’ a work of art, which, according to Bateson, requires the abandonment of conscious purpose? The ultimate goal of the paper is to unearth and examine the potential of the computers to be a part of the generative process of what Bateson has called "total self as a cybernetic model". On another plane of discourse, Deleuze and Guattari have added a critical dimension to the discourse of cybernetics and models of human mind and the global computer networks. Their notion of ‘rhizome’ has its roots in Batesonian cybernetics and the cybernetic couplings between the ‘complex systems’ such as human mind, biological and computational systems. 
Deleuze and Guattari call systems such as the human brain and neural networks rhizomatic. Given that the computer is the first known cybernetic machine to lay claim to artificial intelligence, the aforementioned questions become even more significant. The paper will explore how, cybernetically, the computer could be ‘coupled' with ‘self' and the artistic process — the ultimate expression of the human condition. These philosophical and artistic explorations will take place through a series of generative artistic projects (see the figure below for an example) that aim at understanding the couplings and ‘ecology' of digital technology and the cybernetics of self.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 1419
authors Spitz, Rejane
year 1999
title Dirty Hands on the Keyboard: In Search of Less Aseptic Computer Graphics Teaching for Art & Design
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 13-18
summary In recent decades our society has witnessed a level of technological development that has not been matched by that of educational development. Far from the forefront in the process of social change, education has been trailing behind transformations occurring in industrial sectors, passively and sluggishly assimilating their technological innovations. Worse yet, educators have taken the technology and logic of innovations deriving predominantly from industry and attempted to transpose them directly into the classroom, without either analyzing them in terms of demands from the educational context or adjusting them to the specificities of the teaching/learning process. In the 1970s - marked by the effervescence of Educational Technology - society witnessed the extensive proliferation of audio-visual resources for use in education, yet with limited development in teaching theories and educational methods and procedures. In the 1980s, when Computers in Education emerged as a new area, the discussion focused predominantly on the issue of how the available computer technology could be used in the school, rather than tackling the question of how it could be developed in such a way as to meet the needs of the educational proposal. What, then, will the educational legacy of the 1990s be? In this article we focus on the issue from the perspective of undergraduate and graduate courses in Arts and Design. Computer Graphics slowly but surely has gained ground and consolidated as part of the Art & Design curricula in recent years, but in most cases as a subject in the curriculum that is not linked to the others. 
Computers are usually housed in special laboratories, inside and outside departments, but invariably isolated from the dust, clay, varnish, paint and other wastes, materials, and odors impregnating - and characterizing - the other labs in Arts and Design courses. In spite of this isolation, computer technology coexists with centuries-old practices and traditions in Art & Design courses. This interesting meeting of tradition and innovation has led to daring educational ideas and experiments in the Arts and Design which have had a ripple effect in other fields of knowledge. We analyze these issues focusing on the pioneering experience of the Núcleo de Arte Eletrônica – a multidisciplinary space at the Arts Department at PUC-Rio, where undergraduate and graduate students from technological and human areas meet to think, discuss, create and produce Art & Design projects, and which constitutes a locus for the oxygenation of learning and for preparing students to face the challenges of an interdisciplinary and interconnected society.
series SIGRADI
email
last changed 2016/03/10 10:01

_id 642a
authors Stacey, Michael
year 1999
title Digital Design and the Architecture of Brookes Stacey Randall
source ACADIA Quarterly, vol. 18, no. 1, pp. 1-9
doi https://doi.org/10.52842/conf.acadia.1999.001.2
summary I am an architect who has the experience of using computers. A user and not an expert in digital design, therefore what follows is a foot soldier's report from my practice over the past 10 to 11 years, including the role of computers in our approach to creating architecture. I began my working life tending IBM mainframes for the British Shoe Corporation. The two IBM mainframe computers were state of the art computer technology of the mid 1970's. There were two as one was used, and the other we needed for backup. The developments in computing in terms of size, increase in storage capacity and faster processing speed over the past 30 years, is a technological acceleration which is difficult to visualize. The IBM historian in the UK suggested "that if cars had developed in the same way they would be given away free with corn flakes". A frightening thought as our cities grind under the pressure of increased car ownership. British Shoe Corporation also had a reserve system some sixty miles away and a halon extinguishing system in case of fire - such was the capital and commercial value of the system. We carried out transitional computing for a number of European countries. The CAD was limited - pen potters drawing shoes, drawing them less well than an average A level or high school student! My interest was primarily in art and not computers; my aim to earn enough to tour Europe to see key work 'in the flesh' not just in reproduction.
series ACADIA
email
last changed 2022/06/07 07:56

_id c232
authors Trinder, Michael
year 1999
title The Computer's Role in Sketch Design: A Transparent Sketching Medium
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, pp. 227-244
summary Starting from an analysis of the current unsuitability of computers for sketching, three key requirements are identified, in particular the notion that re-drawing or over-drawing are more important than editing and tweaking. These requirements are encapsulated in the broad concept of Transparency, understood both literally and metaphorically. Two experiments in implementing aspects of Transparency are described. One subverts the Macintosh window manager to provide windows with variable transparency, so that tracing between applications becomes a practical possibility. The other implements a graphical interface that requires no on-screen palettes or sliders to control it, allowing uninterrupted concentration on the design in hand. User tests show that the tool can be learnt quickly, is engaging to use, and most importantly, has character.
keywords Sketching, Sketch Design, User-Interface, Transparency, Immersion, Computer Aided Design
series CAAD Futures
last changed 2006/11/07 07:22

_id 2355
authors Tweed, Christopher and Carabine, Brendan
year 1999
title CAAD in the Future Perfect
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 18-24
doi https://doi.org/10.52842/conf.ecaade.1999.018
summary The history of CAAD research is largely one of generic computing techniques grafted on to existing design practices. The motivation behind such research, on different occasions, has been to automate some or all of the design process, to provide design assistance, to check designs for compliance against some predefined criteria, or more recently to enable people to experience designs as realistically as possible before they are built. But these goals remain unexamined, and their fulfilment is assumed to be a self-evident benefit. In the worst cases, they are examples of barely concealed technology-push. Few researchers have stated in detail what they want computers to do for architectural design, most choosing instead to focus on what computers can do, rather than what is needed. This paper considers what we want CAAD systems to do for us. However, this will be a modest effort, a beginning, a mere sketch of possible directions for CAAD. But it should open channels for criticism and serious debate about the role of CAAD in the changing professional, social and cultural contexts of its eventual use in education and practice. The paper, therefore, is not so concerned to arrive at a single 'right' vision for future CAAD systems as concerned by the lack of any cogent vision for CAAD.
keywords History, CAAD Research, Future Trends
series eCAADe
email
more http://www.aic.salford.ac.uk/Pit/home.html
last changed 2022/06/07 07:58

_id add2
authors Won, Peng-Whai
year 1999
title The Comparison between Visual Thinking Using Computer and Conventional Media in the Concept Generation Stages of Design
source CAADRIA '99 [Proceedings of The Fourth Conference on Computer Aided Architectural Design Research in Asia / ISBN 7-5439-1233-3] Shanghai (China) 5-7 May 1999, pp. 363-372
doi https://doi.org/10.52842/conf.caadria.1999.363
summary Computer, this new kind of media, has influenced the behavior of design to some degree. Among these years, many researches have appeared for the development of computer-aided design. In recent years, such kind of computer-aided studies about the forepart of design, that is the stage of concept generation, have also started to generate. But most of these researches belonged to the kind of applied studies with the test of computer systems. On the other hand, there were many researches about the visual thinking and cognitive behavior of designers while sketching or drawing in the stage of concept generation. From the synthesis of the fore two disciplines, we can find that there existing a point of deficiency, that is the cognitive research about designers using computers as the sketching media is absent. And that is what I want to study and discuss in this research. The fundamental analytic data of this research is the visual process chronicled form the sketching of subjects, and the assistant analytic data is the verbal data from the questions that the subjects are asked after his/her sketching. These data is analyzed by three coding schema. The cognitive appearance while designers generating concepts with computers or conventional media are propounded and discussed in this research.
series CAADRIA
last changed 2022/06/07 07:57

_id avocaad_2001_17
id avocaad_2001_17
authors Ying-Hsiu Huang, Yu-Tung Liu, Cheng-Yuan Lin, Yi-Ting Cheng, Yu-Chen Chiu
year 2001
title The comparison of animation, virtual reality, and scenario scripting in design process
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary Design media is a fundamental tool, which can incubate concrete ideas from ambiguous concepts. Evolved from freehand sketches, physical models to computerized drafting, modeling (Dave, 2000), animations (Woo, et al., 1999), and virtual reality (Chiu, 1999; Klercker, 1999; Emdanat, 1999), different media are used to communicate to designers or users with different conceptual levels¡@during the design process. Extensively employed in design process, physical models help designers in managing forms and spaces more precisely and more freely (Millon, 1994; Liu, 1996).Computerized drafting, models, animations, and VR have gradually replaced conventional media, freehand sketches and physical models. Diversely used in the design process, computerized media allow designers to handle more divergent levels of space than conventional media do. The rapid emergence of computers in design process has ushered in efforts to the visual impact of this media, particularly (Rahman, 1992). He also emphasized the use of computerized media: modeling and animations. Moreover, based on Rahman's study, Bai and Liu (1998) applied a new design media¡Xvirtual reality, to the design process. In doing so, they proposed an evaluation process to examine the visual impact of this new media in the design process. That same investigation pointed towards the facilitative role of the computerized media in enhancing topical comprehension, concept realization, and development of ideas.Computer technology fosters the growth of emerging media. A new computerized media, scenario scripting (Sasada, 2000; Jozen, 2000), markedly enhances computer animations and, in doing so, positively impacts design processes. For the three latest media, i.e., computerized animation, virtual reality, and scenario scripting, the following question arises: What role does visual impact play in different design phases of these media. Moreover, what is the origin of such an impact? 
Furthermore, what are the similarities and variances in computing techniques, principles of interaction, and practical applications among these computerized media? This study investigates the similarities and variances among the computing techniques, interaction principles, and applications of the above three media. Different computerized media in the design process are also adopted to explore related phenomena by using these three media in two projects. First, a renewal planning project for the old district of Hsinchu City is inspected, in which animations and scenario scripting are used. Second, the renewal project is compared with a progressive design project for the Hsinchu Digital Museum, as designed by Peter Eisenman. Finally, similarities and variances among these computerized media are discussed. This study also examines the visual impact of these three computerized media in the design process. With computerized animation, although other designers can grasp the spatial concept in a design, users cannot fully comprehend it. On the other hand, media such as virtual reality and scenario scripting enable users to comprehend the designer's presentation more directly. Future studies should more closely examine how these three media impact the design process. This study not only provides further insight into the fundamental characteristics of the three computerized media discussed herein, but also enables designers to adopt different media in different design stages. Both designers and users can thus more fully understand design-related concepts.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id bacd
authors Abadí Abbo, Isaac
year 1999
title APPLICATION OF SPATIAL DESIGN ABILITY IN A POSTGRADUATE COURSE
source Full-scale Modeling and the Simulation of Light [Proceedings of the 7th European Full-scale Modeling Association Conference / ISBN 3-85437-167-5] Florence (Italy) 18-20 February 1999, pp. 75-82
summary Spatial Design Ability (SDA) has been defined by the author (1983) as the capacity to anticipate the effects (psychological impressions) that architectural spaces or its components produce in observers or users. This concept, which requires the evaluation of spaces by the people that uses it, was proposed as a guideline to a Masters Degree Course in Architectural Design at the Universidad Autonoma de Aguascalientes in Mexico. The theory and the exercises required for the experience needed a model that could simulate spaces in terms of all the variables involved. Full-scale modeling as has been tested in previous research, offered the most effective mean to experiment with space. A simple, primitive model was designed and built: an articulated ceiling that allows variation in height and shape, and a series of wooden panels for the walls and structure. Several exercises were carried out, mainly to experience cause -effect relationships between space and the psychological impressions they produce. Students researched into spatial taxonomy, intentional sequences of space and spatial character. Results showed that students achieved the expected anticipation of space and that full-scale modeling, even with a simple model, proved to be an effective tool for this purpose. The low cost of the model and the short time it took to be built, opens an important possibility for Institutions involved in architectural studies, both as a research and as a learning tool.
keywords Spatial Design Ability, Architectural Space, User Evaluation, Learning, Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 11:27

_id 3cde
authors Alik, B.
year 1999
title A topology construction from line drawings using a uniform plane subdivision technique
source Computer-Aided Design, Vol. 31 (5) (1999) pp. 335-348
summary The paper describes an algorithm for constructing the topology from a set of line segments or polylines. The problem appears for example at land-maps that have been drawnby general-purpose drawing packages or captured from blue-prints by digitalisation. The solution comprises two steps; in the first step inconsistencies in the input data aredetected and removed, and in the second step the topology is constructed. The algorithm for topology construction consists of two phases: determination of a concave hull,and generation of polygons. It is shown that the running-time of the presented algorithm is better than O(n2), where n is the number of input points. Because of a largenumber of geometric elements being expected, the geometric search needed at the first step of the algorithm is speeded up by an acceleration techniquea uniform planesubdivision.
keywords Computational Geometry, Topology Construction, Uniform Space Subdivision
series journal paper
email
last changed 2003/05/15 21:33
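The uniform plane subdivision the abstract relies on can be sketched as a grid of buckets (a hypothetical illustration, not Alik's implementation): each point is hashed into a square cell, so a proximity query inspects only that cell and its eight neighbours instead of all n points.

```python
from collections import defaultdict

class UniformGrid:
    """Uniform plane subdivision: hash 2D points into square cells."""
    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(list)

    def _key(self, x, y):
        # Cell coordinates; floor division handles negatives correctly.
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y):
        self.buckets[self._key(x, y)].append((x, y))

    def near(self, x, y):
        """Return points in the cell of (x, y) and its 8 neighbours."""
        cx, cy = self._key(x, y)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                out.extend(self.buckets.get((cx + dx, cy + dy), ()))
        return out

grid = UniformGrid(cell_size=1.0)
for p in [(0.2, 0.3), (0.9, 0.8), (5.0, 5.0)]:
    grid.insert(*p)
nearby = grid.near(0.5, 0.5)  # only the points in neighbouring cells
```

With points spread roughly evenly, each query touches a constant number of candidates, which is how such a structure keeps the overall geometric search below quadratic time.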
