CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design, supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 502

_id 536e
authors Bouman, Ole
year 1997
title RealSpace in QuickTimes: architecture and digitization
source Rotterdam: Nai Publishers
summary Time and space, drastically compressed by the computer, have become interchangeable. Time is compressed in that once everything has been reduced to 'bits' of information, it becomes simultaneously accessible. Space is compressed in that once everything has been reduced to 'bits' of information, it can be conveyed from A to B with the speed of light. As a result of digitization, everything is in the here and now. Before very long, the whole world will be on disk. Salvation is but a modem away. The digitization process is often seen in terms of (information) technology. That is to say, one hears a lot of talk about the digital media, about computer hardware, about the modem, mobile phone, dictaphone, remote control, buzzer, data glove and the cable or satellite links in between. Besides, our heads are spinning from the progress made in the field of software, in which multimedia applications, with their integration of text, image and sound, especially attract our attention. But digitization is not just a question of technology; it also involves a cultural reorganization. The question is not just what the cultural implications of digitization will be, but also why our culture should give rise to digitization in the first place. Culture is not simply a function of technology; the reverse is surely also true. Anyone who thinks about cultural implications is interested in the effects of the computer. And indeed, those effects are overwhelming, providing enough material for endless speculation. The digital paradigm will entail a new image of humankind and a further dilution of the notion of social perfectibility; it will create new notions of time and space, a new concept of cause and effect and of hierarchy, a different sort of public sphere, a new view of matter, and so on. In the process it will indubitably alter our environment. Offices, shopping centres, dockyards, schools, hospitals, prisons, cultural institutions, even the private domain of the home: all the familiar design types will be up for review. Fascinated, we watch how the new wave accelerates the process of social change. The most popular sport nowadays is 'surfing' - because everyone is keen to display their grasp of dirty realism. But there is another way of looking at it: under what sort of circumstances is the process of digitization actually taking place? What conditions do we provide that enable technology to exert the influence it does? This is a perspective that leaves room for individual and collective responsibility. Technology is not some inevitable process sweeping history along in a dynamic of its own. Rather, it is the result of choices we ourselves make, and these choices can be debated in a way that is rarely done at present: digitization thanks to or in spite of human culture, that is the question. In addition to the distinction between culture as the cause or the effect of digitization, there are a number of other distinctions that are accentuated by the computer. The best known and most widely reported is the generation gap. It is certainly stretching things a bit to write off everybody over the age of 35, as sometimes happens, but there is no getting around the fact that for a large group of people digitization simply does not exist. Anyone who has been in the bit business for a few years can't help noticing that mum and dad are living in a different place altogether. (But they, at least, still have a sense of place!)
In addition to this, it is gradually becoming clear that the age-old distinction between market and individual interests is still relevant in the digital era. On the one hand, the advance of cybernetics is determined by the laws of the marketplace, which this capital-intensive industry must satisfy. Increased efficiency, labour productivity and cost-effectiveness play a leading role. The consumer market is chiefly interested in what is 'marketable': info- and edutainment. On the other hand, an increasing number of people are not prepared to wait for what the market has to offer them. They set to work on their own, appropriate networks and software programs, create their own domains in cyberspace, domains that are free from the principle whereby the computer simply reproduces the old world, only faster and better. Here it is possible to create a different world, one that has never existed before. One in which the Other finds a place. The computer works out a new paradigm for these creative spirits. In all these distinctions, architecture plays a key role. Owing to its many-sidedness, it excludes nothing and no one in advance. It is faced with the prospect of historic changes, yet it has also created the preconditions for a digital culture. It is geared to the future, but has had plenty of experience with eternity. Owing to its status as the most expensive of arts, it is bound hand and foot to the laws of the marketplace. Yet it retains its capacity to provide scope for creativity and innovation, a margin of action that is free from standardization and regulation. The aim of RealSpace in QuickTimes is to show that the discipline of designing buildings, cities and landscapes is not only an exemplary illustration of the digital era but that it also provides scope for both collective and individual activity. It is not just architecture's charter that has been changed by the computer, but also its mandate. RealSpace in QuickTimes consists of an exhibition and an essay.
series other
email
last changed 2003/04/23 15:14

_id 88f9
authors Carrara, G., Novembri, G., Zorgno, A.M., Brusasco, P.L.
year 1997
title Virtual Studio of Design and Technology on Internet (I) - Educator's approach
doi https://doi.org/10.52842/conf.ecaade.1997.x.n2w
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
summary This paper presents a teaching experience involving students and professors from various universities, in Italy and abroad, which began in 1996 and is still ongoing. The Virtual Studios on the Internet (VSI) have some features in common with the Teaching Studios planned for the new programme of the faculties of Architecture in Italian universities. These are the definition of a common design theme and the participation of disciplinary teachers. The greatest difference is in the mode of collaboration, which is achieved through information and communication technologies. The chief result of this is that the various work groups in different places can work and collaborate at the same time: the computer networks provide the means to express, communicate and share the design project.
keywords CAAD, Teaching of architectural design, Shared virtual reality, Virtual design studio, Collective intelligence.
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/lvi_i&ii/zorgno.html
last changed 2022/06/07 07:50

_id cf2011_p016
id cf2011_p016
authors Merrick, Kathryn; Gu Ning
year 2011
title Supporting Collective Intelligence for Design in Virtual Worlds: A Case Study of the Lego Universe
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 637-652.
summary Virtual worlds are multi-faceted technologies. Facets of virtual worlds include graphical simulation tools, communication, design and modelling tools, artificial intelligence, network structure, persistent object-oriented infrastructure, economy, governance and user presence and interaction. Recent studies (Merrick et al., 2010) and applications (Rosenman et al., 2006; Maher et al., 2006) have shown that the combination of design, modelling and communication tools, and artificial intelligence in virtual worlds makes them suitable platforms for supporting collaborative design, including human-human collaboration and human-computer co-creativity. Virtual worlds are also coming to be recognised as a platform for collective intelligence (Levy, 1997), a form of group intelligence that emerges from collaboration and competition among large numbers of individuals. Because of the close relationship between design, communication and virtual world technologies, there appears a strong possibility of using virtual worlds to harness collective intelligence for supporting upcoming “design challenges on a much larger scale as we become an increasingly global and technological society” (Maher et al, 2010), beyond the current support for small-scale collaborative design teams. Collaborative design is relatively well studied and is characterised by small-scale, carefully structured design teams, usually comprising design professionals with a good understanding of the design task at hand. All team members are generally motivated and have the skills required to structure the shared solution space and to complete the design task. In contrast, collective design (Maher et al, 2010) is characterised by a very large number of participants ranging from professional designers to design novices, who may need to be motivated to participate, whose contributions may not be directly utilised for design purposes, and who may need to learn some or all of the skills required to complete the task. Thus the facets of virtual worlds required to support collective design differ from those required to support collaborative design. Specifically, in addition to design, communication and artificial intelligence tools, various interpretive, mapping and educational tools together with appropriate motivational and reward systems may be required to inform, teach and motivate virtual world users to contribute and direct their inputs to desired design purposes. Many of these world facets are well understood by computer game developers, as level systems, quests or plot and achievement/reward systems. This suggests the possibility of drawing on or adapting computer gaming technologies as a basis for harnessing collective intelligence in design. Existing virtual worlds that permit open-ended design – such as Second Life and There – are not specifically game worlds as they do not have extensive level, quest and reward systems in the same way as game worlds like World of Warcraft or Ultima Online. As such, while Second Life and There demonstrate emergent design, they do not have the game-specific facets that focus users towards solving specific problems required for harnessing collective intelligence. However, a new massively multiplayer virtual world is soon to be released that combines open-ended design tools with levels, quests and achievement systems. This world is called Lego Universe (www.legouniverse.com). 
This paper presents technology spaces for the facets of virtual worlds that can contribute to the support of collective intelligence in design, including design and modelling tools, communication tools, artificial intelligence, level system, motivation, governance and other related facets. We discuss how these facets support the design, communication, motivational and educational requirements of collective intelligence applications. The paper concludes with a case study of Lego Universe, with reference to the technology spaces defined above. We evaluate the potential of this or similar tools to move design beyond the individual and small-scale design teams to harness large-scale collective intelligence. We also consider the types of design tasks that might best be addressed in this manner.
keywords collective intelligence, collective design, virtual worlds, computer games
series CAAD Futures
email
last changed 2012/02/11 19:21

_id 4485
authors Kerckhove, D. de
year 1997
title Connected Intelligence
source The Arrival of the Web Society, Somerville House, Toronto
summary De Kerckhove's beat is the philosophy of emerging media. When media pundits want to know how McLuhan would interpret the societal consequences of the World Wide Web, they call de Kerckhove.
series other
last changed 2003/04/23 15:14

_id d60a
authors Casti, J.C.
year 1997
title Would-Be Worlds: How simulation is changing the frontiers of science
source John Wiley & Sons, Inc., New York.
summary Five Golden Rules is caviar for the inquiring reader. Anyone who enjoyed solving math problems in high school will be able to follow the author's explanations, even if high school was a long time ago. There is joy here in watching the unfolding of these intricate and beautiful techniques. Casti's gift is to be able to let the nonmathematical reader share in his understanding of the beauty of a good theory.-Christian Science Monitor "[Five Golden Rules] ranges into exotic fields such as game theory (which played a role in the Cuban Missile Crisis) and topology (which explains how to turn a doughnut into a coffee cup, or vice versa). If you'd like to have fun while giving your brain a first-class workout, then check this book out."-San Francisco Examiner "Unlike many popularizations, [this book] is more than a tour d'horizon: it has the power to change the way you think. Merely knowing about the existence of some of these golden rules may spark new, interesting-maybe even revolutionary-ideas in your mind. And what more could you ask from a book?"-New Scientist "This book has meat! It is solid fare, food for thought . . . makes math less forbidding, and much more interesting."-Ben Bova, The Hartford Courant "This book turns math into beauty."-Colorado Daily "John Casti is one of the great science writers of the 1990s."-San Francisco Examiner In the ever-changing world of science, new instruments often lead to momentous discoveries that dramatically transform our understanding. Today, with the aid of a bold new instrument, scientists are embarking on a scientific revolution as profound as that inspired by Galileo's telescope. Out of the bits and bytes of computer memory, researchers are fashioning silicon surrogates of the real world-elaborate "artificial worlds"-that allow them to perform experiments that are too impractical, too costly, or, in some cases, too dangerous to do "in the flesh." From simulated tests of new drugs to models of the birth of planetary systems and galaxies to computerized petri dishes growing digital life forms, these laboratories of the future are the essential tools of a controversial new scientific method. This new method is founded not on direct observation and experiment but on the mapping of the universe from real space into cyberspace. There is a whole new science happening here-the science of simulation. The most exciting territory being mapped by artificial worlds is the exotic new frontier of "complex, adaptive systems." These systems involve living "agents" that continuously change their behavior in ways that make prediction and measurement by the old rules of science impossible-from environmental ecosystems to the system of a marketplace economy. Their exploration represents the horizon for discovery in the twenty-first century, and simulated worlds are charting the course. In Would-Be Worlds, acclaimed author John Casti takes readers on a fascinating excursion through a number of remarkable silicon microworlds and shows us how they are being used to formulate important new theories and to solve a host of practical problems. We visit Tierra, a "computerized terrarium" in which artificial life forms known as biomorphs grow and mutate, revealing new insights into natural selection and evolution. We play a game of Balance of Power, a simulation of the complex forces shaping geopolitics. And we take a drive through TRANSIMS, a model of the city of Albuquerque, New Mexico, to discover the root causes of events like traffic jams and accidents. 
Along the way, Casti probes the answers to a host of profound questions these "would-be worlds" raise about the new science of simulation. If we can create worlds inside our computers at will, how real can we say they are? Will they unlock the most intractable secrets of our universe? Or will they reveal instead only the laws of an alternate reality? How "real" do these models need to be? And how real can they be? The answers to these questions are likely to change the face of scientific research forever.
series other
last changed 2003/04/23 15:14

_id 03d0
authors Neiman, Bennett and Bermudez, Julio
year 1997
title Between Digital & Analog Civilizations: The Spatial Manipulation Media Workshop
doi https://doi.org/10.52842/conf.acadia.1997.131
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinatti, Ohio (USA) 3-5 October 1997, pp. 131-137
summary As the power shift from material culture to media culture accelerates, architecture finds itself in the midst of a clash between centuries-old analog design methods (such as tracing paper, vellum, graphite, ink, chipboard, clay, balsa wood, plastic, metal, etc.) and the new digital systems of production (such as scanning, video capture, image manipulation, visualization, solid modeling, computer aided drafting, animation, rendering, etc.). Moving forward requires a realization that a material interpretation of architecture proves limiting at a time when information and media environments are the major drivers of culture. It means to pro-actively incorporate the emerging digital world into our traditional analog work. It means to change.

This paper presents the results of an intense design workshop that looks, probes, and builds at the very interface that is provoking the cultural and professional shifts. Media space is presented and used as an interpretive playground for design experimentation in which the poetics of representation (and not its technicalities) are the driving force to generate architectural ideas. The work discussed was originally developed as a starting exercise for a digital design course. The exercise was later conducted as a workshop at two schools of architecture by different faculty working in collaboration with its inventor.

The workshop is an effective sketch problem that gives students an immediate start into a non-traditional, hands-on, and integrated use of contemporary media in the design process. In doing so, it establishes a procedural foundation for a design studio dealing with digital media.

series ACADIA
email
last changed 2022/06/07 07:58

_id a732
authors Wenz, Florian
year 1997
title Babylon S M L XL - The Missing Language of Cyberspace
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 749-756
summary We first discuss the future role of the CITY as a main generator of cultural fiction and suggest a superimposition of the PHYSICAL city and the DIGITAL city. We then draw parallels between the original intentions behind the World Wide Web and Hyper Text Markup Language and its expected follow up CYBERSPACE and Virtual Reality Markup Language. The development of three-dimensional SEMANTIC CODES for interactive environments is identified as one main task of the future. Within this framework, Babylon S M L XL, a series of research experiments conducted at the Architectural Space Laboratory at the professorship is investigating concepts and methods. The images display some scenes from this work in chronological order, while the captions provide content descriptions and META CODE abstractions.
series CAAD Futures
email
more http://caad.arch.ethz.ch/~wenz/babylon
last changed 1999/04/06 09:19

_id 23ea
authors Seebohm, Thomas and Wallace, William
year 1997
title Rule - Based Representation Of Design In Architectural Practice
doi https://doi.org/10.52842/conf.acadia.1997.251
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinatti, Ohio (USA) 3-5 October 1997, pp. 251-264
summary It is suggested that expert systems storing the design knowledge of particular offices in terms of stylistic and construction practice provide a means to take considerably more advantage of information technology than currently. The form of the knowledge stored by such expert systems is a building representation in the form of rules stating how components are placed in three-dimensional space relative to each other. By describing how Frank Lloyd Wright designed his Usonian houses it is demonstrated that the proposed approach is very much in the spirit of distinguished architectural practice. To illustrate this idea, a system for assembling three-dimensional architectural details is presented with particular emphasis on the nature of the rules and the form of the building components created by the rules to assemble typical details. The nature of the rules, which are a three-dimensional adaptation of Stiny's shape grammars, is described. In particular, it is shown how the rules themselves are structured into different classes, what the nature of these classes is and how specific rules can be obtained from more general rules. The rules embody a firm's collective design experience in detailing. As a conclusion, an overview is given of architectural practice using rule-based representations.
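As an editorial illustration of the kind of rule described above (a rule that places one component in three-dimensional space relative to another), the following Python sketch uses invented class names and an invented window-sill rule; it is not the authors' system, and the single fixed-offset rule form is an assumption.
```python
# Hypothetical sketch, not the authors' code: a three-dimensional placement rule in the
# spirit of a shape-grammar rule, stating where one component goes relative to another.
# Class names, the fixed-offset rule form and the window-sill example are invented.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    origin: tuple  # (x, y, z) insertion point in model space, millimetres

@dataclass
class PlacementRule:
    """IF a component of target_type exists, THEN place new_type at a fixed offset from it."""
    target_type: str
    new_type: str
    offset: tuple  # (dx, dy, dz) relative to the target's origin

    def apply(self, model):
        added = []
        for comp in model:
            if comp.name == self.target_type:
                x, y, z = comp.origin
                dx, dy, dz = self.offset
                added.append(Component(self.new_type, (x + dx, y + dy, z + dz)))
        return model + added

# Fictitious detailing rule: hang a sill 50 mm below every window frame in the model.
model = [Component("window_frame", (0.0, 0.0, 900.0)), Component("window_frame", (3600.0, 0.0, 900.0))]
sill_rule = PlacementRule("window_frame", "sill", (0.0, 0.0, -50.0))
model = sill_rule.apply(model)
print([(c.name, c.origin) for c in model])
```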

series ACADIA
email
last changed 2022/06/07 07:56

_id 7ebf
authors Clark, G. and Mehta, P.
year 1997
title Artificial intelligence and networking in integrated building management systems
source Automation in Construction 6 (5-6) (1997) pp. 481-498
summary In recent years the emphasis has moved towards integrating all of a building's systems via centralised building management systems (BMS). To provide a more intelligent approach to facility management, safety and energy control in integrated building management systems (IBMS), this paper proposes a methodology for integrating the data within a BMS via a single multi-media networking technology and providing the BMS with artificial intelligence (AI) through the use of knowledge-based systems (KBS) technology. By means of artificial intelligence, the system is capable of assessing, diagnosing and suggesting the best solution. This paper outlines how AI techniques can enhance the control of HVAC systems for occupant comfort and efficient running costs based on occupancy prediction. Load control and load balancing are also investigated. Instead of just using pre-programmed load priorities, this work has investigated the use of a dynamic system of priorities based on many factors such as area usage, occupancy, time of day and real-time environmental conditions. This control strategy, which is based on a set of rules running on the central control system, makes use of information gathered from outstations throughout the building and communicated via the building's data-bus.
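The dynamic, factor-based load priorities described above can be pictured with a small sketch. The following Python fragment is illustrative only: the factors, weights and load data are invented, not taken from the paper.
```python
# Hypothetical sketch of dynamic load prioritisation: each load's priority is recomputed
# from factors such as area usage, occupancy, time of day and current conditions, and the
# lowest-priority loads are shed first. All weights and load data are invented.
def dynamic_priority(load, occupancy, hour, outdoor_temp_c):
    """Return a score; higher means the load should be kept running longer."""
    score = load["base_priority"]
    score += 3.0 * occupancy.get(load["zone"], 0.0)        # occupied zones matter more
    if 8 <= hour <= 18:
        score += 1.0                                        # working hours
    if load["type"] == "hvac" and abs(outdoor_temp_c - 21) > 8:
        score += 2.0                                        # harsh weather: protect comfort
    return score

def shed_loads(loads, kw_to_shed, occupancy, hour, outdoor_temp_c):
    """Switch off the lowest-priority loads until the requested demand reduction is met."""
    ranked = sorted(loads, key=lambda l: dynamic_priority(l, occupancy, hour, outdoor_temp_c))
    shed, saved = [], 0.0
    for load in ranked:
        if saved >= kw_to_shed:
            break
        shed.append(load["name"])
        saved += load["kw"]
    return shed

loads = [
    {"name": "lobby lighting", "zone": "lobby", "type": "lighting", "kw": 4.0, "base_priority": 1.0},
    {"name": "office AHU", "zone": "office", "type": "hvac", "kw": 12.0, "base_priority": 2.0},
    {"name": "car park fans", "zone": "garage", "type": "hvac", "kw": 6.0, "base_priority": 0.5},
]
print(shed_loads(loads, kw_to_shed=8.0, occupancy={"office": 0.9, "lobby": 0.2}, hour=14, outdoor_temp_c=2.0))
```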
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 2483
authors Gero, J.S. and Kazakov, V.
year 1997
title Learning and reusing information in space layout problems using genetic engineering
source Artificial Intelligence in Engineering 11(3):329-334
summary The paper describes the application of a genetic-engineering-based extension of genetic algorithms to the layout planning problem. We study the gene evolution which takes place when an algorithm of this type is running and demonstrate that in many cases it effectively leads to the partial decomposition of the layout problem by grouping some activities together and optimally placing these groups during the first stage of the computation. At a second stage it optimally places activities within these groups. We show that the algorithm finds the solution faster than standard evolutionary methods and that evolved genes represent design features that can be re-used later in a range of similar problems.
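A minimal sketch of the "genetic engineering" step described above, under the assumption that a layout is encoded as a permutation of activities along a strip of locations: evolved genes are read here as groupings of activities that keep recurring among the fitter layouts. The encoding, thresholds and toy fitness are illustrative assumptions, not the authors' algorithm.
```python
# Hypothetical sketch of the "genetic engineering" step: mine the fitter layouts for
# groups of activities that keep appearing next to each other, and package them as
# reusable evolved genes. The permutation encoding, the thresholds and the toy fitness
# are illustrative assumptions, not the authors' algorithm.
from collections import Counter

def adjacent_pairs(layout):
    """Unordered pairs of activities occupying neighbouring locations in a strip layout."""
    return {frozenset(pair) for pair in zip(layout, layout[1:])}

def evolve_genes(population, fitness, top_fraction=0.5, support=0.7):
    """Return activity groupings shared by at least `support` of the best layouts."""
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: max(1, int(len(ranked) * top_fraction))]
    counts = Counter(pair for layout in elite for pair in adjacent_pairs(layout))
    return [set(pair) for pair, n in counts.items() if n / len(elite) >= support]

# Toy problem with activities A-E: fitness rewards keeping A next to B and D next to E.
def fitness(layout):
    pairs = adjacent_pairs(layout)
    return (frozenset("AB") in pairs) + (frozenset("DE") in pairs)

population = [list("ABCDE"), list("CABED"), list("DEABC"), list("ABDEC"), list("ACEBD"), list("BDCEA")]
print(evolve_genes(population, fitness))   # recovers the groups {'A','B'} and {'D','E'}
```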
keywords Genetic Engineering, Learning
series other
email
last changed 2001/09/08 12:04

_id sigradi2007_af13
id sigradi2007_af13
authors Granero, Adriana Edith; Alicia Barrón; María Teresa Urruti
year 2007
title Transformations in the educational system, Influence of the Digital Graph [Transformaciones en el sistema educacional, influencia de la Gráfica Digital]
source SIGraDi 2007 - [Proceedings of the 11th Iberoamerican Congress of Digital Graphics] México D.F. - México 23-25 October 2007, pp. 182-186
summary The educational proposal is based on the experience accumulated during the last two semester courses, 2/2006-1/2007, and corresponds to a mixed methodology (face-to-face / via internet). Building on the theory of games (Eric Berne 1960) and on theories such as multiple intelligences (Howard Gardner 1983), emotional intelligence (Peter Salovey and John Mayer 1990, Goleman 1998), social intelligence (Goleman 2006), the triarchic theory of intelligence (Sternberg, R.J. 1985, 1997) and "the hand of the human power", it is established that the power of the voice, of the imagination, of reward, of commitment and of association produces a significant increase in productivity (Rosabeth Moss Kanter 2000), alongside the constructive processes of knowledge (the constructivist pedagogical concepts of Ormrod J.E. 2003 and Tim O'Reilly 2004).
series SIGRADI
email
last changed 2016/03/10 09:52

_id d036
authors Jang, J.S.R., Sun, C.T. and Mizutani, E.
year 1997
title Neuro-fuzzy and soft computing; a computational approach to learning and machine intelligence
source Prentice Hall, Upper Saddle River
summary Included in Prentice Hall's MATLAB Curriculum Series, this text provides a comprehensive treatment of the methodologies underlying neuro-fuzzy and soft computing. The book places equal emphasis on theoretical aspects of covered methodologies, empirical observations, and verifications of various applications in practice.
series other
last changed 2003/04/23 15:14

_id 01f7
authors Krause, Jeffrey
year 1997
title Agent Generated Architecture
doi https://doi.org/10.52842/conf.acadia.1997.063
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinatti, Ohio (USA) 3-5 October 1997, pp. 63-70
summary This paper will describe a behavior based artificial intelligence experiment in computer generated architectural design and will explain the internal representations and procedures of an agent based autonomous system. This is a departure from traditional (AI and architectural) top-down approaches, allowing hundreds of agents to work simultaneously—building, manipulating, and dismantling their environment. Individual agents work in collaboration, in disjunction or autonomously.

Architectural design is perhaps most commonly described by the architect as consisting of the ability to see the whole picture, to organize, to collect, to juggle, to manage, and to maintain multiple conflicting goals and values. Architecture by the preceding definition is hierarchical and top-down in nature. The agent based experiment in this paper presents an alternative design process, involving multiple autonomous agents acting distributively. The agents (objects) move through the design landscape, simultaneously collaborating, building, degenerating, and transforming their world.
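One way to picture the bottom-up, agent-based process the abstract describes is a toy model in which many simple agents build and dismantle cells of a shared grid according to a local rule. The grid, the rule and all parameters below are assumptions for illustration, not the system presented in the paper.
```python
# Hypothetical sketch of a bottom-up, behaviour-based generative process: many simple
# agents wander over a shared grid, building against existing fabric and dismantling
# isolated fragments. The grid size, the local rule and all parameters are invented.
import random

SIZE = 12
grid = [[0] * SIZE for _ in range(SIZE)]           # 0 = empty cell, 1 = built cell
grid[SIZE // 2][SIZE // 2] = 1                     # seed block to build against

def neighbours(x, y):
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx or dy) and 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

class Agent:
    def __init__(self):
        self.x, self.y = random.randrange(SIZE), random.randrange(SIZE)

    def step(self):
        self.x, self.y = random.choice(neighbours(self.x, self.y))   # random walk
        built_around = sum(grid[nx][ny] for nx, ny in neighbours(self.x, self.y))
        if grid[self.x][self.y] == 0 and 1 <= built_around <= 3:
            grid[self.x][self.y] = 1               # build next to existing mass
        elif grid[self.x][self.y] == 1 and built_around == 0:
            grid[self.x][self.y] = 0               # dismantle isolated fragments

agents = [Agent() for _ in range(100)]
for _ in range(200):                               # let the colony work
    for agent in agents:
        agent.step()

for row in grid:                                   # crude plan view of the result
    print("".join("#" if cell else "." for cell in row))
```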

series ACADIA
email
last changed 2022/06/07 07:51

_id ab84
authors Li, Thomas S.P. and Will, Barry F.
year 1997
title A Computer-Aided Evaluation Tool for the Visual Aspects in Architectural Design for High-Density and High- Rise Buildings
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 345-356
summary The field of view, the nature of the objects being seen, the distances between the objects and the viewer, daylighting and sunshine are some major factors affecting perceived reactions when viewing through a window. View is one major factor that leads to the satisfaction and comfort of the users inside the building enclosure. While computer technologies are being widely used in the field of architecture, designers still have to use their own intelligence, experience and preferences in judging their designs with respect to the quality of view. This paper introduces an alternative approach to the analysis of views by the use of computers. The prototype of this system and its underlying principles were first introduced at the CAADRIA 1997 conference. This paper describes the further development of this system, where emphasis has been placed on high-rise and high-density environments. Architects may find themselves facing considerable limitations in improving their designs with regard to views out of the building under these environmental conditions. This research permits an interactive real-time response to altering views as the forms and planes of the building are manipulated.
series CAAD Futures
email
last changed 2001/05/27 18:39

_id 1767
authors Loveday, D.L., Virk, G.S., Cheung, J.Y.M. and Azzi, D.
year 1997
title Intelligence in buildings: the potential of advanced modelling
source Automation in Construction 6 (5-6) (1997) pp. 447-461
summary Intelligence in buildings usually implies facilities management via building automation systems (BAS). However, present-day commercial BAS adopt a rudimentary approach to data handling, control and fault detection, and there is much scope for improvement. This paper describes a model-based technique for raising the level of sophistication at which BAS currently operate. Using stochastic multivariable identification, models are derived which describe the behaviour of air temperature and relative humidity in a full-scale office zone equipped with a dedicated heating, ventilating and air-conditioning (HVAC) plant. The models are of good quality, giving prediction accuracies of ± 0.25°C in 19.2°C and of ± 0.6% rh in 53% rh when forecasting up to 15 minutes ahead. For forecasts up to 3 days ahead, accuracies are ± 0.65°C and ± 1.25% rh, respectively. The utility of the models for facilities management is investigated. The "temperature model" was employed within a predictive on/off control strategy for the office zone, and was shown to substantially improve temperature regulation and to reduce energy consumption in comparison with conventional on/off control. Comparison of prediction accuracies for two different situations, that is, the office with and without furniture plus carpet, showed that some level of furnishing is essential during the commissioning phase if model-based control of relative humidity is contemplated. The prospects are assessed for wide-scale replication of the model-based technique, and it is shown that deterministic simulation has potential to be used as a means of initialising a model structure and hence of selecting the sensors for a BAS for any building at the design stage. It is concluded that advanced model-based methods offer significant promise for improving BAS performance, and that proving trials in full-scale everyday situations are now needed prior to commercial development and installation.
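A minimal sketch of the model-based idea described above: identify a simple discrete-time model of zone temperature from logged data, then use its forecast for predictive on/off control. The first-order ARX structure, the synthetic data and the comfort band are illustrative assumptions, not the identified models reported in the paper.
```python
# Hypothetical sketch: identify a simple discrete-time model of zone temperature from a
# logged on/off heating signal, then switch heating on before the zone drifts out of the
# comfort band. The first-order ARX structure and all numbers are illustrative only.
import numpy as np

# Identification: fit T[k+1] ~ a*T[k] + b*u[k] + c by least squares on logged data.
rng = np.random.default_rng(0)
a_true, b_true, c_true = 0.95, 0.4, 0.9            # "plant" used only to synthesise a log
u = rng.integers(0, 2, 200)                        # logged on/off heating signal
T = [18.0]
for k in range(199):
    T.append(a_true * T[k] + b_true * u[k] + c_true + rng.normal(0, 0.05))
T = np.array(T)

X = np.column_stack([T[:-1], u[:-1], np.ones(199)])
a, b, c = np.linalg.lstsq(X, T[1:], rcond=None)[0]

# Predictive on/off control: heat if the free-running forecast drops below the band.
def control(temp_now, horizon=3, setpoint=21.0, deadband=0.5):
    forecast = temp_now
    for _ in range(horizon):                       # forecast with heating off
        forecast = a * forecast + c
    return 1 if forecast < setpoint - deadband else 0

print(f"identified a={a:.3f}, b={b:.3f}, c={c:.3f}")
print("heating command at 20.8 degC:", control(20.8))
```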
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic: Figure 2 Mandala interpreted with arabesques; Figure 3 Trellis interpreted with "graphic ivy"; Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts"; the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
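The shape "genotype" described above (a closed polygon as a list of points, with a circle approximated by a 100-sided regular polygon) and one of the naive point-by-point crossovers the author found unsatisfying can be sketched as follows; the function names and the blending scheme are assumptions for illustration.
```python
# Hypothetical sketch of the shape genotype (a closed polygon as a list of points, a circle
# approximated by a 100-gon) and one naive point-by-point crossover. The author reports that
# combinations of this kind gave disappointing offspring; the code only illustrates the
# representation, not a solution to the breeding problem.
import math

def regular_polygon(sides=100, radius=1.0):
    """A circle approximated as a closed list of (x, y) vertices."""
    return [(radius * math.cos(2 * math.pi * i / sides),
             radius * math.sin(2 * math.pi * i / sides)) for i in range(sides)]

def resample(points, n):
    """Pick n vertices spread along the outline so both parents have matching gene counts."""
    return [points[int(i * len(points) / n)] for i in range(n)]

def crossover(shape_a, shape_b, weight=0.5):
    """Blend two outlines point by point; weight 0 returns shape_a, weight 1 returns shape_b."""
    n = min(len(shape_a), len(shape_b))
    a, b = resample(shape_a, n), resample(shape_b, n)
    return [((1 - weight) * ax + weight * bx, (1 - weight) * ay + weight * by)
            for (ax, ay), (bx, by) in zip(a, b)]

circle = regular_polygon()
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
print(crossover(circle, square))                   # a four-vertex blend of circle and square
```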
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id cc51
authors Schnier, T. and Gero, J.S
year 1997
title Dominant and recessive genes in evolutionary systems applied to spatial reasoning
source A. Sattar (Ed.), Advanced Topics in Artificial Intelligence: 10th Australian Joint Conference on Artificial Intelligence AI97 Proceedings, Springer, Heidelberg, pp. 127-136
summary Learning genetic representation has been shown to be a useful tool in evolutionary computation. It can reduce the time required to find solutions and it allows the search process to be biased towards more desirable solutions. Learning genetic representation involves the bottom-up creation of evolved genes from either original (basic) genes or from other evolved genes and the introduction of those into the population. The evolved genes effectively protect combinations of genes that have been found useful from being disturbed by the genetic operations (cross-over, mutation). However, this protection can rapidly lead to situations where evolved genes interlock in such a way that few or no genetic operations are possible on some genotypes. To prevent this interlocking, previous implementations only allow the creation of evolved genes from genes that are direct neighbours on the genotype and therefore form continuous blocks. In this paper it is shown that the notion of dominant and recessive genes can be used to remove this limitation. Using more than one gene at a single location makes it possible to construct genetic operations that can separate interlocking evolved genes. This allows the use of non-continuous evolved genes with only minimal violations of the protection of evolved genes from those operations. As an example, this paper shows how evolved genes with dominant and recessive genes can be used to learn features from a set of Mondrian paintings. The representation can then be used to create new designs that contain features of the examples. The Mondrian paintings can be coded as a tree, where every node represents a rectangle division, with values for direction, position, line-width and colour. The modified evolutionary operations allow the system to create non-continuous evolved genes, for example associating two divisions with thin lines, without specifying other values. Analysis of the behaviour of the system shows that about one in ten genes is a dominant/recessive gene pair. This shows that while dominant and recessive genes are important to allow the use of non-continuous evolved genes, they do not occur often enough to seriously violate the protection of evolved genes from genetic operations.
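One possible reading of the dominant/recessive idea above, sketched for illustration: each genotype location carries two values, only the dominant one is expressed, and the operators act on the recessive copy or occasionally promote it. The class names and operators are assumptions, not the authors' implementation.
```python
# Hypothetical sketch: each genotype location holds a dominant and a recessive value; only
# the dominant one is expressed, while mutation writes into (or promotes) the recessive
# copy, so protected combinations can still be varied. Names and operators are invented.
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Locus:
    dominant: str                    # the value actually expressed in the phenotype
    recessive: Optional[str] = None  # a hidden alternative the operators may modify

def express(genotype):
    """Phenotype = the dominant value at every locus."""
    return [locus.dominant for locus in genotype]

def mutate(genotype, alphabet, rate=0.3, promote=0.1):
    """Write new material into recessive slots; occasionally swap a recessive value in."""
    for locus in genotype:
        if random.random() < rate:
            locus.recessive = random.choice(alphabet)
        if locus.recessive is not None and random.random() < promote:
            locus.dominant, locus.recessive = locus.recessive, locus.dominant

random.seed(1)
# A protected ("evolved gene") combination: thin line-width with a horizontal division.
genotype = [Locus("thin"), Locus("horizontal"), Locus("blue")]
for _ in range(5):
    mutate(genotype, alphabet=["thin", "thick", "horizontal", "vertical", "red", "blue"])
print(express(genotype))
```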
keywords Evolutionary Systems, Genetic Representations
series other
email
last changed 2003/04/06 07:24

_id c14d
authors Silva, Neander
year 1997
title Artificial Intelligence and 3D Modelling Exploration: An Integrated Digital Design Studio
doi https://doi.org/10.52842/conf.ecaade.1997.x.l5p
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
summary This paper describes a CAAD teaching strategy in which some Artificial Intelligence techniques are integrated with 3D modelling exploration. The main objective is to lead the students towards "repertoire" acquisition and creative exploration of design alternatives. This strategy is based on dialogue emulation, graphic precedent libraries, and 3D modelling as a medium of design study.

The course syllabus is developed in two parts: a first stage in which the students interact with an intelligent interface that emulates a dialogue. This interface produces advice composed of either precedents or possible new solutions. Textual descriptions of precedents are coupled with graphical illustrations and textual descriptions of possible new solutions are coupled with sets of 3D components. The second and final stage of the course is based on 3D modelling, not simply as a means of presentation, but as a design study medium. The students are then encouraged to get the system’s output from the first stage of the course and explore it graphically. This is done through an environment in which modelling in 3D is straightforward allowing the focus to be placed on design exploration rather than simply on design presentation. The students go back to the first stage for further advice depending on the results achieved in the second stage. This cycle is repeated until the design solution receives a satisfactory assessment.
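The precedent-based advice stage described above can be pictured with a toy retrieval sketch: the student states requirements and the system returns the best-matching library entries. The library, keywords and scoring below are invented for illustration; this is not the course's actual interface.
```python
# Hypothetical sketch of precedent-based advice: rank library entries by keyword overlap
# with the student's stated requirements. The library contents are entirely invented.
PRECEDENTS = [
    {"name": "Courtyard house study", "keywords": {"housing", "courtyard", "daylight"},
     "model": "courtyard_house.3dm"},
    {"name": "Gallery with sawtooth roof", "keywords": {"gallery", "daylight", "long-span"},
     "model": "sawtooth_gallery.3dm"},
    {"name": "Compact urban infill block", "keywords": {"housing", "infill", "dense-site"},
     "model": "infill_block.3dm"},
]

def advise(requirements, library=PRECEDENTS, top=2):
    """Rank precedents by keyword overlap with the stated design requirements."""
    wanted = set(requirements)
    ranked = sorted(library, key=lambda p: len(p["keywords"] & wanted), reverse=True)
    return [(p["name"], p["model"]) for p in ranked[:top] if p["keywords"] & wanted]

print(advise({"housing", "daylight"}))             # courtyard house first, then the gallery
```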

keywords Education, Design Process, Interfaces, Neural Networks, 3D Modelling
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/silva/silva.htm
last changed 2022/06/07 07:50

_id 8ec9
authors Asanowicz, Alexander
year 1997
title Incompatible Pencil - Chance for Changing in Design Process
source AVOCAAD First International Conference [AVOCAAD Conference Proceedings / ISBN 90-76101-01-09] Brussels (Belgium) 10-12 April 1997, pp. 93-101
summary Existing CAAD systems limit designers' creativity by constraining them to work with prototypes provided by the system's knowledge base. Most think of computers as drafting machines and consider CAAD models merely as proposals for future buildings. But this kind of thinking (computers as simple drafting machines) seems to be a path without a future. New media demand new processes, and new processes demand new media. We have to give some thought to the impact of CAAD on the design process and to which parts of it CAAD can add new value. This paper considers two ways of using computers. The first is the creation of architectural form in the architect's mind and the visualisation of projects using renderings, animation and virtual reality. In the second, computer techniques are investigated as a medium of creation. Unlike a conventional drawing, the design object within the computer has a life of its own. In computer space, the design and the final product are one. The computer creates environments for new kinds of design activities. In fact, many dimensions of meaning in cyberspace have led to a cyberreal architecture that is sure to have dramatic consequences for the profession.
series AVOCAAD
last changed 2005/09/09 10:48

_id 76ba
authors Bermudez, Julio
year 1997
title Cyber(Inter)Sections: Looking into the Real Impact of The Virtual in the Architectural Profession
source Proceedings of the Symposium on Architectural Design Education: Intersecting Perspectives, Identities and Approaches. Minneapolis, MN: College of Architecture & Landscape Architecture, pp. 57-63
summary As both the skepticism and 'hype' surrounding cyberspace vanish under the weight of ever-increasing power, demand, and use of information, the architectural discipline must prepare for significant changes. For cyberspace is remorselessly cutting through the dearest structures, rituals, roles, and modes of production in our profession. Yet this section is not just a detached cut through the existing tissues of the discipline. Rather, it is an inter-section, as cyberspace is also transformed in the act of piercing. This phenomenon is causing major transformations in at least three areas: 1. Cyberspace is substantially altering the way we produce and communicate architectural information. The arising new working environment suggests highly hybrid and networked conditions that will push the productive and educational landscape of the discipline towards increasing levels of fluidity, exchanges, diversity and change. 2. It has been argued that cyberspace-based human and human-data interactions present us with the opportunity to foster a freer marketplace of ideologies, cultures, preferences, values, and choices. Whether or not the in-progress cyberincisions have the potential to go deep enough to cure the many illnesses afflicting the body of our discipline needs to be considered seriously. 3. Cyberspace is a new place or environment wherein new kinds of human activities demand unprecedented types of architectural services. Rather than being a passing fashion, these new architectural requirements are destined to grow exponentially. We need to consider the new modes of practice being created by cyberspace and the education required to prepare for them. This paper looks at these three intersecting territories, showing that it is academia and not practice that is leading the profession in the incorporation of virtuality into architecture. Rafael Moneo's words come to mind. [2]
series other
email
last changed 2003/11/21 15:16
