CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 749

_id 2f0b
authors Kurzweil, R.
year 2000
title The Age of Spiritual Machines: When Computers Exceed Human Intelligence
source Penguin Books, London
summary How much do we humans enjoy our current status as the most intelligent beings on earth? Enough to try to stop our own inventions from surpassing us in smarts? If so, we'd better pull the plug right now, because if Ray Kurzweil is right, we've only got until about 2020 before computers outpace the human brain in computational power. Kurzweil, artificial intelligence expert and author of The Age of Intelligent Machines, shows that technological evolution moves at an exponential pace. Further, he asserts, in a sort of swirling postulate, time speeds up as order increases, and vice versa. He calls this the "Law of Time and Chaos," and it means that although entropy is slowing the stream of time down for the universe overall, and thus vastly increasing the amount of time between major events, in the eddy of technological evolution the exact opposite is happening, and events will soon be coming faster and more furiously. This means that we'd better figure out how to deal with conscious machines as soon as possible--they'll soon not only be able to beat us at chess, they'll likely demand civil rights, and they may at last realize the very human dream of immortality. The Age of Spiritual Machines is compelling and accessible, and not necessarily best read from front to back--it's less heavily historical if you jump around (Kurzweil encourages this). Much of the content of the book lays the groundwork to justify Kurzweil's timeline, providing an engaging primer on the philosophical and technological ideas behind the study of consciousness. Instead of being a gee-whiz futurist manifesto, Spiritual Machines reads like a history of the future, without too much science fiction dystopianism. Instead, Kurzweil shows us the logical outgrowths of current trends, with all their attendant possibilities. This is the book we'll turn to when our computers
series other
last changed 2003/04/23 15:14

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Easy of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had decribed above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred-forms would bare some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. 
I decided to represent shapes simply as closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometres along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes, and many of them, applied over several "generations", simply resulted in amorphous blobs with no distinct family characteristics. Or rather, maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons. (Figure 1: Mandala bred with an array of regular polygons.) I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. (Figure 2: Mandala interpreted with arabesques. Figure 3: Trellis interpreted with "graphic ivy". Figure 4: Regular dots interpreted as "sparks".) 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as: 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating Web graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: three possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in the future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his Web site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise its users with the images it makes, but, currently, it is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan Second International Conference on Generative Art.
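The point-list "genes" described in the summary above lend themselves to a short illustration. The sketch below is not Ransen's code; it is a minimal, hypothetical Python example that assumes the representation he describes (a closed shape as a list of coordinates) and uses point-by-point interpolation as just one of the possible ways of combining two parents. All function names are invented for illustration.

    import math

    def regular_polygon(n_sides, radius=1.0):
        """Return a closed shape as a list of (x, y) points.
        A 'circle' is approximated by a regular polygon with many sides."""
        return [(radius * math.cos(2 * math.pi * i / n_sides),
                 radius * math.sin(2 * math.pi * i / n_sides))
                for i in range(n_sides)]

    def resample(points, n):
        """Crudely resample a point list to n points so that two parents
        with different 'gene' lengths can be paired up."""
        return [points[int(i * len(points) / n)] for i in range(n)]

    def crossover(parent_a, parent_b, weight=0.5):
        """Breed two shapes by interpolating corresponding points.
        weight=0.0 returns parent_a, weight=1.0 returns parent_b."""
        n = max(len(parent_a), len(parent_b))
        a, b = resample(parent_a, n), resample(parent_b, n)
        return [((1 - weight) * ax + weight * bx,
                 (1 - weight) * ay + weight * by)
                for (ax, ay), (bx, by) in zip(a, b)]

    if __name__ == "__main__":
        circle = regular_polygon(100)            # "circle" as a 100-sided polygon
        hexagon = regular_polygon(6, radius=1.5)
        child = crossover(circle, hexagon, weight=0.5)
        print(len(child), "points in the bred shape")

As the summary notes, simple coordinate blends of this kind tend to drift towards amorphous blobs after a few generations, which is one reason the breeding model was set aside in favour of the form / color scheme / interpretation components.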
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 1bb0
authors Russell, S. and Norvig, P.
year 1995
title Artificial Intelligence: A Modern Approach
source Prentice Hall, Englewood Cliffs, NJ
summary Humankind has given itself the scientific name homo sapiens--man the wise--because our mental capacities are so important to our everyday lives and our sense of self. The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. But unlike philosophy and psychology, which are also concerned with AI strives to build intelligent entities as well as understand them. Another reason to study AI is that these constructed intelligent entities are interesting and useful in their own right. AI has produced many significant and impressive products even at this early stage in its development. Although no one can predict the future in detail, it is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization. AI addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain{brain}, whether biological or electronic, to perceive, understand, predict, and manipulate a world far larger and more complicated than itself? How do we go about making something with those properties? These are hard questions, but unlike the search for faster-than-light travel or an antigravity device, the researcher in AI has solid evidence that the quest is possible. All the researcher has to do is look in the mirror to see an example of an intelligent system. AI is one of the newest disciplines. It was formally initiated in 1956, when the name was coined, although at that point work had been under way for about five years. Along with modern genetics, it is regularly cited as the ``field I would most like to be in'' by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study before one can contribute new ideas. AI, on the other hand, still has openings for a full-time Einstein. The study of intelligence is also one of the oldest disciplines. For over 2000 years, philosophers have tried to understand how seeing, learning, remembering, and reasoning could, or should, be done. The advent of usable computers in the early 1950s turned the learned but armchair speculation concerning these mental faculties into a real experimental and theoretical discipline. Many felt that the new ``Electronic Super-Brains'' had unlimited potential for intelligence. ``Faster Than Einstein'' was a typical headline. But as well as providing a vehicle for creating artificially intelligent entities, the computer provides a tool for testing theories of intelligence, and many theories failed to withstand the test--a case of ``out of the armchair, into the fire.'' AI has turned out to be more difficult than many at first imagined, and modern ideas are much richer, more subtle, and more interesting as a result. AI currently encompasses a huge variety of subfields, from general-purpose areas such as perception and logical reasoning, to specific tasks such as playing chess, proving mathematical theorems, writing poetry{poetry}, and diagnosing diseases. Often, scientists in other fields move gradually into artificial intelligence, where they find the tools and vocabulary to systematize and automate the intellectual tasks on which they have been working all their lives. Similarly, workers in AI can choose to apply their methods to any area of human intellectual endeavor. 
In this sense, it is truly a universal field.
series other
last changed 2003/04/23 15:14

_id ga0013
id ga0013
authors Annunziato, Mauro and Pierucci, Piero
year 2000
title Artificial Worlds, Virtual Generations
source International Conference on Generative Art
summary The progress in the scientific understanding/simulation of the evolution mechanisms and the first technological realizations (artificial life environments, robots, intelligent toys, self reproducing machines, agents on the web) are creating the base of a new age: the coming of the artificial beings and artificial societies. Although this aspect could seems a technological conquest, by our point of view it represent the foundation of a new step in the human evolution. The anticipation of this change is the development of a new cultural paradigm inherited from the theories of evolution and complexity: a new way to think to the culture, aesthetics and intelligence seen as emergent self-organizing qualities of a collectivity evolved along the time through genetic and language evolution. For these reasons artificial life is going to be an anticipatory and incredibly creative area for the artistic expression and imagination. In this paper we try to correlate some elements of the present research in the field of artificial life, art and technological grow up in order to trace a path of development for the creation of digital worlds where the artificial beings are able to evolve own culture, language and aesthetics and they are able to interact con the human people.Finally we report our experience in the realization of an interactive audio-visual art installation based on two connected virtual worlds realized with artificial life environments. In these worlds,the digital individuals can interact, reproduce and evolve through the mechanisms of genetic mutations. The real people can interact with the artificial individuals creating an hybrid ecosystem and generating emergent shapes, colors, sound architectures and metaphors for imaginary societies, virtual reflections of the real worlds.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id sigradi2006_e183a
id sigradi2006_e183a
authors Costa Couceiro, Mauro
year 2006
title La Arquitectura como Extensión Fenotípica Humana - Un Acercamiento Basado en Análisis Computacionales [Architecture as human phenotypic extension – An approach based on computational explorations]
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 56-60
summary The study describes some of the aspects tackled within a current Ph.D. research where architectural applications of constructive, structural and organization processes existing in biological systems are considered. The present information processing capacity of computers and the specific software development have allowed creating a bridge between two holistic nature disciplines: architecture and biology. The crossover between those disciplines entails a methodological paradigm change towards a new one based on the dynamical aspects of forms and compositions. Recent studies about artificial-natural intelligence (Hawkins, 2004) and developmental-evolutionary biology (Maturana, 2004) have added fundamental knowledge about the role of the analogy in the creative process and the relationship between forms and functions. The dimensions and restrictions of the Evo-Devo concepts are analyzed, developed and tested by software that combines parametric geometries, L-systems (Lindenmayer, 1990), shape-grammars (Stiny and Gips, 1971) and evolutionary algorithms (Holland, 1975) as a way of testing new architectural solutions within computable environments. It is pondered Lamarck´s (1744-1829) and Weismann (1834-1914) theoretical approaches to evolution where can be found significant opposing views. Lamarck´s theory assumes that an individual effort towards a specific evolutionary goal can cause change to descendents. On the other hand, Weismann defended that the germ cells are not affected by anything the body learns or any ability it acquires during its life, and cannot pass this information on to the next generation; this is called the Weismann barrier. Lamarck’s widely rejected theory has recently found a new place in artificial and natural intelligence researches as a valid explanation to some aspects of the human knowledge evolution phenomena, that is, the deliberate change of paradigms in the intentional research of solutions. As well as the analogy between genetics and architecture (Estévez and Shu, 2000) is useful in order to understand and program emergent complexity phenomena (Hopfield, 1982) for architectural solutions, also the consideration of architecture as a product of a human extended phenotype can help us to understand better its cultural dimension.
keywords evolutionary computation; genetic architectures; artificial/natural intelligence
series SIGRADI
email
last changed 2016/03/10 09:49

_id fa1b
authors Haapasalo, H.
year 2000
title Creative computer aided architectural design An internal approach to the design process
source University of Oulu (Finland)
summary This survey can be seen as quite multidisciplinary research. The basis for this study has been inapplicability of different CAD user interfaces in architectural design. The objective of this research is to improve architectural design from the creative problem-solving viewpoint, where the main goal is to intensify architectural design by using information technology. The research is linked to theory of methods, where an internal approach to design process means studying the actions and thinking of architects in the design process. The research approach has been inspired by hermeneutics. The human thinking process is divided into subconscious and conscious thinking. The subconscious plays a crucial role in creative work. The opposite of creative work is systematic work, which attempts to find solutions by means of logical inference. Both creative and systematic problem solving have had periods of predominance in the history of Finnish architecture. The perceptions in the present study indicate that neither method alone can produce optimal results. Logic is one of the tools of creativity, since the analysis and implementation of creative solutions require logical thinking. The creative process cannot be controlled directly, but by creating favourable work conditions for creativity, it can be enhanced. Present user interfaces can make draughting and the creation of alternatives quicker and more effective in the final stages of designing. Only two thirds of the architects use computers in working design, even the CAD system is being acquired in greater number of offices. User interfaces are at present inflexible in sketching. Draughting and sketching are the basic methods of creative work for architects. When working with the mouse, keyboard and screen the natural communication channel is impaired, since there is only a weak connection between the hand and the line being drawn on the screen. There is no direct correspondence between hand movements and the lines that appear on the screen, and the important items cannot be emphasized by, for example, pressing the pencil more heavily than normally. In traditional sketching the pen is a natural extension of the hand, as sketching can sometimes be controlled entirely by the unconscious. Conscious efforts in using the computer shift the attention away from the actual design process. However, some architects have reached a sufficiently high level of skill in the use of computer applications in order to be able to use them effectively in designing without any harmful effect on the creative process. There are several possibilities in developing CAD systems aimed at architectural design, but the practical creative design process has developed during a long period of time, in which case changing it in a short period of time would be very difficult. Although CAD has had, and will have, some evolutionary influences on the design process of architects as an entity, the future CAD user interface should adopt its features from the architect's practical and creative design process, and not vice versa.
keywords Creativity, Systematicism, Sketching
series thesis:PhD
email
more http://herkules.oulu.fi/isbn9514257545/
last changed 2003/02/12 22:37

_id ga0008
id ga0008
authors Koutamanis, Alexander
year 2000
title Redirecting design generation in architecture
source International Conference on Generative Art
summary Design generation has been the traditional culmination of computational design theory in architecture. Motivated either by programmatic and functional complexity (as in space allocation) or by the elegance and power of representational analyses (shape grammars, rectangular arrangements), research has produced generative systems capable of producing new designs that satisfied certain conditions or of reproducing exhaustively entire classes (such as all possible Palladian villas), comprising known and plausible new designs. Most generative systems aimed at a complete spatial design (detailing being an unpopular subject), with minimal if any intervention by the human user / designer. The reason for doing so was either to give a demonstration of the elegance, power and completeness of a system or simply that the replacement of the designer with the computer was the fundamental purpose of the system. In other words, the problem was deemed either already resolved by the generative system or too complex for the human designer. The ongoing democratization of the computer stimulates reconsideration of the principles underlying existing design generation in architecture. While the domain analysis upon which most systems are based is insightful and interesting, jumping to a generative conclusion was almost always based on a very sketchy understanding of human creativity and of the computer's role in designing and creativity. Our current perception of such matters suggests a different approach, based on the augmentation of intuitive creative capabilities with computational extensions. The paper proposes that architectural generative design systems can be redirected towards design exploration, including the development of alternatives and variations. Human designers are known to follow inconsistent strategies when confronted with conflicts in their designs. These strategies are not made more consistent by the emerging forms of design analysis. The use of analytical means such as simulation, couple to the necessity of considering a rapidly growing number of aspects, means that the designer is confronted with huge amounts of information that have to be processed and integrated in the design. Generative design exploration that can combine the analysis results in directed and responsive redesigning seems an effective method for the early stages of the design process, as well as for partial (local) problems in later stages. The transformation of generative systems into feedback support and background assistance for the human designer presupposes re-orientation of design generation with respect to the issues of local intelligence and autonomy. Design generation has made extensive use of local intelligence but has always kept it subservient to global schemes that tended to be holistic, rigid or deterministic. The acceptance of local conditions as largely independent structures (local coordinating devices) affords a more flexible attitude that permits not only the emergence of internal conflicts but also the resolution of such conflicts in a transparent manner. The resulting autonomy of local coordinating devices can be expanded to practically all aspects and abstraction levels. 
The ability to have intelligent behaviour built into components of the design representation, as well as into the spatial and building elements they signify, means that we can create the new, sharper tools required by the complexity resulting from the interpretation of the built environment as a dynamic configuration of co-operating yet autonomous parts that have to be considered independently and in conjunction with each other. P.S. The content of the paper will be illustrated by a couple of computer programs that demonstrate the principles of local intelligence and autonomy in redesigning. It is possible that these programs could be presented as independent interactive exhibits, but it all depends upon the time we can make free for the development of self-sufficient, self-running demonstrations until December.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id ga0004
id ga0004
authors Lund, Andreas
year 2000
title Evolving the Shape of Things to Come - A Comparison of Interactive Evolution and Direct Manipulation for Creative Tasks
source International Conference on Generative Art
summary This paper is concerned with differences between direct manipulation and interactive evolutionary design as two fundamentally different interaction styles for creative tasks. Its main contribution to the field of generative design is the treatment of interactive evolutionary design as a general interaction style that can be used to support users in creative tasks. Direct manipulation interfaces, a term coined by Ben Shneiderman in the mid-seventies, are the kind of interface that is characteristic of most modern personal computer application user interfaces. Typically, direct manipulation interfaces incorporate a model of a context (such as a desktop environment) supposedly familiar to users. Rather than giving textual commands (i.e. "remove file.txt", "copy file1.txt file2.txt") to an imagined intermediary between the user and the computer, the user acts directly on the objects of interest to complete a task. Undoubtedly, direct manipulation has played an important role in making computers accessible to non-computer experts. Less certain are the reasons why direct manipulation interfaces are so successful. It has been suggested that this kind of interaction style caters for a sense of directness, control and engagement in the interaction with the computer. Also, the possibilities of incremental action with continuous feedback are believed to be an important factor of the attractiveness of direct manipulation. However, direct manipulation is also associated with a number of problems that make it a less than ideal interaction style in some situations. Recently, new interaction paradigms have emerged that address the shortcomings of direct manipulation in various ways. One example is so-called software agents that, quite the contrary to direct manipulation, act on behalf of the user and alleviate the user from some of the attention and cognitive load traditionally involved in the interaction with large quantities of information. However, this relief comes at the cost of lost user control and requires the user to put trust into a pseudo-autonomous piece of software. Another emerging style of human-computer interaction of special interest for creative tasks is that of interactive evolutionary design (sometimes referred to as aesthetic selection). Interactive evolutionary design is inspired by notions from biological evolution and may be described as a way of exploring a large – potentially infinite – space of possible design configurations based on the judgement of the user. Rather than, as is the case with direct manipulation, directly influencing the features of an object, the user influences the design by means of expressing her judgement of design examples. Variations of interactive evolutionary design have been employed to support design and creation of a variety of objects. Examples of such objects include artistic images, web advertising banners and facial expressions. In order to make an empirical investigation possible, two functional prototypes have been designed and implemented. Both prototypes are targeted at typeface design. The first prototype allows a user to directly manipulate a set of predefined attributes that govern the design of a typeface. The second prototype allows a user to iteratively influence the design of a typeface by means of expressing her judgement of typeface examples. Initially, these examples are randomly generated but will, during the course of interaction, converge upon design configurations that reflect the user’s expressed subjective judgement. 
In the evaluation of the prototypes, I am specifically interested in users’ sense of control, convergence and surprise. Is it possible to maintain a sense of control and convergence without sacrificing the possibilities of the unexpected in a design process? The empirical findings seem to suggest that direct manipulation caters for a high degree of control and convergence, but with a small amount of surprise and sense of novelty. The interactive evolutionary design prototype supported a lower degree of experienced control, but seems to provide both a sense of surprise and convergence. One plausible interpretation of this is that, on the one hand, direct manipulation is a good interaction style for realizing the user’s intentions. On the other hand, interactive evolutionary design has a potential to actually change the user’s intentions and pre-conceptions of that which is being designed and, in doing so, adds an important factor to the creative process. Based on the empirical findings, the paper discusses situations when interactive evolutionary design may be a serious contender with direct manipulation as the principal interaction style and also how a combination of both styles can be applied.
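The interactive evolutionary design loop that the paper contrasts with direct manipulation can be sketched generically. The Python below is a hypothetical illustration, not Lund's typeface prototype: a candidate design is reduced to an invented three-parameter vector, the user's judgement is modelled as a callback that returns the index of the preferred example, and the next generation is bred by mutating that choice.

    import random

    # A candidate design is reduced here to a hypothetical parameter
    # vector (e.g. weight, slant, width); real prototypes are richer.
    def random_candidate():
        return [random.uniform(0.0, 1.0) for _ in range(3)]

    def mutate(parent, amount=0.15):
        """Produce a variation of the chosen design by jittering its genes."""
        return [min(1.0, max(0.0, g + random.uniform(-amount, amount)))
                for g in parent]

    def interactive_evolution(judge, population_size=9, generations=10):
        """Generic aesthetic-selection loop: show examples, let the user
        judge them, and breed the next generation around the preference."""
        population = [random_candidate() for _ in range(population_size)]
        for _ in range(generations):
            chosen = judge(population)          # index picked by the user
            parent = population[chosen]
            population = [parent] + [mutate(parent)
                                     for _ in range(population_size - 1)]
        return population[0]

    # Stand-in "user" that always prefers the design with the largest
    # first parameter; a real system would render and display the examples.
    best = interactive_evolution(lambda pop: max(range(len(pop)),
                                                 key=lambda i: pop[i][0]))
    print(best)

The design trade-off discussed in the summary shows up directly in this loop: the user never sets parameters, only judges outcomes, which lowers the sense of control but leaves room for surprise.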
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 3fb8
authors Monedero, Javier
year 1999
title Can a Machine Design? A Disturbing Recreation of Turing's Test for the Use of Architects
doi https://doi.org/10.52842/conf.ecaade.1999.589
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 589-594
summary In 1950, fifty years ago, Alan Turing published a much-quoted paper that has given rise to a long list of articles and books. It presented, perhaps for the first time, in a clever and somehow sarcastic way, what has become one of the main big questions raised by the use of computers in human societies. The title of that paper was "Computing Machinery and Intelligence" (Mind, Vol. LIX, No. 236, October 1950) and the game proposed in it, called by Turing "the imitation game" has come to be known as "Turing's Test". The paper presented here is a rather simple adaptation of Turing's Test. It may, I hope, present in a, perhaps, not too serious a way, some central points related to the way that computers have integrated themselves in architect's, engineer's and building enterprises and, through them, in the way that architecture evolves in our times and adapts itself to modern societies.
series eCAADe
email
last changed 2022/06/07 07:58

_id ga0010
id ga0010
authors Moroni, A., Zuben, F. Von and Manzolli, J.
year 2000
title ArTbitrariness in Music
source International Conference on Generative Art
summary Evolution is now considered not only powerful enough to bring about the biological entities as complex as humans and conciousness, but also useful in simulation to create algorithms and structures of higher levels of complexity than could easily be built by design. In the context of artistic domains, the process of human-machine interaction is analyzed as a good framework to explore creativity and to produce results that could not be obtained without this interaction. When evolutionary computation and other computational intelligence methodologies are involved, every attempt to improve aesthetic judgement we denote as ArTbitrariness, and is interpreted as an interactive iterative optimization process. ArTbitrariness is also suggested as an effective way to produce art through an efficient manipulation of information and a proper use of computational creativity to increase the complexity of the results without neglecting the aesthetic aspects [Moroni et al., 2000]. Our emphasis will be in an approach to interactive music composition. The problem of computer generation of musical material has received extensive attention and a subclass of the field of algorithmic composition includes those applications which use the computer as something in between an instrument, in which a user "plays" through the application's interface, and a compositional aid, which a user experiments with in order to generate stimulating and varying musical material. This approach was adopted in Vox Populi, a hybrid made up of an instrument and a compositional environment. Differently from other systems found in genetic algorithms or evolutionary computation, in which people have to listen to and judge the musical items, Vox Populi uses the computer and the mouse as real-time music controllers, acting as a new interactive computer-based musical instrument. The interface is designed to be flexible for the user to modify the music being generated. It explores evolutionary computation in the context of algorithmic composition and provides a graphical interface that allows to modify the tonal center and the voice range, changing the evolution of the music by using the mouse[Moroni et al., 1999]. A piece of music consists of several sets of musical material manipulated and exposed to the listener, for example pitches, harmonies, rhythms, timbres, etc. They are composed of a finite number of elements and basically, the aim of a composer is to organize those elements in an esthetic way. Modeling a piece as a dynamic system implies a view in which the composer draws trajectories or orbits using the elements of each set [Manzolli, 1991]. Nonlinear iterative mappings are associated with interface controls. In the next page two examples of nonlinear iterative mappings with their resulting musical pieces are shown.The mappings may give rise to attractors, defined as geometric figures that represent the set of stationary states of a non-linear dynamic system, or simply trajectories to which the system is attracted. The relevance of this approach goes beyond music applications per se. Computer music systems that are built on the basis of a solid theory can be coherently embedded into multimedia environments. The richness and specialty of the music domain are likely to initiate new thinking and ideas, which will have an impact on areas such as knowledge representation and planning, and on the design of visual formalisms and human-computer interfaces in general. 
(Figures in the original paper depict the Vox Populi interface, showing two nonlinear iterative mappings with their resulting musical pieces.) References: [Manzolli, 1991] J. Manzolli. Harmonic Strange Attractors, CEM Bulletin, Vol. 2, No. 2, 4-7, 1991. [Moroni et al., 1999] A. Moroni, J. Manzolli, F. Von Zuben and R. Gudwin. Evolutionary Computation Applied to Algorithmic Composition, Proceedings of CEC99 - IEEE International Conference on Evolutionary Computation, Washington D.C., 807-811, 1999. [Moroni et al., 2000] A. Moroni, F. Von Zuben and J. Manzolli. ArTbitration, Proceedings of the 2000 Genetic and Evolutionary Computation Conference Workshop Program - GECCO, Las Vegas, USA, 143-145, 2000.
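The abstract does not state which nonlinear iterative mappings Vox Populi uses, so the sketch below substitutes the logistic map purely as a stand-in, together with an invented mapping from orbit values to a MIDI-style pitch window. It only illustrates the general idea of letting an iterated map, whose orbit may settle onto an attractor, drive note selection within a user-controlled tonal centre and voice range.

    def logistic_orbit(x0, r, steps):
        """Iterate the logistic map x -> r*x*(1-x), a standard example of a
        nonlinear mapping whose orbit can settle onto an attractor."""
        orbit, x = [], x0
        for _ in range(steps):
            x = r * x * (1.0 - x)
            orbit.append(x)
        return orbit

    def orbit_to_pitches(orbit, centre=60, voice_range=24):
        """Map each state in [0, 1] onto a MIDI-style pitch window centred
        on `centre` (60 = middle C), the kind of control a mouse could drive."""
        low = centre - voice_range // 2
        return [low + round(x * voice_range) for x in orbit]

    print(orbit_to_pitches(logistic_orbit(0.31, 3.8, 16)))

Changing the map parameter or the pitch window while the orbit is running is one simple way to get the kind of real-time, mouse-driven steering the summary describes.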
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 2abf
id 2abf
authors Rafi, A
year 2001
title Design creativity in emerging technologies
source In Von, H., Stocker, G. and Schopf, C. (Eds.), Takeover: Who’s doing art of tomorrow (pp. 41-54), New York: SpringerWein.
summary Human creativity works best when there are constraints – pressures to react to, to shape, to suggest. People are generally not very good at making it all up from scratch (Laurel, 1991). Emerging technology particularly virtual reality (VR) Multimedia and Internet is yet to be fully discovered as it allows unprecedented creative talent, ability, skill set, creative thinking, representation, exploration, observation and reference. In an effort to deliver interactive content, designers tend to freely borrow from different fields such as advertising, medicine, game, fine art, commerce, entertainment, edutainment, film-making and architecture (Rafi, Kamarulzaman, Fauzan and Karboulonis, 2000). As a result, content becomes a base that developers transfer the technique of conventional medium design media to the computer. What developers (e.g. artist and technologist) often miss is that to develop the emerging technology content based on the nature of the medium. In this context, the user is the one that will be the best judge to value the effectiveness of the content.

The paper will introduce the Global Information Infrastructure (GII) that is currently being developed in the Asian region and discuss its impact on Information Age society. It will further highlight the 'natural' value and characteristics of the emerging technologies, in particular Virtual Reality (VR), multimedia and the Internet, as guidance for designing effective, rich and innovative content. This paper also argues that content designers of the future must be not only both artist and technologist, but artist and technologist aware of the re-convergence of art and science and of the context in which content is being developed. Some of our explorations at the Faculty of Creative Multimedia, Multimedia University will also be demonstrated. It is hoped that this will provide evidence to guide future 'techno-creative designers'.

keywords design, creativity, content, emerging technologies
series book
type normal paper
email
last changed 2007/09/13 03:46

_id avocaad_2001_16
id avocaad_2001_16
authors Yu-Ying Chang, Yu-Tung Liu, Chien-Hui Wong
year 2001
title Some Phenomena of Spatial Characteristics of Cyberspace
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary "Space," which has long been an important concept in architecture (Bloomer & Moore, 1977; Mitchell, 1995, 1999), has attracted interest of researchers from various academic disciplines in recent years (Agnew, 1993; Benko & Strohmayer, 1996; Chang, 1999; Foucault, 1982; Gould, 1998). Researchers from disciplines such as anthropology, geography, sociology, philosophy, and linguistics regard it as the basis of the discussion of various theories in social sciences and humanities (Chen, 1999). On the other hand, since the invention of Internet, Internet users have been experiencing a new and magic "world." According to the definitions in traditional architecture theories, "space" is generated whenever people define a finite void by some physical elements (Zevi, 1985). However, although Internet is a virtual, immense, invisible and intangible world, navigating in it, we can still sense the very presence of ourselves and others in a wonderland. This sense could be testified by our naming of Internet as Cyberspace -- an exotic kind of space. Therefore, as people nowadays rely more and more on the Internet in their daily life, and as more and more architectural scholars and designers begin to invest their efforts in the design of virtual places online (e.g., Maher, 1999; Li & Maher, 2000), we cannot help but ask whether there are indeed sensible spaces in Internet. And if yes, these spaces exist in terms of what forms and created by what ways?To join the current interdisciplinary discussion on the issue of space, and to obtain new definition as well as insightful understanding of "space", this study explores the spatial phenomena in Internet. We hope that our findings would ultimately be also useful for contemporary architectural designers and scholars in their designs in the real world.As a preliminary exploration, the main objective of this study is to discover the elements involved in the creation/construction of Internet spaces and to examine the relationship between human participants and Internet spaces. In addition, this study also attempts to investigate whether participants from different academic disciplines define or experience Internet spaces in different ways, and to find what spatial elements of Internet they emphasize the most.In order to achieve a more comprehensive understanding of the spatial phenomena in Internet and to overcome the subjectivity of the members of the research team, the research design of this study was divided into two stages. At the first stage, we conducted literature review to study existing theories of space (which are based on observations and investigations of the physical world). At the second stage of this study, we recruited 8 Internet regular users to approach this topic from different point of views, and to see whether people with different academic training would define and experience Internet spaces differently.The results of this study reveal that the relationship between human participants and Internet spaces is different from that between human participants and physical spaces. In the physical world, physical elements of space must be established first; it then begins to be regarded as a place after interaction between/among human participants or interaction between human participants and the physical environment. In contrast, in Internet, a sense of place is first created through human interactions (or activities), Internet participants then begin to sense the existence of a space. 
Therefore, it seems that, among the many spatial elements of the Internet we found, "interaction/reciprocity", either between/among human participants or between human participants and the computer interface, is the most crucial element. In addition, another interesting result of this study is that verbal (linguistic) elements can provoke a sense of space to a degree higher than 2D visual representations and no less than 3D visual simulations. Nevertheless, verbal and 3D visual elements seem to work in different ways in terms of cognitive behaviors: verbal elements provoke visual imagery and other sensory perceptions by "imagining" and then excite personal experiences of space; visual elements, on the other hand, provoke and excite visual experiences of space directly by "mapping". Finally, it was found that participants with different academic training did experience and define space differently. For example, when experiencing and analyzing Internet spaces, architectural designers, the creators of the physical world, emphasize the design of circulation and orientation, while participants with linguistics training focus more on subtle language usage. Visual designers tend to analyze the graphical elements of virtual spaces based on traditional painting theories; industrial designers, on the other hand, tend to treat these spaces as industrial products, emphasizing the concept of user-centered design and the control of the computer interface. The findings of this study seem to add new information to our understanding of virtual space. It would be interesting for future studies to investigate how this information influences architectural designers in their real-world practices in this digital age. In addition, to obtain a fuller picture of Internet space, further research is needed to study the same issue by examining more Internet participants who have no formal linguistic or graphical training.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id ae61
authors Af Klercker, Jonas
year 1999
title CAAD - Integrated with the First Steps into Architecture
doi https://doi.org/10.52842/conf.ecaade.1999.266
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 266-272
summary How and when should CAAD be introduced in the curriculum of the School of Architecture? This paper begins with some arguments for starting CAAD education at the very beginning. At the School of Architecture in Lund teachers in the first year courses have tried to integrate CAAD with the introduction to architectural concepts and techniques. Traditionally the first year is divided by several subjects running courses separatly without any contact for coordination. From the academic year 96/97 the teachers of Aplied aestetics, Building Science, Architectural design and CAAD have decided to colaborate as much as possible to make the role of our different fields as clear as possible to the students. Therefore integrating CAAD was a natural step in the academic year 98/99. The computer techniques were taught one step in advance so that the students can practise their understanding of the programs in their tasks in the other subjects. The results were surprisingly good! The students have quickly learned to mix the manual and computer techniques to make expressive and interesting visual presentations of their ideas. Some students with antipaty to computers have overcome this handicap. Some interesting observations are discussed.
keywords Curriculum, First Year Studies, Integration, CAAD, Modelling
series eCAADe
email
last changed 2022/06/07 07:54

_id b0e7
authors Ahmad Rafi, M.E. and Karboulonis, P.
year 2000
title The Re-Convergence of Art and Science: A Vehicle for Creativity
doi https://doi.org/10.52842/conf.caadria.2000.491
source CAADRIA 2000 [Proceedings of the Fifth Conference on Computer Aided Architectural Design Research in Asia / ISBN 981-04-2491-4] Singapore 18-19 May 2000, pp. 491-500
summary Ever-increasing complexity in product design and the need to deliver a cost-effective solution that benefits from a dynamic approach requires the employment and adoption of innovative design methods which ensure that products are of the highest quality and meet or exceed customers' expectations. According to Bronowski (1976) science and art were originally two faces of the same human creativity. However, as civilisation advances and works became specialised, the dichotomy of science and art gradually became apparent. Hence scientists and artists were born, and began to develop work that was polar opposite. The sense of beauty itself became separated from science and was confined within the field of art. This dichotomy existed through mankind's efforts in advancing civilisation to its present state. This paper briefly examines the relationship between art and science through the ages and discusses their relatively recent re-convergence. Based on this hypothesis, this paper studies the current state of the convergence between arts and sciences and examines the current relationship between the two by considering real world applications and products. The study of such products and their successes and impact they had in the marketplace due to their designs and aesthetics rather than their advanced technology that had partially failed them appears to support this argument. This text further argues that a re-convergence between art and science is currently occurring and highlights the need for accelerating this process. It is suggested that re-convergence is a result of new technologies which are adopted by practitioners that include effective visualisation and communication of ideas and concepts. Such elements are widely found today in multimedia and Virtual Environments (VEs) where such tools offer increased power and new abilities to both scientists and designers as both venture in each other's domains. This paper highlights the need for the employment of emerging computer based real-time interactive technologies that are expected to enhance the design process through real-time prototyping and visualisation, better decision-making, higher quality communication and collaboration, lessor error and reduced design cycles. Effective employment and adoption of innovative design methods that ensure products are delivered on time, and within budget, are of the highest quality and meet customer expectations are becoming of ever increasing importance. Such tools and concepts are outlined and their roles in the industries they currently serve are identified. Case studies from differing fields are also studied. It is also suggested that Virtual Reality interfaces should be used and given access to Computer Aided Design (CAD) model information and data so that users may interrogate virtual models for additional information and functionality. Adoption and appliance of such integrated technologies over the Internet and their relevance to electronic commerce is also discussed. Finally, emerging software and hardware technologies are outlined and case studies from the architecture, electronic games, and retail industries among others are discussed, the benefits are subsequently put forward to support the argument. The requirements for adopting such technologies in financial, skills required and process management terms are also considered and outlined.
series CAADRIA
email
last changed 2022/06/07 07:54

_id 5cba
authors Anders, Peter
year 1999
title Beyond Y2k: A Look at Acadia's Present and Future
doi https://doi.org/10.52842/conf.acadia.1999.x.o3r
source ACADIA Quarterly, vol. 18, no. 1, p. 10
summary The sky may not be falling, but it sure is getting closer. Where will you when the last three zeros of our millennial odometer click into place? Computer scientists tell us that Y2K will bring the world’s computer infrastructure to its knees. Maybe, maybe not. But it is interesting that Y2K is an issue at all. Speculating on the future is simultaneously a magnifying glass for examining our technologies and a looking glass for what we become through them. "The future" is nothing new. Orwell's vision of totalitarian mass media did come true, if only as Madison Avenue rather than Big Brother. Futureboosters of the '50s were convinced that each garage would house a private airplane by the year 2000. But world citizens of the 60's and 70's feared a nuclear catastrophe that would replace the earth with a smoking crater. Others - perhaps more optimistically -predicted that computers were going to drive all our activities by the year 2000. And, in fact, theymay not be far off... The year 2000 is symbolic marker, a point of reflection and assessment. And - as this date is approaching rapidly - this may be a good time to come to grips with who we are and where we want to be.
series ACADIA
email
last changed 2022/06/07 07:49

_id 9bc4
authors Bhavnani, S.K. and John, B.E.
year 2000
title The Strategic Use of Complex Computer Systems
source Human-Computer Interaction 15 (2000), 107-137
summary Several studies show that despite experience, many users with basic command knowledge do not progress to an efficient use of complex computer applications. These studies suggest that knowledge of tasks and knowledge of tools are insufficient to lead users to become efficient. To address this problem, we argue that users also need to learn strategies in the intermediate layers of knowledge lying between tasks and tools. These strategies are (a) efficient because they exploit specific powers of computers, (b) difficult to acquire because they are suggested by neither tasks nor tools, and (c) general in nature, having wide applicability. The above characteristics are first demonstrated in the context of aggregation strategies that exploit the iterative power of computers. A cognitive analysis of a real-world task reveals that even though such aggregation strategies can have large effects on task time, errors, and the quality of the final product, they are not often used by even experienced users. We identify other strategies beyond aggregation that can be efficient and useful across computer applications and show how they were used to develop a new approach to training, with promising results. We conclude by suggesting that a systematic analysis of strategies in the intermediate layers of knowledge can lead not only to more effective ways to design training but also to more principled approaches to designing systems. These advances should lead users to make more efficient use of complex computer systems. (A brief illustrative sketch of an aggregation strategy follows this entry.)
series other
email
last changed 2003/11/21 15:16
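The aggregation strategies described in the entry above can be illustrated with a small, hypothetical sketch (not taken from the paper): instead of editing many drawing elements one at a time, a single aggregate command exploits the computer's iterative power to update them all at once. The element fields and the set_lineweight helper below are invented for illustration only.

```python
# Illustrative sketch of an aggregation strategy (hypothetical data and helper,
# not from Bhavnani & John): one aggregate command updates every matching
# element, instead of repeating the same manual edit per element.
walls = [
    {"id": 1, "layer": "WALL", "lineweight": 0.25},
    {"id": 2, "layer": "WALL", "lineweight": 0.35},
    {"id": 3, "layer": "DOOR", "lineweight": 0.18},
]

def set_lineweight(elements, layer, weight):
    """Apply one change to all elements on a given layer in a single pass."""
    for e in elements:
        if e["layer"] == layer:
            e["lineweight"] = weight
    return elements

# One aggregate command replaces many repetitive per-element edits.
set_lineweight(walls, "WALL", 0.50)
print(walls)
```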

_id 8802
authors Burry, Mark, Dawson, Tony and Woodbury, Robert
year 1999
title Learning about Architecture with the Computer, and Learning about the Computer in Architecture
doi https://doi.org/10.52842/conf.ecaade.1999.374
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 374-382
summary Most students commencing their university studies in architecture must confront and master two new modes of thought. The first, widely known as reflection-in-action, is a continuous cycle of self-criticism and creation that produces both learning and improved work. The second, which we call here design making, is a process which considers building construction as an integral part of architectural designing. Beginning students in Australia tend to do neither very well; their largely analytic secondary education leaves the majority ill-prepared for these new forms of learning and working. Computers have both complicated this situation and offered opportunities to improve it. An increasing number of entering students have significant computing skill, yet university architecture programs do little to develop such skill into sound and extensible knowledge. Computing offers new ways to engage both reflection-in-action and design making. The collaboration between two Schools in Australia described in detail here pools computer-based learning resources to provide a wider scope for the education in each institution, which we capture in the phrase: Learn to use computers in architecture (not use computers to learn architecture). The two shared learning resources are Form Making Games (Adelaide University), aimed at reflection-in-action, and The Construction Primer (Deakin University and Victoria University of Wellington), aimed at design making. Through contributing to and customising the resources themselves, students learn how designing and computing relate. This paper outlines the collaborative project in detail and locates the initiative at a time when the computer seems to have become less self-consciously assimilated within the wider architectural program.
keywords Reflection-In-Action, Design Making, Customising Computers
series eCAADe
email
last changed 2022/06/07 07:54

_id b966
authors Ceccato, Cristiano and Janssen, Patrick
year 2000
title GORBI: Autonomous Intelligent Agents Using Distributed Object-Oriented Graphics
doi https://doi.org/10.52842/conf.ecaade.2000.297
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 297-300
summary Autonomous agents represent a new form of thinking that is of primary importance in the age of the Internet and distributed networks, and provide a platform on which Turing's model of sequential instruction-executing machines and von Neumann's connectionist vision of interconnected, concurrent fine-grain processors may be reconciled. In this paper we map this emergent paradigm to design and design intelligence by illustrating examples of decentralised interacting-agent projects. (An illustrative sketch of decentralised, message-passing agents follows this entry.)
keywords Graphics, CAD, Internet, Evolutionary, Generative, Distributed, Decentralised, Object, Request, Broker, CORBA, OpenGL, Java, C++
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:55
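As a companion to the GORBI entry above, the following is a minimal, hypothetical sketch of the general idea of decentralised interacting agents: each agent keeps local state, exchanges messages with known peers, and reacts autonomously. It does not use CORBA and is not the GORBI implementation; the Agent class, its methods and the message names are invented for illustration.

```python
# Minimal, hypothetical sketch of decentralised interacting agents
# (illustration only; not the GORBI implementation, no CORBA involved).
import queue

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()   # messages waiting to be processed
        self.peers = []              # other agents this one knows about

    def send(self, peer, message):
        """Deliver a message into another agent's inbox."""
        peer.inbox.put((self.name, message))

    def step(self):
        """One autonomous step: read pending messages and react locally."""
        while not self.inbox.empty():
            sender, message = self.inbox.get()
            print(f"{self.name} received '{message}' from {sender}")
            if message == "update-geometry":
                # Invented reaction: ask every other peer to redraw.
                for p in self.peers:
                    if p.name != sender:
                        self.send(p, "redraw")

modeller, renderer, broker = Agent("modeller"), Agent("renderer"), Agent("broker")
modeller.peers = [renderer, broker]
renderer.peers = [modeller, broker]
broker.peers = [modeller, renderer]

broker.send(modeller, "update-geometry")
for agent in (modeller, renderer, broker):
    agent.step()
```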

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A digital Procedure of Building Construction: A practical project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times, when computers were not yet well developed, there was already research on representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). Even with the appearance of 2D drawings, such drawings could only express abstract visual thinking and a visually conceptualized vocabulary (Goldschmidt, 1999). Then, with the massive use of physical models in the Renaissance, the form and space of architecture were given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) observed that humans increasingly rely on other specialists, computational agents, and reference materials to augment their cognitive abilities. This discourse was verified by recent research on the conception of design and its expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation did (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architectural design has aroused a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), the Internet, and the restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most of the application of computers to construction is restricted to simulations of the building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage during the process of building construction (Madrazo, 2000; Dave, 2000) and communication (Haymaker, 2000). In architectural design, concept design was achieved through drawings and models (Mitchell, 1997), while working drawings and even shop drawings were developed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend of 3D visualization (Johnson and Clayton, 1998) and the difference between designing in the physical environment and in the virtual environment (Maher et al., 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process in the near future (just as physical models played a critical role in the early design process of the Renaissance). This research is combined with two practical building projects, following the progress of construction and using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems of the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems.
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings and some imagination. The purpose of this paper is to describe all the related elements with precision and correctness, to discuss every possibility of different thinking in the design of electro-mechanical engineering, to receive feedback from construction projects in the real world, and to compare the digital models with conventional drawings. Through the application of this research, the subtle relations between conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and standard process is proposed using conventional drawings, digital models and physical buildings. By introducing the intervention of digital media into the design process of working drawings and shop drawings, there is an opportune chance to use digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id ga0025
id ga0025
authors Chiodi, Andrea and Vernillo, Marco M.
year 2000
title Deep Architectures and Exterior Communication in Generative Art
source International Conference on Generative Art
summary Human beings formulate their thoughts through their own language. To use a sentence by Ezra Pound: "The thought hinges on word definition." Software beings formulate their thoughts through data structures: not through a specific expressive means, but directly through concepts and relations. Human beings formulate their thoughts in a context which does not require any further translation. If software beings want to be appreciated by human beings, they are forced to translate their thoughts into one of the languages the human beings are able to understand. On the contrary, when a software being communicates with another software being, this unnatural translation is not justified: communication takes place directly through data structures, made uniform by appropriate communication protocols. The prospect of Generative Art gives software beings the opportunity to create works according to their own nature. But if the result of such a creation must be expressed in a language human beings are able to comprehend, then this result is a sort of circus performance and not a free thought. Let us give software beings the dignity they deserve and therefore allow them to express themselves according to their own nature: through data structures. This work studies in depth the opportunity to separate the communication of the software 'thought' from its translation into a human language. The recent introduction of XML leads to the definition of formal languages oriented to data-structure representation. Intrinsically both data and program, XML allows, through subsequent executions and validations, the realization of descriptions typical of contextual grammars, allowing the management of high complexity. The translation from a data structure into a human language can take place later on and be oriented to different alternative kinds of expression: lexical (according to national languages), graphical, musical, plastic. The direct expression of data structures promises further communication opportunities for human beings as well. One of these is the definition of a non-national language, as free as possible from lexical ambiguities and extremely precise. Another concerns the possibility of expressing concepts usually hidden by their own representation. A Roman bridge, the adagio of Bartok's "Music for Strings, Percussion and Celesta" and Kafka's short story "In the Gallery" have something in common; a work of Generative Art, first expressed in terms of structure and then translated into an architectural, musical, or literary work, can make this shared quality explicit. (A brief illustrative sketch of this separation between data structure and human rendering follows this entry.)
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
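The separation the entry above proposes, keeping a generative work as a pure data structure and treating any human-readable rendering as a later, optional translation, can be sketched with a small hypothetical example. The XML vocabulary (work, motif, relation) and the describe function below are invented for illustration and are not drawn from the paper.

```python
# Hypothetical sketch: a generative "thought" kept as a data structure,
# serialised as XML for software-to-software exchange, and only afterwards
# translated into one human-oriented (lexical) rendering.
import xml.etree.ElementTree as ET

# The software being's "thought": concepts and relations, no natural language.
work = ET.Element("work")
motif = ET.SubElement(work, "motif", {"symmetry": "radial", "order": "8"})
ET.SubElement(motif, "relation", {"type": "repetition", "count": "8"})

# Exterior communication between software beings: plain structured data.
print(ET.tostring(work, encoding="unicode"))

# A separate, later translation step into an English description.
def describe(element):
    """Render one element of the structure as a short English phrase."""
    if element.tag == "motif":
        return (f"a {element.get('symmetry')} motif of order {element.get('order')}, "
                f"carrying {len(element)} relation(s)")
    return element.tag

print("Human rendering:", describe(motif))
```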
