CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 554

_id 48db
authors Proctor, George
year 2001
title CADD Curriculum - The Issue of Visual Acuity
source Architectural Information Management [19th eCAADe Conference Proceedings / ISBN 0-9523687-8-1] Helsinki (Finland) 29-31 August 2001, pp. 192-200
doi https://doi.org/10.52842/conf.ecaade.2001.192
summary Design educators attempt to train the eyes and minds of students to see and comprehend the world around them with the intention of preparing those students to become good designers, critical thinkers and ultimately responsible architects. Over the last eight years we have been developing the digital media curriculum of our architecture program with these fundamental values. We have built digital media use and instruction on the foundation of our program which has historically been based in physical model making. Digital modeling has gradually replaced the capacity of physical models as an analytical and thinking tool, and as a communication and presentation device. The first year of our program provides a foundation and introduction to 2d and 3d design and composition, the second year explores larger buildings and history, the third year explores building systems and structure through design studies of public buildings, the fourth year explores urbanism, theory and technology through topic studios and, during the fifth year, students complete a capstone project. Digital media and CADD have been, and are being, synchronized with the existing NAAB accredited regimen while also allowing for alternative career options for students. Given our location in the Los Angeles region, many students with a strong background in digital media have gone on to jobs in video game design and the movie industry. Clearly there is much a student of architecture must learn to attain a level of professional competency. A capacity to think visually is one of those skills and is arguably a skill that distinguishes members of the visual arts (including Architecture) from other disciplines. From a web search of information posted by the American Academy of Ophthalmology, Visual Acuity is defined as an ability to discriminate fine details when looking at something and is often measured with the Snellen Eye Chart (the 20/20 eye test). In the context of this paper visual acuity refers to a subject’s capacity to discriminate useful abstractions in a visual field for the purposes of Visual Thinking - problem solving through seeing (Arnheim, 1969; Laseau, 1980; Hoffman, 1998). The growing use of digital media and the expanding ability to assemble design ideas and images through point-and-click methods make the cultivation and development of visual skills all the more important to today’s crop of young architects. The advent of digital media also brings into question the traditional, static 2d methods used to build visual skills in a design education, instead of promoting active 3d methods for teaching, learning and developing visual skills. Interactive digital movies provide an excellent platform for promoting visual acuity, and correlating the innate mechanisms of visual perception with the abstractions and notational systems used in professional discourse. In the context of this paper, pedagogy for building visual acuity is being considered with regard to perception of the real world, for example the visual survey of an environment, a site or a street scene, and how that visual survey works in conjunction with practice.
keywords Curriculum, Seeing, Abstracting, Notation
series eCAADe
email
last changed 2022/06/07 08:00

_id 473f
authors Bartnicka, Malgorzata
year 1998
title The Influence of Light upon the Spatial Perception of Image
source Cyber-Real Design [Conference Proceedings / ISBN 83-905377-2-9] Bialystock (Poland), 23-25 April 1998, pp. 21-26
summary With regard to mental perception, light is one of the basic and strongest experiences influencing man. It is a phenomenon unchanged since the beginning of humankind, regardless of what form or shape it is transmitted in. We are so used to light that we have stopped noticing how much we owe to it. It is the basic source and condition of our visual perception. Without light, without illumination, we would not be able to see anything, as it is light that transmits the shapes, distances and colours we see. The light which we perceive is a specific sight stimulus. It constitutes only a small range of the spectrum of electromagnetic radiation existing in nature: the visible radiation encompasses wavelengths from 400 to 800 nm. When the whole range of the visible wave spectrum enters the eye, the impression of seeing white light is produced. The light rays entering the sight receptors are subject to reflection, absorption and transmission. In the retina of the eye, light energy is transformed into nerve impulses. The reception of light depends on the degree of absorption of certain wavelengths and on the concentration of light. A ray of light entering the eye pupil is the proper eye stimulus which stimulates the receptors of the retina and causes visual impressions.
series plCAD
last changed 1999/04/08 17:16

_id 4ea3
authors Johnson, S.
year 1998
title What's in a representation, why do we care, and what does it mean? Examining evidence from psychology
source Automation in Construction 8 (1) (1998) pp. 15-24
summary This paper examines psychological evidence on the nature and role of representations in cognition. Both internal (mental) and external (physical or digital) representations are considered. It is discovered that both types of representation are deeply linked to thought processes. They are linked to learning, the ability to use existing knowledge, and problem-solving strategies. The links between representations, thought processes, and behavior are so deep that even eye movements are partly governed by representations. Choice of representations can affect limited cognitive resources like attention and short-term memory by forcing a person to try to utilize poorly organized information or perform 'translations' from one representation to another. The implications of this evidence are discussed. Based on these findings, a set of guidelines is presented for digital representations that minimize the drain on cognitive resources. These guidelines describe what sorts of characteristics and behaviors a representation should exhibit, and what sorts of information it should contain, in order to accommodate and facilitate design. Current attempts to implement such representations are discussed.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives and, briefly, how it could possibly happen.

1. The history of Repligator and Gliftic

1.1 Repligator
In 1996 I had the idea of creating an "image idea generator". I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was "out of nothing". What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an "easy to use graphical effects program", but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) ease of use, 2) ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful.

1.2 Getting to Gliftic
Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally but, just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved: for example, if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes simply as closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And many of the methods I used, applied over several "generations", simply resulted in amorphous blobs with no distinct family characteristics. Or rather, maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons (Figure 1: Mandala bred with an array of regular polygons). I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation).

1.3 Gliftic today
Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic (Figure 2: Mandala interpreted with arabesques; Figure 3: Trellis interpreted with "graphic ivy"; Figure 4: Regular dots interpreted as "sparks").

1.4 Forms in Gliftic V1
Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons.

1.5 Color Schemes in Gliftic V1
When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings; a smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image.

1.6 Interpretations in Gliftic V1
Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as: 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag.

1.7 Applications of Gliftic
Currently Gliftic is mostly used for creating web graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later.

2. The future of Gliftic: three possibilities
Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in the future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them.

2.1 Continue the current development "linearly"
Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files); the user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations.

2.2 Allow the artist to program Gliftic
It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his web site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.

2.3 Add an artificial consciousness to Gliftic
This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts"; the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric."

3. References
1. Ransen, Owen. "From Ramon Llull to Image Idea Generation". Proceedings of the 1998 Milan First International Conference on Generative Art.
2. Aleksander, Igor. "How To Build A Mind". Weidenfeld and Nicolson, 1999.
3. Ward, Adrian and Cox, Geof. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
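As a purely illustrative aside, and not code from the paper: the summary describes form "genes" as nothing more than the list of vertices of a closed polygon, and says that several ways of combining those coordinates were tried without specifying which. The sketch below (all function names and the point-by-point blending scheme are assumptions of this index entry, not Ransen's) shows one plausible coordinate-level crossover: resample both parents to the same number of vertices, then interpolate them pairwise.

```python
# Hypothetical sketch (not Ransen's code): one plausible way to "cross" two closed
# polygonal forms whose genes are simply their vertex lists, as the summary describes.
import math

def resample(polygon, n):
    """Resample a closed polygon (list of (x, y) tuples) to n points evenly spaced along its perimeter."""
    pts = polygon + [polygon[0]]                      # close the outline
    seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(seg)
    out, acc, i = [], 0.0, 0
    for k in range(n):
        target = total * k / n                        # arc-length position of the k-th sample
        while i < len(seg) - 1 and acc + seg[i] < target:
            acc += seg[i]
            i += 1
        t = 0.0 if seg[i] == 0 else (target - acc) / seg[i]
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def cross(parent_a, parent_b, n=100, weight=0.5):
    """Blend two parent outlines into a child outline (coordinate-level crossover)."""
    a, b = resample(parent_a, n), resample(parent_b, n)
    return [(ax * (1 - weight) + bx * weight, ay * (1 - weight) + by * weight)
            for (ax, ay), (bx, by) in zip(a, b)]

# Example: a 100-sided "circle" crossed with a square, weighted halfway between the two.
circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100)) for k in range(100)]
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
child = cross(circle, square, n=100, weight=0.5)
```

Blending coordinates like this keeps the genotype directly linked to the phenotype, as the author wanted, but iterated over several generations it tends to average shapes toward exactly the kind of "amorphous blobs" the summary reports.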
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 50a1
authors Hoffman, Donald
year 1998
title Visual Intelligence
source Norton Publishing, New York
summary After his stroke, Mr. P still had outstanding memory and intelligence. He could still read and talk, and mixed well with the other patients on his ward. His vision was in most respects normal---with one notable exception: He couldn't recognize the faces of people or animals. As he put it himself, "I can see the eyes, nose, and mouth quite clearly, but they just don't add up. They all seem chalked in, like on a blackboard ... I have to tell by the clothes or by the voice whether it is a man or a woman ...The hair may help a lot, or if there is a mustache ... ." Even his own face, seen in a mirror, looked to him strange and unfamiliar. Mr. P had lost a critical aspect of his visual intelligence. We have long known about IQ and rational intelligence. And, due in part to recent advances in neuroscience and psychology, we have begun to appreciate the importance of emotional intelligence. But we are largely ignorant that there is even such a thing as visual intelligence---that is, until it is severely impaired, as in the case of Mr. P, by a stroke or other insult to visual cortex. The culprit in our ignorance is visual intelligence itself. Vision is normally so swift and sure, so dependable and informative, and apparently so effortless that we naturally assume that it is, indeed, effortless. But the swift ease of vision, like the graceful ease of an Olympic ice skater, is deceptive. Behind the graceful ease of the skater are years of rigorous training, and behind the swift ease of vision is an intelligence so great that it occupies nearly half of the brain's cortex. Our visual intelligence richly interacts with, and in many cases precedes and drives, our rational and emotional intelligence. To understand visual intelligence is to understand, in large part, who we are. It is also to understand much about our highly visual culture in which, as the saying goes, image is everything. Consider, for instance, our entertainment. Visual effects lure us into theaters, and propel films like Star Wars and Jurassic Park to record sales. Music videos usher us before surreal visual worlds, and spawn TV stations like MTV and VH-1. Video games swallow kids (and adults) for hours on end, and swell the bottom lines of companies like Sega and Nintendo. Virtual reality, popularized in movies like Disclosure and Lawnmower Man, can immerse us in visual worlds of unprecedented realism, and promises to transform not only entertainment but also architecture, education, manufacturing, and medicine. As a culture we vote with our time and wallets and, in the case of entertainment, our vote is clear. Just as we enjoy rich literature that stimulates our rational intelligence, or a moving story that engages our emotional intelligence, so we also seek out and enjoy new media that challenge our visual intelligence. Or consider marketing and advertisement, which daily manipulate our buying habits with sophisticated images. Corporations spend millions each year on billboards, packaging, magazine ads, and television commercials. Their images can so powerfully influence our behavior that they sometimes generate controversy---witness the uproar over Joe Camel. If you're out to sell something, understanding visual intelligence is, without question, critical to the design of effective visual marketing. And if you're out to buy something, understanding visual intelligence can help clue you in to what is being done to you as a consumer, and how it's being done. 
This book is a highly illustrated and accessible introduction to visual intelligence, informed by the latest breakthroughs in vision research. Perhaps the most surprising insight that has emerged from vision research is this: Vision is not merely a matter of passive perception, it is an intelligent process of active construction. What you see is, invariably, what your visual intelligence constructs. Just as scientists intelligently construct useful theories based on experimental evidence, so vision intelligently constructs useful visual worlds based on images at the eyes. The main difference is that the constructions of scientists are done consciously, but those of vision are done, for the most part, unconsciously.
series other
last changed 2003/04/23 15:14

_id 094b
authors O´Rourke, J.
year 1998
title Computational Geometry in C
source Cambridge: Cambridge University Press
summary The first edition of this book is recognised as one of the definitive sources on the subject of Computational Geometry. In fact, O'Rourke has a long history in the field, has published many papers on the subject and is responsible for the computer graphics algorithms newsgroup, which is where all computational geometers meet to discuss their ideas and problems. Typical problems discussed include how a polygon can be represented, how to calculate its area, how to detect whether two polygons intersect and how to calculate the convex hull of a polygon. This leads on to more complex issues such as motion planning and determining whether a robot is able to navigate from point x to point y without bumping into objects. The algorithms for these (and other) problems are discussed and many are implemented. In addition, many of the ideas are also discussed from the point of view of three and more dimensions. The only disappointment is that many problems are posed as questions at the end of the chapters and, as far as I could see, you cannot get the answers in the form of a lecturer's supplement. This is fine in academia but not a lot of use for the commercial world. Due to the range of problems that incorporate computational geometry this book cannot be expected to answer every problem you might have. You will undoubtedly need access to other textbooks, but I have been using the first edition of this book for many years and the second edition is a welcome addition to my bookshelf. If I was only allowed one computational geometry book then it would undoubtedly be this one.
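As an illustrative aside (not code from the book, whose examples are in C): one of the typical problems the review mentions, computing the area of a polygon, is commonly handled with the standard shoelace formula, sketched here in Python.

```python
# Illustrative sketch of the standard "shoelace" formula for the signed area of a
# simple polygon; generic textbook material, not code taken from O'Rourke's book.
def polygon_area(vertices):
    """Signed area of a simple polygon given as a list of (x, y) tuples.

    Positive for counter-clockwise vertex order, negative for clockwise.
    """
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]   # wrap around to close the polygon
        acc += x0 * y1 - x1 * y0
    return acc / 2.0

# A unit square traversed counter-clockwise has area 1.0.
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> 1.0
```

The sign of the result also reveals the vertex orientation, which is routinely useful in the intersection and convex hull problems the review lists.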
series other
last changed 2003/04/23 15:14

_id ddssar0031
id ddssar0031
authors Witt, Tom
year 2000
title Indecision in quest of design
source Timmermans, Harry (Ed.), Fifth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Nijkerk, the Netherlands)
summary Designers all start with a solution (Darke, 1984), with what is known (Rittel, 1969, 1970). Hans Menghol, Svein Gusrud and Peter Opvik did so with the chair in the 1970s. Not content with the knowledge of the chair, however, they walked backward to the ignorance of the question that has always elicited the solution of the chair and asked themselves the improbable question, “What is a chair?” Their answer was the Balans chair. “Until the introduction of the Norwegian Balans (balance) chair, the multi-billion dollar international chair industry had been surprisingly homogeneous. This chair is the most radical of the twentieth century and probably since the invention of the chair-throne itself” (Cranz 1998). Design theorists have tried to understand in a measurable way what is not measurable: the way that designers think. Rather than attempt to analyze something that cannot be taken apart, I attempt to illuminate methods for generating new knowledge through ways of seeing connections that are not logical, and in fact are sometimes ironic. Among the possibilities discussed in this dialogue are the methodological power of language in the form of metaphor, the power of the imagination in mind experiments, the power of mythological story telling, and the power of immeasurable intangibles in the generation of the new knowledge needed to design.
series DDSS
last changed 2003/08/07 16:36

_id acdf
authors Dijkstra, J., Timmermans, H. and Roelen, W.
year 1998
title Eye Tracking as a User Behavior Registration Tool in Virtual Environments
source CAADRIA ‘98 [Proceedings of The Third Conference on Computer Aided Architectural Design Research in Asia / ISBN 4-907662-009] Osaka (Japan) 22-24 April 1998, pp. 57-66
doi https://doi.org/10.52842/conf.caadria.1998.057
summary Registration of user behavior in a virtual environment is a particular aspect of an ongoing research project which aims to develop a conjoint analysis - virtual reality system. In this paper, the registration of user behavior by eye tracking techniques will be described. It will be advocated that eye-tracking techniques offer interesting possibilities for recording user behavior.
keywords User Behavior, Conjoint Analysis, Virtual Reality, Decision Making
series CAADRIA
email
more http://www.caadria.org
last changed 2022/06/07 07:55

_id ga9811
id ga9811
authors Feuerstein, Penny L.
year 1998
title Collage, Technology, and Creative Process
source International Conference on Generative Art
summary Since the turn of the twentieth century artists have been using collage to suggest new realities and changing concepts of time. Appropriation and simulation can be found in the earliest recycled scraps in Cubist collages. Picasso and Braque liberated the art world with cubism, which integrated all planes and surfaces of the artists' subjects and combined them into a new, radical form. The computer is a natural extension of their work on collage. The identifying characteristics of the computer are integration, simultaneity and evolution, which are inherent in collage. Further, the computer is about "converting information". There is something very fascinating about scanning an object into the computer, creating a texture brush and drawing with the object's texture. It is as if the computer not only integrates information but different levels of awareness as well. In the act of converting the object from atoms to bits the object is portrayed at the same conscious level as the spiritual act of drawing. The speed and malleability of transforming an image on the computer can be compared to the speed and malleability of the thought processes of the mind. David Salle said, "one of the impulses in new art is the desire to be a mutant, whether it involves artificial intelligence, gender or robotic parts. It is about the desire to get outside the self and the desire to transcend one's place." I use the computer to transcend, to work in different levels of awareness at the same time - the spiritual and the physical. In the creative process of working with the computer, many new images are generated from previous ones. An image can be processed in unlimited ways without degradation of information. There is no concept of original and copy. The computer alters the image and changes it back to its original in seconds. Each image is not a fixed object in time, but the result of dynamic aspects which are acquired from previous works and each new moment. In this way, using the computer to assist the mind in the creative processes of making art mirrors the changing concepts of time, space, and reality that have evolved as the twentieth century has progressed. Nineteenth-century concepts of the monolithic truth have been replaced with dualism and pluralism. In other words, the objective world independent of the observer, which assumes the mind is separate from the body, has been replaced with the mind and body as inseparable, connected to the objective world through our perception and awareness. Marshall McLuhan said, "All media as extensions of ourselves serve to provide new transforming vision and awareness." The computer can bring such complexities and at the same time be very calming because it can be ultrafocused, promoting a higher level of awareness where life can be experienced more vividly. Nicholas Negroponte pointed out that "we are passing into a post-information age, often having an audience of just one." By using the computer to juxtapose disparate elements, I create an impossible coherence, a hodgepodge of imagery not wholly illusory. Interestingly, what separates the elements also joins them. Clement Greenberg states that "the collage medium has played a pivotal role in twentieth century painting and sculpture" (1). Perspective, developed by the Renaissance architect Alberti, which echoed the optically perceived world as reality, was replaced with Cubism.
Cubism brought about the destruction of the illusionist means and effects that had characterized Western painting since the fifteenth century.(2) Clement Greenberg describes the way in which physical and spiritual realities are combined in cubist collages: "By pasting a piece of newspaper lettering to the canvas one called attention to the physical reality of the work of art and made that reality the same as the art."(3) Before I discuss some of the concepts that relate collage to working with the computer, I would like to define some of the theories behind them. The French word collage means pasting, or gluing. Today the concept may include all forms of composite art and processes of photomontage and assemblage. In the foreword to Katherine Hoffman's book on collage, Kim Levin writes: "This technique - which takes bits and pieces out of context to patch them into new contexts - keeps changing, adapting to various styles and concerns. And it's perfectly apt that interpretations of collage have varied according to the intellectual inquiries of the time. From our vantage point near the end of the century we can now begin to see that collage has all along carried postmodern genes."(4) The computer, on the other hand, is not another medium. It is a visual tool that may be used in the creative process. Patrick D. Prince's view is: "Computer art is not concrete. There is no artifact in digital art. The images exist in the computer's memory and can be viewed on a monitor: they are pure visual information."(5) In this way it relates more to conceptual art such as performance art. Timothy Binkley explains: "I believe we will find the concept of the computer as a medium to be more misleading than useful. Computer art will be better understood and more readily accepted by a skeptical artworld if we acknowledge how different it is from traditional tools. The computer is an extension of the mind, not of the hand or eye, and, unlike cinema or photography, it does not simply add a new medium to the artist's repertoire, based on a new technology."(6) Conceptual art marked a watershed between the progress of modern art and the pluralism of postmodernism.(7) Once the art comes out of the computer, it can take a variety of forms or be used with many different media. The artist does not have to write his/her own program to be creative with the computer. The work may have the thumbprint of a specific program, but the creative possibilities are up to the artist. Computer artist John Pearson feels that "one cannot overlook the fact that no matter how technically interesting the artwork is it has to withstand analysis. Only the creative imagination of the artist, cultivated from a solid conceptual base and tempered by a sophisticated visual sensitivity, can develop and resolve the problems of art."(8) The artist has to be even more focused and selective when using the computer in the creative process because of the multitude of options it creates and its generative qualities.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 55
authors Capellaro, Marcelo
year 1998
title Portabilidad de Documentos en Multiples Plataformas (Portability of Documents in Multiple Platforms)
source II Seminario Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings / ISBN 978-97190-0-X] Mar del Plata (Argentina) 9-11 september 1998, pp. 412-417
summary A presentation on the handling of documents across multiple platforms and on the portability of files for the net and other media. Generation of documents in programs commonly used for digital graphic publishing (CorelDraw, Illustrator, FreeHand, PageMaker, QuarkXPress, Word, etc.) and their conversion to portable documents small enough to transfer via modem, with the fonts used embedded in the document so that they can be read on multiple platforms. The possibility of publishing and incorporating links, audio and video files and annotations, and of generating tables of contents and margin notes. Professional use of portable documents for generating color separations (four-color and six-color process and spot inks [Pantone or other colorimetric systems]) for film output in offset prepress or other printing systems. Generation of electronic class notes intended to be downloaded over the net on whatever platform the student uses, and distance teaching and assessment through the net. Level of interactivity in offline communication.
series SIGRADI
email
last changed 2016/03/10 09:47

_id a2b0
id a2b0
authors Charitos, Dimitrios
year 1998
title The architectural aspect of designing space in virtual environments
source University of Strathclyde, Dept. of Architecure and Building Science
summary This thesis deals with the architectural aspect of virtual environment design. It aims at proposing a framework which could inform the design of three-dimensional content for defining space in virtual environments, in order to aid navigation and wayfinding. The use of such a framework in the design of certain virtual environments is considered necessary for imposing a certain form and structure on our spatial experience within them.

Firstly, this thesis looks into literature from the fields of architectural and urban design theory, philosophy, environmental cognition, perceptual psychology and geography for the purpose of identifying a taxonomy of spatial elements and their structure in the real world, on the basis of the way that humans think about and remember real environments. Consequently, the taxonomy, proposed for space in the real world is adapted to the intrinsic characteristics of space in virtual environments, on the basis of human factors aspects of virtual reality technology. As a result, the thesis proposes a hypothetical framework consisting of a taxonomy of spatial and space-establishing elements that a virtual environment may comprise and of the possible structure of these elements.

Following this framework, several pilot virtual environments are designed, for the purpose of identifying key design issues for evaluation. As it was impossible to evaluate the whole framework, six specific design issues, which have important implications for the design of space in virtual environments, are investigated by experimental methods of research. Apart from providing answers to these specific design issues, the experimental phase leads to a better understanding of the nature of space in virtual environments and to several hypotheses for future empirical research.

series thesis:PhD
email
last changed 2003/10/29 21:37

_id 27
authors De Gregorio, R., Carmena, S., Morelli, R.D., Avendaño, C. and Lioi, C.
year 1998
title La Construccion del Espacio del Poder. Museo de la Casa Rosada (The Construction of the Space of Power. Museum of the "Casa Rosada" (Argentinean Presidential House))
source II Seminario Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings / ISBN 978-97190-0-X] Mar del Plata (Argentina) 9-11 september 1998, pp. 212-217
summary The present work is part of the exhibition "Francesco Tamburini, La Construcción del Espacio del Poder I", shown at the Rivadavia Cultural Center (Rosario) and at the Casa Rosada Museum during 1997. The exhibition is based on a research program on the space surrounding the Casa Rosada, taking this space as the first piece of its collection. In 1995, while a group of Argentines was visiting the Pianetti picture gallery (Jesi, Italy), some watercolours by Francesco Tamburini (1846-1890), designer of the main façades of the Government House and author of many other works, were found. These watercolours are of great value for architecture and, unknown to the public, they became the starting point of the exhibition. Among these Argentines was the architect Roberto De Gregorio, a history teacher at this school of architecture, who took charge of the historical research. C.I.A.D.'s specific work consists in converting the Casa Rosada's façades into digital data. The first two stages, already completed, covered the digital conversion of the façades facing Plaza de Mayo and Rivadavia street, with the presidential access esplanade. The work is currently centred on the two remaining façades and on the development of an electronic model for the publication of a CD-ROM containing the information from the exhibition.
series SIGRADI
email
last changed 2016/03/10 09:50

_id 99f2
authors Gero, J.S.
year 1998
title Concept formation in design
source Knowledge-Based Systems 10(7-8): 429-435
summary This paper presents a computationally tractable view on where simple design concepts come from by proposing a paradigm for the formation of design concepts based on the emergence of patterns in the representation of designs. It is suggested that these design patterns form the basis of concepts. These design patterns once learned are then added to the repertoire of known patterns so that they do not need to be learned again. This approach uses the notion called the loosely-wired brain. The paper elaborates this idea primarily through implemented examples drawn from the genetic engineering of evolutionary systems and the qualitative representation of shapes and their multiple representations.
keywords Concept Formation, Pattern Emergence, Representation
series other
email
last changed 2003/04/06 09:00

_id 43e5
authors Ho, Chun-Heng
year 1998
title A Computational Model for Problem-Decomposing Strategy
source CAADRIA ‘98 [Proceedings of The Third Conference on Computer Aided Architectural Design Research in Asia / ISBN 4-907662-009] Osaka (Japan) 22-24 April 1998, pp. 415-424
doi https://doi.org/10.52842/conf.caadria.1998.415
summary Conventional computational models such as Soar, ACT, and Mental Models solve problems by pattern matching. However, according to other studies in cognitive psychology, the search strategies employed by experts and novices in well-structured problems closely resemble each other. Restated, problem-decomposing strategies are what allow expert designers to perform more effectively than novices. In this study, we construct a rule-based floor-planning CAD system in Lisp to closely examine the relationship between problem-decomposing strategies and design behavior in computation. Execution results demonstrate that a larger number of elements considered by the system implies more efficient problem-decomposing strategies.
keywords Computational Model, Rule-Based Expert System, Housing Floor Planning, Problem-Decomposing Strategy
series CAADRIA
email
more http://www.caadria.org
last changed 2022/06/07 07:50

_id 10f9
authors Kvan, Th., West, R. and Vera, A.
year 1998
title Tools and Channels of Communication
source International Journal of Virtual Reality, 3:3, 1998, pp. 21-33
summary This paper proposes a methodology to evaluate the effects of computer-mediated communication on collaboratively solving design problems. When setting up a virtual design community, choices must be made between a variety of tools, choices dictated by budget, bandwidth, ability and availability. How do you choose between the tools, which is useful, and how will each affect the outcome of the design exchanges you plan? A commonly used method is to analyze the work done and to identify tools which support this type of work. In general, research on the effects of computer-mediation on collaborative work has concentrated mainly on social-psychological factors such as deindividuation and attitude polarization, and used qualitative methods. In contrast, we propose to examine the process of collaboration itself, focusing on separating those component processes which primarily involve individual work from those that involve genuine interaction. Extending the cognitive metaphor of the brain as a computer, we view collaboration in terms of a network process, and examine issues of control, coordination, and delegation to separate sub-processors. Through this methodology we attempt to separate the individual problem-solving component from the larger process of collaboration.
keywords Expertise, Collaboration, Novice
series journal paper
email
last changed 2002/11/15 18:29

_id e17e
authors Liu, Yu-Tung
year 1998
title A Dual Generate-and-Test Model for Design Creativity
source CAADRIA ‘98 [Proceedings of The Third Conference on Computer Aided Architectural Design Research in Asia / ISBN 4-907662-009] Osaka (Japan) 22-24 April 1998, pp. 395-404
doi https://doi.org/10.52842/conf.caadria.1998.395
summary This paper proposes a broader framework for understanding creativity by distinguishing different levels of creativity, namely personal and social/cultural creativity, and their interaction. Within this framework, the possible role that the computer can play could be further explored by analyzing the procedure of rule formation and the phenomena of seeing emergent subshapes.
keywords Model of Design Creativity, Problem-Solving, Generate-and-Test Paradigm, Search Model, Social/Cultural Paradigm
series CAADRIA
email
more http://www.caadria.org
last changed 2022/06/07 07:59

_id 791e
id 791e
authors Monreal, Amadeo; Jacas, Joan
year 2004
title COMPUTER AIDED GENERATION OF ARCHITECTURAL TYPOLOGIES
source Proceedings of the Fourth International Conference of Mathematics & Design, Special Edition of the Journal of Mathematics & Design, Volume 4, No.1, pp. 73-82.
summary The work we present may be considered as the consolidation of a methodology that was already outlined in the paper presented at the second M&D congress held in San Sebastian (1998). We establish that, in architectural design, the computer is only used in the last step, in order to achieve the traditional Euclidean design in a more precise and comfortable way and to improve the quality of handmade designs. Our proposal consists in modifying the process from the very beginning of the creative act, that is, when the design conception is born. If we want to obtain the maximum benefit from the computer's possibilities, we ought to support this conception by means of a language attuned to that tool. Since the internal language used by the computer for producing graphics is mathematical, we must incorporate this language, in some way, into the codification of the creative process. In accordance with this setting, we propose a mathematical grammar for design based on the construction of modulated standard mathematical functions. This grammar is developed independently of the graphical software and is specified only when a particular computer program for the effective generation of the graphical objects is selected.
series other
type normal paper
email
last changed 2005/04/07 12:49

_id 338a
authors Noble, Douglas and Hsu, Jason
year 1999
title Computer Aided Animation in Architecture: Analysis of Use and the Views of the Profession
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 109-114
summary A traditional way to present three-dimensional representations of architectural design has been through the use of manually drawn perspective drawings. The perspective representation assists in the comprehension of the forms and spaces, but is difficult to generate manually. The computer revolution made perspectives much easier to generate and led to a dramatically increased use of three-dimensional representation as a presentation technique. We are just now seeing substantial uses of animation as a communication and presentation tool in architecture. This paper documents the results of two surveys of the architectural profession that sought to discover current and near-future intentions for the use of computer animation. Our belief is that current levels of computer animation use are low, but that many firms intend to start using animation both as a design and presentation tool. In early 1998 we conducted a survey of the uses of computer animation by architectural firms. We posited a set of 14 related hypotheses. This paper presents the tabulated results from 82 completed surveys out of 620 requests. While some level of confidence can be obtained from this sample size, we are publishing in the hope of encouraging continued response to the survey.
series SIGRADI
email
last changed 2016/03/10 09:56

_id diss_prothero
id diss_prothero
authors Prothero, Jerrold D.
year 1998
title The Role of Rest Frames in Vection, Presence and Motion Sickness
source University of Washington, HIT-Lab
summary A framework is presented for partly comprehending participants' spatial perception in virtual environments. Specific hypotheses derived from that framework include: simulator sickness should be reducible through visual background manipulations; and the sense of presence, or of "being in" a virtual environment, should be increased by manipulations that facilitate perception of a virtual scene as a perceptual rest frame. Experiments to assess the simulator sickness reduction hypothesis demonstrated that congruence between the visual background and inertial cues decreased reported simulator sickness and per-exposure postural instability. Experiments to assess the presence hypothesis used two measures: self-reported presence and visual-inertial nulling. Results indicated that a meaningful virtual scene, as opposed to a random one, increased both reported presence and the level of inertial motion required to overcome perceived self-motion elicited by scene motion. The simulator sickness research implies that visual background manipulations may be a means to reduce the prevalent unwanted side-effects of simulators. The presence research introduces a procedure, possibly based on brain-stem level neural processing, to measure the salience of virtual environments. Both lines of research are central to developing effective virtual interfaces which have the potential to increase the human-computer bandwidth, and thus to partially address the information explosion.
series thesis:MSc
more http://www.hitl.washington.edu/publications/r-98-11/
last changed 2003/11/28 07:35

_id aaa9
authors Shneiderman, Ben
year 1998
title Designing the User Interface (3rd Ed.)
source Addison Wesley, 650 p. [ISBN: 0201694972 ]
summary Designing the User Interface is intended primarily for designers, managers, and evaluators of interactive systems. It presents a broad survey of designing, implementing, managing, maintaining, training, and refining the user interface of interactive systems. The book's second audience is researchers in human-computer interaction, specifically those who are interested in human performance with interactive systems. These researchers may have backgrounds in computer science, psychology, information systems, library science, business, education, human factors, ergonomics, or industrial engineering; all share a desire to understand the complex interaction of people with machines. Students in these fields also will benefit from the contents of this book. It is my hope that this book will stimulate the introduction of courses on user-interface design in all these and other disciplines. Finally, serious users of interactive systems will find that the book gives them a more thorough understanding of the design questions for user interfaces. My goals are to encourage greater attention to the user interface and to help develop a rigorous science of user-interface design.
series other
email
more http://www.awl.com/dtui/
last changed 2003/04/02 08:00
