CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 522

_id 536e
authors Bouman, Ole
year 1997
title RealSpace in QuickTimes: architecture and digitization
source Rotterdam: Nai Publishers
summary Time and space, drastically compressed by the computer, have become interchangeable. Time is compressed in that once everything has been reduced to 'bits' of information, it becomes simultaneously accessible. Space is compressed in that once everything has been reduced to 'bits' of information, it can be conveyed from A to B with the speed of light. As a result of digitization, everything is in the here and now. Before very long, the whole world will be on disk. Salvation is but a modem away. The digitization process is often seen in terms of (information) technology. That is to say, one hears a lot of talk about the digital media, about computer hardware, about the modem, mobile phone, dictaphone, remote control, buzzer, data glove and the cable or satellite links in between. Besides, our heads are spinning from the progress made in the field of software, in which multimedia applications, with their integration of text, image and sound, especially attract our attention. But digitization is not just a question of technology; it also involves a cultural reorganization. The question is not just what the cultural implications of digitization will be, but also why our culture should give rise to digitization in the first place. Culture is not simply a function of technology; the reverse is surely also true. Anyone who thinks about cultural implications is interested in the effects of the computer. And indeed, those effects are overwhelming, providing enough material for endless speculation. The digital paradigm will entail a new image of humankind and a further dilution of the notion of social perfectibility; it will create new notions of time and space, a new concept of cause and effect and of hierarchy, a different sort of public sphere, a new view of matter, and so on. In the process it will indubitably alter our environment.
Offices, shopping centres, dockyards, schools, hospitals, prisons, cultural institutions, even the private domain of the home: all the familiar design types will be up for review. Fascinated, we watch how the new wave accelerates the process of social change. The most popular sport nowadays is 'surfing' - because everyone is keen to display their grasp of dirty realism. But there is another way of looking at it: under what sort of circumstances is the process of digitization actually taking place? What conditions do we provide that enable technology to exert the influence it does? This is a perspective that leaves room for individual and collective responsibility. Technology is not some inevitable process sweeping history along in a dynamic of its own. Rather, it is the result of choices we ourselves make and these choices can be debated in a way that is rarely done at present: digitization thanks to or in spite of human culture, that is the question. In addition to the distinction between culture as the cause or the effect of digitization, there are a number of other distinctions that are accentuated by the computer. The best known and most widely reported is the generation gap. It is certainly stretching things a bit to write off everybody over the age of 35, as sometimes happens, but there is no getting around the fact that for a large group of people digitization simply does not exist. Anyone who has been in the bit business for a few years can't help noticing that mum and dad are living in a different place altogether. (But they, at least, still have a sense of place!) In addition to this, it is gradually becoming clear that the age-old distinction between market and individual interests is still relevant in the digital era. On the one hand, the advance of cybernetics is determined by the laws of the marketplace which this capital-intensive industry must satisfy. Increased efficiency, labour productivity and cost-effectiveness play a leading role.
The consumer market is chiefly interested in what is 'marketable': info- and edutainment. On the other hand, an increasing number of people are not prepared to wait for what the market has to offer them. They set to work on their own, appropriate networks and software programs, create their own domains in cyberspace, domains that are free from the principle whereby the computer simply reproduces the old world, only faster and better. Here it is possible to create a different world, one that has never existed before. One in which the Other finds a place. The computer works out a new paradigm for these creative spirits. In all these distinctions, architecture plays a key role. Owing to its many-sidedness, it excludes nothing and no one in advance. It is faced with the prospect of historic changes yet it has also created the preconditions for a digital culture. It is geared to the future, but has had plenty of experience with eternity. Owing to its status as the most expensive of arts, it is bound hand and foot to the laws of the marketplace. Yet it retains its capacity to provide scope for creativity and innovation, a margin of action that is free from standardization and regulation. The aim of RealSpace in QuickTimes is to show that the discipline of designing buildings, cities and landscapes is not only an exemplary illustration of the digital era but that it also provides scope for both collective and individual activity. It is not just architecture's charter that has been changed by the computer, but also its mandate. RealSpace in QuickTimes consists of an exhibition and an essay.
series other
email
last changed 2003/04/23 15:14

_id 8735
authors James, Stephen
year 1999
title An Allegorical Architecture: A Proposed Interpretive Center for the Bonneville Salt Flats
source ACADIA Quarterly, vol. 18, no. 1, pp. 18-19
doi https://doi.org/10.52842/conf.acadia.1999.018
summary Architecture is the physical expression of man's relationship to the landscape - an emblem of our heritage. Such a noble statement sounds silly in today's context, because civilized society has largely disassociated itself from raw nature. We have tamed the elements with our environmental controls and turned the deserts into pasture. I find much of the built environment distracting. Current architecture is trite, compared to geologic form and order. I visited the Bonneville Salt Flats (Utah's anti-landscape) in the summer of 1997. The experience of arriving at the flats exceeded my expectations. I was overpowered by a sense of personal insignificance - a small spot floating on a sea of salt. The horizon seemed to swallow up the sky. Off in the distance I noticed a dark fleck. It looked as foreign as I felt on this pure white plane. I drove across the sticky salt toward it, only to discover an old rusty oil barrel half submerged in salt. In my mind, the barrel has a history. It tells the story of a man's attempt at achieving a goal, or maybe it represents a broken dream left to corrode in the alkali flats. The barrel remains planted in the salt as a relic for those who venture into the white wilderness. This experience left me to ponder whether or not architecture can serve the same purpose - telling the story of a place through its relationship to a landscape, and connection to events.
series ACADIA
email
last changed 2022/06/07 07:52

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 
3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions; there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred-forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 Kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. 
coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations" simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator however the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques   Figure 3 Trellis interpreted with "graphic ivy"   Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. 
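The "genes as point lists" idea described above can be sketched in a few lines. The following is a hypothetical reconstruction, not Gliftic's actual code: the function names and the specific crossover rule (resample both parent polygons to the same number of vertices, then blend corresponding vertices) are my own assumptions about one of the simpler combination methods the paper alludes to.

```python
import math

def resample(poly, n):
    """Resample a closed polygon to n vertices evenly spaced along its perimeter."""
    pts = poly + [poly[0]]  # close the loop
    seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(seg)
    out = []
    for k in range(n):
        target = total * k / n  # arc-length position of the k-th new vertex
        i = 0
        while target > seg[i]:
            target -= seg[i]
            i += 1
        t = target / seg[i] if seg[i] else 0.0
        x = pts[i][0] + t * (pts[i + 1][0] - pts[i][0])
        y = pts[i][1] + t * (pts[i + 1][1] - pts[i][1])
        out.append((x, y))
    return out

def cross(parent_a, parent_b, n=100, w=0.5):
    """Breed two shapes by blending corresponding vertices (w = weight of parent_b)."""
    a, b = resample(parent_a, n), resample(parent_b, n)
    return [(ax * (1 - w) + bx * w, ay * (1 - w) + by * w)
            for (ax, ay), (bx, by) in zip(a, b)]

# A circle as a regular 100-gon, per the paper; a square as a second parent.
circle = [(math.cos(2 * math.pi * i / 100), math.sin(2 * math.pi * i / 100))
          for i in range(100)]
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
child = cross(circle, square)
```

As the abstract reports, naive vertex blending like this tends toward amorphous blobs after a few generations, which is consistent with the author's conclusion that breeding forms is a non-trivial problem.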
Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds as it has an option to enable "tiling" of the generated images. 
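The HSV colour-scheme description above maps naturally onto a small sampler. This is an illustrative sketch under my own assumptions (the function name and jitter rule are invented, not Gliftic's): a base hue/saturation/value is perturbed by a "variation" amount, so variation 0 yields a near-single-colour scheme and larger values a wider spread.

```python
import colorsys
import random

def hsv_scheme(hue, saturation, value, variation, count=8, seed=0):
    """Sample `count` RGB colors jittered around a base HSV color.

    `hue`, `saturation`, `value` are in [0, 1]; `variation` in [0, 1]
    controls how far colors may depart from the base settings.
    """
    rng = random.Random(seed)  # seeded so a scheme is reproducible
    colors = []
    for _ in range(count):
        h = (hue + rng.uniform(-variation, variation) * 0.5) % 1.0  # hue wraps
        s = min(1.0, max(0.0, saturation + rng.uniform(-variation, variation)))
        v = min(1.0, max(0.0, value + rng.uniform(-variation, variation)))
        r, g, b = colorsys.hsv_to_rgb(h, s, v)
        colors.append((round(r * 255), round(g * 255), round(b * 255)))
    return colors

# A narrow-variation scheme around a red hue: mostly similar reds.
reds = hsv_scheme(hue=0.0, saturation=0.9, value=0.8, variation=0.1)
```

With `variation=0.0` every sampled color collapses to the base color, matching the abstract's note that a small variation yields "almost a single color".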
There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. 
Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [1] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. 
It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric" 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind" Aleksander, Igor. Wiedenfeld and Nicolson, 1999 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id e336
authors Achten, H., Roelen, W., Boekholt, J.-Th., Turksma, A. and Jessurun, J.
year 1999
title Virtual Reality in the Design Studio: The Eindhoven Perspective
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 169-177
doi https://doi.org/10.52842/conf.ecaade.1999.169
summary Since 1991 Virtual Reality has been used in student projects in the Building Information Technology group. It started as an experimental tool to assess the impact of VR technology in design, using the environment of the associated Calibre Institute. The technology was further developed in Calibre to become an important presentation tool for assessing design variants and final design solutions. However, it was only sporadically used in student projects. A major shift occurred in 1997 with a number of student projects in which various computer technologies including VR were used in the whole of the design process. In 1998, the new Design Systems group started a design studio with the explicit aim to integrate VR in the whole design process. The teaching effort was combined with the research program that investigates VR as a design support environment. This has led to an increasing number of innovative student projects. The paper describes the context and history of VR in Eindhoven and presents the current set-up of the studio. It discusses the impact of the technology on the design process and outlines pedagogical issues in the studio work.
keywords Virtual Reality, Design Studio, Student Projects
series eCAADe
email
last changed 2022/06/07 07:54

_id 0c91
authors Asanowicz, Aleksander
year 1997
title Computer - Tool vs. Medium
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.b2e
summary We have arrived at an important juncture in the history of computing in our profession: this history is long enough to reveal clear trends in the use of computing, but not long enough to institutionalize them. As computers permeate every area of architecture - from design and construction documents to project administration and site supervision - can “virtual practice” be far behind? In the old days, there were basically two ways of architects working. Under stress. Or under lots more stress. Over time, someone forwarded the radical notion that the job could be easier, that you could actually get more work done. Architects have been looking for ways to produce more work in less time ever since. They need a more productive work environment. The ideal environment would integrate man and machine (computer) in total harmony. As more and more architects and firms invest more and more time, money, and effort into particular ways of using computers, these practices will become resistant to change. Now is the time to decide if computing is developing the way we think it should. Enabled and vastly accelerated by technology, and driven by imperatives for cost efficiency, flexibility, and responsiveness, work in the design sector is changing in every respect. It stands to reason that architects must change too - on every level - not only by expanding the scope of their design concerns, but by altering the design process. Very often we can read that the recent new technologies, the availability of computers and software, imply that use of CAAD software in the design office is growing enormously and that computers really have changed the production of contract documents in architectural offices.
keywords Computers, CAAD, Cyberreal, Design, Interactive, Medium, Sketches, Tools, Virtual Reality
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/asan/asanowic.htm
last changed 2022/06/07 07:50

_id 411c
authors Ataman, Osman and Bermúdez (Eds.)
year 1999
title Media and Design Process [Conference Proceedings]
source ACADIA ‘99 Proceedings / ISBN 1-880250-08-X / Salt Lake City 29-31 October 1999, 353 p.
doi https://doi.org/10.52842/conf.acadia.1999
summary Throughout known architectural history, representation, media and design have been recognized to have a close relationship. This relationship is inseparable; representation being a means for engaging in design thinking and making, and this activity requiring media. Interpretations as to what exactly this relationship is or means have been subject to debate, disagreement and change through the ages. Whereas much has been said about the interactions between representation and design, little has been elaborated on the relationship between media and design. Perhaps it is not until now, surrounded by all kinds of media at the turn of the millennium, as Johnson argues (1997), that we have enough context to be able to see and address the relationship between media and human activities with some degree of perspective.
series ACADIA
email
more http://www.acadia.org
last changed 2022/06/07 07:49

_id sigradi2006_e131c
id sigradi2006_e131c
authors Ataman, Osman
year 2006
title Toward New Wall Systems: Lighter, Stronger, Versatile
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 248-253
summary Recent developments in digital technologies and smart materials have created new opportunities and are suggesting significant changes in the way we design and build architecture. Traditionally, however, there has always been a gap between the new technologies and their applications into other areas. Even though most technological innovations hold the promise to transform the building industry and the architecture within, and although there have been some limited attempts in this area recently, to date architecture has failed to utilize the vast amount of accumulated technological knowledge and innovations to significantly transform the industry. Consequently, the applications of new technologies to architecture remain remote and inadequate. One of the main reasons for this problem is economic. Architecture is still seen and operated as a sub-service to the Construction industry and it does not seem to be feasible to apply recent innovations in the Building Technology area. Another reason lies at the heart of architectural education. Architectural education does not follow technological innovations (Watson 1997), and "design and technology issues are trivialized by their segregation from one another" (Fernandez 2004). The final reason is practicality and this one is partially related to the previous reasons. The history of architecture is full of visions for revolutionizing building technology, ideas that failed to achieve commercial practicality. Although there have been some adaptations in this area recently, the improvements in architecture reflect only incremental progress, not the significant discoveries needed to transform the industry. However, architectural innovations and movements have often been generated by the advances of building materials, such as the impact of steel in the last century and reinforced concrete in this one. 
There have been some scattered attempts at the creation of new materials and systems but currently they are mainly used for limited remote applications and mostly for aesthetic purposes. We believe a new architectural material class is needed which will merge digital and material technologies, be embedded in architectural spaces and play a significant role in the way we use and experience architecture. As a principle element of architecture, technology has allowed the wall to become an increasingly dynamic component of the built environment. The traditional connotations and objectives related to the wall are being redefined: static becomes fluid, opaque becomes transparent, barrier becomes filter and boundary becomes borderless. Combining smart materials, intelligent systems, engineering, and art can create a component that does not just support and define but significantly enhances the architectural space. This paper presents an ongoing research project about the development of a new class of architectural wall system that incorporates distributed sensors and macroelectronics directly into the building environment. This type of composite, which is a representative example of an even broader class of smart architectural material, has the potential to change the design and function of an architectural structure or living environment. As of today, this kind of composite does not exist. Once completed, this will be the first technology of its kind. We believe this study will lay the fundamental groundwork for a new paradigm in surface engineering that may be of considerable significance in architecture, the building and construction industry, and materials science.
keywords Digital; Material; Wall; Electronics
series SIGRADI
email
last changed 2016/03/10 09:47

_id 34b8
authors Batie, D.L.
year 1997
title The incorporation of construction history in architectural history: the HISTCON interactive computer program
source Automation in Construction 6 (4) (1997) pp. 275-285
summary Current teaching methods for architectural history seldom embrace building technology as an essential component of study. Accepting the premise that architectural history is a fundamental component to the overall architectural learning environment, it is argued that the study of construction history will further enhance student knowledge. This hypothesis created an opportunity to investigate how the study of construction history could be incorporated to strengthen present teaching methods. Strategies for teaching architectural history were analyzed with the determination that an incorporation of educational instructional design applications using object-oriented programming and hypermedia provided the optimal solution. This evaluation led to the development of the HISTCON interactive, multimedia educational computer program. Used initially to teach 19th Century iron and steel construction history, the composition of the program provides the mechanism to test the significance of construction history in the study of architectural history. Future development of the program will provide a method to illustrate construction history throughout the history of architecture. The study of architectural history, using a construction oriented methodology, is shown to be positively correlated to increased understanding of architectural components relevant to architectural history and building construction.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 76ba
authors Bermudez, Julio
year 1997
title Cyber(Inter)Sections: Looking into the Real Impact of The Virtual in the Architectural Profession
source Proceedings of the Symposium on Architectural Design Education: Intersecting Perspectives, Identities and Approaches. Minneapolis, MN: College of Architecture & Landscape Architecture, pp. 57-63
summary As both the skepticism and 'hype' surrounding cyberspace vanish under the weight of the ever increasing power, demand, and use of information, the architectural discipline must prepare for significant changes. For cyberspace is remorselessly cutting through the dearest structures, rituals, roles, and modes of production in our profession. Yet, this section is not just a detached cut through the existing tissues of the discipline. Rather it is an inter-section, as cyberspace is itself transformed in the act of piercing. This phenomenon is causing major transformations in at least three areas: 1. Cyberspace is substantially altering the way we produce and communicate architectural information. The arising new working environment suggests highly hybrid and networked conditions that will push the productive and educational landscape of the discipline towards increasing levels of fluidity, exchanges, diversity and change. 2. It has been argued that cyberspace-based human and human-data interactions present us with the opportunity to foster a freer marketplace of ideologies, cultures, preferences, values, and choices. Whether or not the in-progress cyberincisions have the potential to go deep enough to cure the many illnesses afflicting the body of our discipline needs to be considered seriously. 3. Cyberspace is a new place or environment wherein new kinds of human activities demand unprecedented types of architectural services. Rather than being a passing fashion, these new architectural requirements are destined to grow exponentially. We need to consider the new modes of practice being created by cyberspace and the education required to prepare for them. This paper looks at these three intersecting territories, showing that it is academia, not practice, that is leading the profession in the incorporation of virtuality into architecture. Rafael Moneo's words come to mind. [2]
series other
email
last changed 2003/11/21 15:16

_id 600e
authors Gavin, Lesley
year 1999
title Architecture of the Virtual Place
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 418-423
doi https://doi.org/10.52842/conf.ecaade.1999.418
summary The Bartlett School of Graduate Studies, University College London (UCL), set up the first MSc in Virtual Environments in the UK in 1995. The course aims to synthesise and build on research work undertaken in the arts, architecture, computing and biological sciences in exploring the realms of the creation of digital and virtual immersive spaces. The MSc is concerned primarily with equipping students from design backgrounds with the skills, techniques and theories necessary in the production of virtual environments. The course examines virtual worlds as prototypes for real urban or built form and, over the last few years, has also developed an increasing interest in the practice of architecture in purely virtual contexts. The MSc course is embedded in the UK government-sponsored Virtual Reality Centre for the Built Environment, which is hosted by the Bartlett School of Architecture. This centre involves the UCL departments of architecture, computer science and geography and includes industrial partners from a number of areas concerned with the built environment, including architectural practice, surveying and estate management, as well as some software companies and the telecoms industry. The first cohort of students graduated in 1997 and predominantly found work in companies working in the new market area of digital media. This paper aims to outline the nature of the course as it stands, examines the new and ever increasing market for designers within digital media, and proposes possible future directions for the course.
keywords Virtual Reality, Immersive Spaces, Digital Media, Education
series eCAADe
email
more http://www.bartlett.ucl.ac.uk/ve/
last changed 2022/06/07 07:51

_id 2483
authors Gero, J.S. and Kazakov, V.
year 1997
title Learning and reusing information in space layout problems using genetic engineering
source Artificial Intelligence in Engineering 11(3):329-334
summary The paper describes the application of a genetic-engineering-based extension to genetic algorithms to the layout planning problem. We study the gene evolution which takes place when an algorithm of this type is running and demonstrate that in many cases it effectively leads to the partial decomposition of the layout problem by grouping some activities together and optimally placing these groups during the first stage of the computation. At a second stage it optimally places activities within these groups. We show that the algorithm finds the solution faster than standard evolutionary methods and that the evolved genes represent design features that can be re-used later in a range of similar problems.
keywords Genetic Engineering, Learning
series other
email
last changed 2001/09/08 12:04
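The record above concerns a genetic-engineering extension to genetic algorithms for layout planning. As a rough, hypothetical sketch of the kind of baseline such work extends (not the authors' published method), a permutation-encoded genetic algorithm for a weighted-distance space layout problem might look like this; all names and parameters are illustrative:

```python
import random

def layout_cost(perm, flows, dists):
    # Total weighted distance: activity i is assigned to slot perm[i],
    # and flows[i][j] weights the distance between the slots of i and j.
    n = len(perm)
    return sum(flows[i][j] * dists[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def mutate(perm, rng):
    # Swap the slots of two activities (a simple permutation mutation).
    a, b = rng.sample(range(len(perm)), 2)
    child = list(perm)
    child[a], child[b] = child[b], child[a]
    return child

def ga_layout(flows, dists, pop_size=30, generations=100, seed=0):
    # Evolve a population of permutations (activity -> slot) using
    # truncation selection plus swap mutation; return the best layout.
    rng = random.Random(seed)
    n = len(flows)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: layout_cost(p, flows, dists))
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda p: layout_cost(p, flows, dists))
```

On a small instance where two activities exchange heavy flow, the evolved layout places them in adjacent slots; the paper's contribution, by contrast, lies in analysing and reusing the evolved genes, which this plain GA does not attempt.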

_id c4a6
authors Haapasalo, Harri
year 1997
title The Role of CAD In Creative Architectural Sketching
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.o2b
summary The history of computers in architectural design is very short, only a few decades, when compared to the development of methods in practical design (Gero 1983). However, the development of user interfaces has been very fast. According to practical observations of over one hundred architects, user interfaces are at present inflexible in sketching, although computers can make drafts and the creation of alternatives quicker and more effective in the final stages of designing (Haapasalo 1997). Based on our research in the field of practical design, we wish to stimulate a wider debate about the theory of design. More profound perusal compels us to examine human modes, pre-eminently different levels of thinking and manners of inference. What is the meaning of subconscious and conscious thinking in design? What is the role of intuition in practical design? Do computer-aided design programs apply to creative architectural sketching? To answer such questions, distinct, profound and broad understanding from different disciplines is required. Even then, in spite of such specialist knowledge, we cannot hope to unambiguously and definitively answer such questions.
keywords Creativity, Design Process, Architectural Design, Sketching, Computer Aided Design
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/haapas/haapas.htm
last changed 2022/06/07 07:50

_id 4b2a
id 4b2a
authors Jabi, Wassim
year 2004
title A FRAMEWORK FOR COMPUTER-SUPPORTED COLLABORATION IN ARCHITECTURAL DESIGN
source University of Michigan
summary The development of appropriate research frameworks and guidelines for the construction of software aids in the area of architectural design can lead to a better understanding of designing and computer support for designing (Gero and Maher 1997). The field of research and development in computer-supported collaborative architectural design reflects that of the early period in the development of the field of computer-supported cooperative work (CSCW). In the early 1990s, the field of CSCW relied on unsystematic attempts to generate software that increases the productivity of people working together (Robinson 1992). Furthermore, a shift is taking place by which researchers in the field of architecture are increasingly becoming consumers rather than innovators of technology (Gero and Maher 1997). In particular, the field of architecture is rapidly becoming dependent on commercial software implementations that are slow to respond to new research or to user demands. Additionally, these commercial systems force a particular view of the domain they serve and as such might hinder rather than help its development. The aim of this dissertation is to provide information to architects and others to help them build their own tools or, at a minimum, be critical of commercial solutions.
series thesis:PhD
type normal paper
email
last changed 2004/10/24 22:35

_id 2c17
authors Junge, Richard and Liebich, Thomas
year 1997
title Product Data Model for Interoperability in an Distributed Environment
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 571-589
summary This paper belongs to a suite of three interrelated papers; the other two, 'The VEGA Platform' and 'A Dynamic Product Model', are likewise based on the VEGA project. The ESPRIT project VEGA (Virtual Enterprises using Groupware tools and distributed Architectures) has the objective of developing IT solutions enabling virtual enterprises, especially in the domain of architectural design and building engineering. VEGA aims to answer, from the IT side, the many questions of what is needed to enable such virtual enterprises. These questions range from network technologies, communication between distributed applications, and the control and management of information flow, to implementation and model architectures that allow information to be distributed across the virtual enterprise. This paper is focused on the product model aspect of VEGA. So far, modeling experts have followed a more or less centralized architecture (central, or 'central with satellites'). Is this also the architecture for the envisaged goal? What is the architecture for such a distributed model, following the paradigm of modeling the 'natural human' way of doing business? What architecture most effectively enables filtering and translation in the communication process? Today there is some experience with 'bulk data' exchange of the document type; what about incremental exchange of information (not data), supplying on demand only the information actually needed rather than a whole document? The paper is structured into three parts: first, a description of the modeling history and background; second, a vision of interoperability in a distributed environment from the viewpoint of users in architectural design and building engineering; third, a description of work undertaken by the authors in previous projects that forms the direct basis for the VEGA model. It closes with a short description of the VEGA project, especially the VEGA model architecture.
series CAAD Futures
email
last changed 1999/04/06 09:19

_id 8504
authors Junge, Richard. (Ed.)
year 1997
title CAAD futures 1997 [Conference Proceedings]
source 7th International Conference on Computer-Aided Architectural Design/ ISBN 0-7923-4726-9 / München / Germany, 4-6 August 1997, 931 p.
summary Since the establishment of the CAAD futures Foundation in 1985, CAAD experts from all over the world have met every two years to present, and at the same time document, the state of the art of research in Computer Aided Architectural Design. The history of CAAD futures started in the Netherlands at the Technical Universities of Eindhoven and Delft, where the CAAD futures Foundation came into being. Then CAAD futures crossed the oceans for the first time: the third CAAD futures in '89 was held at Harvard University. The next stations in the evolution were the Swiss Federal Institute of Technology, the ETH Zürich, in '91; in '93 the conference was organized by Carnegie Mellon University, Pittsburgh, and in '95 by the National University of Singapore. CAAD futures '95 marked its worldwide nature by being organized for the first time in Asia. The seventh CAAD futures is the first to be organized by a German university. For the small, newly and only provisionally established CAAD group at the Faculty of Architecture of the Technical University München, it is an honor and a challenge at the same time to be the organizer of CAAD futures '97.
series CAAD Futures
email
last changed 1999/04/06 09:19

_id 0bc0
authors Kellett, R., Brown, G.Z., Dietrich, K., Girling, C., Duncan, J., Larsen, K. and Hendrickson, E.
year 1997
title THE ELEMENTS OF DESIGN INFORMATION FOR PARTICIPATION IN NEIGHBORHOOD-SCALE PLANNING
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinatti, Ohio (USA) 3-5 October 1997, pp. 295-304
doi https://doi.org/10.52842/conf.acadia.1997.295
summary Neighborhood scale planning and design in many communities has been evolving from a rule-based process of prescriptive codes and regulation toward a principle- and performance-based process of negotiated priorities and agreements. Much of this negotiation takes place in highly focused and interactive workshop or 'charrette' settings, the best of which are characterized by a fluid and lively exchange of ideas, images and agendas among a diverse mix of citizens, land owners, developers, consultants and public officials. Crucial to the quality and effectiveness of the exchange are techniques and tools that facilitate a greater degree of understanding, communication and collaboration among these participants.

Digital media have a significant and strategic role to play toward this end. Of particular value are representational strategies that help disentangle issues, clarify alternatives and evaluate consequences of very complex and often emotional issues of land use, planning and design. This paper reports on the ELEMENTS OF NEIGHBORHOOD, a prototype 'electronic notebook' (relational database) tool developed to bring design information and example 'to the table' of a public workshop. Elements are examples of the building blocks of neighborhood (open spaces, housing, commercial, industrial, civic and network land uses) derived from built examples, and illustrated with graphic, narrative and numeric representations relevant to planning, design, energy, environmental and economic performance. Quantitative data associated with the elements can be linked to Geographic Information-based maps and spreadsheet-based evaluation models.

series ACADIA
type normal paper
email
last changed 2022/06/07 07:52

_id diss_kim
id diss_kim
authors Kim, S.
year 1997
title Version Management in Computer-Aided Architectural Design
source Harvard University, Cambridge, Massachusetts
summary This thesis introduces the requirements for version support in a computer-aided architectural design system which seeks to support the work of designers in the early stages of design. It addresses the problems of current computer-aided design systems when they are used for conceptual design. Perceiving the implications of mature technology, this thesis provides a model of version management. The model makes use of object-oriented technology to link the design process and the design artifacts in a dynamic manner, providing a powerful tool for conceptual design. By capturing design versions, and keeping track of multiple design sessions, designers will be able to reuse design ideas, and check on the progress of current design while the interruption of design thinking is minimized. The creation of the design history is considered to be the creation of the version history. By being able to navigate and modify the design history, the issues of design reuse, alternative designs, and the preservation of design information can be facilitated. This thesis presents a working prototype based on the version management model.
series thesis:PhD
more http://archmedia.yonsei.ac.kr/pdf/
last changed 2003/11/28 07:38

_id ce1b
authors Kvan, Th., Lee, A. and Ho, L.
year 2000
title Anthony Ng Architects Limited: Building Towards a Paperless Future
source Case Study and Teaching Notes number 99/65, 10 pages, distributed by HKU Centre for Asian Business Cases, Harvard Business School Publishing (HBSP) and The European Case Clearing House (ECCH), June 2000
summary In early 1997, Mr. Anthony Ng, managing director of Anthony Ng Architects Ltd., was keenly looking forward to a high-tech, paperless new office, which would free his designers from paperwork and greatly improve internal and external communication – a vision that he had had for a couple of years. In 1996, he brought on board a friend and expert in Internet technology to help him realise his vision. In July 1997, his company was to move into its new office in Wan Chai. Their plan was to have in place an Intranet and a web-based document management system when they moved into the new office. But he had to be mindful of resulting changes in communication patterns, culture and expectations. Resistance from within his company was also threatening to ruin the grand plan. Several senior executives were fiercely opposed to the proposal and refused to read a document off a computer screen. But Ng knew it was an important initiative to move his practice forward. Once the decision was made there would be no chance to reconsider, given the workload demands of the new HK$12 billion project. And this decision would mark a watershed in the company’s evolution. This case study examines the challenges and implications of employing IT to support an architectural office.
keywords IT In Practice; Professional Practice; Archives
series other
email
last changed 2002/11/15 18:29

_id a129
authors Lee, E., Woo, S. and Sasada, T.
year 1997
title Experimental Study in Inter-University Collaboration
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.q2n
summary Architectural design requires collaboration among various participants, such as architects, clients and engineers, throughout the stages of the design process. The Sasada laboratory has been involved in various collaborative architectural design projects, and the authors found several important issues in the course of those projects. Firstly, the presentation data is composed of different kinds of data, such as documents, computer-generated still images, movies and 3D objects. The participants involved in those projects need to access these data as necessary. Secondly, it is virtually impossible for all participants to attend at the same time and place. Therefore, computer-networked collaborative design work is essential, in particular for an international project and for a complex architectural design project.
keywords Collaboration
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/lee/lee.htm
last changed 2022/06/07 07:50

_id 6cb4
authors Leupen, B., Grafe, C., Körnig, N., Lampe, M. and De Zeeuw, P.
year 1997
title Design and Analysis
source New York: Van Nostrand Reinhold
summary Design and Analysis by Bernard Leupen, Christoph Grafe, Nicola Körnig, Marc Lampe, and Peter de Zeeuw Design and Analysis is an insightful, interdisciplinary exploration of the diversity of analytic methods used by architects, designers, urban planners, and landscape architects to understand the structure and principles of the built environment. Developed by a team headed by Bernard Leupen at Delft University of Technology, The Netherlands, Design and Analysis defies borders of history, geography, and discipline, tracing the evolution of design principles from ancient Greece to the 20th century. "Only methodical analysis gives us an insight into the design process," states architect Bernard Tschumi. Using historical examples from architecture, urban design, and landscape architecture, Design and Analysis defines an ordered system that enables the design student or professional to identify the factors that influence designers' decisions, and shows how to relate them to the finished project. Design and Analysis is organized into six chapters that correspond to these factors: order and composition, functionality, structure, typology, context, and analytical techniques. The authors introduce the analytical drawing as a time-tested means to obtaining insight into the design process. Over 100 line drawings are featured in all. Using contemporary architectural examples to teach ancient design principles, Design and Analysis is more than just an introduction to analytical methods. The authors give an outline of space design as a whole, from individual buildings to urban and landscape ensembles. Though primarily intended for design students to help them appreciate many of the issues that they will face as professionals, Design and Analysis's broad, easy-to-read approach makes it an invaluable handbook for designers of all disciplines.
series other
last changed 2003/04/23 15:14
