CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 513

_id b3b1
authors Ebrahim, Mostafa Abdel-Bary
year 1997
title Application and evaluation of digital image techniques in close range photogrammetry
source University of Innsbruck
summary Most of the orthomapping techniques in use at present are restricted to surfaces that arise from a function of ground coordinates z = f(x, y), so-called 2.5D objects. Some techniques are also restricted to surfaces of a somewhat smooth shape, or even to regular surfaces, but all of them are set up to rectify images (albeit increasingly digitally). A new approach has been established for the digital restitution and orthomapping of close-range objects of almost any shape and size, with almost no restrictions on images or objects. The idea of this approach is an inversion of the photographic technique and is (in contrast to the 'rectification approach') strictly object-oriented. All objects are regarded as describable in their geometrical shape by a number of particular faces that can be regular or irregular, but can in any case be created in a CAD environment. The data needed to obtain this surface can come from any photogrammetric, tacheometric or other source, at whatever accuracy one wants for the results. The details that lie on that surface do not have to be restituted by analog or analytical point measurement; they can instead be projected onto the surface from any photo, from any side, and with any camera with which they have been taken. A 'Digital Projector' performs the projection of the photos from the same positions and with the same inner orientation as the photographic camera. Using this approach, any details on the facades can be measured easily. No detail of the object is neglected or forgotten, and no prior filtering of details is required: the full information of the original photos is available in the results. The results of the restitution can be presented in many ways. One of them is to create orthoimages at any scale; others are any perspective or parallel view of the object. Other uses of the strictly 3D, image-covered object for visualization (e.g. in architecture and archaeology applications) are possible.
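The 'Digital Projector' described in this summary is essentially an inverted pinhole camera: each point of the CAD surface is re-projected into a photo taken from a known position and inner orientation, and colored from the pixel found there. A minimal sketch of that idea, assuming an idealized distortion-free camera and a hypothetical image-lookup function `sample` (not part of the original work):

```python
def project_point(X, R, C, f):
    """Collinearity equations of the pinhole camera: map an object point
    X = (X, Y, Z) to image coordinates (x, y) for a camera with 3x3
    rotation matrix R, projection centre C and focal length f."""
    d = [X[i] - C[i] for i in range(3)]
    # rotate the object point into camera coordinates
    p = [sum(R[i][j] * d[j] for j in range(3)) for i in range(3)]
    return (-f * p[0] / p[2], -f * p[1] / p[2])

def texture_surface(points, R, C, f, sample):
    """'Digital projector': colour each CAD-surface point by re-projecting
    it into the photo (same position C and inner orientation f as the
    photographic camera) and sampling the image there.  `sample(x, y)`
    stands in for a real image-lookup routine (hypothetical here)."""
    return [sample(x, y) for (x, y) in (project_point(P, R, C, f) for P in points)]
```

Because the mapping is strictly object-oriented, the same surface can be textured from any number of photos, from any side, as the summary describes.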
keywords Digital Image; Digital Projector; Close Range Photogrammetry; Architectural Photogrammetry; 2.5d Objects; Visualization
series thesis:PhD
email
more http://www.arcs.ac.at/dissdb/rn027356
last changed 2003/02/12 22:37

_id 4eea
authors Sook Lee, Y. and Kyung Shin, H.
year 1997
title Development and visualization of interior space models for university professor's office.
source Architectural and Urban Simulation Techniques in Research and Education [3rd EAEA-Conference Proceedings]
summary When visualization is required in an academic setting, sound everyday realism, ideally defined through scientific research, is a requirement for making tests of the visualized model worthwhile. Spatial model development is an essential part of every space type: without space standards, architecture cannot exist, and their lack causes confusion, delayed decisions, and trial and error in building practice. This research deals with a space model for the university professor's office. Currently in Korea, university building construction has increased because of rapidly growing quantitative and qualitative needs for better education, and there has been a wide range of size preferences for the office space. Because of Korea's limited land availability, deliberate consideration is needed in suggesting minimum space standards without sacrificing function. This is especially important since professors have traditionally been highly respected by society, and are therefore rather authoritative, with strong territoriality and privacy needs, and relatively sensitive to space size. Thus, presenting 3D visual models to convince professors that the models accommodate their needs is as important as the search process for ideal space models. The aim of the project was to develop a set of interior space models for the university professor's office. To achieve this goal, three research projects and one design simulation project were implemented. The objectives of the four projects were 1) to identify the most popular office space conditions, that is, architectural characteristics; 2) to identify the most popular office space use type; 3) to identify user needs for spatial improvement; and 4) to develop and suggest interior design alternatives systematically and present them in three-dimensional computer simulation. These simulated images will be a basis for scaled model construction for endoscopy research and for full-scale modelling in the future.
keywords Architectural Endoscopy, Endoscopy, Simulation, Visualisation, Visualization, Real Environments
series EAEA
email
more http://www.bk.tudelft.nl/media/eaea/eaea97.html
last changed 2005/09/09 10:43

_id 0c91
authors Asanowicz, Aleksander
year 1997
title Computer - Tool vs. Medium
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.b2e
summary We have arrived at an important juncture in the history of computing in our profession: this history is long enough to reveal clear trends in the use of computing, but not long enough to institutionalize them. As computers permeate every area of architecture - from design and construction documents to project administration and site supervision - can “virtual practice” be far behind? In the old days, there were basically two ways of working as an architect: under stress, or under lots more stress. Over time, someone put forward the radical notion that the job could be easier and you could actually get more work done. Architects have been looking for ways to produce more work in less time ever since. They need a more productive work environment; the ideal environment would integrate man and machine (computer) in total harmony. As more and more architects and firms invest more and more time, money, and effort into particular ways of using computers, these practices will become resistant to change. Now is the time to decide if computing is developing the way we think it should. Enabled and vastly accelerated by technology, and driven by imperatives of cost efficiency, flexibility, and responsiveness, work in the design sector is changing in every respect. It stands to reason that architects must change too - on every level - not only by expanding the scope of their design concerns, but by altering the design process. We can very often read that recent new technologies and the availability of computers and software imply that the use of CAAD software in the design office is growing enormously, and that computers really have changed the production of contract documents in architectural offices.
keywords Computers, CAAD, Cyberreal, Design, Interactive, Medium, Sketches, Tools, Virtual Reality
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/asan/asanowic.htm
last changed 2022/06/07 07:50

_id sigradi2006_e131c
id sigradi2006_e131c
authors Ataman, Osman
year 2006
title Toward New Wall Systems: Lighter, Stronger, Versatile
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 248-253
summary Recent developments in digital technologies and smart materials have created new opportunities and are suggesting significant changes in the way we design and build architecture. Traditionally, however, there has always been a gap between new technologies and their application to other areas. Even though most technological innovations hold the promise of transforming the building industry and the architecture within it, and although there have been some limited attempts in this area recently, to date architecture has failed to utilize the vast amount of accumulated technological knowledge and innovation to significantly transform the industry. Consequently, the applications of new technologies to architecture remain remote and inadequate. One of the main reasons for this problem is economic: architecture is still seen and operated as a sub-service to the construction industry, and it does not seem feasible to apply recent innovations from the building technology area. Another reason lies at the heart of architectural education, which does not follow technological innovations (Watson 1997), so that “design and technology issues are trivialized by their segregation from one another” (Fernandez 2004). The final reason is practicality, and this one is partially related to the previous reasons. The history of architecture is full of visions for revolutionizing building technology, ideas that failed to achieve commercial practicality. Although there have been some adaptations in this area recently, the improvements in architecture reflect only incremental progress, not the significant discoveries needed to transform the industry. However, architectural innovations and movements have often been generated by advances in building materials, such as the impact of steel in the last century and of reinforced concrete in this one. 
There have been some scattered attempts at creating new materials and systems, but currently they are mainly used for limited remote applications, and mostly for aesthetic purposes. We believe a new class of architectural materials is needed that will merge digital and material technologies, be embedded in architectural spaces, and play a significant role in the way we use and experience architecture. As a principal element of architecture, technology has allowed the wall to become an increasingly dynamic component of the built environment. The traditional connotations and objectives related to the wall are being redefined: static becomes fluid, opaque becomes transparent, barrier becomes filter and boundary becomes borderless. Combining smart materials, intelligent systems, engineering, and art can create a component that does not just support and define but significantly enhances the architectural space. This paper presents an ongoing research project on the development of a new class of architectural wall systems that incorporates distributed sensors and macroelectronics directly into the building environment. This type of composite, which is a representative example of an even broader class of smart architectural materials, has the potential to change the design and function of an architectural structure or living environment. As of today, this kind of composite does not exist; once completed, it will be the first technology of its kind. We believe this study will lay the fundamental groundwork for a new paradigm in surface engineering that may be of considerable significance in architecture, the building and construction industry, and materials science.
keywords Digital; Material; Wall; Electronics
series SIGRADI
email
last changed 2016/03/10 09:47

_id 536e
authors Bouman, Ole
year 1997
title RealSpace in QuickTimes: architecture and digitization
source Rotterdam: Nai Publishers
summary Time and space, drastically compressed by the computer, have become interchangeable. Time is compressed in that once everything has been reduced to 'bits' of information, it becomes simultaneously accessible. Space is compressed in that once everything has been reduced to 'bits' of information, it can be conveyed from A to B with the speed of light. As a result of digitization, everything is in the here and now. Before very long, the whole world will be on disk. Salvation is but a modem away. The digitization process is often seen in terms of (information) technology. That is to say, one hears a lot of talk about the digital media, about computer hardware, about the modem, mobile phone, dictaphone, remote control, buzzer, data glove and the cable or satellite links in between. Besides, our heads are spinning from the progress made in the field of software, in which multimedia applications, with their integration of text, image and sound, especially attract our attention. But digitization is not just a question of technology, it also involves a cultural reorganization. The question is not just what the cultural implications of digitization will be, but also why our culture should give rise to digitization in the first place. Culture is not simply a function of technology; the reverse is surely also true. Anyone who thinks about cultural implications, is interested in the effects of the computer. And indeed, those effects are overwhelming, providing enough material for endless speculation. The digital paradigm will entail a new image of humankind and a further dilution of the notion of social perfectibility; it will create new notions of time and space, a new concept of cause and effect and of hierarchy, a different sort of public sphere, a new view of matter, and so on. In the process it will indubitably alter our environment. 
Offices, shopping centres, dockyards, schools, hospitals, prisons, cultural institutions, even the private domain of the home: all the familiar design types will be up for review. Fascinated, we watch how the new wave accelerates the process of social change. The most popular sport nowadays is 'surfing' - because everyone is keen to display their grasp of dirty realism. But there is another way of looking at it: under what sort of circumstances is the process of digitization actually taking place? What conditions do we provide that enable technology to exert the influence it does? This is a perspective that leaves room for individual and collective responsibility. Technology is not some inevitable process sweeping history along in a dynamics of its own. Rather, it is the result of choices we ourselves make and these choices can be debated in a way that is rarely done at present: digitization thanks to or in spite of human culture, that is the question. In addition to the distinction between culture as the cause or the effect of digitization, there are a number of other distinctions that are accentuated by the computer. The best known and most widely reported is the generation gap. It is certainly stretching things a bit to write off everybody over the age of 35, as sometimes happens, but there is no getting around the fact that for a large group of people digitization simply does not exist. Anyone who has been in the bit business for a few years can't help noticing that mum and dad are living in a different place altogether. (But they, at least, still have a sense of place!) In addition to this, it is gradually becoming clear that the age-old distinction between market and individual interests is still relevant in the digital era. On the one hand, the advance of cybernetics is determined by the laws of the marketplace which this capital-intensive industry must satisfy. Increased efficiency, labour productivity and cost-effectiveness play a leading role. 
The consumer market is chiefly interested in what is 'marketable': info- and edutainment. On the other hand, an increasing number of people are not prepared to wait for what the market has to offer them. They set to work on their own, appropriate networks and software programs, create their own domains in cyberspace, domains that are free from the principle whereby the computer simply reproduces the old world, only faster and better. Here it is possible to create a different world, one that has never existed before. One in which the Other finds a place. The computer works out a new paradigm for these creative spirits. In all these distinctions, architecture plays a key role. Owing to its many-sidedness, it excludes nothing and no one in advance. It is faced with the prospect of historic changes yet it has also created the preconditions for a digital culture. It is geared to the future, but has had plenty of experience with eternity. Owing to its status as the most expensive of arts, it is bound hand and foot to the laws of the marketplace. Yet it retains its capacity to provide scope for creativity and innovation, a margin of action that is free from standardization and regulation. The aim of RealSpace in QuickTimes is to show that the discipline of designing buildings, cities and landscapes is not only an exemplary illustration of the digital era but that it also provides scope for both collective and individual activity. It is not just architecture's charter that has been changed by the computer, but also its mandate. RealSpace in QuickTimes consists of an exhibition and an essay.
series other
email
last changed 2003/04/23 15:14

_id 2e36
authors Bourdakis, Vassilis
year 1997
title Making Sense of the City
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 663-678
summary Large-scale, three-dimensional, interactive computer models of cities are becoming feasible, making it possible to test their suitability as a visualisation tool for the design and planning process, for data visualisation - where socio-economic and physical data can be mapped onto the 3D form of the city - and as an urban information repository. The CASA-developed models of the City of Bath and London's West End, in VRML format, are used as examples to illustrate the problems that arise. The aim of this paper is to reflect on key issues related to interaction within urban models, data mapping techniques, and appropriate metaphors for presenting information.
keywords 3D City modeling, Urban Modelling, Virtual Environments, Navigation, Data Mapping, VRML
series CAAD Futures
email
last changed 2003/11/21 15:16

_id 598d
authors Davies, P.
year 1997
title Case study - Multiprofessional
source Automation in Construction 6 (1) (1997) pp. 51-57
summary IT is just a tool, but the most powerful one ever to be offered to us. This case study deals with the areas at which IT can be targeted within the Building Design Partnership. Firstly, should anything be done and, if so, what criteria should be used to choose the priorities? A SWOT analysis is one way to identify goals: strengths/weaknesses and opportunities/threats are the positive/negative pairs. We have to build our strengths and perceive and take opportunities while at the same time countering weaknesses and threats. It is a threat that the industry sets a moving target of IT capability without wanting to meet its cost. It is an opportunity that only a few practices will be at the leading edge and that they will secure the key projects. IT could help us to overcome technical weaknesses and liability and reduce staff and premises costs. It could also increase our exposure to fixed capital costs in a cyclical business. IT could increase the success of integrated practice, or it could make it easier for separate firms. We believe the likelihood is that IT will do as it has for financial services and favour the large, multi-national, well prepared and technologically advanced firms. New services will emerge and become essential, and will separate the `sheep from the goats.'
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id db13
authors Jacobsen, K., Eastman, C. and Tay, S.J.
year 1997
title Information management in creative engineering design and capabilities of database transactions
source Automation in Construction 7 (1) (1997) pp. 55-69
summary This paper examines the information management requirements and sets forth the general criteria for collaboration and concurrency control in creative engineering design. Our work attempts to recognize the full range of concurrency, collaboration and complex transaction structures now practiced in manual and semi-automated design, and the range of capabilities needed as the demand for enhanced but flexible electronic information management unfolds. The objective of this paper is to identify new issues that may advance the use of databases to support creative engineering design. We start with a generalized description of the structure of design tasks and how information management in design is dealt with today. After this review, we identify extensions to current information management capabilities that have been realized and/or proposed to support and augment what designers can do now. Given this capability-based starting point, we review existing database and information management capabilities as presented in the literature, and identify the gaps between current concurrency and collaboration technology and what is needed or would be desirable.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 0bc0
authors Kellett, R., Brown, G.Z., Dietrich, K., Girling, C., Duncan, J., Larsen, K. and Hendrickson, E.
year 1997
title THE ELEMENTS OF DESIGN INFORMATION FOR PARTICIPATION IN NEIGHBORHOOD-SCALE PLANNING
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 295-304
doi https://doi.org/10.52842/conf.acadia.1997.295
summary Neighborhood scale planning and design in many communities has been evolving from a rule-based process of prescriptive codes and regulation toward a principle- and performance-based process of negotiated priorities and agreements. Much of this negotiation takes place in highly focused and interactive workshop or 'charrette' settings, the best of which are characterized by a fluid and lively exchange of ideas, images and agendas among a diverse mix of citizens, land owners, developers, consultants and public officials. Crucial to the quality and effectiveness of the exchange are techniques and tools that facilitate a greater degree of understanding, communication and collaboration among these participants.

Digital media have a significant and strategic role to play toward this end. Of particular value are representational strategies that help disentangle issues, clarify alternatives and evaluate consequences of very complex and often emotional issues of land use, planning and design. This paper reports on the ELEMENTS OF NEIGHBORHOOD, a prototype 'electronic notebook' (relational database) tool developed to bring design information and examples 'to the table' of a public workshop. Elements are examples of the building blocks of neighborhood (open spaces, housing, commercial, industrial, civic and network land uses) derived from built examples, and illustrated with graphic, narrative and numeric representations relevant to planning, design, energy, environmental and economic performance. Quantitative data associated with the elements can be linked to Geographic Information-based maps and spreadsheet-based evaluation models.

series ACADIA
type normal paper
email
last changed 2022/06/07 07:52

_id cf2011_p016
id cf2011_p016
authors Merrick, Kathryn; Gu Ning
year 2011
title Supporting Collective Intelligence for Design in Virtual Worlds: A Case Study of the Lego Universe
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 637-652.
summary Virtual worlds are multi-faceted technologies. Facets of virtual worlds include graphical simulation tools, communication, design and modelling tools, artificial intelligence, network structure, persistent object-oriented infrastructure, economy, governance and user presence and interaction. Recent studies (Merrick et al., 2010) and applications (Rosenman et al., 2006; Maher et al., 2006) have shown that the combination of design, modelling and communication tools, and artificial intelligence in virtual worlds makes them suitable platforms for supporting collaborative design, including human-human collaboration and human-computer co-creativity. Virtual worlds are also coming to be recognised as a platform for collective intelligence (Levy, 1997), a form of group intelligence that emerges from collaboration and competition among large numbers of individuals. Because of the close relationship between design, communication and virtual world technologies, there appears to be a strong possibility of using virtual worlds to harness collective intelligence for supporting upcoming “design challenges on a much larger scale as we become an increasingly global and technological society” (Maher et al., 2010), beyond the current support for small-scale collaborative design teams. Collaborative design is relatively well studied and is characterised by small-scale, carefully structured design teams, usually comprising design professionals with a good understanding of the design task at hand. All team members are generally motivated and have the skills required to structure the shared solution space and to complete the design task. 
In contrast, collective design (Maher et al, 2010) is characterised by a very large number of participants ranging from professional designers to design novices, who may need to be motivated to participate, whose contributions may not be directly utilised for design purposes, and who may need to learn some or all of the skills required to complete the task. Thus the facets of virtual worlds required to support collective design differ from those required to support collaborative design. Specifically, in addition to design, communication and artificial intelligence tools, various interpretive, mapping and educational tools together with appropriate motivational and reward systems may be required to inform, teach and motivate virtual world users to contribute and direct their inputs to desired design purposes. Many of these world facets are well understood by computer game developers, as level systems, quests or plot and achievement/reward systems. This suggests the possibility of drawing on or adapting computer gaming technologies as a basis for harnessing collective intelligence in design. Existing virtual worlds that permit open-ended design – such as Second Life and There – are not specifically game worlds as they do not have extensive level, quest and reward systems in the same way as game worlds like World of Warcraft or Ultima Online. As such, while Second Life and There demonstrate emergent design, they do not have the game-specific facets that focus users towards solving specific problems required for harnessing collective intelligence. However, a new massively multiplayer virtual world is soon to be released that combines open-ended design tools with levels, quests and achievement systems. This world is called Lego Universe (www.legouniverse.com). 
This paper presents technology spaces for the facets of virtual worlds that can contribute to the support of collective intelligence in design, including design and modelling tools, communication tools, artificial intelligence, level system, motivation, governance and other related facets. We discuss how these facets support the design, communication, motivational and educational requirements of collective intelligence applications. The paper concludes with a case study of Lego Universe, with reference to the technology spaces defined above. We evaluate the potential of this or similar tools to move design beyond the individual and small-scale design teams to harness large-scale collective intelligence. We also consider the types of design tasks that might best be addressed in this manner.
keywords collective intelligence, collective design, virtual worlds, computer games
series CAAD Futures
email
last changed 2012/02/11 19:21

_id ascaad2014_003
id ascaad2014_003
authors Parlac, Vera
year 2014
title Surface Dynamics: From dynamic surface to agile spaces
source Digital Crafting [7th International Conference Proceedings of the Arab Society for Computer Aided Architectural Design (ASCAAD 2014 / ISBN 978-603-90142-5-6], Jeddah (Kingdom of Saudi Arabia), 31 March - 3 April 2014, pp. 39-48
summary Behavior, adaptation and responsiveness are characteristics of living organisms; architecture, on the other hand, is structurally, materially and functionally constructed. With the shift from a ‘mechanical’ towards an ‘organic’ paradigm (Mae-Wan Ho, 1997), attitudes towards architectural adaptation, behavior and performance are shifting as well. This change is altering the system of reference and conceptual basis for architecture by suggesting the integration of dynamics - dynamics that address not only kinetic movement but also flows of energy, material and information. This paper presents ongoing research into a kinetic material system, with a focus on non-mechanical actuation (shape memory alloy) and on structural and material behavior. It proposes an adaptive surface capable of altering its shape and forming small occupiable spaces that respond to external and internal influences and flows of information. The adaptive structure is developed as a physical and digital prototype: its behavior is examined at the physical level, and the findings are used to digitally simulate the behavior of the larger system. The design approach is driven by an interest in adaptive systems in nature and in the material variability (structural and functional) of naturally constructed materials. The broader goal of the research is to test the scale at which shape memory alloy can be employed as an actuator of dynamic architectural surfaces, and to speculate on and explore the capacity of active and responsive systems to produce adaptable surfaces that can form occupiable spaces and, with that, added functionalities in architectural and urban environments.
series ASCAAD
email
last changed 2016/02/15 13:09

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Easy of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 
3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simple closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. 
coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator however the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques Figure 3 Trellis interpreted with "graphic ivy" Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. 
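The gene model described above (a polygon's boundary points as its genes) can be sketched as follows. This is an illustrative reconstruction, not Ransen's actual code; `resample` and `breed` are invented names, and, as the summary itself notes, such naive pointwise crossing tends to produce amorphous blobs rather than offspring with clear family traits.

```python
# Hypothetical sketch of "breeding" two closed polygons by treating
# their boundary point lists as genes (illustration only).
import math

def resample(poly, n):
    """Resample a closed polygon (list of (x, y)) to n evenly spaced points."""
    pts = poly + [poly[0]]                     # close the loop
    seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(poly))]
    total = sum(seg)
    out, i, acc = [], 0, 0.0
    for k in range(n):
        target = total * k / n                 # arc length of the k-th sample
        while acc + seg[i] < target:
            acc += seg[i]
            i += 1
        t = (target - acc) / seg[i]
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def breed(poly_a, poly_b, weight=0.5, n=100):
    """Child polygon: pointwise blend of the two parents' 'genes'."""
    a, b = resample(poly_a, n), resample(poly_b, n)
    return [(ax * (1 - weight) + bx * weight,
             ay * (1 - weight) + by * weight)
            for (ax, ay), (bx, by) in zip(a, b)]

# cross a square with a triangle
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(0, 0), (1, 0), (0.5, 1)]
child = breed(square, triangle)
```

A real system would also need to align starting points and orientation so corresponding genes match; without that correspondence step, repeated generations degrade exactly as the author reports.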
Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds as it has an option to enable "tiling" of the generated images. 
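The HSV color scheme with a variation setting (section 1.5) can be sketched like this; the function and parameter names are assumptions for illustration, not Gliftic's actual interface.

```python
# Sketch of an HSV-with-variation color scheme: pick colors scattered
# around a base hue/saturation/value, where `variation` widens the spread.
import colorsys
import random

def hsv_scheme(hue, saturation, value, variation, n=5, seed=None):
    """Return n RGB colors near the base HSV setting (all values in 0..1)."""
    rng = random.Random(seed)
    colors = []
    for _ in range(n):
        h = (hue + rng.uniform(-variation, variation)) % 1.0   # hue wraps
        s = min(1.0, max(0.0, saturation + rng.uniform(-variation, variation)))
        v = min(1.0, max(0.0, value + rng.uniform(-variation, variation)))
        colors.append(colorsys.hsv_to_rgb(h, s, v))
    return colors

# small variation: almost a single color; large variation: a wide palette
palette = hsv_scheme(hue=0.0, saturation=0.8, value=0.9, variation=0.05, seed=1)
```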
There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: three possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. 
Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. 
It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id avocaad_2001_19
id avocaad_2001_19
authors Shen-Kai Tang, Yu-Tung Liu, Yu-Sheng Chung, Chi-Seng Chung
year 2001
title The visual harmony between new and old materials in the restoration of historical architecture: A study of computer simulation
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In the research of historical architecture restoration, scholars respectively focus on the field of architectural context and architectural archeology (Shi, 1988, 1990, 1991, 1992, 1995; Fu, 1995, 1997; Chiu, 2000) or on architecture construction and the procedure of restoration (Shi, 1988, 1989; Chiu, 1990). How to choose materials and cope with their durability has become an important issue in the restoration of historical architecture (Dasser, 1990; Wang, 1998). In the related research on the usage and durability of materials, some scholars deem that, instead of continuing the traditional way that has lasted for hundreds of years (that is, to replace old materials with new ones), it might be better to keep the original materials (Dasser, 1990). However, unavoidably, some of the originals are much worn. Thus we have to first establish a standard for eliminating components, and secondly to replace the old components with identical or similar materials (Lee, 1990). After accomplishing the restoration, we often unexpectedly find that the renewed historical building is so new that the sense of history is lost (Dasser, 1990; Fu, 1997). This is in fact a key factor that determines the success of a restoration. In the past, some scholars found that the contrast and conflict between new and old materials are attributed to the different times of manufacture and different coatings, such as antiseptic, pattern, etc., which result in a discrepancy in visual perception (Lee, 1990; Fu, 1997; Dasser, 1990). In recent years, much research and practice in computer technology has been done in the field of architectural design. We are able to carry out design communication more precisely through the application of systematic software, such as image processing, computer graphics, computer modeling/rendering, animation, multimedia, virtual reality and so on (Lawson, 1995; Liu, 1996). 
The application of computer technology to research on the preservation of historical architecture came comparatively late. Some researchers have since explored the procedure of restoration by computer simulation technology (Potier, 2000), or established digital databases of the investigation of historical architecture (Sasada, 2000; Wang, 1998). How materials are chosen in computer simulation influences the sense of visual perception. Liu (2000) offers more complete results on visual impact analysis and assessment (VIAA) in urban design research. The main subject of this paper is whether the technology of computer simulation can attenuate the conflict between new and old materials as imposed on visual perception. The objective of this paper is to propose a standard method of visual harmony effects for materials in historical architecture (taking the Gigi Train Station, destroyed by the earthquake last September, as the operating example). There are five steps in this research: 1. Categorize the materials of the historical architecture and establish the information in a digital database. 2. Get new materials for the historical architecture and establish the information in a digital database. 3. According to the mixing amount of new and old materials, determine their proportions in the building, mixing new and old materials in a certain way. 4. Assign the mixed materials to the computer model and carry out the lighting simulation. 5. Have experts and citizens evaluate the accomplished computer model in order to propose the expected standard method. According to the experiment mentioned above, we first propose a procedure for material simulation in historical architecture restoration and then offer some suggestions on how to mix new and old materials. By this procedure of simulation, we offer a better way to control the restoration of historical architecture. 
The discrepancy and discordance caused by new and old materials can thus be reduced, and we avoid reconstructing “too new” historical architecture.
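Step 3 of the procedure (mixing new and old materials in a given proportion) could be sketched, under the assumption of a simple linear color blend by area fraction, as follows; the function name and the sample material values are invented for illustration, not the authors' actual method.

```python
# Hedged sketch: blend an old and a new material color by the fraction
# of new material, as one simple way to realise "mixing in proportion".
def mix_materials(old_rgb, new_rgb, new_fraction):
    """Linear blend of two RGB material colors (components in 0..1)."""
    return tuple((1 - new_fraction) * o + new_fraction * n
                 for o, n in zip(old_rgb, new_rgb))

weathered = (0.45, 0.35, 0.25)   # aged timber (assumed values)
fresh = (0.80, 0.65, 0.45)       # new timber (assumed values)
blended = mix_materials(weathered, fresh, new_fraction=0.3)
```

The blended color would then be assigned to the computer model before the lighting simulation of step 4.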
series AVOCAAD
email
last changed 2005/09/09 10:48

_id d46d
authors Takahashi, S., Shinagawa, Y. and Kunii, T.L.
year 1997
title A Feature-Based Approach for Smooth Surfaces
source Proceedings of Fourth Symposium on Solid Modeling, pp. 97-110
summary Feature-based representation has become a topic of interest in shape modeling techniques. Such feature-based techniques are, however, still restricted to polyhedral shapes, and none has been proposed for smooth surfaces. This paper presents a new feature-based approach for smooth surfaces. Here, the smooth surfaces are assumed to be 2-dimensional differentiable manifolds within a theoretical framework. As the shape features, critical points such as peaks, pits, and passes are used. We also use a critical point graph called the Reeb graph to represent the topological skeletons of a smooth object. Since the critical points have close relations with the entities of B-reps, the framework of the B-reps can easily be applied to our approach. In our method, the shape design process begins with specifying the topological skeletons using the Reeb graph. The Reeb graph is edited by pasting the entities called cells that have one-to-one correspondences with the critical points. In addition to the topological skeletons, users also design the geometry of the objects with smooth surfaces by specifying the flow curves that run on the object surface. From these flow curves, the system automatically creates a control network that encloses the object shape. The surfaces are interpolated from the control network by minimizing the energy function subject to the deformation of the surfaces using variational optimization.
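A much simplified, discrete analogue of the critical points used in the abstract (peaks, pits, passes) can be sketched on a height grid; the paper itself works on differentiable 2-manifolds, not grids, so this is only an illustration of the classification idea.

```python
# Illustration only: classify an interior vertex of a height grid as a
# peak, pit, pass (saddle) or regular point from the signs of the height
# differences around its 8-neighbourhood.
def classify(z, i, j):
    """Classify grid point (i, j) of height field z (list of rows)."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]       # walk the ring in order
    diffs = [z[i + di][j + dj] - z[i][j] for di, dj in ring]
    if all(d < 0 for d in diffs):
        return "peak"                                # all neighbours lower
    if all(d > 0 for d in diffs):
        return "pit"                                 # all neighbours higher
    # count sign changes while walking around the ring
    changes = sum(1 for k in range(8)
                  if (diffs[k] > 0) != (diffs[(k + 1) % 8] > 0))
    return "pass" if changes >= 4 else "regular"

# a single bump: the centre is a peak
bump = [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
print(classify(bump, 1, 1))  # -> peak
```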
series other
last changed 2003/04/23 15:50

_id 0286
authors Will, Barry F. and Siu-Pan Li , Thomas
year 1997
title Computers for Windows: Interactive Optimization Tools for Architects designing openings in walls (IOTA)
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.d4u
summary Size, shape and disposition of windows in walls have long been an integral expression of style in architecture. As buildings have grown taller the relationships of the windows to the ground plane and to the surrounding environments have become more complex and difficult to predict. Traditionally architects have had to use their own knowledge, experience and feelings in the design of windows. There may be few, if any, scientific bases for their decisions. The difficulty in making good design decisions is compounded because many criteria for window design, such as daylight, sunlight, ventilation, sound, view and privacy have to be considered simultaneously. It is here that computers can help, on the one hand, by providing ‘expert knowledge’ so that architects can consult the cumulative knowledge database before making a decision, whilst on the other hand, evaluations of the decisions taken can be compared with a given standard or with alternative solutions.

‘Expert knowledge’ provision has been made possible by the introduction of hypertext, the advancement of the world wide web and the development of large scale data-storage media. Much of the computer’s value to the architects lies in its ability to assist in the evaluation of a range of performance criteria. Without the help of a computer, architects are faced with impossibly complex arrays of solutions. This paper illustrates an evaluation tool for two factors which are important to the window design. The two factors to be investigated in this paper are sunlighting and views out of windows.

Sunlight is a quantitative factor that can theoretically be assessed by some mathematical formulae provided there is sufficient information for calculation, but when the total cumulative effects of insolation through the different seasons are required, in addition to yearly figures, a design in real-time evolution requires substantial computing power. Views out of windows are qualitative and subjective. They present difficulties in measurement by the use of conventional mathematical tools. These two fields of impact in window design are explored to demonstrate how computers can be used in assessing various options to produce optimal design solutions. This paper explains the methodologies, theories and principles underlying these evaluation tools. It also illustrates how an evaluation tool can be used as a design tool during the design process.
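The cumulative insolation idea above can be sketched as follows, assuming sun direction vectors are supplied by some solar-position source; the sample vectors and window normal below are invented for illustration, and real tools would also weight by irradiance and handle obstructions.

```python
# Hedged sketch: accumulate the direct-sun component on a window over
# sampled sun positions (unit vectors assumed throughout).
def direct_component(sun_dir, window_normal):
    """Cosine of the incidence angle, clamped to zero when the sun is
    behind the window plane."""
    dot = sum(s * n for s, n in zip(sun_dir, window_normal))
    return max(0.0, dot)

def cumulative_insolation(sun_dirs, window_normal):
    """Sum the direct component over all sampled sun positions."""
    return sum(direct_component(s, window_normal) for s in sun_dirs)

south = (0.0, -1.0, 0.0)                      # window facing south (-y)
suns = [(0.0, -0.7, 0.71),                    # morning sample
        (0.5, -0.5, 0.71),                    # midday sample
        (0.0, 0.7, 0.71)]                     # sun behind the facade
total = cumulative_insolation(suns, south)    # last sample contributes 0
```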

keywords Sunlight, View, Window Design, Performance Evaluation, Expert Systems, Simulation, Fuzzy Logic
series eCAADe
more http://info.tuwien.ac.at/ecaade/proc/li/li.htm
last changed 2022/06/07 07:50

_id 1767
authors Loveday, D.L., Virk, G.S., Cheung, J.Y.M. and Azzi, D.
year 1997
title Intelligence in buildings: the potential of advanced modelling
source Automation in Construction 6 (5-6) (1997) pp. 447-461
summary Intelligence in buildings usually implies facilities management via building automation systems (BAS). However, present-day commercial BAS adopt a rudimentary approach to data handling, control and fault detection, and there is much scope for improvement. This paper describes a model-based technique for raising the level of sophistication at which BAS currently operate. Using stochastic multivariable identification, models are derived which describe the behaviour of air temperature and relative humidity in a full-scale office zone equipped with a dedicated heating, ventilating and air-conditioning (HVAC) plant. The models are of good quality, giving prediction accuracies of ± 0.25°C in 19.2°C and of ± 0.6% rh in 53% rh when forecasting up to 15 minutes ahead. For forecasts up to 3 days ahead, accuracies are ± 0.65°C and ± 1.25% rh, respectively. The utility of the models for facilities management is investigated. The "temperature model" was employed within a predictive on/off control strategy for the office zone, and was shown to substantially improve temperature regulation and to reduce energy consumption in comparison with conventional on/off control. Comparison of prediction accuracies for two different situations, that is, the office with and without furniture plus carpet, showed that some level of furnishing is essential during the commissioning phase if model-based control of relative humidity is contemplated. The prospects are assessed for wide-scale replication of the model-based technique, and it is shown that deterministic simulation has potential to be used as a means of initialising a model structure and hence of selecting the sensors for a BAS for any building at the design stage. It is concluded that advanced model-based methods offer significant promise for improving BAS performance, and that proving trials in full-scale everyday situations are now needed prior to commercial development and installation.
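The predictive on/off strategy described in the summary can be sketched as follows; the first-order zone model and its constants are invented for illustration and stand in for the paper's identified stochastic multivariable models.

```python
# Hedged sketch: switch the heater on the forecast temperature rather
# than the measured one. Model constants here are assumptions.
def forecast(temp, heater_on, steps, a=0.9, b=0.5, t_out=10.0):
    """Predict zone temperature `steps` intervals ahead with a simple
    first-order model: next = a*temp + (1-a)*t_out + b*heating."""
    for _ in range(steps):
        temp = a * temp + (1 - a) * t_out + (b if heater_on else 0.0)
    return temp

def predictive_on_off(temp, setpoint, horizon=3):
    """Heat when the zone is predicted to fall below the setpoint."""
    return forecast(temp, heater_on=False, steps=horizon) < setpoint

# the zone is at setpoint now, but predicted to drift below it: heat early
print(predictive_on_off(temp=19.2, setpoint=19.2))  # -> True
```

Acting on the forecast rather than the current reading is what lets such a controller pre-empt temperature dips, which is the mechanism behind the improved regulation and reduced energy use the paper reports.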
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 07ae
authors Sook Lee, Y. and Mi Lee, S.
year 1997
title Analysis of mental maps for ideal apartments to develop and simulate an innovative residential interior space.
source Architectural and Urban Simulation Techniques in Research and Education [3rd EAEA-Conference Proceedings]
summary Even though the results of applied research have ideally been expected to be read and used by practitioners, written suggestions have been less persuasive, especially in visual fields such as environmental design, architecture, and interior design. Therefore, visualization of space has frequently been considered an ideal alternative form of suggestion and an effective method to disseminate research results and help decision makers. In order to make the visualized target space solid and mundane, a scientific research process to define the characteristics of the space should come first. This presentation consists of two parts: first, the research part; second, the design and simulation part. The purpose of the research was to identify the ideal residential interior characteristics on the basis of people's mental maps of ideal apartments. To achieve this goal, quantitative content analysis was applied to an existing data set of floor plans drawn by housewives. 2,215 floorplans were randomly selected from among 3,012 floorplans collected through a nation-wide housing design competition for ideal residential apartments. 213 selected variables were used to analyze the floorplans. Major contents were the presentational characteristics of the mental maps and the characteristics of design preference such as layout, composition, furnishing etc. As a result, current and possible future trends of the ideal residence were identified. On the basis of the results, design guidelines were generated. An interior spatial model for a small-size unit was developed using CAD according to the guidelines. To present it in a more effective way, computer-simulated images were made using 3DS. This paper is expected to generate a comparison of various methods for presenting research results such as written documents, drawings, simulated images, small-scale models for endoscopy and full-scale modeling.
keywords Architectural Endoscopy, Endoscopy, Simulation, Visualisation, Visualization, Real Environments
series EAEA
email
more http://www.bk.tudelft.nl/media/eaea/eaea97.html
last changed 2005/09/09 10:43

_id 75a8
authors Achten, Henri H.
year 1997
title Generic representations : an approach for modelling procedural and declarative knowledge of building types in architectural design
source Eindhoven University of Technology
summary The building type is a knowledge structure that is recognised as an important element in the architectural design process. For an architect, the type provides information about norms, layout, appearance, etc. of the kind of building that is being designed. Questions that seem unresolved about (computational) approaches to building types are the relationship between the many kinds of instances that are generally recognised as belonging to a particular building type, the way a type can deal with varying briefs (or with mixed use), and how a type can accommodate different sites. Approaches that aim to model building types as data structures of interrelated variables (so-called ‘prototypes’) face problems clarifying these questions. The research work at hand proposes to investigate the role of knowledge associated with building types in the design process. Knowledge of the building type must be represented during the design process. Therefore, it is necessary to find a representation which supports design decisions, supports the changes and transformations of the design during the design process, encompasses knowledge of the design task, and which relates to the way architects design. It is proposed in the research work that graphic representations can be used as a medium to encode knowledge of the building type. This is possible if they consistently encode the things they represent; if their knowledge content can be derived, and if they are versatile enough to support a design process of a building belonging to a type. A graphic representation consists of graphic entities such as vertices, lines, planes, shapes, symbols, etc. Establishing a graphic representation implies making design decisions with respect to these entities. Therefore it is necessary to identify the elements of the graphic representation that play a role in decision making. An approach based on the concept of ‘graphic units’ is developed. 
A graphic unit is a particular set of graphic entities that has some constant meaning. Examples are: zone, circulation scheme, axial system, and contour. Each graphic unit implies a particular kind of design decision (e.g. functional areas, system of circulation, spatial organisation, and layout of the building). By differentiating between appearance and meaning, it is possible to define the graphic unit relatively shape-independent. If a number of graphic representations have the same graphic units, they deal with the same kind of design decisions. Graphic representations that have such a specifically defined knowledge content are called ‘generic representations.’ An analysis of over 220 graphic representations in the literature on architecture results in 24 graphic units and 50 generic representations. For each generic representation the design decisions are identified. These decisions are informed by the nature of the design task at hand. If the design task is a building belonging to a building type, then knowledge of the building type is required. In a single generic representation knowledge of norms, rules, and principles associated with the building type are used. Therefore, a single generic representation encodes declarative knowledge of the building type. A sequence of generic representations encodes a series of design decisions which are informed by the design task. If the design task is a building type, then procedural knowledge of the building type is used. By means of the graphic unit and generic representation, it is possible to identify a number of relations that determine sequences of generic representations. These relations are: additional graphic units, themes of generic representations, and successive graphic units. Additional graphic units defines subsequent generic representations by adding a new graphic unit. Themes of generic representations defines groups of generic representations that deal with the same kind of design decisions. 
Successive graphic units defines preconditions for subsequent or previous generic representations. On the basis of themes it is possible to define six general sequences of generic representations. On the basis of additional and successive graphic units it is possible to define sequences of generic representations in themes. On the basis of these sequences, one particular sequence of 23 generic representations is defined. The particular sequence of generic representations structures the decision process of a building type. In order to test this assertion, the particular sequence is applied to the office building type. For each generic representation, it is possible to establish a graphic representation that follows the definition of the graphic units and to apply the required statements from the office building knowledge base. The application results in a sequence of graphic representations that particularises an office building design. Implementation of seven generic representations in a computer aided design system demonstrates the use of generic representations for design support. The set is large enough to provide additional weight to the conclusion that generic representations map declarative and procedural knowledge of the building type.
series thesis:PhD
email
more http://alexandria.tue.nl/extra2/9703788.pdf
last changed 2003/11/21 15:15

_id eea1
authors Achten, Henri
year 1997
title Generic Representations - Typical Design without the Use of Types
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 117-133
summary The building type is a (knowledge) structure that is both recognised as a constitutive cognitive element of human thought and as a constitutive computational element in CAAD systems. Questions that seem unresolved up to now about computational approaches to building types are the relationship between the various instances that are generally recognised as belonging to a particular building type, the way a type can deal with varying briefs (or with mixed functional use), and how a type can accommodate different sites. Approaches that aim to model building types as data structures of interrelated variables (so-called 'prototypes') face problems clarifying these questions. It is proposed in this research not to focus on a definition of 'type,' but rather to investigate the role of knowledge connected to building types in the design process. The basic proposition is that the graphic representations used to represent the state of the design object throughout the design process can be used as a medium to encode knowledge of the building type. This proposition claims that graphic representations consistently encode the things they represent, that it is possible to derive the knowledge content of graphic representations, and that there is enough diversity within graphic representations to support a design process of a building belonging to a type. In order to substantiate these claims, it is necessary to analyse graphic representations. In the research work, an approach based on the notion of 'graphic units' is developed. The graphic unit is defined and the analysis of graphic representations on the basis of the graphic unit is demonstrated. This analysis brings forward the knowledge content of single graphic representations. Such knowledge content is declarative knowledge. The graphic unit also provides the means to articulate the transition from one graphic representation to another graphic representation. Such transitions encode procedural knowledge. 
The principles of a sequence of generic representations are discussed, and it is demonstrated how a particular type - the office building type - is implemented in the theoretical work. Computational work implementing part of a sequence of generic representations of the office building type is discussed. The paper ends with a summary and an outline of future work.
series CAAD Futures
email
last changed 2003/11/21 15:15

_id 730e
authors Af Klercker, Jonas
year 1997
title Implementation of IT and CAD - what can Architect schools do?
source AVOCAAD First International Conference [AVOCAAD Conference Proceedings / ISBN 90-76101-01-09] Brussels (Belgium) 10-12 April 1997, pp. 83-92
summary In Sweden, representatives from the construction industry have put forward a research and development programme called "IT-Bygg 2002 - Implementation". It aims at making IT the vehicle for decreasing building costs while at the same time getting better quality and efficiency out of the industry. A seminar was held with some of the most experienced researchers, developers and practitioners of CAD in construction in Sweden. The activities were recorded, annotated, analysed and compiled afterwards, then presented to the participants for their agreement. Co-operation is the key to reaching these goals - IT and CAD are merely the means to improve it. Co-operation in an implementation phase is problematic enough without the technical difficulties of using computer programs created by the computer industry primarily for commercial reasons. The suggestion is that co-operation between software companies within Sweden would create a greater market to share than the sum of all individual efforts. In the short term, 2-5 years, the implementation of CAD and IT will demand a large educational effort from all actors in the construction process. In today's process the architect is looked upon as a natural coordinator of the design phase. In the integrated process the architect's methods and knowledge are central and must be spread to other categories of actors - what a challenge! At least in Sweden, the number of researchers and educators in CAAD is easily counted. How do we make the most of it?
series AVOCAAD
last changed 2005/09/09 10:48
