CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 520

_id 536e
authors Bouman, Ole
year 1997
title RealSpace in QuickTimes: architecture and digitization
source Rotterdam: Nai Publishers
summary Time and space, drastically compressed by the computer, have become interchangeable. Time is compressed in that once everything has been reduced to 'bits' of information, it becomes simultaneously accessible. Space is compressed in that once everything has been reduced to 'bits' of information, it can be conveyed from A to B with the speed of light. As a result of digitization, everything is in the here and now. Before very long, the whole world will be on disk. Salvation is but a modem away. The digitization process is often seen in terms of (information) technology. That is to say, one hears a lot of talk about the digital media, about computer hardware, about the modem, mobile phone, dictaphone, remote control, buzzer, data glove and the cable or satellite links in between. Besides, our heads are spinning from the progress made in the field of software, in which multimedia applications, with their integration of text, image and sound, especially attract our attention. But digitization is not just a question of technology, it also involves a cultural reorganization. The question is not just what the cultural implications of digitization will be, but also why our culture should give rise to digitization in the first place. Culture is not simply a function of technology; the reverse is surely also true. Anyone who thinks about cultural implications, is interested in the effects of the computer. And indeed, those effects are overwhelming, providing enough material for endless speculation. The digital paradigm will entail a new image of humankind and a further dilution of the notion of social perfectibility; it will create new notions of time and space, a new concept of cause and effect and of hierarchy, a different sort of public sphere, a new view of matter, and so on. In the process it will indubitably alter our environment. Offices, shopping centres, dockyards, schools, hospitals, prisons, cultural institutions, even the private domain of the home: all the familiar design types will be up for review. Fascinated, we watch how the new wave accelerates the process of social change. The most popular sport nowadays is 'surfing' - because everyone is keen to display their grasp of dirty realism. But there is another way of looking at it: under what sort of circumstances is the process of digitization actually taking place? What conditions do we provide that enable technology to exert the influence it does? This is a perspective that leaves room for individual and collective responsibility. Technology is not some inevitable process sweeping history along in a dynamics of its own. Rather, it is the result of choices we ourselves make and these choices can be debated in a way that is rarely done at present: digitization thanks to or in spite of human culture, that is the question. In addition to the distinction between culture as the cause or the effect of digitization, there are a number of other distinctions that are accentuated by the computer. The best known and most widely reported is the generation gap. It is certainly stretching things a bit to write off everybody over the age of 35, as sometimes happens, but there is no getting around the fact that for a large group of people digitization simply does not exist. Anyone who has been in the bit business for a few years can't help noticing that mum and dad are living in a different place altogether. (But they, at least, still have a sense of place!) 
In addition to this, it is gradually becoming clear that the age-old distinction between market and individual interests is still relevant in the digital era. On the one hand, the advance of cybernetics is determined by the laws of the marketplace which this capital-intensive industry must satisfy. Increased efficiency, labour productivity and cost-effectiveness play a leading role. The consumer market is chiefly interested in what is 'marketable': info- and edutainment. On the other hand, an increasing number of people are not prepared to wait for what the market has to offer them. They set to work on their own, appropriate networks and software programs, create their own domains in cyberspace, domains that are free from the principle whereby the computer simply reproduces the old world, only faster and better. Here it is possible to create a different world, one that has never existed before. One in which the Other finds a place. The computer works out a new paradigm for these creative spirits. In all these distinctions, architecture plays a key role. Owing to its many-sidedness, it excludes nothing and no one in advance. It is faced with the prospect of historic changes yet it has also created the preconditions for a digital culture. It is geared to the future, but has had plenty of experience with eternity. Owing to its status as the most expensive of arts, it is bound hand and foot to the laws of the marketplace. Yet it retains its capacity to provide scope for creativity and innovation, a margin of action that is free from standardization and regulation. The aim of RealSpace in QuickTimes is to show that the discipline of designing buildings, cities and landscapes is not only an exemplary illustration of the digital era but that it also provides scope for both collective and individual activity. It is not just architecture's charter that has been changed by the computer, but also its mandate. RealSpace in QuickTimes consists of an exhibition and an essay.
series other
email
last changed 2003/04/23 15:14

_id 0024
authors Breen, J. and Dijk, T. van
year 1997
title Modelling for eye level composition; design media experiments in an educational setting.
source Architectural and Urban Simulation Techniques in Research and Education [Proceedings of the 3rd European Architectural Endoscopy Association Conference / ISBN 90-407-1669-2]
summary In order to simulate the visual effects of designs at eye level, it is necessary to construct models from which (sequences of) images can be taken. This holds true for both Optical Endoscopy and Computer Aided Visualisation techniques. In what ways can an eye level approach stimulate spatial awareness and create insights into the workings of a design concept? Can Endoscopic methods be used effectively as a creative environment for design decision-making and teamwork and even to stimulate the generation of new design ideas? How should modelmaking be considered if it is to be of use in an ‘impatient’ design process, and how can students be made aware of the opportunities of both direct eye level observations from design models and of the more sophisticated endoscopic imaging techniques? This paper explores the theme of eye level modelling by focusing on a number of formal exercises and educational experiments carried out by the Delft Media group in recent years. An attempt is made to describe and evaluate these experiences, in order to draw conclusions and to signal possible new opportunities for eye level composition for the benefit of both design education and practice...
keywords Architectural Endoscopy, Endoscopy, Simulation, Visualisation, Visualization, Real Environments
series EAEA
email
more http://www.bk.tudelft.nl/media/eaea/eaea97.html
last changed 2005/09/09 10:43

_id 873a
authors Ng, Edward
year 1997
title An Evaluative Approach to Architectural Visualization
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 449-463
doi https://doi.org/10.52842/conf.caadria.1997.449
summary With the coming globalization and the virtualization of almost everything, we are indeed reliving a moment of history when, at the turn of the century, machines replaced craftsmen in mass-producing goods faster, cheaper and ‘better’ for the mass market, regardless of the appropriateness of using the machine. So much so that the recent proliferation of computer graphics has reached a stage where many are questioning its validity and usefulness in the advancement of architectural discourse. This paper argues that the pedagogy of the use of the new tools should be effective communication in vision and in representation; in short, saying what you do and doing what you say, no more and no less, or being ‘true’ and ‘honest’. The paper provides a hypothetical framework whereby the rationale of drawing can be more systematically understood and criticised, and it reports ways in which the framework is introduced in the teaching of the design studio. The focus of the experimental studio (Active Studio 1.6 beta) is to enable the substantiation of ideas and feelings through a critical manipulation of medium and techniques. The results are narratives in which the expression of intention as well as the drawings are both on trial.
series CAADRIA
last changed 2022/06/07 07:58

_id 7b96
authors Schley, M., Buday, R., Sanders, K. and Smith, D. (eds.)
year 1997
title AIA CAD layer guidelines
source Washington, DC: The American Institute of Architects Press
summary The power and potential of computer-aided design (CAD) is based on the ability to reuse and share information. This is particularly true in building design and construction, a field that involves extensive information and teamwork between a variety of consultants. CAD provides both a common medium of exchange and a tool for producing the documentation required for construction and management. The key to realizing the potential of CAD is using common organizing principles. In particular, standard organization of files and layers is essential for efficient work and communication. Virtually all CAD systems support the concept of layers. This function allows graphic information to be grouped for display or plotting purposes. Intelligent use of layers can reduce drawing time and improve drawing coordination. By turning selected layers on or off, a variety of different plotted sheets can be produced. The layer is the basic CAD tool for managing visual information. By making it possible to reuse information, layers reduce drawing time and improve coordination. Layers and the new class libraries and object data complement, rather than compete with each other. Using layers to manage the visual aspects of graphic entities, with class libraries and object data to store the non-graphic data, gives architects an efficient way to work in CAD.
series other
last changed 2003/04/23 15:14
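
A minimal sketch of the layer mechanism described in the record above: entities carry a layer name, and different plotted sheets are produced simply by toggling which layers are visible. The layer names below follow the discipline-code/major-group pattern of the guidelines but are invented examples, not quoted from the book.

from dataclasses import dataclass

@dataclass
class Entity:
    layer: str      # e.g. "A-WALL": discipline code, hyphen, major group (invented examples)
    geometry: str   # placeholder for the actual CAD geometry

def entities_on(entities, visible_layers):
    """Return only the entities whose layers are switched on for a given sheet."""
    return [e for e in entities if e.layer in visible_layers]

drawing = [
    Entity("A-WALL", "wall outline"),
    Entity("A-DOOR", "door swing"),
    Entity("E-LITE", "light fixture"),
    Entity("A-ANNO-DIMS", "dimension string"),
]

# The same model yields different plotted sheets by turning layers on or off.
floor_plan = entities_on(drawing, {"A-WALL", "A-DOOR", "A-ANNO-DIMS"})
electrical_plan = entities_on(drawing, {"A-WALL", "E-LITE"})
print(len(floor_plan), len(electrical_plan))   # 3 2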

_id 060b
authors Af Klercker, J.
year 1997
title A National Strategy for CAAD and IT-Implementation in the Construction Industry
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.o8u
summary The objective of this paper is to present a strategy for the implementation of CAD and IT in the construction and building management industry in Sweden. The interest is in how to make the best use of the limited resources in a small country or region, cooperating internationally while avoiding being totally dominated by the major international actors in the information technology market.

In Sweden, representatives of the construction and building management industry have put forward a research and development program called "IT-Bygg 2002 - Implementation". It aims at making IT the vehicle for decreasing building costs while at the same time getting better quality and efficiency out of the industry.

The presented strategy is based on a seminar with some of the most experienced researchers, developers and practitioners of CAD in Sweden. The activities were recorded and annotated, analyzed and put together afterwards.

The proposal, in brief, is that object-oriented distributed CAD is to be used in the long term. It will need to be based on international standards such as STEP, and it will take at least another five years to become established.

Meanwhile something temporary has to be used. Pragmatically a "de facto standard" on formats has to be accepted and implemented. To support new users of IT all software in use in the country will be analyzed, described and published for a national platform for IT-communication within the construction industry.

Finally the question is discussed "How can architect schools then contribute to IT being implemented within the housing sector at a regional or national level?" Some ideas are presented: Creating the good example, better support for the customer, sharing the holistic concept of the project with all actors, taking part in an integrated education process and international collaboration like AVOCAAD and ECAADE.

 

keywords CAAD, IT, Implementation, Education, Collaboration
series eCAADe
type normal paper
email
more http://info.tuwien.ac.at/ecaade/proc/afklerck/afklerck.htm
last changed 2022/06/07 07:50

_id 127c
authors Bhavnani, S.K. and John, B.E.
year 1997
title From Sufficient to Efficient Usage: An Analysis of Strategic Knowledge
source Proceedings of CHI'97 (1997), 91-98
summary Can good design guarantee the efficient use of computer tools? Can experience guarantee it? We raise these questions to explore why empirical studies of real-world usage show even experienced users under-utilizing the capabilities of computer applications. By analyzing the use of everyday devices and computer applications, as well as reviewing empirical studies, we conclude that neither good design nor experience may be able to guarantee efficient usage. Efficient use requires task decomposition strategies that exploit capabilities offered by computer applications such as the ability to aggregate objects, and to manipulate the aggregates with powerful operators. To understand the effects that strategies can have on performance, we present results from a GOMS analysis of a CAD task. Furthermore, we identify some key aggregation strategies that appear to generalize across applications. Such strategies may provide a framework to enable users to move from a sufficient to a more efficient use of computer tools.
keywords Strategies; Task Decomposition; Aggregation
series other
email
last changed 2003/11/21 15:16
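
A toy keystroke-level sketch of the aggregation argument summarised above. The operator times are invented placeholders, not values from the paper; the point is only that aggregating objects lets the powerful operator be paid for once instead of once per object.

SELECT = 1.2   # seconds to select one object (placeholder)
APPLY = 2.0    # seconds to invoke and confirm a powerful operator (placeholder)
GROUP = 1.5    # seconds to aggregate the current selection into one set (placeholder)

def sequential_strategy(n_objects: int) -> float:
    """Select and modify each object individually."""
    return n_objects * (SELECT + APPLY)

def aggregation_strategy(n_objects: int) -> float:
    """Select every object, aggregate the selection, then apply the operator once."""
    return n_objects * SELECT + GROUP + APPLY

for n in (5, 20, 100):
    print(n, sequential_strategy(n), aggregation_strategy(n))
# The gap widens with n: aggregation pays the APPLY cost once instead of n times,
# which is the kind of efficiency difference the paper's GOMS analysis quantifies.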

_id a58d
authors Cicognani, Anna
year 1997
title On the linguistic nature of Cyberspace and Virtual Communities
source CM Special Journal Issue. Edited by Dave Snowdon, Nottingham: Submitted
summary This paper argues for a linguistic explanation of the nature of Virtual Communities. Virtual Communities develop and grow in electronic space, or 'cyberspace'. Authors such as Benedikt, Meyrowitz and Mitchell have theorised about the nature of electronic space, whilst Lefebvre, Popper, Hakim Bey (aka Lamborn Wilson) and Kuhn have theorised more generally about the nature of space. Extending this tradition and the works of these authors, this paper presents a language-based perspective on the nature of electronic spaces. Behaviour in cyberspace is based on and regulated by hardware, software tools and interfaces. A definition of electronic space cannot be given beyond its linguistic characteristics, which underlie and sustain it. The author believes that the more users and developers understand the relationship between language and cyberspace, the more they will be able to use specific metaphors for dwelling in and inhabiting it. In particular, MUDs/MOOs and the Web are interesting places for testing and observing social behaviours and dynamics.
series journal paper
email
last changed 2003/04/23 15:50

_id 2354
authors Clayden, A. and Szalapaj, P.
year 1997
title Architecture in Landscape: Integrated CAD Environments for Contextually Situated Design
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.q6p
summary This paper explores the future role of a more holistic and integrated approach to the design of architecture in landscape. Many of the design exploration and presentation techniques presently used by particular design professions do not lend themselves to an inherently collaborative design strategy.

Within contemporary digital environments, there are increasing opportunities to explore and evaluate design proposals which integrate both architectural and landscape aspects. The production of integrated design solutions exploring buildings and their surrounding context is now possible through the design development of shared 3-D and 4-D virtual environments, in which buildings no longer float in space.

The scope of landscape design has expanded through the application of techniques such as GIS allowing interpretations that include social, economic and environmental dimensions. In architecture, for example, object-oriented CAD environments now make it feasible to integrate conventional modelling techniques with analytical evaluations such as energy calculations and lighting simulations. These were all ambitions of architects and landscape designers in the 70s when computer power restricted the successful implementation of these ideas. Instead, the commercial trend at that time moved towards isolated specialist design tools in particular areas. Prior to recent innovations in computing, the closely related disciplines of architecture and landscape have been separated through the unnecessary development, in our view, of their own symbolic representations, and the subsequent computer applications. This has led to an unnatural separation between what were once closely related disciplines.

Significant increases in the performance of computers are now making it possible to move on from symbolic representations towards more contextual and meaningful representations. For example, the application of realistic material textures to CAD-generated building models can then be linked to energy calculations using the chosen materials. It is now possible for a tree to look like a tree, to have leaves and even to be botanically identifiable. The building and landscape can be rendered from a common database of digital samples taken from the real world. The complete model may be viewed in a more meaningful way either through stills or animation, or better still, through a total simulation of the lifecycle of the design proposal. The model may also be used to explore environmental/energy considerations and changes in the balance between the building and its context, most immediately through the growth simulation of vegetation but also as part of a larger planning model.

The Internet has a key role to play in facilitating this emerging collaborative design process. Design professionals are now able via the net to work on a shared model and to explore and test designs through the development of VRML, JAVA, whiteboarding and video conferencing. The end product may potentially be something that can be more easily viewed by the client/user. The ideas presented in this paper form the basis for the development of a dual course in landscape and architecture. This will create new teaching opportunities for exploring the design of buildings and sites through the shared development of a common computer model.

keywords Integrated Design Process, Landscape and Architecture, Shared Environments
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/szalapaj/szalapaj.htm
last changed 2022/06/07 07:50

_id 426f
authors Colajanni, Benedetto and Pellitteri, Giuseppe
year 1997
title Image Recognition: from Syntax to Semantics
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.n7x
summary In a previous paper the authors presented an analyser of simple architectural images. It works at the syntactical level inasmuch as it is able to detect the elementary components of the images and to perform on them some analyses regarding their reciprocal positions and combinations.

Here we present a second step in the development of the analyser: the implementation of some semantic capabilities. The most elementary level of semantics is the simple recognition of each object present in the architectural image, which in turn means attributing to each object the name of the class of similar objects to which it is supposed to pertain. While at the syntactical level pertinence to a class implies the identity of an object with the class prototype, at the semantic level this is not compulsory: objects having approximately the same shape can pertain to the same class, that is, have the same architectural meaning. Consequently, in order to detect the pertinence of an object to a class, that is, to give it an architectural meaning, two things are necessary: a database containing the class prototypes to which the recognized objects are to be assigned, and a tool able to "measure" the difference between two shapes.

keywords Image Analysis, Semantics
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/pell/pell.htm
last changed 2022/06/07 07:50
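
One way to make the "measure the difference between two shapes" step of the record above concrete is a discrete symmetric Hausdorff distance between polygon vertex lists, with classification by nearest prototype. This is only an assumed stand-in for whatever measure the authors implemented, and it presupposes shapes already normalised for position and scale; the prototype names are invented.

from math import dist

def hausdorff(a, b):
    """Symmetric discrete Hausdorff distance between two lists of (x, y) points."""
    def one_way(p, q):
        return max(min(dist(x, y) for y in q) for x in p)
    return max(one_way(a, b), one_way(b, a))

def classify(shape, prototypes):
    """Return the name of the prototype whose shape is closest to the given shape."""
    return min(prototypes, key=lambda name: hausdorff(shape, prototypes[name]))

prototypes = {
    "square opening": [(0, 0), (1, 0), (1, 1), (0, 1)],
    "tall window":    [(0, 0), (0.5, 0), (0.5, 2), (0, 2)],
}
detected = [(0.02, 0.0), (1.0, 0.05), (0.98, 1.0), (0.0, 0.97)]
print(classify(detected, prototypes))   # "square opening"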

_id 460e
authors Dannettel, Mark E
year 1997
title Interactive Multimedia Design: Operational Structures and Intuitive Environments for CD-ROM
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 415-427
doi https://doi.org/10.52842/conf.caadria.1997.415
summary This paper presents practical design concepts for the production of CD-ROMs or on-line media projects intended for scholastic and professional use. It is based on the experience and knowledge gained while developing a multimedia package at the Department of Architecture at CUHK. The package deals exclusively with the technical issue of vertical transportation in buildings, and is intended to be used as a design tool in professional offices as well as in classroom settings. The research and production required for the development of the structures, formats and interfaces of this project, along with the consequential evaluation and revision of this work, has led to a greater understanding of appropriate applications for interactive multimedia designs. Specifically, the paper addresses the fundamental issue of ‘user-format’, and a distinction is made between applications which operate as ‘tools’ and those which operate as ‘resources’. Descriptions are provided for both types of operational format, and suggestions are made as to how one might decide which format would be appropriate for a specific project. Briefly, resource procedures imply that a user actively pursues information in a relatively static environment, while tool procedures imply that a user works jointly with the software to process information and arrive at a unique output. This distinction between the two formats is mostly grounded in the design of the structure and user interface, and thus the point is made that the material content of the application does not necessarily imply a mandatory use of either format. In light of this observation that an application’s format relies on the appropriateness of operational procedures, rather than on its material content, further discussions of the implications of such procedures (using a ‘resource’ vs. using a ‘tool’) are provided.
series CAADRIA
email
last changed 2022/06/07 07:55

_id b3f6
authors Goodwin, G.
year 1997
title Software and hardware summary
source Automation in Construction 6 (1) (1997) pp. 29-31
summary With the rate of change accelerating in both technological development and in the spread of global markets, very few cost cutting businesses will survive in a value added marketplace. Information sharing over networks has a lot to offer the Industry by way of eliminating delays from the construction process and inventory from the Supply Chain. The most significant change by 2005 will be in networking and communications. Discovering design flaws at the design stage rather than when the building is in use must be very attractive to clients of the Construction Industry. Construction firms must have some sort of IT Strategy or clear view of how to exploit IT to support their businesses. Large companies need to act as coaches and mentors to smaller ones. The Industry cannot maximise its IT benefits unless all the players participate.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 02e4
authors Groh, Paul H.
year 1997
title Computer Visualization as a Tool for the Conceptual Understanding of Architecture
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 243-248
doi https://doi.org/10.52842/conf.acadia.1997.243
summary A good piece of architecture contains many levels of interrelated complexity. Understanding these levels and their interrelationship is critical to the understanding of a building to both architects and non-architects alike. A building's form, function, structure, materials, and details all relate to and impact one another. By selectively dissecting and taking apart buildings through their representations, one can carefully examine and understand the interrelationship of these building components.

With the recent introduction of computer graphics, much attention has been given to the representation of architecture. Floor plans and elevations have remained relatively unchanged, while digital animation and photorealistic renderings have become exciting new means of representation. A problem with the majority of this work, and especially photorealistic rendering, is that it represents the building as an image and concentrates on how a building looks as opposed to how it works. Oftentimes this "look" is artificial, expressing the incapacity of programs (or their users) to represent the complexities of materials, lighting, and perspective. By using digital representation in a descriptive, less realistic way, one can explore the rich complexities and interrelationships of architecture. Instead of representing architecture as a finished product, it is possible to represent the ideas and concepts of the project.

series ACADIA
email
last changed 2022/06/07 07:51

_id cc87
authors Johnson, Scott
year 1997
title What's in a Representation, Why Do We Care, and What Does It Mean? Examining Evidence from Psychology
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 5-15
doi https://doi.org/10.52842/conf.acadia.1997.005
summary This paper examines psychological evidence on the nature and role of representations in cognition. Both internal (mental) and external (physical or digital) representations are considered. It is discovered that both types of representation are deeply linked to thought processes. They are linked to learning, the ability to use existing knowledge, and problem solving strategies. The links between representations, thought processes, and behavior are so deep that even eye movements are partly governed by representations. Choice of representations can affect limited cognitive resources like attention and short-term memory by forcing a person to try to utilize poorly organized information or perform "translations" from one representation to another. The implications of this evidence are discussed. Based on these findings, a set of guidelines is presented, for digital representations which minimize drain of cognitive resources. These guidelines describe what sorts of characteristics and behaviors a representation should exhibit, and what sorts of information it should contain in order to accommodate and facilitate design. Current attempts to implement such representations are discussed.

series ACADIA
email
last changed 2022/06/07 07:52

_id 1767
authors Loveday, D.L., Virk, G.S., Cheung, J.Y.M. and Azzi, D.
year 1997
title Intelligence in buildings: the potential of advanced modelling
source Automation in Construction 6 (5-6) (1997) pp. 447-461
summary Intelligence in buildings usually implies facilities management via building automation systems (BAS). However, present-day commercial BAS adopt a rudimentary approach to data handling, control and fault detection, and there is much scope for improvement. This paper describes a model-based technique for raising the level of sophistication at which BAS currently operate. Using stochastic multivariable identification, models are derived which describe the behaviour of air temperature and relative humidity in a full-scale office zone equipped with a dedicated heating, ventilating and air-conditioning (HVAC) plant. The models are of good quality, giving prediction accuracies of ± 0.25°C in 19.2°C and of ± 0.6% rh in 53% rh when forecasting up to 15 minutes ahead. For forecasts up to 3 days ahead, accuracies are ± 0.65°C and ± 1.25% rh, respectively. The utility of the models for facilities management is investigated. The "temperature model" was employed within a predictive on/off control strategy for the office zone, and was shown to substantially improve temperature regulation and to reduce energy consumption in comparison with conventional on/off control. Comparison of prediction accuracies for two different situations, that is, the office with and without furniture plus carpet, showed that some level of furnishing is essential during the commissioning phase if model-based control of relative humidity is contemplated. The prospects are assessed for wide-scale replication of the model-based technique, and it is shown that deterministic simulation has potential to be used as a means of initialising a model structure and hence of selecting the sensors for a BAS for any building at the design stage. It is concluded that advanced model-based methods offer significant promise for improving BAS performance, and that proving trials in full-scale everyday situations are now needed prior to commercial development and installation.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
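
The predictive on/off idea reported above can be illustrated with a deliberately crude single-state model: forecast the zone temperature a few steps ahead under both possible heater states and switch on the prediction rather than on the current error. The model structure and every constant below are invented for illustration; the paper identifies stochastic multivariable models from measured data, which this sketch does not attempt.

A, B, C = 0.95, 0.40, 0.75   # decay, heater gain and ambient drive per 5-minute step (placeholders)

def forecast(temp, heater_on, steps=3):
    """Roll the toy model forward 'steps' intervals (3 x 5 min = 15 min ahead)."""
    for _ in range(steps):
        temp = A * temp + B * (1.0 if heater_on else 0.0) + C
    return temp

def predictive_on_off(temp, heater, setpoint=19.2, band=0.25):
    """Decide the next heater state from the 15-minute forecast."""
    if forecast(temp, heater_on=True) > setpoint + band:
        return False              # keeping the heat on would overshoot: switch off early
    if forecast(temp, heater_on=False) < setpoint - band:
        return True               # coasting would undershoot: switch on early
    return heater                 # forecast stays inside the band: keep the current state

temp, heater = 18.0, False
for step in range(12):            # simulate one hour in 5-minute steps
    heater = predictive_on_off(temp, heater)
    temp = A * temp + B * (1.0 if heater else 0.0) + C
    print(f"step {step:2d}  heater={'on ' if heater else 'off'}  temp={temp:.2f}")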

_id cf2011_p093
id cf2011_p093
authors Nguyen, Thi Lan Truc; Tan Beng Kiang
year 2011
title Understanding Shared Space for Informal Interaction among Geographically Distributed Teams
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 41-54.
summary In a design project, much creative work is done in teams and thus requires spaces for collaborative work such as conference rooms, project rooms and chill-out areas. These spaces are designed to provide an atmosphere conducive to discussion and communication, ranging from formal meetings to informal communication. According to Kraut et al. (E. Kraut et al., 1990), informal communication is an important factor for the success of collaboration and is defined as “conversations that take place at the time, with the participants, and about the topics at hand”. It often occurs spontaneously, by chance and in a face-to-face manner. As much research has shown, good and creative ideas more often originate from impromptu meetings than from formal meetings (Grajewski, 1993; A. Isaacs et al., 1997). Therefore, places for informal communication are taken into account in workplace design and scattered throughout the building in order to stimulate face-to-face interaction, especially serendipitous communication among different groups across disciplines such as engineering, technology and design. Nowadays, the members of a project team are not confined to one location but are spread widely in geographically distributed collaborations. Being separated by long physical distances, informal interaction by chance is impossible since people are not co-located. In order to retain the benefit of informal interaction in collaborative work, research has developed a variety of ways to shorten the physical distance and bring people together in one shared space. Technologies to support informal interaction at a distance include video-based technologies, virtual reality technologies, location-based technologies and ubiquitous technologies. These technologies help people stay aware of others’ availability in a distributed environment and socialize and interact in a multi-user virtual environment. Each type of application supports informal interaction through the characteristics of the employed technology. One of the conditions for promoting frequent and impromptu face-to-face communication is being co-located in one space, in which the spatial setting acts as a catalyst to increase the likelihood of frequent encounters. Therefore, this paper analyses the degree to which a sense of shared space is supported by these technical approaches. This analysis helps to identify the trade-off features of each shared-space technology and its current problems. A taxonomy of shared space is introduced, based on three types of shared-space technologies for supporting informal interaction. These types are named shared physical environments, collaborative virtual environments and mixed reality environments, and are ordered by increasing sense of shared space. Based on the problems learnt from other technical approaches and the nature of informal interaction, this paper proposes a physical-virtual shared space for supporting intended and opportunistic informal interaction. The shared space is created by augmenting a 3D collaborative virtual environment (CVE) with the real-world scene on the virtual-world side, and blending the CVE scene into the physical settings on the real-world side. Given this, the two spaces are merged into one global structure. With an augmented view of the real world, geographically distributed co-workers who populate the 3D CVE can encounter and interact with their real-world counterparts in a meaningful and natural manner.
keywords shared space, collaborative virtual environment, informal interaction, intended interaction, opportunistic interaction
series CAAD Futures
email
last changed 2012/02/11 19:21

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen.

1. The history of Repligator and Gliftic

1.1 Repligator

In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4.

Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful.

1.2 Getting to Gliftic

Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons [Figure 1: Mandala bred with an array of regular polygons]. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation).

1.3 Gliftic today

Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic [Figure 2: Mandala interpreted with arabesques; Figure 3: Trellis interpreted with "graphic ivy"; Figure 4: Regular dots interpreted as "sparks"].

1.4 Forms in Gliftic V1

Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons.

1.5 Color Schemes in Gliftic V1

When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image.

1.6 Interpretations in Gliftic V1

Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag.

1.7 Applications of Gliftic

Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later.

2. The future of Gliftic, 3 possibilities

Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them.

2.1 Continue the current development "linearly"

Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations.

2.2 Allow the artist to program Gliftic

It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic

This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [1] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric."

3. References

1. Ransen, Owen. "From Ramon Llull to Image Idea Generation". Proceedings of the 1998 Milan First International Conference on Generative Art.
2. Aleksander, Igor. "How To Build A Mind". Weidenfeld and Nicolson, 1999.
3. Ward, Adrian and Cox, Geof. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
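
The "genes are the list of points defining the polygon" representation in the summary above admits a very direct crossover: with two closed parents that share a vertex count, a child can be formed vertex by vertex. The blend below is one illustrative operator with invented shapes, not a reconstruction of the combinations the author reports trying.

from math import cos, sin, pi

def regular_polygon(n_sides, radius=1.0):
    """A circle approximated by a regular n-gon, as in the summary's example."""
    return [(radius * cos(2 * pi * k / n_sides),
             radius * sin(2 * pi * k / n_sides)) for k in range(n_sides)]

def spiky_polygon(n_sides, inner=0.4, outer=1.0):
    """A star-like closed polygon alternating between two radii."""
    return [((outer if k % 2 == 0 else inner) * cos(2 * pi * k / n_sides),
             (outer if k % 2 == 0 else inner) * sin(2 * pi * k / n_sides))
            for k in range(n_sides)]

def blend_crossover(parent_a, parent_b, weight=0.5):
    """Child vertex k is a weighted average of vertex k of each parent."""
    assert len(parent_a) == len(parent_b), "parents must share a vertex count"
    return [(weight * ax + (1 - weight) * bx, weight * ay + (1 - weight) * by)
            for (ax, ay), (bx, by) in zip(parent_a, parent_b)]

child = blend_crossover(regular_polygon(100), spiky_polygon(100), weight=0.5)
print(len(child), child[0])   # 100 vertices; each lies between the corresponding parent vertices

Because corresponding vertices are averaged, two parents whose symmetries are aligned produce a child with the same symmetry, which touches on the symmetry-preservation requirement mentioned in the summary.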

_id sigradi2006_c012b
id sigradi2006_c012b
authors Rodriguez Barros, Diana and Carmena, Sonia
year 2006
title Estudio Descriptivo de Prácticas Pedagógicas Mediadas por Tecnologías Digitales en Facultades de Arquitectura y Diseño asociadas a la buena Enseñanza [Descriptive study of pedagogical practices mediated by digital technologies in schools of architecture and design, associated with good teaching]
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 191-194
summary A descriptive study linked to documentary research is presented. It seeks to understand, interpret and critically reconstruct current studio-based design teaching practices in schools of architecture and design of the region in virtual environments, as associated with good teaching. The new emergent post-technocratic approach of Burbules & Callister (2001) was used. The subject is approached from the perspective of the authors, in its natural settings, in all its complexity and implications. A quanti-qualitative methodology was applied, integrating techniques of revision, analysis, evaluation and interpretation of documentary textual and visual materials from primary sources. The study is based on a selection of works presented at SIGraDi congresses, from its creation in 1997 to the present, in extended and updated versions by their authors. As conclusions, the study recognizes professors who show expertise and disciplinary command, who carry out research tasks tied to their teaching practice, who incorporate technologies while weighing limitations and advantages, and who recognize the multiple effects implicit in technologically mediated practices.
series SIGRADI
email
last changed 2016/03/10 09:59

_id a8ff
authors Sanchez, Santiago, Zulueta, Alberto and Barrallo, Javier
year 1997
title CAAD and Historical Buildings: The Importance of the Simulation of the Historical Process
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.u7b
summary Most of the problems that CAAD deals with are located in contemporary buildings. But many buildings of the historical heritage also need special attention in their computer design prior to restoration projects. Generally, in restoration work, hand drawing and artistic criteria have been more usual than working with precise topographic data and accurate technical plans.

But even a very rigorous design is not always enough to start restoration work. The actual state of a historical building may have been modified substantially from its original state due to previous interventions, wars, seismic movements, erosion, biological aggressions or any other historical event.

So, it is necessary to join CAAD tasks with a simulation of the historical process undergone by the building. Historical data and ancient cartography must be the basis of all the CAAD work, and the quality of the computer 3D model can be established by comparing it with the original available maps.

This paper explains the CAAD work and the intervention proposals for the restoration of the City Walls of Hondarribia, a small Spanish village located on the border between Spain and France. These Renaissance bastioned walls were partially destroyed during many wars with France. Exact knowledge of their original trace and dimensions is only possible by comparing the CAD models with the plans held in the Spanish Military Archives since the 16th century.

The digital storage and indexing of all the historical information, its comparison with photographs of the city walls, the creation of photorealistic images of the intervention proposals, and the influence of the structural repairs on the final project will be explained in the CAAD context.

keywords CAAD, Historical Buildings
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/barrallo/sanchez.htm
last changed 2022/06/07 07:50

_id avocaad_2001_20
id avocaad_2001_20
authors Shen-Kai Tang
year 2001
title Toward a procedure of computer simulation in the restoration of historical architecture
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In the field of architectural design, "visualization" generally refers to media that communicate and represent the ideas of designers, such as ordinary drafts, maps, perspectives, photos and physical models (Rahman, 1992; Susan, 2000). The main reason we adopt visualization is that it enables us to understand and control complicated procedures clearly (Gombrich, 1990). Secondly, design knowledge is acquired more from published visualized images than from personal experience (Evans, 1989). The importance of visual representation is thus manifest. Owing to the development of computer technology in recent years, various computer aided design systems have been invented and are widely used, such as image processing, computer graphics, computer modeling/rendering, animation, multimedia, virtual reality and collaboration (Lawson, 1995; Liu, 1996). Conventional media have largely been replaced by computer media, and visualization has been brought further into the computerized stage. The procedure of visual impact analysis and assessment (VIAA), addressed by Rahman (1992), has been renewed and amended for the intervention of the computer (Liu, 2000). Based on the procedures above, a great number of applied studies have been carried out. It is therefore evident that computer visualization is helpful to discussion and evaluation during the design process (Hall, 1988, 1990, 1992, 1995, 1996, 1997, 1998; Liu, 1997; Sasada, 1986, 1988, 1990, 1993, 1997, 1998). In addition to the process of architectural design, computer visualization is also applied to construction, which is repeatedly amended and corrected by means of computer-simulated images (Liu, 2000). Potier (2000) investigates the contextual research and restoration of historical architecture by means of computer simulation before the practical restoration is carried out. In this way he established a communicative mode among archaeologists and architects via computer media. In research on the restoration and preservation of historical architecture in Taiwan, many scholars have devoted themselves to studies of historical contextual criticism (Shi, 1988, 1990, 1991, 1992, 1995; Fu, 1995, 1997; Chiu, 2000). Clues that accompany the historical contextual criticism (such as oral information, writings, photographs, pictures, etc.) help to explore the construction and the procedure of restoration (Hung, 1995), and serve as an aid to studies of the usage and durability of the materials in the restoration of historical architecture (Dasser, 1990; Wang, 1998). Many clues are lost, because historical architecture is often age-old (Hung, 1995). Under these circumstances, restoration of historical architecture can only proceed from restricted pictures, restricted written data and restricted oral information (Shi, 1989). Therefore, computer simulation has been employed by scholars to simulate the condition of historical architecture after restoration with such restricted information (Potier, 2000). Yet this is only the early stage of computer-aided restoration.
The focus of the paper is to explore whether computer visual simulation can help to investigate the practice of restoration and the estimation and evaluation after restoration. By exploring the restoration of historical architecture (taking the Gigi Train Station destroyed by the earthquake last September as the operating example), this study aims to establish a complete computer visualization workflow, including the concept of restoration, the practice of restoration, and the estimation and evaluation of restoration. The research simulates the process of restoration by computer simulation based on visualized media (restricted pictures, restricted written data and restricted oral information) and the specialized experience of historical architects (Potier, 2000). During this process, the simulated alternatives are discussed repeatedly with craftsmen, and the results serve as the basis for evaluating and adjusting the simulation process and its outcome. In this way we address a suitable and complete process of computer visualization for historical architecture. The significance of this paper is that we are able to control every detail more exactly, and thus prevent possible problems during the process of restoration of historical architecture.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id c14d
authors Silva, Neander
year 1997
title Artificial Intelligence and 3D Modelling Exploration: An Integrated Digital Design Studio
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.l5p
summary

This paper describes a CAAD teaching strategy in which some Artificial Intelligence techniques are integrated with 3D modelling exploration. The main objective is to lead the students towards "repertoire" acquisition and creative exploration of design alternatives. This strategy is based on dialogue emulation, graphic precedent libraries, and 3D modelling as a medium of design study.

The course syllabus is developed in two parts: a first stage in which the students interact with an intelligent interface that emulates a dialogue. This interface produces advice composed of either precedents or possible new solutions. Textual descriptions of precedents are coupled with graphical illustrations and textual descriptions of possible new solutions are coupled with sets of 3D components. The second and final stage of the course is based on 3D modelling, not simply as a means of presentation, but as a design study medium. The students are then encouraged to get the system’s output from the first stage of the course and explore it graphically. This is done through an environment in which modelling in 3D is straightforward allowing the focus to be placed on design exploration rather than simply on design presentation. The students go back to the first stage for further advice depending on the results achieved in the second stage. This cycle is repeated until the design solution receives a satisfactory assessment.

keywords Education, Design Process, Interfaces, Neural Networks, 3D Modelling
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/silva/silva.htm
last changed 2022/06/07 07:50
