CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 538

_id 70c4
authors Gross, M.D., Do, E.Y.-L. and Johnson, B.R.
year 2000
title Beyond the low-hanging fruit: Information technology in architectural design past, present and future
source W. Mitchell and J. Fernandez (eds), ACSA Technology Conference, MIT Press, Cambridge MA
summary Today's commercial CAD software is the product of years of research that began in the 1960's and 1970's. These applications have found widespread use in the architectural marketplace; nevertheless they represent only the first fruits of research in computer aided design. New developments based on research in human-computer interaction (HCI), computer-supported collaborative work (CSCW), and virtual reality (VR) will result in a next generation of tools for architectural design. Although preliminary applications to design have been demonstrated in each of these areas, excellent opportunities remain to exploit new technologies and insights in service of better design software. In this paper we briefly examine each of these areas using examples from our own work to discuss the prospects for future research. We envision that future design technologies will develop from current and traditional conventions of practice combined with forward looking application of emerging technologies. In HCI, pen based interaction will allow architects to use the pencil again, without sacrificing the added power of computer aided design tools, and speech recognition will begin to play a role in capturing and retrieving design critique and discussion. In CSCW, a new generation of applications will address the needs of designers more closely than current general purpose meeting tools. In VR, applications are possible that use the technology not simply to provide a sense of three-dimensional presence, but that organize design information spatially, integrating it into the representation of artifacts and places.
series other
email
last changed 2003/04/23 15:50

_id b0e7
authors Ahmad Rafi, M.E. and Karboulonis, P.
year 2000
title The Re-Convergence of Art and Science: A Vehicle for Creativity
source CAADRIA 2000 [Proceedings of the Fifth Conference on Computer Aided Architectural Design Research in Asia / ISBN 981-04-2491-4] Singapore 18-19 May 2000, pp. 491-500
doi https://doi.org/10.52842/conf.caadria.2000.491
summary Ever-increasing complexity in product design and the need to deliver a cost-effective solution that benefits from a dynamic approach require the employment and adoption of innovative design methods which ensure that products are of the highest quality and meet or exceed customers' expectations. According to Bronowski (1976), science and art were originally two faces of the same human creativity. However, as civilisation advanced and work became specialised, the dichotomy of science and art gradually became apparent. Hence scientists and artists were born, and began to develop work that stood at polar opposites. The sense of beauty itself became separated from science and was confined within the field of art. This dichotomy persisted through mankind's efforts to advance civilisation to its present state. This paper briefly examines the relationship between art and science through the ages and discusses their relatively recent re-convergence. Based on this hypothesis, the paper studies the current state of the convergence between the arts and sciences and examines the current relationship between the two by considering real-world applications and products. The study of such products, and of the successes and impact they had in the marketplace owing to their designs and aesthetics rather than the advanced technology that had partially failed them, appears to support this argument. This text further argues that a re-convergence between art and science is currently occurring and highlights the need to accelerate this process. It is suggested that re-convergence is a result of new technologies adopted by practitioners, including effective visualisation and communication of ideas and concepts. Such elements are widely found today in multimedia and Virtual Environments (VEs), where such tools offer increased power and new abilities to both scientists and designers as each ventures into the other's domain. This paper highlights the need for the employment of emerging computer-based real-time interactive technologies that are expected to enhance the design process through real-time prototyping and visualisation, better decision-making, higher-quality communication and collaboration, lesser error and reduced design cycles. Effective employment and adoption of innovative design methods that ensure products are delivered on time and within budget, are of the highest quality and meet customer expectations are becoming of ever-increasing importance. Such tools and concepts are outlined and their roles in the industries they currently serve are identified. Case studies from differing fields are also examined. It is also suggested that Virtual Reality interfaces should be used and given access to Computer Aided Design (CAD) model information and data so that users may interrogate virtual models for additional information and functionality. Adoption and application of such integrated technologies over the Internet and their relevance to electronic commerce are also discussed. Finally, emerging software and hardware technologies are outlined and case studies from the architecture, electronic games and retail industries, among others, are discussed; the benefits are subsequently put forward to support the argument. The requirements for adopting such technologies, in terms of finance, skills and process management, are also considered and outlined.
series CAADRIA
email
last changed 2022/06/07 07:54

_id 53c8
authors Donath, Dirk and Lömker, Thorsten Michael
year 2000
title Illusion, Frustration and Vision in Computer-Aided Project Planning: A Reflection and Outlook on the Use of Computing in Architecture
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 3-9
doi https://doi.org/10.52842/conf.acadia.2000.003
summary This paper examines the progressive and pragmatic use of computers and CAAD systems in architectural practice. With the aid of three scenarios, it illustrates the gainful implementation of computer-aided project planning in architecture. The first scenario describes an actual implementation situation and points out abortive conceptual developments in office organization as well as in software technology. Scenario two outlines the essential features of an integrated building design system and the efforts involved in its implementation in architectural practice. It clearly defines preconditions for implementation and focuses on feasible concepts for the integration of different database management systems. It also takes a glance at paradigms of conceptual work currently under development. The third scenario deals with the structure and integration of innovative concepts and the responsibility the architect will bear with regard to necessary alterations in office and workgroup organization. A future-oriented building design system is described that distinguishes itself from existing programs through its modular, net-based structure. With reference to today's situation in architectural offices and the improvements that can realistically be achieved, this article demonstrates courses for future IT support on the basis of an ongoing research project. The presented project is part of the special research area 524 "Materials and Constructions for the Revitalization of Existing Buildings", which is funded by the Deutsche Forschungsgemeinschaft. It deals with the integration of the various parties involved in the revitalization of existing buildings, as well as with the provision of adequate information within the planning process resting upon the survey of existing building substance. Additional concepts that might change the way an architect's work is organized are also presented. "Case-based reasoning" methods will make informal knowledge available, leading to a digital memory of preservable solutions.
series ACADIA
email
last changed 2022/06/07 07:55

_id 349e
authors Durmisevic, Sanja
year 2002
title Perception Aspects in Underground Spaces using Intelligent Knowledge Modeling
source Delft University of Technology
summary Intensification, combination and transformation are the main strategies for the future spatial development of the Netherlands, as stated in the Fifth Bill on Spatial Planning. These strategies indicate that in the future, space should be utilized in a more compact and more efficient way, requiring, at the same time, re-evaluation of the existing built environment and finding ways to improve it. In this context, the concept of multiple space usage is accentuated, which would focus on intensive four-dimensional spatial exploration. Underground space is acknowledged as an important part of multiple space usage. In the document 'Spatial Exploration 2000', underground space is recognized by policy makers as an important new 'frontier' that could make a significant contribution to future spatial requirements. In a relatively short period, underground space has become an important research area. Although among specialists there is appreciation of what underground space could provide for densely populated urban areas, the public still has reservations, which mostly relate to the poor quality of these spaces. Many realized underground projects, notably subways, have resulted in poor user satisfaction. Today, there is still a significant knowledge gap related to the perception of underground space. There is also a lack of detailed documentation on actual applications of the theories, followed by research results and applied techniques. This is the case in different areas of architectural design, but is perhaps most evident for underground spaces due to their infancy in general architectural practice. In order to create better designs, diverse aspects, which are very often of a qualitative nature, should be considered in perspective, with the final goal of improving the quality and image of underground space. In the architectural design process, one has to establish certain relations among design information in advance, to make design backed by sound rationale. The main difficulty at this point is that such relationships may not be determinable for various reasons. One example may be the vagueness of architectural design data due to the linguistic qualities in them. Another may be vaguely defined design qualities. In this work, the problem was not only the initial fuzziness of the information but also the determination of the relevancy among all pieces of information given. Presently, determining the existence of such relevancy is more or less a matter of subjective architectural judgement rather than systematic, non-subjective decision-making based on an existing design. This implies that the invocation of certain tools dealing with fuzzy information is essential for enhanced design decisions. Efficient methods and tools to deal with qualitative, soft data are scarce, especially in the architectural domain. Traditionally well-established methods, such as statistical analysis, have been used mainly for the analysis of data similar in type to that of the present research. These methods mainly fall into the category of pattern recognition, and statistical regression is the most common approach towards this goal. One essential drawback of this method is its inability to deal efficiently with non-linear data: with statistical analysis, linear relationships are established by regression analysis, and dealing with non-linearity is mostly evaded.
Concerning the presence of multi-dimensional data sets, it is evident that the assumption of linear relationships among all pieces of information would be a gross approximation, which one has no basis to assume. A starting point in this research was that there may be both linearity and non-linearity present in the data, and therefore appropriate methods should be used to deal with that non-linearity. Therefore, other commensurate methods were adopted for knowledge modeling. In that respect, soft computing techniques proved to match the quality of the multi-dimensional data set subject to analysis, which is deemed to be 'soft'. There is yet another reason why soft computing techniques were applied, which is related to the automation of knowledge modeling. In this respect, traditional models such as Decision Support Systems and Expert Systems have drawbacks. One important drawback is that the development of these systems is a time-consuming process. The programming part, in which various deliberations are required to form a consistent if-then rule knowledge-based system, is also a time-consuming activity. For these reasons, methods and tools from other disciplines, which also deal with soft data, should be integrated into architectural design. With fuzzy logic, the imprecision of data can be dealt with in a way similar to how humans do it. Artificial neural networks are deemed, to some extent, to model the human brain and simulate its functions in the form of parallel information processing. They are considered important components of Artificial Intelligence (AI). With neural networks, it is possible to learn from examples, or more precisely to learn from input-output data samples. The combination of the neural and fuzzy approaches proved powerful for dealing with qualitative data. The problem of automated knowledge modeling is efficiently solved by the employment of machine learning techniques. Here, the expertise of prof. dr. Ozer Ciftcioglu in the field of soft computing was crucial for tool development. By combining knowledge from two different disciplines, a unique tool could be developed that enables intelligent modeling of the soft data needed to support the building design process. In this respect, this research is a starting point in that direction. It is multidisciplinary and on the cutting edge between the field of Architecture and the field of Artificial Intelligence. From the architectural viewpoint, the perception of space is considered through the relationship between a human being and the built environment. Techniques from the field of Artificial Intelligence are employed to model that relationship. Such an efficient combination of two disciplines makes it possible to extend our knowledge boundaries in the field of architecture and improve design quality. With additional techniques, meta-knowledge, or in other words "knowledge about knowledge", can be created. Such techniques involve sensitivity analysis, which determines how strongly the output of a model (comfort and public safety) depends on the information fed into the model (input). Another technique is functional relationship modeling between aspects, which derives the dependency of a design parameter as a function of users' perceptions. With this technique, it is possible to determine functional relationships between dependent and independent variables.
This thesis is a contribution to a better understanding of users' perception of underground space, viewed through the prism of public safety and comfort, achieved by means of intelligent knowledge modeling. In this respect, the thesis demonstrates an application of ICT (Information and Communication Technology) as a partner in the building design process by employing advanced modeling techniques. The method explained throughout this work is very generic and can be applied not only to different areas of architectural design, but also to other domains that involve qualitative data.
keywords Underground Space; Perception; Soft Computing
series thesis:PhD
email
last changed 2003/02/12 22:37
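The sensitivity analysis described in the abstract above can be illustrated with a minimal, hypothetical sketch (not taken from the thesis): fit a small neural model to soft, qualitative ratings, then perturb each input to estimate how strongly the predicted comfort/safety score depends on it. All variable names and data below are invented for illustration.

# Hypothetical sketch of input-output sensitivity analysis (illustration only, not the thesis code).
import numpy as np

rng = np.random.default_rng(0)

# Invented survey data: inputs are user ratings of design parameters
# (e.g. ceiling height, lighting level, signage clarity); the output is
# perceived comfort/safety on a 0..1 scale.
X = rng.random((200, 3))
y = np.clip(0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2]
            + 0.05 * rng.standard_normal(200), 0.0, 1.0)

# A tiny one-hidden-layer network trained by plain gradient descent.
W1 = rng.standard_normal((3, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

for _ in range(2000):
    h, pred = forward(X)
    err = pred[:, 0] - y
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    gh = err[:, None] @ W2.T * (1.0 - h ** 2)
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1

# Sensitivity: average absolute change of the output per small change in each input.
eps = 1e-3
base = forward(X)[1][:, 0]
for i, name in enumerate(["ceiling height", "lighting", "signage"]):
    Xp = X.copy(); Xp[:, i] += eps
    s = np.mean(np.abs(forward(Xp)[1][:, 0] - base)) / eps
    print(f"sensitivity of perceived comfort/safety to {name}: {s:.3f}")

The printed values play the role of the "knowledge about knowledge" mentioned in the abstract: they say nothing about any single respondent, only how strongly each design parameter drives the modelled perception.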

_id bd1e
authors Evans, Barrie
year 1999
title A Communicating Profession
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 313-320
doi https://doi.org/10.52842/conf.ecaade.1999.313
summary This paper discusses aspects of the near future, a future that in parts is already with us, a future that we need to attend to now. The focus is computer aided design, but not graphics-based CAD. Rather today's CAD innovation is focused on the use of smart communications to provide designers with an information-rich support environment and the design team with an infrastructure for co-operative working. Based on this picture of a different, emerging CAD, the paper finishes with a brief comment on educational implications. One is that the emerging commercial project information management software could prove useful as infrastructure for co-operative educational projects. Another is that there could be significant gaps in information content for educational users as education becomes more IT-based. Should providing this content be a role for joint ECAADE research and development projects?
keywords Information, Smart Telecomms, CSCW, Learning, CAD
series eCAADe
email
last changed 2022/06/07 07:55

_id f78f
authors Fridqvist, Sverker
year 2000
title Property-Oriented Information Systems for Design: Prototypes for the BAS•CAAD system
source Lund Institute of Technology, School of Architecture
summary Property-oriented systems are a new kind of information system based on concepts of properties instead of concepts of things or classes of things. By focusing on properties, property-oriented systems become more flexible and better suited to the dynamic early stages of design than traditional class-oriented systems can be. The theoretical framework for property-oriented systems developed within the BAS*CAAD project and presented in this thesis has previously been presented in several papers, a selection of which are included here. Some of the basic considerations from the papers are further developed in a separate chapter. Additionally, the thesis covers several questions regarding the prerequisites for and implications of property-oriented systems; these questions have not been addressed in earlier BAS*CAAD publications. The development of research prototypes based on the theoretical framework is presented, with a discussion of the different versions and the considerations behind them. A study of the history of computer-aided building design has revealed that many basic ideas of today were first developed at the beginning of electronic computing, in the early sixties. Since this early development seems to be unknown today, a brief account is presented in this thesis, with special focus on issues considered in the BAS*CAAD project. Finally, the experimental architectural design software DASK, developed mainly by the present author in the late 1980s, receives its first written presentation in this thesis.
keywords Information Technology; Design; Construction; Product Modelling
series thesis:PhD
email
more http://www.lub.lu.se/cgi-bin/show_diss.pl?db=global&fname=tec_391.html
last changed 2003/02/12 22:37

_id 3e01
authors Linnert, C., Encarnacao, M., Storck, A. and Koch, V.
year 2000
title Virtual Building Lifecycle - Giving architects access to the future of buildings by visualizing lifecycle data
source ICCCBE8, Stanford, August 2000
summary Today's software for architects and civil engineers lacks support for the evaluation and improvement of building lifecycles. Facility Management Systems and 4D-CAD try to integrate lifecycle data and make them more accessible, but miss the investigation of the development of the structure itself. Much money is spent inappropriately when materials with different life expectancies are combined in the wrong way and building parts are repaired or replaced too early or too late. With the methods of scientific visualization and real-time 3D graphics these deficiencies can be eliminated. The project "Virtual Building Lifecycle" (VBLC for short, [W-VBLC]) connects 3D geometrical information to research data such as life expectancy and emissions, and to standard database information such as prices. The automated visualization of critical points of the structure in the past, present and future is a huge advantage and helps engineers to improve the duration of the lifecycle and reduce costs.
keywords Visualization; lifecycle; virtual building; realtime 3D graphics; architectural database; 4D-CAD; Facility Management
series other
email
last changed 2003/02/26 18:58

_id ga0101
id ga0101
authors Tanzini, Luca
year 2000
title Universal City
source International Conference on Generative Art
summary "Universal City" is a multimedia performance that documents the evolution of the city in history. Whereas in the past the city was symbolically the world, today the world has become a city. The city rose up in an area once scattered and disorganized for so long that most of its ancient elements of culture were destroyed. It absorbed and re synthesized the remnants of this culture, cultivating power and efficiency. By means of this concentration of physical and cultural power, the city accelerated the rhythm of human relationships and converted their products into forms that are easily stockpiled and reproduced. Along with monuments, written documents and ordered associative organizations amplified the impact of all human activities, extending backwards and forwards over time. Since the beginning however, law and order stood alongside brute force, and power was always determined by these new institutions. Written law served to produce a canon of justice and equality that claimed a higher principle: the king's will, synonymous with divine command. The Urban Neolithic Revolution is comparable only to the Industrial Revolution, and the Media Technology in our own era. There is of course a substantial difference: ours is an era of immeasurable technological progress as an end in itself, which leads to the explosion of the city, and the consequent dissemination of its structure across the countryside. The old walled city has not only fallen, it's buried its foundations. Our civilization flees from every possibility of control, by means of its own extra resources not controllable by the egregious ambitions of man. The image of modern industrialization that Charlie Chaplin resurrected from the past in "Modern Times" is the exact opposite of contemporary metropolitan reality. He figured the worker as a slave chained to his machine and fed by machinery as he continued to work at maintaining the machine itself. Today the workplace is not so brutal, but automation has made it much more oppressive. Energy and dedication once directed towards the production process are today shifted towards consumption. The metropolis in the final phase of its evolution, is becoming a collective mechanism for maintaining the function of this system, and for giving the illusion of power, wealth, happiness, and total success, to those who are, in actuality, its victims. It is a concept foreign to the modern metropolitan mentality that life should be an occasion to Live, and not an excuse for generating newspaper articles, television interviews, or mass spectacles for those who know nothing better. Instead the process continues, until people prefer the simulacrum to the real, where image dominates over object, the copy over the original, representation over reality, appearance over Being. The first phase of the Economy's domination over social life brought about the visible degradation of every human accomplishment from "Being" into "Having". The present phase of social life's total occupation by the accumulated effects of the Economy is leading to a general downslide from "Having" into "Seeming". The performance is based on the instantaneous interaction between video and music: the video component is assembled in real time with RandomCinema a software that I developed and projected on a screen. The music-noise is the product of human radical improvisation togheter automatic-computer process. Everything is based on the consideration of the element of chance as a stimulus for the construction of the most options. 
The unpredictable helps to reveal things as they happen. The montage, the music, and their interaction, are born and die and the same moment: there are no stage directions or scripts.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id eabb
authors Boeykens, St. Geebelen, B. and Neuckermans, H.
year 2002
title Design phase transitions in object-oriented modeling of architecture
source Connecting the Real and the Virtual - design e-ducation [20th eCAADe Conference Proceedings / ISBN 0-9541183-0-8] Warsaw (Poland) 18-20 September 2002, pp. 310-313
doi https://doi.org/10.52842/conf.ecaade.2002.310
summary The project IDEA+ aims to develop an “Integrated Design Environment for Architecture”. Its goal is providing a tool for the designer-architect that can be of assistance in the early-design phases. It should provide the possibility to perform tests (like heat or cost calculations) and simple simulations in the different (early) design phases, without the need for a fully detailed design or remodeling in a different application. The test for daylighting is already in development (Geebelen, to be published). The conceptual foundation for this design environment has been laid out in a scheme in which different design phases and scales are defined, together with appropriate tests at the different levels (Neuckermans, 1992). It is a translation of the “designerly” way of thinking of the architect (Cross, 1982). This conceptual model has been translated into a “Core Object Model” (Hendricx, 2000), which defines a structured object model to describe the necessary building model. These developments form the theoretical basis for the implementation of IDEA+ (both the data structure & prototype software), which is currently in progress. The research project addresses some issues, which are at the forefront of the architect’s interest while designing with CAAD. These are treated from the point of view of a practicing architect.
series eCAADe
email
last changed 2022/06/07 07:52

_id 83cb
authors Telea, Alexandru C.
year 2000
title Visualisation and simulation with object-oriented networks
source Eindhoven University of Technology
summary Among existing systems, visual programming environments address these issues best. However, producing interactive simulations and visualisations is still a difficult task. This defines the main research objective of this thesis: the development and implementation of concepts and techniques to combine visualisation, simulation, and application construction in an interactive, easy-to-use, generic environment. The aim is to produce an environment in which the above-mentioned activities can be learnt and carried out easily by a researcher. Working with such an environment should decrease the amount of time usually spent in redesigning existing software elements such as graphics interfaces, existing computational modules, and general infrastructure code. Writing new computational components or importing existing ones should be simple and automatic enough to make using the envisaged system an attractive option for a non-programmer expert. Besides this, all proven successful elements of an interactive simulation and visualisation environment should be provided, such as visual programming, graphical user interfaces, direct manipulation, and so on. Finally, a large palette of existing scientific computation, data processing, and visualisation components should be integrated in the proposed system. On the one hand, this should prove our claims of openness and easy code integration. On the other hand, this should provide the concrete set of tools needed for building a range of scientific applications and visualisations. This thesis is structured as follows. Chapter 2 defines the context of our work. The scientific research environment is presented and partitioned into the three roles of end user, application designer, and component developer. The interactions between these roles and their specific requirements are described and lead to a more precise formulation of our problem statement. Chapter 3 presents the most used architectures for simulation and visualisation systems: the monolithic system, the application library, and the framework. The advantages and disadvantages of these architectural models are then discussed in relation to our problem statement requirements. The main conclusion drawn is that no single existing architectural model suffices, and that what is needed is a combination of the features present in all three models. Chapter 4 introduces the new architectural model we propose, based on the combination of object orientation in the form of the C++ language and dataflow modelling in the new MC++ language. Chapter 5 presents VISSION, an interactive simulation and visualisation environment constructed on the new architectural model, and shows how the usual tasks of application construction, steering, and visualisation are addressed. In chapter 6, the implementation of VISSION's architectural model is described in terms of its component parts. Chapter 7 presents the applications of VISSION to numerical simulation, while chapter 8 focuses on its visualisation and graphics applications. Finally, chapter 9 concludes the thesis and outlines possible directions for future research.
keywords Computer Visualisation
series thesis:PhD
email
last changed 2003/02/12 22:37

_id f08d
authors Abrahamson, S., Wallace, D., Senin, N. and Sferro, P.
year 2000
title Integrated design in a service marketplace
source Computer-Aided Design, Vol. 32 (2) (2000) pp. 97-107
summary This paper presents a service marketplace vision for enterprise-wide integrated design modeling. In this environment, expert participants and product development organizations are empowered to publish their geometric design, CAE, manufacturing, or marketing capabilities as live services that are operable over the Internet. These services are made available through a service marketplace. Product developers, small or large, can subscribe to and flexibly inter-relate these services to embody a distributed product development organization, while simultaneously creating system models that allow the prediction and analysis of integrated product performance. It is hypothesized that product development services will become commodities, much like many component-level products are today. It will be possible to rapidly interchange equivalent design service providers so that the development of the product and the definition of the product development organization become part of the same process. Computer-aided design tools will evolve to facilitate the publishing of live design services. A research prototype system called DOME is used to illustrate the concept and a pilot study with Ford Motor Company is used in a preliminary assessment of the vision.
keywords Integrated Modeling, System Modeling, Design Service Marketplace
series journal paper
email
last changed 2003/05/15 21:33

_id 60e7
authors Bailey, Rohan
year 2000
title The Intelligent Sketch: Developing a Conceptual Model for a Digital Design Assistant
source Eternity, Infinity and Virtuality in Architecture [Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture / 1-880250-09-8] Washington D.C. 19-22 October 2000, pp. 137-145
doi https://doi.org/10.52842/conf.acadia.2000.137
summary The computer is a relatively new tool in the practice of Architecture. Since its introduction, there has been a desire amongst designers to use this new tool quite early in the design process. However, contrary to this desire, most Architects today use pen and paper in the very early stages of design to sketch. Architects solve problems by thinking visually. One of the most important tools that the Architect has at his disposal in the design process is the hand sketch. This iterative way of testing ideas and informing the design process with images fundamentally directs and aids the architect’s decision making. It has been said (Schön and Wiggins 1992) that sketching is about the reflective conversation designers have with images and ideas conveyed by the act of drawing. It is highly dependent on feedback. This “conversation” is an area worthy of investigation. Understanding this “conversation” is significant to understanding how we might apply the computer to enhance the designer’s ability to capture, manipulate and reflect on ideas during conceptual design. This paper discusses sketching and its relation to design thinking. It explores the conversations that designers engage in with the media they use. This is done through the explanation of a protocol analysis method. Protocol analysis used in the field of psychology, has been used extensively by Eastman et al (starting in the early 70s) as a method to elicit information about design thinking. In the pilot experiment described in this paper, two persons are used. One plays the role of the “hand” while the other is the “mind”- the two elements that are involved in the design “conversation”. This variation on classical protocol analysis sets out to discover how “intelligent” the hand should be to enhance design by reflection. The paper describes the procedures entailed in the pilot experiment and the resulting data. The paper then concludes by discussing future intentions for research and the far reaching possibilities for use of the computer in architectural studio teaching (as teaching aids) as well as a digital design assistant in conceptual design.
keywords CAAD, Sketching, Protocol Analysis, Design Thinking, Design Education
series ACADIA
last changed 2022/06/07 07:54

_id c229
authors Cavazos, María Estela Sánchez
year 2002
title Experiencia en Digitalización de Procesos de Diseño Arquitectónico Caso Taller de Modelación Espacial, Universidad Autónoma de Aguascalientes [Experience in Digitalization Processes of Architectural Design: Study Case of Space Modeling, Independent University of Aguascalientes ]
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 252-256
summary This project is based on an experience carried out in 1999 and 2000, in which a group of 13 students of the Architectural Design Masters programme at the U.A.A. undertook a project that consisted of recording their architectural design processes over the course of a year, with the main purpose of obtaining the most complete material possible for use in different research projects. At the end of the architectural project the students scanned all the graphics and ordered them in the format established by the group, using ACDSee32 as the program, which proved very simple to manage and made it possible to order the graphics and attach comments to them as intended. The result was 12 ordered texts, divided into seven clearly identified segments and easy to handle for any investigation one may wish to carry out with them; in fact, two finished investigations have already been carried out with this information, in addition to one formal investigation and several informal ones in progress.
series SIGRADI
email
last changed 2016/03/10 09:48

_id 9403
authors De Carvalho, Silvana Sá
year 2000
title A Telemática e o Meio Técnico- Científico-Informacional: Um Olhar sobre o Urbano (Telematics and Technical Scientific-Information Environment: An Urban View)
source SIGraDi’2000 - Construindo (n)o espacio digital (constructing the digital Space) [4th SIGRADI Conference Proceedings / ISBN 85-88027-02-X] Rio de Janeiro (Brazil) 25-28 september 2000, pp. 160-162
summary The instantaneous nature of globalized information has brought places closer together and homogenized space, eliminating regional differences. Contemporary urban architecture and the technical-scientific-informational quality of the human-made environment innovate the rationality of the dominant actors in society. The field of telecommunications has developed substantially in the last 30 years, and today we are participants in a digital era that has not only shortened distances but also revolutionized the concepts of time and space. Telematics is a fundamental element of cities at the end of the millennium and has become a new instrument of social control. Electronic surveillance systems, as an application of telematics, are now widely used in cities, and a new urban space is being configured based on this dynamic. This paper is an introductory essay on the topic, which is essential to the understanding of urban spatial dynamics, and its objective is to point out fields for future research.
series SIGRADI
email
last changed 2016/03/10 09:50

_id ad8f
authors Novitski, B.J.
year 2000
title Once and Future Graphics Pioneer
source Architectural Record, June
summary In the glitzy world of computer-generated visualizations that dominate movies and magazines today, it's easy to take for granted the photographic quality that architects are able to give their renderings of proposed buildings. But behind the scenes, there have been four decades of grueling, dedicated, and inspired research to make possible these synthetic images that are indistinguishable from photographs.
series journal paper
email
last changed 2003/04/23 15:50

_id bf19
id bf19
authors Rafi, A
year 2001
title Design computing: A new challenge for creative synergy
source In Saito, N. (Ed.), Creative digital media: Its impact on the new century (pp. 132-136), Japan: Keio University Press
summary As content becomes increasingly significant in giving 'face' to information technology (IT), the need to train and produce content designers has also become more and more important. The development of powerful computer technologies and the complexity of design have demanded that designers re-examine the design process and consider the adaptation of tools that will provide for creativity, improve the overall design process and, at the same time, reveal new insights (Rafi and Karboulonis, 2000). This paper gives an overview of the relationship between art and science through the ages, and discusses their relatively recent re-convergence. This text further argues that a re-convergence between art and science is currently occurring, highlighting the need to accelerate the process. It is suggested that re-convergence is a result of new technologies being researched, namely those related to the effective visualisation and communication of ideas and concepts, subsequently adopted by practitioners. Such elements, with tools that offer increased power and new abilities, are widely found today in multimedia and Virtual Environments (VEs) as scientists and designers venture into each other's domains. This paper also argues that content designers of the future must be not only both artist and technologist, but artist and technologist aware of the context in which content is being developed. The presentation will showcase our exploration at the Faculty of Creative Multimedia, Multimedia University, over the last four years in integrating design and computer skills – the synergy that we called DESIGN COMPUTING.
keywords design computing, creativity, content, design
series book
type normal paper
email
last changed 2007/09/13 03:43

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an "image idea generator". I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was "out of nothing". What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an "easy to use graphical effects program", but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example, if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simple closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques. Figure 3 Trellis interpreted with "graphic ivy". Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
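Ransen's account above describes forms whose "genes" are simply the lists of points of closed polygons, and the difficulty of crossing, say, a circle with another outline. A minimal, hypothetical sketch of that idea (not Gliftic's actual code; all function and parameter names are invented) is to sample both parent outlines with the same number of gene points and blend their coordinates point by point.

# Hypothetical illustration of "breeding" two polygonal forms (not Gliftic code):
# each parent's genes are its list of outline points; the child blends them.
import math

def regular_polygon(n_sides, n_points=100):
    """Sample a regular polygon outline as n_points (x, y) pairs."""
    pts = []
    for i in range(n_points):
        t = i / n_points * n_sides            # position along the perimeter, in edge units
        a0 = 2 * math.pi * math.floor(t) / n_sides
        a1 = 2 * math.pi * (math.floor(t) + 1) / n_sides
        f = t - math.floor(t)                 # fraction of the way along the current edge
        x0, y0 = math.cos(a0), math.sin(a0)
        x1, y1 = math.cos(a1), math.sin(a1)
        pts.append(((1 - f) * x0 + f * x1, (1 - f) * y0 + f * y1))
    return pts

def breed(parent_a, parent_b, weight=0.5):
    """Child gene i is a weighted blend of the parents' gene i."""
    return [((1 - weight) * ax + weight * bx, (1 - weight) * ay + weight * by)
            for (ax, ay), (bx, by) in zip(parent_a, parent_b)]

circle = regular_polygon(100)       # a "circle" as a 100-sided polygon
triangle = regular_polygon(3)       # a triangle sampled with the same gene count
child = breed(circle, triangle, 0.5)
print(child[:3])

Blending coordinates this way keeps the gene count and symmetry, but, as the abstract notes, such crossings tend to produce amorphous blobs rather than children with recognizable family traits, which is why the breeding model was abandoned in Gliftic V1.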

_id ca3d
authors Shakarchi, Ali Y.
year 2000
title Tools for Distributed Design Practice
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 89-92
doi https://doi.org/10.52842/conf.ecaade.2000.089
summary During collaboration designers jointly solve problems as well as interact for critical feedback. Today's heterogeneous, distributed and global market demands that designers collaborate in both synchronous and asynchronous modes. The management and control of such projects is frequently geographically and temporally distributed. Increasingly, efficient communication is becoming a vital component in the design process, whether in managing the project data, controlling the compatibility of different inputs by design team members or minimizing the revision cycles. This paper presents and discusses iSPACE, a mature prototype software application developed to serve different scenarios of communication between distributed design team members. iSPACE is a web-based application that can deliver an interactive environment over low-bandwidth connections. Application of iSPACE in the educational environment is monitored and discussed. Given the potential of this technology to enhance and streamline complex tasks associated with the design process, the quality of the design product is changing. The new style of design practice can now be further modeled, supported and enhanced.
keywords Design Collaboration, Design Process, i-space, Digital Media
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:56

_id 735b
authors Tolone, W.J.
year 2000
title Virtual situation rooms: connecting people across enterprises for supply-chain agility
source Computer-Aided Design, Vol. 32 (2) (2000) pp. 109-117
summary Agility and time-based manufacturing are critical success factors for today's manufacturing enterprise. To be competitive, enterprises must integrate their supply chains more effectively and forge close memberships with customers and suppliers more quickly. Consequently, technologies must be developed that enable enterprises to respond to consumer demand more quickly, integrate with suppliers more effectively, adapt to market variations more efficiently and evolve product designs with manufacturing practices more seamlessly. The mission of the Extended-Enterprise Coalition for Integrated Collaborative Manufacturing Systems coalition is to research, develop, and demonstrate technologies to enable the integration of manufacturing applications in a multi-company supply chain planning and execution environment. We believe real-time and asynchronous collaboration technology will play a critical role in allowing manufacturers to increase their supply chain agility. We are realizing our efforts through our Virtual Situation Room (VSR) technology. The primary goal of the VSR technology is to enhance current ad-hoc, limited methods and mechanisms for spontaneous, real-time communication using feature-rich, industry standards-based building blocks and network protocols. VSR technology is being designed to find and engage quickly all relevant members of a problem solving team supported by highly interactive, conversational access to information and control and enabled by business processes, security policies and technologies, intelligence, and integration tools.
keywords Collaborative Systems, Supply Chain Integration, Real-Time Conferencing
series journal paper
email
last changed 2003/05/15 21:33

_id 047e
authors Wong, Chien-Hui
year 2000
title Some Phenomena of Design Thinking in the Concept Generation Stage Using Computer Media
source CAADRIA 2000 [Proceedings of the Fifth Conference on Computer Aided Architectural Design Research in Asia / ISBN 981-04-2491-4] Singapore 18-19 May 2000, pp. 255-263
doi https://doi.org/10.52842/conf.caadria.2000.255
summary Today, computer media have become more and more important in the design process. They are not only used as media for simulation and presentation; various kinds of research have also started developing computer-aided design systems and probing the possibility of using computers in creative activities. In recent years, many studies have concentrated on the forepart of design, the concept generation stage, but most of them are based on conventional media such as paper and pencil. This study attempts to probe the different design thinking phenomena produced when concepts are generated with computers and with conventional media, and the effects on the development and presentation of design concepts resulting from the merits and features of the computers themselves. The methodology used here is protocol analysis: subjects' verbal data are gathered in a think-aloud manner and then encoded for analysis. The outcome of this study is to identify some phenomena of design thinking when computers are used for concept generation, and to suggest further studies relating to the topic of methodology.
series CAADRIA
email
last changed 2022/06/07 07:57
