CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 491

_id 39fb
authors Langton, C.G.
year 1996
title Artificial Life
source Boden, M. A. (1996). The Philosophy of Artificial Life, 39-94. New York and Oxford: Oxford University Press
summary Artificial Life contains a selection of articles from the first three issues of the journal of the same name, chosen so as to give an overview of the field, its connections with other disciplines, and its philosophical foundations. It is aimed at those with a general background in the sciences: some of the articles assume a mathematical background, or basic biology and computer science. I found it an informative and thought-provoking survey of a field around whose edges I have skirted for years. Many of the articles take biology as their starting point. Charles Taylor and David Jefferson provide a brief overview of the uses of artificial life as a tool in biology. Others look at more specific topics: Kristian Lindgren and Mats G. Nordahl use the iterated Prisoner's Dilemma to model cooperation and community structure in artificial ecosystems; Peter Schuster writes about molecular evolution in simplified test tube systems and its spin-off, evolutionary biotechnology; Przemyslaw Prusinkiewicz presents some examples of visual modelling of morphogenesis, illustrated with colour photographs; and Michael G. Dyer surveys different kinds of cooperative animal behaviour and some of the problems synthesising neural networks which exhibit similar behaviours. Other articles highlight the connections of artificial life with artificial intelligence. A review article by Luc Steels covers the relationship between the two fields, while another by Pattie Maes covers work on adaptive autonomous agents. Thomas S. Ray takes a synthetic approach to artificial life, with the goal of instantiating life rather than simulating it; he manages an awkward compromise between respecting the "physics and chemistry" of the digital medium and transplanting features of biological life. Kunihiko Kaneko looks to the mathematics of chaos theory to help understand the origins of complexity in evolution. In "Beyond Digital Naturalism", Walter Fontana, Guenter Wagner and Leo Buss argue that the test of artificial life is to solve conceptual problems of biology and that "there exists a logical deep structure of which carbon chemistry-based life is a manifestation"; they use lambda calculus to try and build a theory of organisation.
series other
last changed 2003/04/23 15:14

_id a115
authors Hanna, R.
year 1996
title A Computer-based Approach for Teaching Daylighting at the Early Design Stage
doi https://doi.org/10.52842/conf.ecaade.1996.181
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 181-190
summary This paper has reviewed the literature on the teaching of daylight systems design in architectural education, and found that traditionally such teaching has revolved around the prediction of the Daylight Factor (DF%), i.e. illuminance, via two methods, one studio-based and the other laboratory-based. The former relies on graphical and/or mathematical techniques, e.g. the BRE Protractors, the BRE Tables, Waldram Diagrams, the Pepper-pot diagrams and the BRE formula. The latter tests scale models of buildings under artificial sky conditions (CIE sky). The paper lists the advantages and disadvantages of both methods in terms of compatibility with the design process, time required, accuracy, energy-consumption facts, and visual information.

This paper outlines a proposal for an alternative method for teaching daylight and artificial lighting design for both architectural students and practitioners. It is based on photorealistic images as well as numbers, and employs the Lumen Micro 6.0 programme. This software package is a complete indoor lighting design and analysis programme which generates perspective renderings and animated walk-throughs of the space lighted naturally and artificially.

The paper also presents the findings of an empirical case study to validate Lumen Micro 6.0 by comparing simulated output with field monitoring of horizontal and vertical illuminance and luminance inside the highly acclaimed GSA building in Glasgow. The monitoring station was masterminded by the author and uses the Megatron lighting sensors, Luscar dataloggers and the Easylog analysis software. In addition photographs of a selected design studio inside the GSA building were contrasted with computer generated perspective images of the same space.

series eCAADe
email
last changed 2022/06/07 07:50
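
The record above centres on predicting the Daylight Factor (DF%) for teaching purposes. A minimal sketch, not taken from the paper, of how DF% is conventionally derived from paired indoor and outdoor horizontal illuminance readings; the 2% target used in the example is an illustrative assumption.

```python
def daylight_factor(indoor_lux: float, outdoor_lux: float) -> float:
    """Daylight Factor (DF%) = indoor / outdoor horizontal illuminance x 100,
    both measured under an overcast (CIE) sky."""
    if outdoor_lux <= 0:
        raise ValueError("outdoor illuminance must be positive")
    return 100.0 * indoor_lux / outdoor_lux

# Example: 300 lx indoors against 10,000 lx under an overcast sky -> DF = 3%
df = daylight_factor(300, 10_000)
print(f"DF = {df:.1f}%  ({'adequate' if df >= 2.0 else 'low'} for a 2% target)")
```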

_id ddssar9611
id ddssar9611
authors de Gelder, Johan and Lucardie, Larry
year 1996
title Criteria for the Selection of Conceptual Modelling Languages for Knowledge Based Systems
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary In recent years knowledge has been increasingly recognised as a critical production factor for organisations. The performance of activities such as designing, diagnosing, advising and decision making depends on the availability and accessibility of knowledge. However, the increasing volume and complexity of knowledge endangers its availability and accessibility. Through their knowledge-processing competence, knowledge based systems, which contain a structured and explicit representation of knowledge, are expected to solve this problem. In the realisation of a knowledge based system, the phase in which a knowledge model is reconstructed through a conceptual language is essential. Because the knowledge model has to be an adequate reflection of real-world knowledge, the conceptual language should not only offer sufficient expressiveness for unambiguous knowledge representation, but also provide facilities to validate knowledge for correctness, completeness and consistency. Furthermore, the language should supply facilities for processing by a computer. This paper discusses fundamental criteria for selecting a conceptual language for modelling the knowledge of a knowledge based system. It substantiates the claim that the selection depends on the nature of the knowledge in the application domain. By analysing the nature of knowledge using the theory of functional object-types, a framework to compare, evaluate and select a conceptual language is presented. To illustrate the selection process, the paper describes the choice of a conceptual language for a knowledge based system that checks office buildings against fire-safety demands. In this application domain, the language formed by decision tables has been selected to develop the conceptual model. The paper provides an in-depth motivation of why decision tables form the best language to model the knowledge in this case.
series DDSS
last changed 2003/08/07 16:36
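
The abstract above selects decision tables as the conceptual language for a system that checks office buildings against fire-safety demands. A minimal sketch of a decision table evaluated in Python; the conditions, thresholds and actions are illustrative assumptions, not the rules of the actual system.

```python
# Each row: (conditions, action). Conditions are predicates over a building record;
# the first row whose conditions all hold fires, as in a single-hit decision table.
DECISION_TABLE = [
    ({"storeys": lambda b: b["storeys"] > 8,
      "sprinklers": lambda b: not b["sprinklers"]},                 "reject: sprinklers required"),
    ({"escape_distance_m": lambda b: b["escape_distance_m"] > 30},  "reject: escape route too long"),
    ({},                                                            "accept"),
]

def check(building: dict) -> str:
    for conditions, action in DECISION_TABLE:
        if all(pred(building) for pred in conditions.values()):
            return action
    return "no rule applies"

office = {"storeys": 12, "sprinklers": False, "escape_distance_m": 25}
print(check(office))   # -> reject: sprinklers required
```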

_id ddssar9633
id ddssar9633
authors Szalapaj, Peter and Kane, Andrew
year 1996
title Techniques of Superimposition
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary This paper addresses the issues of 2-D and 3-D image manipulation in the context of a Computational Design Formulation System. The central feature of such a system is the ability to bring together two or more design objects in the same reference space for the purpose of analysis. Studies of traditional design methods have revealed the effectiveness of this technique of superimposition. This paper describes ways in which superimposition can be achieved, and, in particular, focuses on a range of domain-independent knowledge-based graphical operators that enable the decomposition of complex design forms into simpler aspects (secondary models) that can then be superimposed and/or analysed from a design-theoretic point of view. Examples of domain-independent knowledge-based graphical operators include object selection, planar bisection, 2-D closure (the grouping of lines into regions), aggregation (the decomposition of 2-D regions into aggregations of lines), spatial bisection, 3-D closure (the grouping of 2-D regions into volumes), and 3-D aggregation (the decomposition of volumes into aggregations of 2-D regions). The representation of these operators is dependent upon the notion of a parameterisable volume, thus avoiding the need for translations between multiple representations of graphical objects by providing a common representation form for all objects. Secondary models can therefore subsequently be manipulated either through subtractive procedures (e.g. carving voids from solids), or by additive ones (e.g. assembling given design elements), or by other means such as transformation or distortion. The same techniques of superimposition can also be used to support the visualisation of design forms in two ways: by the juxtaposition of plans and sections with the 3-D form; by the multiple superimposition of alternative design representations e.g. structural schematic, parti schematic, volumetric schematic and architectural model.
keywords Design Formulation, Superimposition, Primary Model, Secondary Model, Parameterisable Volume
series DDSS
last changed 2003/08/07 16:36
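
The abstract above turns on bringing two or more design objects into the same reference space for analysis. A minimal sketch of that superimposition step, assuming each secondary model is just a list of 2-D line segments; the paper's parameterisable volumes and knowledge-based operators are far richer than this.

```python
# A secondary model as a list of line segments ((x1, y1), (x2, y2)).
Segment = tuple[tuple[float, float], tuple[float, float]]

def translate(model: list[Segment], dx: float, dy: float) -> list[Segment]:
    """Move a secondary model into the shared reference frame."""
    return [((x1 + dx, y1 + dy), (x2 + dx, y2 + dy)) for (x1, y1), (x2, y2) in model]

def superimpose(*models: list[Segment]) -> list[Segment]:
    """Bring several secondary models into one reference space for joint analysis."""
    combined: list[Segment] = []
    for m in models:
        combined.extend(m)
    return combined

structural = [((0, 0), (10, 0)), ((10, 0), (10, 10))]
partition  = translate([((0, 0), (4, 0))], dx=3, dy=5)   # shift into the same frame
overlay = superimpose(structural, partition)
print(len(overlay), "segments in the shared reference space")
```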

_id c872
authors Beliveau, Y.J., Fithian, J.E. and Deisenroth, M.P.
year 1996
title Autonomous vehicle navigation with real-time 3D laser based positioning for construction
source Automation in Construction 5 (4) (1996) pp. 261-272
summary Autonomous Guided Vehicles (AGVs) are a way of life in manufacturing, where navigation can be done in a structured environment. Construction is an unstructured environment and requires a different type of navigation system to deal with three dimensional control and rough terrain. This paper provides a review of navigation systems that utilize dead-reckoning in conjunction with absolute referencing systems such as beacon-based systems and vision- and mapping-based systems. The use of a real-time laser based technology is demonstrated as a new form of navigation. This technology does not rely on dead reckoning. The paper outlines the issues and strategies in guiding an autonomous vehicle utilizing only the laser-based positioning system. Algorithms were developed to provide real-time control of the AGV. The laser based positioning system is unique in that it provides three dimensional position data with five updates per second. No other system can provide this level of performance. This allows for control of end effectors and autonomous vehicles in complex and unstructured three dimensional environments. The use of this new type of navigation makes possible the automation of large complex assemblies in rough terrain such as construction.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
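
The paper above steers an AGV from absolute 3-D laser position fixes arriving five times per second, with no dead reckoning. A minimal sketch of one control cycle under a planar simplification; the proportional gain and waypoint logic are illustrative assumptions, not the authors' algorithms.

```python
import math

def heading_command(position, heading_rad, waypoint, gain=1.5):
    """Proportional steering from an absolute position fix (no dead reckoning):
    at every fix, recompute the bearing to the waypoint and steer toward it."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    bearing = math.atan2(dy, dx)
    error = math.atan2(math.sin(bearing - heading_rad),
                       math.cos(bearing - heading_rad))   # wrap to [-pi, pi]
    return gain * error                                   # turn-rate command (rad/s)

# One 0.2 s cycle (5 Hz laser fixes): vehicle at (2, 1) heading east, waypoint at (10, 6)
turn_rate = heading_command((2.0, 1.0), 0.0, (10.0, 6.0))
print(f"commanded turn rate: {turn_rate:.2f} rad/s")
```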

_id avocaad_2001_02
id avocaad_2001_02
authors Cheng-Yuan Lin, Yu-Tung Liu
year 2001
title A digital Procedure of Building Construction: A practical project
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In earlier times, when computers had not yet been well developed, there was already some research regarding representation using conventional media (Gombrich, 1960; Arnheim, 1970). For ancient architects, the design process was described abstractly by text (Hewitt, 1985; Cable, 1983); the process evolved from unselfconscious to conscious ways (Alexander, 1964). Till the appearance of 2D drawings, these drawings could only express abstract visual thinking and visually conceptualized vocabulary (Goldschmidt, 1999). Then with the massive use of physical models in the Renaissance, the form and space of architecture was given better precision (Millon, 1994). Researchers continued their attempts to identify the nature of different design tools (Eastman and Fereshe, 1994). Simon (1981) pointed out that humans increasingly rely on other specialists, computational agents, and reference materials to augment their cognitive abilities. This discourse was verified by recent research on conception of design and the expression using digital technologies (McCullough, 1996; Perez-Gomez and Pelletier, 1997). While other design tools did not change as much as representation (Panofsky, 1991; Koch, 1997), the involvement of computers in conventional architecture design gives rise to a new design thinking of digital architecture (Liu, 1996; Krawczyk, 1997; Murray, 1997; Wertheim, 1999). The notion of the link between ideas and media is emphasized throughout various fields, such as architectural education (Radford, 2000), Internet, and restoration of historical architecture (Potier et al., 2000). Information technology is also an important tool for civil engineering projects (Choi and Ibbs, 1989). Compared with conventional design media, computers avoid some errors in the process (Zaera, 1997). However, most of the application of computers to construction is restricted to simulations in the building process (Halpin, 1990). It is worth studying how to employ computer technology meaningfully to bring significant changes to the concept stage during the process of building construction (Madrazo, 2000; Dave, 2000) and communication (Haymaker, 2000). In architectural design, concept design was achieved through drawings and models (Mitchell, 1997), while the working drawings and even shop drawings were brewed and communicated through drawings only. However, the most effective method of shaping building elements is to build models by computer (Madrazo, 1999). With the trend of 3D visualization (Johnson and Clayton, 1998) and the difference of designing between the physical environment and virtual environment (Maher et al. 2000), we intend to study the possibilities of using digital models, in addition to drawings, as a critical medium in the conceptual stage of the building construction process in the near future (just as the critical role that physical models played in the early design process in the Renaissance). This research is combined with two practical building projects, following the progress of construction by using digital models and animations to simulate the structural layouts of the projects. We also tried to solve the complicated and even conflicting problems in the detail and piping design process through an easily accessible and precise interface. An attempt was made to delineate the hierarchy of the elements in a single structural and constructional system, and the corresponding relations among the systems.
Since building construction is often complicated and even conflicting, the precision needed to complete the projects cannot be based merely on 2D drawings with some imagination. The purpose of this paper is to describe all the related elements according to precision and correctness, to discuss every possibility of different thinking in the design of electric-mechanical engineering, to receive feedback from the construction projects in the real world, and to compare the digital models with conventional drawings. Through the application of this research, the subtle relations between the conventional drawings and digital models can be used in the area of building construction. Moreover, a theoretical model and standard process is proposed by using conventional drawings, digital models and physical buildings. By introducing the intervention of digital media in the design process of working drawings and shop drawings, there is an opportune chance to use the digital media as a prominent design tool. This study extends the use of digital models and animation from the design process to the construction process. However, the entire construction process involves various details and exceptions, which are not discussed in this paper. These limitations should be explored in future studies.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id f5ee
authors Erhorn, H., De Boer, J. and Dirksmueller, M.
year 1997
title ADELINE, an Integrated Approach to Lighting Simulation
source Proceedings of Right Light 4, 4th European Conference on Energy-Efficient Lighting, pp.99-103
summary The use of daylighting and artificial lighting simulation programs to calculate complex systems and models in design practice is often impeded by the fact that the operation of these programs, especially the model input, is extremely complicated and time-consuming. Programs that are easier to use generally do not offer the calculation capabilities required in practice. A second obstacle arises as the lighting calculations often do not allow any statements regarding the interactions with the energetic and thermal building performance. Both problems are mainly due to a lack of integration with the design tools of other building design practitioners, as well as to insufficient user interfaces. The program package ADELINE (Advanced Daylight and Electric Lighting Integrated New Environment), available since May 1996 as the completely revised version 2.0, presents a promising approach to solving these problems. This contribution describes the approaches and methods used within the international project IEA Task 21 for a further development of the ADELINE system. The aim of this work is a further improvement of the user interfaces, based on the inclusion of new dialogs and on a port of the program system from MS-DOS to the Windows NT platform. Additional focus is placed on making pragmatic use of recent developments in information technology and of experience gained in other projects on integrated building design systems, such as EU-COMBINE. An integrated building design system with open standardized interfaces is to be achieved inter alia by using ISO STEP formats, database technologies and a consistent, object-oriented design.
series other
last changed 2003/04/23 15:50

_id 3451
authors Harrison, Beverly L.
year 1996
title The Design and Evaluation of Transparent User Interfaces. From Theory to Practice
source University of Toronto, Toronto
summary The central research issue addressed by this dissertation is how we can design systems where information on user interface tools is overlaid on the work product being developed with these tools. The interface tools typically appear in the display foreground while the data or work space being manipulated typically appear in the perceptual background. This represents a trade-off in focused foreground attention versus focused background attention. By better supporting human attention we hope to improve the fluency of work, where fluency is reflected in a more seamless integration between task goals, user interface tool manipulations to achieve these goals, and feedback from the data or work space being manipulated. This research specifically focuses on the design and evaluation of transparent user interface 'layers' applied to graphical user interfaces. By allowing users to see through windows, menus, and tool palettes appearing in the perceptual foreground, an improved awareness of the underlying workspace and preservation of context are possible. However, transparent overlapping objects introduce visual interference which may degrade task performance, through reduced legibility. This dissertation explores a new interface technique (i.e., transparent layering) and, more importantly, undertakes a deeper investigation into the underlying issues that have implications for the design and use of this new technique. We have conducted a series of experiments, progressively more representative of the complex stimuli from real task domains. This enables us to systematically evaluate a variety of transparent user interfaces, while remaining confident of the applicability of the results to actual task contexts. We also describe prototypes and a case study evaluation of a working system using transparency based on our design parameters and experimental findings. Our findings indicate that similarity in both image color and in image content affects the levels of visual interference. Solid imagery in either the user interface tools (e.g., icons) or in the work space content (e.g., video, rendered models) is highly interference resistant and works well up to 75% transparent (i.e., 25% of foreground image and 75% of background content). Text and wire frame images (or line drawings) perform equally poorly but are highly usable up to 50% transparent, with no apparent performance penalty. Introducing contrasting outlining techniques improves the usability of transparent text menu interfaces up to 90% transparency. These results suggest that transparency is a usable and promising interface alternative. We suggest several methods of overcoming today's technical challenges in order to integrate transparency into existing applications.
series thesis:PhD
email
last changed 2003/02/12 22:37
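
The dissertation above reports usable transparency levels (50% to 90%) for interface layers composited over the workspace. A minimal sketch of the underlying alpha blend, assuming "75% transparent" means a 25% foreground weight and using a single grayscale pixel for illustration.

```python
def composite(foreground: float, background: float, transparency: float) -> float:
    """Blend one grayscale pixel: a layer that is `transparency` transparent
    contributes (1 - transparency) of the foreground and lets the rest of the
    background show through."""
    alpha = 1.0 - transparency
    return alpha * foreground + (1.0 - alpha) * background

# A 75%-transparent menu pixel (fg=0.2, dark text) over a bright workspace (bg=0.9)
print(round(composite(0.2, 0.9, transparency=0.75), 3))   # -> 0.725
```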

_id 1162
authors Malkawi, Ali and Jabi, Wassim
year 1996
title Integrating Shadow Casting Methodology and Thermal Simulation
source Proceedings of the Solar ‘96 Conference. Asheville, North Carolina: American Solar Energy Society, 1996, pp. 271-276
summary This paper describes an experiment that integrates shadow casting methodology and thermal simulation algorithms developed by the authors. The 3D shadow procedures use a polyhedral representation of solids within a Cartesian space that allows for accurate casting of shadows. The algorithm is also capable of calculating surface areas of polygonal shadows of any arbitrary shape and size. The thermal simulation algorithms – using the Transfer Function Method (TFM) – incorporate the shaded area calculations to better predict solar heat gain from glazing based on transmitted, absorbed, and conducted cooling loads. The paper describes the use of a 3D computer model to illustrate the impact of the pattern and area of shading on the visual and thermal properties of building apertures. The paper discusses the objectives of this experiment, the algorithms used, and their integration. Conclusions and findings are drawn.
keywords Shadow Casting Algorithms, Energy, Thermal Simulation
series other
email
last changed 2002/03/05 19:51
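
The paper above feeds polygonal shaded areas into a thermal simulation to refine solar heat gain through glazing. A much simplified sketch, not the Transfer Function Method, of how a shaded-area fraction scales the transmitted gain; the coefficient and irradiance values are illustrative assumptions.

```python
def transmitted_solar_gain(irradiance_w_m2: float, glazing_area_m2: float,
                           shgc: float, shaded_fraction: float) -> float:
    """Instantaneous transmitted gain (W): only the unshaded part of the glazing
    receives direct beam irradiance. SHGC = solar heat gain coefficient."""
    unshaded_area = glazing_area_m2 * (1.0 - shaded_fraction)
    return shgc * irradiance_w_m2 * unshaded_area

# 600 W/m2 beam irradiance, 6 m2 window, SHGC 0.6, 40% of the pane in shadow
print(transmitted_solar_gain(600, 6.0, 0.6, 0.40), "W")   # -> 1296.0 W
```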

_id maver_078
id maver_078
authors Maver, T.W.
year 1996
title Information Technology and Building Performance
source 3rd International Symposium on the Application of the Performance Concept in Building. Tel Aviv, Israel
summary The quality of the built environment depends critically on the concept of sustainability and, in particular, on designs which are energy efficient and environmentally friendly. This paper gives an account of the successful application of computer-based simulations of the physical environment made available to architects through an Energy Design Advisory Service and used parametrically within a research project carried out jointly with a design and build company. It goes on to indicate how emerging multimedia technology can be used to provide an explanation, particularly to those who are technically unsophisticated, of the complexity of the way in which design decisions impact upon the energy efficiency and environmental friendliness of buildings.
series other
email
last changed 2003/09/03 15:01

_id sigradi2023_108
id sigradi2023_108
authors Passos, Aderson, Jorge, Luna, Cavalcante, Ana, Sampaio, Hugo, Moreira, Eugenio and Cardoso, Daniel
year 2023
title Urban Morphology and Solar Incidence in Public Spaces - an Exploratory Correlation Analysis Through a CIM System
source García Amen, F, Goni Fitipaldo, A L and Armagno Gentile, Á (eds.), Accelerated Landscapes - Proceedings of the XXVII International Conference of the Ibero-American Society of Digital Graphics (SIGraDi 2023), Punta del Este, Maldonado, Uruguay, 29 November - 1 December 2023, pp. 1655–1666
summary The walkability of open spaces has been highlighted in current discussions about the production of designed environments in urban contexts (Matan, 2011). To contribute to this theme, this work selects the environmental comfort of open spaces as its element of study. The production of urban space was investigated, specifically in regard to urban morphology, understanding that city design directly influences environmental comfort (Jacobs, 1996). This work addresses the geographic context of low latitudes, specifically in hot and humid climate zones of Brazil, and, in this context, according to NBR 15220 (national performance standards), shading is one of the main comfort strategies, so solar incidence was the environmental phenomenon addressed. Thus, this work presents a digital system that performs exploratory analysis on the correlations between urban form indicators and environmental performance indicators, specifically solar incidence. The method consists of three steps: urban form modeling (1), indicator measurement (2) and correlation analysis (3). In the first stage, different spatial sections of a city in Brazil were represented in the digital environment (1). This work’s implementation instrument is based on a City Information Modeling framework (Beirao et al., 2012). Visual Programming Interface (VPI) and Geographic Information Systems (GIS) tools were used, in addition to a Relational Database Management System (RDBMS). Then, for each urban section, the values of morphological indicators and the incidence of solar radiation were measured (2). Based on the values of the indicators, an exploration of their correlation was carried out by statistical methods (3). The results of the correlation analysis and the corresponding scatter plots are presented. Finally, possible applications of the results for the creation of prescriptive urban planning systems are discussed, seeking to promote a sustainable urban environment.
keywords Urban planning, Environmental comfort, Walkability, Urban morphology, Statistical methods.
series SIGraDi
email
last changed 2024/03/08 14:09
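
Step 3 of the method above is a statistical correlation between urban form indicators and solar incidence. A minimal sketch of that step with made-up per-sample values; the indicator names are assumptions, not the ones measured in the paper. Requires Python 3.10+ for statistics.correlation.

```python
from statistics import correlation   # Pearson's r, Python 3.10+

# Hypothetical per-sample values: floor-area ratio vs. annual solar incidence (kWh/m2)
floor_area_ratio = [0.8, 1.2, 2.0, 2.9, 3.5, 4.1]
solar_incidence  = [1450, 1380, 1200, 1010, 940, 870]

r = correlation(floor_area_ratio, solar_incidence)
print(f"Pearson r = {r:.2f}")   # strongly negative: denser fabric, less sun on open space
```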

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred-forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 Kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations" simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques   Figure 3 Trellis interpreted with "graphic ivy"   Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric" 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
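
Section 1.2 of the abstract above describes breeding closed polygonal shapes whose "genes" are their point lists. A minimal sketch of one such crossover, assuming both parents are first resampled to a common vertex count; the vertex-wise blend shown is only one of the many combination rules the author says he tried.

```python
import math

def resample(poly, n):
    """Pick n vertices at evenly spaced indices so both parents align."""
    step = len(poly) / n
    return [poly[int(i * step) % len(poly)] for i in range(n)]

def breed(parent_a, parent_b, n=32, weight=0.5):
    """Child polygon: vertex-wise blend of two closed polygons ('genes' = point lists)."""
    a, b = resample(parent_a, n), resample(parent_b, n)
    return [((1 - weight) * ax + weight * bx, (1 - weight) * ay + weight * by)
            for (ax, ay), (bx, by) in zip(a, b)]

circle = [(math.cos(2 * math.pi * i / 100), math.sin(2 * math.pi * i / 100))
          for i in range(100)]                           # a circle as a 100-gon
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
child = breed(circle, square)
print(len(child), "vertices in the bred shape")
```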

_id avocaad_2001_17
id avocaad_2001_17
authors Ying-Hsiu Huang, Yu-Tung Liu, Cheng-Yuan Lin, Yi-Ting Cheng, Yu-Chen Chiu
year 2001
title The comparison of animation, virtual reality, and scenario scripting in design process
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary Design media is a fundamental tool, which can incubate concrete ideas from ambiguous concepts. Evolved from freehand sketches, physical models to computerized drafting, modeling (Dave, 2000), animations (Woo, et al., 1999), and virtual reality (Chiu, 1999; Klercker, 1999; Emdanat, 1999), different media are used to communicate to designers or users with different conceptual levels during the design process. Extensively employed in the design process, physical models help designers in managing forms and spaces more precisely and more freely (Millon, 1994; Liu, 1996). Computerized drafting, models, animations, and VR have gradually replaced conventional media, freehand sketches and physical models. Diversely used in the design process, computerized media allow designers to handle more divergent levels of space than conventional media do. The rapid emergence of computers in the design process has ushered in efforts to examine the visual impact of these media (Rahman, 1992). He also emphasized the use of computerized media: modeling and animations. Moreover, based on Rahman's study, Bai and Liu (1998) applied a new design medium, virtual reality, to the design process. In doing so, they proposed an evaluation process to examine the visual impact of this new media in the design process. That same investigation pointed towards the facilitative role of the computerized media in enhancing topical comprehension, concept realization, and development of ideas. Computer technology fosters the growth of emerging media. A new computerized media, scenario scripting (Sasada, 2000; Jozen, 2000), markedly enhances computer animations and, in doing so, positively impacts design processes. For the three latest media, i.e., computerized animation, virtual reality, and scenario scripting, the following question arises: What role does visual impact play in different design phases of these media? Moreover, what is the origin of such an impact? Furthermore, what are the similarities and variances of computing techniques, principles of interaction, and practical applications among these computerized media? This study investigates the similarities and variances among computing techniques, interacting principles, and their applications in the above three media. Different computerized media in the design process are also adopted to explore related phenomena by using these three media in two projects. First, a renewal planning project of the old district of Hsinchu City is inspected, in which animations and scenario scripting are used. Second, the renewal project is compared with a progressive design project for the Hsinchu Digital Museum, as designed by Peter Eisenman. Finally, similarity and variance among these computerized media are discussed. This study also examines the visual impact of these three computerized media in the design process. In computerized animation, although other designers can realize the spatial concept in design, users cannot fully comprehend the concept. On the other hand, other media such as virtual reality and scenario scripting enable users to more directly comprehend what the designer is presenting. Future studies should more closely examine how these three media impact the design process. This study not only provides further insight into the fundamental characteristics of the three computerized media discussed herein, but also enables designers to adopt different media in the design stages. Both designers and users can more fully understand design-related concepts.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id avocaad_2001_16
id avocaad_2001_16
authors Yu-Ying Chang, Yu-Tung Liu, Chien-Hui Wong
year 2001
title Some Phenomena of Spatial Characteristics of Cyberspace
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary "Space," which has long been an important concept in architecture (Bloomer & Moore, 1977; Mitchell, 1995, 1999), has attracted interest of researchers from various academic disciplines in recent years (Agnew, 1993; Benko & Strohmayer, 1996; Chang, 1999; Foucault, 1982; Gould, 1998). Researchers from disciplines such as anthropology, geography, sociology, philosophy, and linguistics regard it as the basis of the discussion of various theories in social sciences and humanities (Chen, 1999). On the other hand, since the invention of Internet, Internet users have been experiencing a new and magic "world." According to the definitions in traditional architecture theories, "space" is generated whenever people define a finite void by some physical elements (Zevi, 1985). However, although Internet is a virtual, immense, invisible and intangible world, navigating in it, we can still sense the very presence of ourselves and others in a wonderland. This sense could be testified by our naming of Internet as Cyberspace -- an exotic kind of space. Therefore, as people nowadays rely more and more on the Internet in their daily life, and as more and more architectural scholars and designers begin to invest their efforts in the design of virtual places online (e.g., Maher, 1999; Li & Maher, 2000), we cannot help but ask whether there are indeed sensible spaces in Internet. And if yes, these spaces exist in terms of what forms and created by what ways?To join the current interdisciplinary discussion on the issue of space, and to obtain new definition as well as insightful understanding of "space", this study explores the spatial phenomena in Internet. We hope that our findings would ultimately be also useful for contemporary architectural designers and scholars in their designs in the real world.As a preliminary exploration, the main objective of this study is to discover the elements involved in the creation/construction of Internet spaces and to examine the relationship between human participants and Internet spaces. In addition, this study also attempts to investigate whether participants from different academic disciplines define or experience Internet spaces in different ways, and to find what spatial elements of Internet they emphasize the most.In order to achieve a more comprehensive understanding of the spatial phenomena in Internet and to overcome the subjectivity of the members of the research team, the research design of this study was divided into two stages. At the first stage, we conducted literature review to study existing theories of space (which are based on observations and investigations of the physical world). At the second stage of this study, we recruited 8 Internet regular users to approach this topic from different point of views, and to see whether people with different academic training would define and experience Internet spaces differently.The results of this study reveal that the relationship between human participants and Internet spaces is different from that between human participants and physical spaces. In the physical world, physical elements of space must be established first; it then begins to be regarded as a place after interaction between/among human participants or interaction between human participants and the physical environment. In contrast, in Internet, a sense of place is first created through human interactions (or activities), Internet participants then begin to sense the existence of a space. 
Therefore, it seems that, among the many spatial elements of the Internet we found, "interaction/reciprocity", either between/among human participants or between human participants and the computer interface, is the most crucial element. In addition, another interesting result of this study is that verbal (linguistic) elements could provoke a sense of space to a degree higher than 2D visual representation and no less than 3D visual simulations. Nevertheless, verbal and 3D visual elements seem to work in different ways in terms of cognitive behaviors: Verbal elements provoke visual imagery and other sensory perceptions by "imagining" and then excite personal experiences of space; visual elements, on the other hand, provoke and excite visual experiences of space directly by "mapping". Finally, it was found that participants with different academic training did experience and define space differently. For example, when experiencing and analyzing Internet spaces, architecture designers, the creators of the physical world, emphasize the design of circulation and orientation, while participants with linguistics training focus more on subtle language usage. Visual designers tend to analyze the graphical elements of virtual spaces based on traditional painting theories; industrial designers, on the other hand, tend to treat these spaces as industrial products, emphasizing the concept of user-centredness and the control of the computer interface. The findings of this study seem to add new information to our understanding of virtual space. It would be interesting for future studies to investigate how this information influences architectural designers in their real-world practices in this digital age. In addition, to obtain a fuller picture of Internet space, further research is needed to study the same issue by examining more Internet participants who have no formal linguistics and graphical training.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 1584
authors Moeck, M. and Selkowitz, S.E.
year 1996
title A computer-based daylight systems design tool
source Automation in Construction 5 (3) (1996) pp. 193-209
summary Currently, numbers such as illuminance or glare index are used for the evaluation of daylight system designs. We propose to use photorealistic pictures in addition to numbers as a way to assess the quality of a design solution. This is necessary since the numbers-based performance criteria that are currently in use are either not sufficient to evaluate performance or require expert knowledge for interpretation. The paper discusses the implications and ramifications of this approach.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 0f0e
authors Andrzejewski, H. and Rostanski, K.
year 1996
title Landscape Design Tool of Wide Ecological Aspect
source CAD Creativeness [Conference Proceedings / ISBN 83-905377-0-2] Bialystock (Poland), 25-27 April 1996 pp. 7-12
summary The article presents a new tool prepared at two technical universities in Poland. The packet as a whole, in its current condition, has mainly been elaborated by Henryk Andrzejewski at the Faculty of Architecture of Wroclaw Technical University. The plant and vegetation units specifier has so far been prepared by Krzysztof M. Rostanski and Miroslaw Rogula at the Faculty of Architecture of Silesian Technical University. The packet allows the user to create a new text database of plants, to add external data to the existing database, and to change, view and search the data of the existing database of plants according to a selection based on non-graphic search criteria. The packet will finally have 4 modules. One of them is the 'plant and vegetation units specifier', some details of which are shown here. A new aspect is the content of the database, which helps to estimate the ecological influence of a designed group of plants on our body and mind.
series plCAD
last changed 2003/05/17 10:01
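
The record above describes searching a plant database by non-graphic criteria. A minimal sketch of such a selection with hypothetical fields and records; the packet's actual schema is not given in the abstract.

```python
# Hypothetical plant records; field names and values are illustrative, not the packet's schema.
plants = [
    {"name": "Betula pendula", "height_m": 25, "soil": "sandy", "evergreen": False},
    {"name": "Pinus sylvestris", "height_m": 35, "soil": "sandy", "evergreen": True},
    {"name": "Salix alba", "height_m": 25, "soil": "wet", "evergreen": False},
]

def select(records, **criteria):
    """Return records whose fields satisfy every given predicate (non-graphic search)."""
    return [r for r in records
            if all(pred(r[field]) for field, pred in criteria.items())]

tall_evergreens = select(plants, height_m=lambda h: h >= 30, evergreen=lambda e: e)
print([p["name"] for p in tall_evergreens])   # -> ['Pinus sylvestris']
```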

_id 0b25
authors Gross , Mark D.
year 1996
title Elements That Follow Your Rules: Constraint Based CAD Layout
doi https://doi.org/10.52842/conf.acadia.1996.115
source Design Computation: Collaboration, Reasoning, Pedagogy [ACADIA Conference Proceedings / ISBN 1-880250-05-5] Tucson (Arizona / USA) October 31 - November 2, 1996, pp. 115-122
summary The paper reports on CKB (Construction Kit Builder) a prototype CAD program that designers can program with positioning and assembly rules for layout of building elements. The program's premise is that designing can be understood as a process of making and following rules for the selection, position, and dimension of built and space elements. CKB operates at two distinct levels of design: the technical system designer, who makes the rules, and the end designer, who lays out the material and space elements to make a design. CKB supports two kinds of rules with constraint based programming techniques: grid and zone based position rules, and assembly rules that position elements with respect to one another. The paper discusses the rationale for CKB and describes its implementation.
series ACADIA
email
last changed 2022/06/07 07:50
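
The abstract above describes CKB's grid- and zone-based position rules in general terms. A toy sketch of one such rule, assuming it simply snaps an element to a planning grid and clamps it inside an allowed zone; CKB's constraint engine is of course richer than this.

```python
def grid_position_rule(x: float, y: float, grid: float,
                       zone: tuple[float, float, float, float]) -> tuple[float, float]:
    """Snap a point to the nearest grid intersection, then clamp it into the zone
    (xmin, ymin, xmax, ymax) -- a toy stand-in for a CKB-style positioning rule."""
    xmin, ymin, xmax, ymax = zone
    gx = round(x / grid) * grid
    gy = round(y / grid) * grid
    return (min(max(gx, xmin), xmax), min(max(gy, ymin), ymax))

# Place a column roughly at (3.4, 7.8) on a 1.5 m grid, restricted to a 10 x 6 m zone
print(grid_position_rule(3.4, 7.8, grid=1.5, zone=(0.0, 0.0, 10.0, 6.0)))  # -> (3.0, 6.0)
```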

_id f5a3
authors Maher, M.L. and Gomez de Silva Garza, A.
year 1996
title Developing case-based reasoning for structural design
source IEEE Expert
summary Case-based systems enable users to retrieve previously known designs from memory and adapt them to fit the current design problem. The four case-based design systems described here illustrate how various implementations achieve design assistance or design automation objectives. Case-based reasoning is a problem-solving technique that makes analogies between a problem and previously encountered situations (cases) relevant to solving the problem. Using CBR as a design process model involves the subtasks of recalling previously known designs from memory and adapting these design cases or subcases to fit the current design context. The detailed development of this process model for a particular design domain proceeds in parallel with the development of the case representation, the case memory organization, and the necessary design knowledge. The selection of an information representation paradigm and the details of its use for a problem-solving domain depend on the intended use of the information, the project information available, and the nature of the domain. CBR could be used to develop and implement a CBR system. Although that sounds circular, if CBR is a viable approach to problem solving, it can be applied to the development of the reasoning system itself. Toward that end, this article presents four "cases" of case-based building design systems that we've developed at the University of Sydney: CaseCAD, CADsyn, Win, and Demex. These systems exemplify alternative case memory contents and organizations and provide insight into different potential implementations of the recall and adaptation subprocesses.
series journal paper
email
last changed 2003/04/23 15:14
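
The article above frames case-based design as recalling the most similar prior case and adapting it to the new problem. A minimal sketch of that retrieve-and-adapt loop over toy structural cases; the features, similarity measure and adaptation rule are illustrative assumptions, not those of CaseCAD, CADsyn, Win or Demex.

```python
# Toy case base: span (m) and load (kN/m) -> chosen beam depth (mm)
CASES = [
    {"span": 6.0, "load": 10.0, "beam_depth": 300},
    {"span": 9.0, "load": 15.0, "beam_depth": 450},
    {"span": 12.0, "load": 20.0, "beam_depth": 600},
]

def retrieve(problem):
    """Recall the case closest to the new problem (simple weighted distance)."""
    return min(CASES, key=lambda c: abs(c["span"] - problem["span"])
                                    + 0.5 * abs(c["load"] - problem["load"]))

def adapt(case, problem):
    """Adapt the recalled solution in proportion to the change in span."""
    return round(case["beam_depth"] * problem["span"] / case["span"])

problem = {"span": 10.0, "load": 16.0}
best = retrieve(problem)
print("recalled:", best, "-> adapted depth:", adapt(best, problem), "mm")
```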

_id 6ab6
authors Maher, M.L., Rutherford, J. and Gero, J.
year 1996
title Graduate Design Computing Teaching at the University of Sydney
doi https://doi.org/10.52842/conf.caadria.1996.233
source CAADRIA ‘96 [Proceedings of The First Conference on Computer Aided Architectural Design Research in Asia / ISBN 9627-75-703-9] Hong Kong (Hong Kong) 25-27 April 1996, pp. 233-244
summary Design Computing involves the effective application of computing technologies, digital media, formal methods and design theory to the study and practice of design. Computers are assuming a prominent role in design practice. This change has been partly brought about by economic pressures to improve the efficiency of design practice, but there has also been a desire to aid the design process in order to produce better designs. The introduction of new computer-based techniques and methods generally involves a re-structuring of practice and ways of designing. We are also seeing significant current developments that have far reaching implications for the future. These innovations are occurring at a rapid rate and are imposing increasing pressures on design professionals. A re-orientation of skills is required in order to acquire and manage computer resources. If designers are to lead rather than follow developments then they need to acquire specialist knowledge: a general understanding of computers and their impact, expertise in the selection and management of computer-aided design systems, and skill in the design and implementation of computer programs and systems. Computing also demands technical competence, an awareness of advances in the field and an innovative spirit to harness the technology.
series CAADRIA
email
last changed 2022/06/07 07:59

_id 5876
authors Tapia, Mark Andrew
year 1996
title From shape to style. Shape grammars: Issues in representation and computation, presentation and selection
source University of Toronto
summary Shape grammars provide a graphical mechanism for generating a variety of shapes. A shape grammar is a production system for specifying recursive graphical computations for shapes (finite arrangements of finite lines of non-zero length). The dissertation considers design as a plan in art and confines itself to abstract designs composed of lines of uniform color and thickness. The dissertation develops an implementation of shape grammars in which the drawing is the computation. Restricting itself to non-parametric shape grammars, the dissertation approaches the area as two related topics: computation and representation delineate the internal aspects of the problem; presentation and selection are crucial to the user interface. The dissertation applies shape grammars to design, promoting three claims: First, that this dissertation advances the field of shape grammars, by combining approaches in the humanities with those in science, articulating the issues and providing a solid foundation for future work. Second, supporting quality design depends on enumerating the alternatives and pruning the design space using the visual aspects of design. Third, the generative aspect of design is not as important as its presentation and selection.
series thesis:PhD
email
last changed 2003/02/12 22:37
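
The dissertation above treats a shape grammar as a production system rewriting finite arrangements of lines. A minimal sketch of applying one non-parametric rule, assuming a shape is a set of line segments; the nested-squares rule is a textbook illustration, not one of the grammars studied in the thesis.

```python
# A shape is a set of line segments; each segment is a sorted pair of points.
def seg(p, q):
    return tuple(sorted((p, q)))

def square(x, y, s):
    """Four segments of an axis-aligned square with corner (x, y) and side s."""
    return {seg((x, y), (x + s, y)), seg((x + s, y), (x + s, y + s)),
            seg((x + s, y + s), (x, y + s)), seg((x, y + s), (x, y))}

def apply_rule(shape, x, y, s):
    """Rule A -> A + a: if the square (x, y, s) occurs in the shape as a subshape,
    add its centred half-scale copy; otherwise the rule does not apply."""
    lhs = square(x, y, s)
    if not lhs <= shape:
        return shape
    return shape | square(x + s / 4, y + s / 4, s / 2)

design = square(0, 0, 4)
design = apply_rule(design, 0, 0, 4)          # first derivation step
design = apply_rule(design, 1.0, 1.0, 2.0)    # rewrite the inset square in turn
print(len(design), "line segments after two rule applications")
```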
