CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 487

_id 2354
authors Clayden, A. and Szalapaj, P.
year 1997
title Architecture in Landscape: Integrated CAD Environments for Contextually Situated Design
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.q6p
summary This paper explores the future role of a more holistic and integrated approach to the design of architecture in landscape. Many of the design exploration and presentation techniques presently used by particular design professions do not lend themselves to an inherently collaborative design strategy.

Within contemporary digital environments, there are increasing opportunities to explore and evaluate design proposals which integrate both architectural and landscape aspects. The production of integrated design solutions exploring buildings and their surrounding context is now possible through the design development of shared 3-D and 4-D virtual environments, in which buildings no longer float in space.

The scope of landscape design has expanded through the application of techniques such as GIS, allowing interpretations that include social, economic and environmental dimensions. In architecture, for example, object-oriented CAD environments now make it feasible to integrate conventional modelling techniques with analytical evaluations such as energy calculations and lighting simulations. These were all ambitions of architects and landscape designers in the 70s, when limited computer power restricted the successful implementation of these ideas; instead, the commercial trend at that time moved towards isolated specialist tools for particular areas. Prior to recent innovations in computing, architecture and landscape design were separated through what we view as the unnecessary development of discipline-specific symbolic representations and the computer applications built on them, leading to an unnatural separation between once closely related disciplines.

Significant increases in the performance of computers are now making it possible to move on from symbolic representations towards more contextual and meaningful representations. For example, the application of realistic material textures to CAD-generated building models can then be linked to energy calculations using the chosen materials. It is now possible for a tree to look like a tree, to have leaves and even to be botanically identifiable. The building and landscape can be rendered from a common database of digital samples taken from the real world. The complete model may be viewed in a more meaningful way either through stills or animation, or better still, through a total simulation of the lifecycle of the design proposal. The model may also be used to explore environmental/energy considerations and changes in the balance between the building and its context, most immediately through the growth simulation of vegetation but also as part of a larger planning model.

The Internet has a key role to play in facilitating this emerging collaborative design process. Design professionals are now able via the net to work on a shared model and to explore and test designs through the development of VRML, JAVA, whiteboarding and video conferencing. The end product may potentially be something that can be more easily viewed by the client/user. The ideas presented in this paper form the basis for the development of a dual course in landscape and architecture. This will create new teaching opportunities for exploring the design of buildings and sites through the shared development of a common computer model.

keywords Integrated Design Process, Landscape and Architecture, Shared Environments
series eCAADe
more http://info.tuwien.ac.at/ecaade/proc/szalapaj/szalapaj.htm
last changed 2022/06/07 07:50
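
As an aside to the record above: the abstract envisions one shared database serving both rendering and energy calculation. Below is a minimal sketch of such a coupling in Python, assuming a hypothetical material record and the standard steady-state loss formula Q = U * A * dT; all names and values are illustrative, not from the paper.

```python
# Hypothetical shared material library: one entry feeds both the renderer
# (texture) and the energy calculation (U-value). Illustrative only.
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    texture_file: str   # used by the renderer
    u_value: float      # W/m2K, used by the energy calculation

LIBRARY = {
    "brick":  Material("brick", "textures/brick.png", 2.0),
    "timber": Material("timber", "textures/timber.png", 0.9),
}

def heat_loss(material_name: str, area_m2: float, delta_t: float) -> float:
    """Steady-state transmission loss Q = U * A * dT for one element."""
    m = LIBRARY[material_name]
    return m.u_value * area_m2 * delta_t

# The same LIBRARY entry that textures a wall also feeds the calculation:
print(heat_loss("brick", area_m2=12.0, delta_t=20.0))  # 480.0 W
```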

_id f5ee
authors Erhorn, H., De Boer, J. and Dirksmueller, M.
year 1997
title ADELINE, an Integrated Approach to Lighting Simulation
source Proceedings of Right Light 4, 4th European Conference on Energy-Efficient Lighting, pp.99-103
summary The use of daylighting and artificial lighting simulation programs to calculate complex systems and models in design practice is often impeded by the fact that the operation of these programs, especially the model input, is extremely complicated and time-consuming. Programs that are easier to use generally lack the calculation capabilities required in practice. A second obstacle is that the lighting calculations often do not allow any statements regarding interactions with the energetic and thermal performance of the building. Both problems are mainly due to a lack of integration with the design tools of other building design practitioners, as well as to insufficient user interfaces. The program package ADELINE (Advanced Daylight and Electric Lighting Integrated New Environment), available since May 1996 as the completely revised version 2.0, presents a promising approach to solving these problems. This contribution describes the approaches and methods used within the international project IEA Task 21 for the further development of the ADELINE system. The aim of this work is a further improvement of the user interfaces, based on the inclusion of new dialogs and on porting the program system from MS-DOS to the Windows NT platform. Additional focus is placed on the pragmatic use of recent developments in the field of information technology and of experience gained in other projects on integrated building design systems, such as EU-COMBINE. An integrated building design system with open, standardized interfaces is to be achieved, inter alia, by using ISO STEP formats, database technologies and a consistent object-oriented design.
series other
last changed 2003/04/23 15:50

_id 6d59
authors Papamichael, K., LaPorta, J. and Chauvet, H.
year 1997
title Building Design Advisor: automated integration of multiple simulation tools
source Automation in Construction 6 (4) (1997) pp. 341-352
summary The Building Design Advisor (BDA) is a software environment that supports the integrated use of multiple analysis and visualization tools throughout the building design process, from the initial, conceptual and schematic phases to the detailed specification of building components and systems. Based on a comprehensive design theory, the BDA uses an object-oriented representation of the building and its context, and acts as a data manager and process controller to allow building designers to benefit from the capabilities of multiple tools. The BDA provides a graphical user interface that consists of two main elements: the Building Browser and the Decision Desktop. The Browser allows building designers to quickly navigate through the multitude of descriptive and performance parameters addressed by the analysis and visualization tools linked to the BDA. Through the Browser the user can edit the values of input parameters and select any number of input and/or output parameters for display in the Decision Desktop. The Desktop allows building designers to compare multiple design alternatives with respect to multiple descriptive and performance parameters addressed by the tools linked to the BDA. The BDA is implemented as a Windows®-based application for personal computers. Its initial version is linked to a Schematic Graphic Editor (SGE), which allows designers to quickly and easily specify the geometric characteristics of building components and systems. For every object created in the SGE, the BDA activates a Default Value Selector (DVS) mechanism that selects `smart' default values from a Prototypes Database for all non-geometric parameters required as input to the analysis and visualization tools linked to the BDA. In addition to the SGE that is an integral part of its user interface, the initial version of the BDA is linked to a daylight analysis tool, an energy analysis tool, and a multimedia, Web-based Case Studies Database (CSD). The next version of the BDA will be linked to additional analysis tools, such as the DOE-2 (thermal, energy and energy cost) and RADIANCE (day/lighting and rendering) computer programs. Plans for the future include the development of links to cost estimating and environmental impact modules, building rating systems, CAD software and electronic product catalogs.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:23
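
The Default Value Selector described in the record above can be pictured as a lookup into a prototypes database keyed by object type. Below is a minimal sketch of that mechanism, assuming a dict-based store; the parameter names and values are invented for illustration and are not the BDA's actual schema.

```python
# Invented prototypes database: non-geometric defaults per object kind.
PROTOTYPES = {
    "window": {"glazing_layers": 2, "u_value": 2.8, "visible_transmittance": 0.78},
    "wall":   {"construction": "brick cavity", "u_value": 0.45},
}

def create_object(kind: str, geometry: dict, **overrides) -> dict:
    """Combine user-drawn geometry with prototype defaults; user edits win."""
    obj = {"kind": kind, "geometry": geometry}
    obj.update(PROTOTYPES.get(kind, {}))   # 'smart' defaults from the database
    obj.update(overrides)                  # user-edited values take precedence
    return obj

w = create_object("window", {"width": 1.2, "height": 1.5})
print(w["u_value"])  # 2.8, supplied by the prototype and editable later
```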

_id fe7f
authors Schofield, A.J., Stonham, T.J. and Mehta, P.A.
year 1997
title Automated people counting to aid lift control
source Automation in Construction 6 (5-6) (1997) pp. 437-445
summary It has been suggested that the efficiency of elevator systems could be improved if lift controllers had access to accurate counts of the number of passengers waiting at each floor. Video cameras and image processing techniques represent a convenient and non-intrusive solution to the people counting problem and can produce reasonably accurate counts at moderate cost. This paper addresses the problem of people counting using video techniques, not the problem of lift control. For a video-based counting system to be of use it must distinguish people from other (background) objects in the field of view, the principal difficulty being due to variations in the background scene caused by changes in lighting and the movement of objects. The system discussed here uses neural networks to distinguish between parts of the background scene and non-background objects (people). This system is able to form a compact representation of multiple background images and hence deal with variations in the scene under analysis without requiring large amounts of memory or processing time.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:23
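
The record above uses neural networks as the background classifier; the sketch below substitutes plain background differencing (a much simpler stand-in, named as such) just to make the counting stage concrete: label foreground pixels, then count connected blobs above a minimum size. Threshold and blob-size values are arbitrary assumptions.

```python
import numpy as np

def count_people(frame: np.ndarray, background: np.ndarray,
                 thresh: float = 30.0, min_blob: int = 20) -> int:
    # Foreground = pixels that differ strongly from the background model.
    fg = np.abs(frame.astype(float) - background.astype(float)) > thresh
    seen = np.zeros_like(fg, dtype=bool)
    rows, cols = fg.shape
    count = 0
    for r in range(rows):
        for c in range(cols):
            if fg[r, c] and not seen[r, c]:
                # Flood-fill one connected foreground blob.
                stack, size = [(r, c)], 0
                seen[r, c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and fg[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if size >= min_blob:   # ignore small specks
                    count += 1
    return count

bg = np.zeros((40, 40))
frame = bg.copy()
frame[5:15, 5:12] = 255.0   # one person-sized blob
print(count_people(frame, bg))  # 1
```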

_id avocaad_2001_19
authors Shen-Kai Tang, Yu-Tung Liu, Yu-Sheng Chung, Chi-Seng Chung
year 2001
title The visual harmony between new and old materials in the restoration of historical architecture: A study of computer simulation
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary In research on historical architecture restoration, scholars have focused either on the field of architectural context and architectural archaeology (Shi, 1988, 1990, 1991, 1992, 1995; Fu, 1995, 1997; Chiu, 2000) or on building construction and the procedure of restoration (Shi, 1988, 1989; Chiu, 1990). How to choose materials and cope with their durability is an important issue in the restoration of historical architecture (Dasser, 1990; Wang, 1998). In related research on the usage and durability of materials, some scholars hold that, instead of continuing the traditional practice that has lasted for hundreds of years (that is, replacing old materials with new ones), it might be better to keep the original materials (Dasser, 1990). However, some of the originals are unavoidably badly worn. Thus we must first establish a standard for eliminating components, and then replace the eliminated components with identical or similar materials (Lee, 1990). After completing a restoration, we often unexpectedly find that the renewed historical building looks so new that the sense of history is eliminated (Dasser, 1990; Fu, 1997); this is in fact an important factor in judging the success of a restoration. Earlier scholars found that the contrast and conflict between new and old materials are attributable to different times of manufacture and different coatings (antiseptic, pattern, etc.), which result in discrepancies in visual perception (Lee, 1990; Fu, 1997; Dasser, 1990). In recent years, a substantial body of research into and practice of computer technology has appeared in the field of architectural design. The application of systematic software, such as image processing, computer graphics, computer modelling/rendering, animation, multimedia and virtual reality, lets us carry out design communication more precisely (Lawson, 1995; Liu, 1996). The application of computer technology to research on the preservation of historical architecture came comparatively late: some researchers have explored the procedure of restoration with computer simulation technology (Potier, 2000), or established digital databases from investigations of historical architecture (Sasada, 2000; Wang, 1998). How materials are chosen with computer simulation technology influences visual perception; Liu (2000) has obtained a fairly complete result on visual impact analysis and assessment (VIAA) in research on urban design projects. The main question of this paper is whether computer simulation technology can mitigate the conflict between new and old materials as it bears on visual perception. The objective is to propose a standard method for achieving visual harmony between materials in historical architecture, taking the Gigi Train Station, destroyed by the earthquake last September, as the operating example. There are five steps in this research: 1. Categorize the materials of the historical building and enter the information into a digital database. 2. Obtain new materials for the historical building and enter the information into the digital database. 3. According to the mixing amounts of new and old materials, determine their proportions in the building, mixing new and old materials in a certain way. 4. Assign the mixed materials to the computer model and run the lighting simulation.
5. Have experts and citizens evaluate the finished computer model in order to propose the expected standard method. Following the experiment outlined above, we first set out a procedure for material simulation in historical architecture restoration and then offer some suggestions on how to mix new and old materials. Through this simulation procedure we gain better control over the restoration of historical architecture, the discrepancy and discordance between new and old materials can be reduced, and we avoid reconstructing "too new" historical architecture.
series AVOCAAD
last changed 2005/09/09 10:48
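
Step 3 of the procedure in the record above mixes new and old materials in a chosen proportion before assigning them to the computer model. Below is a minimal sketch of one way such an assignment could be made, assuming surfaces are simply tagged at random in the given proportion; the paper's actual mixing rule is not stated in the abstract.

```python
import random

def assign_materials(surfaces, old_fraction: float, seed: int = 0) -> dict:
    """Tag each surface 'old' or 'new' so roughly old_fraction are old."""
    rng = random.Random(seed)   # seeded for a repeatable simulation run
    return {s: ("old" if rng.random() < old_fraction else "new")
            for s in surfaces}

walls = [f"wall_{i}" for i in range(10)]
print(assign_materials(walls, old_fraction=0.7))
```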

_id eb53
authors Asanowicz, K. and Bartnicka, M.
year 1997
title Computer analysis of visual perception - endoscopy without endoscope
source Architectural and Urban Simulation Techniques in Research and Education [Proceedings of the 3rd European Architectural Endoscopy Association Conference / ISBN 90-407-1669-2]
summary This paper presents a method of using computer animation techniques to address problems of visual pollution in the city environment. It is our observation that human-induced degradation of the city environment results from well-intentioned but inappropriate preservation actions by uninformed designers and local administrations. Very often, a local municipal administration permits the building of houses that fit badly with their surroundings. This is usually connected with a lack of visual information about the housing areas of a city, their features and characteristics. The CAMUS system (Computer Aided Management of Urban Structure) is being created at the Faculty of Architecture of Bialystok Technical University. One of its integral parts is VIA - Visual Impact of Architecture. The basic element of this system is a geometrical model of the housing areas of Bialystok. This model can be enhanced using rendering packages, as they create the basis for checking our perception of a given area. An inspiration for this approach was the digital endoscopy presented by J. Breen and M. Stellingwerff at the 2nd EAEA Conference in Vienna. We present the possibilities of using simple computer programs for the analysis of spatial models. This contribution presents those aspects of computer presentation which demonstrate that computers can achieve the same effects as an endoscope, and that their use is often much more efficient and effective.
keywords Architectural Endoscopy, Endoscopy, Simulation, Visualisation, Visualization, Real Environments
series EAEA
more http://www.bk.tudelft.nl/media/eaea/eaea97.html
last changed 2005/09/09 10:43

_id 0627
authors Dijkstra, J. and Timmermans, H.J.P.
year 1997
title Exploring the Possibilities of Conjoint Measurement as a Decision-Making Tool for Virtual Wayfinding Environments
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 61-71
doi https://doi.org/10.52842/conf.caadria.1997.061
summary Virtual reality systems may have a lot to offer in architecture and urban planning, since visual and interactive environments can have a dramatic impact on individual preferences and choice behaviour. Conjoint analysis involves the use of designed hypothetical choice situations to measure individuals’ preferences and predict their choices in new situations. Conjoint experiments involve the design and analysis of hypothetical decision tasks. Alternatives are described by their main features, called attributes. Multiple hypothetical alternatives, called product profiles, are generated and presented to respondents, who are requested to express their degree of preference for these profiles or to choose between them. Conjoint experiments have become a popular tool for modelling individual preferences and decision-making in a variety of research areas. Most studies of conjoint analysis have involved a verbal description of product profiles, although some studies have used a pictorial presentation of product profiles. Virtual reality systems offer the potential of moving the response format beyond these traditional response modes. This paper describes a particular aspect of an ongoing research project which aims to develop a virtual reality based system for conjoint analysis. The principles underlying the system will be illustrated by a simple example of wayfinding in a virtual environment.
series CAADRIA
last changed 2022/06/07 07:55
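
Profile generation in a conjoint experiment is, at its simplest, the Cartesian product of attribute levels (a full factorial design). Below is a minimal sketch with invented wayfinding attributes; the study's real attributes and design are not given in the abstract.

```python
from itertools import product

# Invented attributes and levels for a wayfinding experiment.
ATTRIBUTES = {
    "signage":  ["none", "arrows", "you-are-here maps"],
    "lighting": ["dim", "bright"],
    "corridor": ["narrow", "wide"],
}

# Each profile is one combination of attribute levels.
profiles = [dict(zip(ATTRIBUTES, combo))
            for combo in product(*ATTRIBUTES.values())]

print(len(profiles))   # 3 * 2 * 2 = 12 hypothetical alternatives
print(profiles[0])     # {'signage': 'none', 'lighting': 'dim', 'corridor': 'narrow'}
```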

_id 6f47
authors Lewin, J.S. and Gross, M.D.
year 1997
title Resolving archaeological site data with 3D computer modeling: the case of Ceren
source Automation in Construction 6 (4) (1997) pp. 323-334
summary This paper reports on our experience working with a team of anthropologists to construct three-dimensional computer graphic models of Ceren, an archaeological site in western El Salvador, using inexpensive hardware and software. In constructing the model we discovered various ambiguities and inconsistencies in the raw site data and drawings we were provided. We resolved these problems by analysis and reinterpretation of the data, working closely with our archaeologist collaborator. What began as a simple exercise in rendering developed into a collaborative research effort to understand and interpret the source data. The process of computer modeling forced us to re-examine, analyze and interpret the information from the site.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id diss_marsh
authors Marsh, A.J.
year 1997
title Performance Analysis and Conceptual Design
source School of Architecture and Fine Arts, University of Western Australia
summary A significant amount of the research referred to by Manning has been directed into the development of computer software for building simulation and performance analysis. A wide range of computational tools are now available and see relatively widespread use in both research and commercial applications. The focus of development in this area has long been on the accurate simulation of fundamental physical processes, such as the mechanisms of heat flow through materials, turbulent air movement and the inter-reflection of light. The adequate description of boundary conditions for such calculations usually requires a very detailed mathematical model. This has tended to produce tools with a very engineering-oriented and solution-based approach. Whilst becoming increasingly popular amongst building services engineers, this technology has met a relatively slow response amongst architects. There are some areas of the world, particularly the UK and Germany, where the use of such tools on larger projects is routine. However, this is almost exclusively during the latter stages of a project and usually for purposes of plant sizing or final design validation; the original conceptual work, the building form and the selection of materials remain the result of an aesthetic and intuitive process, sometimes based solely on precedent. There is no argument that an experienced designer is capable of producing an excellent design in this way. However, not all building designers are experienced, and even fewer have a complete understanding of the fundamental physical processes involved in building performance. These processes can be complex, often highly inter-related and sometimes even counter-intuitive. It is the central argument of this thesis that the needs of the building designer are quite different from the needs of the building services engineer, and that existing building design and performance analysis tools serve these needs poorly. It will be argued that the extensive quantitative input required by such tools acts to produce a psychological separation between the act of design and the act of analysis. At the conceptual stage, building geometry is fluid and subject to constant change, with solid quantitative information relatively scarce. Having to measure off surface areas or search out the emissivity of a particular material forces designers to think mathematically at a time when they are thinking intuitively. It is, however, at this intuitive stage that the greatest potential exists for performance efficiencies and environmental economies. The right orientation and fenestration choice can halve the air-conditioning requirement. Incorporating passive solar elements and natural ventilation pathways can eliminate it altogether. The building form can even be designed to provide shading using its own fabric, without any need for additional structure or applied shading. It is significantly more difficult and costly to retrofit these features at a later stage in a project’s development. If the role of the design tool is to serve the design process, then a new approach is required to accommodate the conceptual phase. This thesis presents a number of ideas on what that approach may be, accompanied by some example software that demonstrates their implementation.
series thesis:PhD
more http://www.squ1.com/site.html
last changed 2003/11/28 07:33

_id cc51
authors Schnier, T. and Gero, J.S.
year 1997
title Dominant and recessive genes in evolutionary systems applied to spatial reasoning
source A. Sattar (Ed.), Advanced Topics in Artificial Intelligence: 10th Australian Joint Conference on Artificial Intelligence AI97 Proceedings, Springer, Heidelberg, pp. 127-136
summary Learning genetic representation has been shown to be a useful tool in evolutionary computation. It can reduce the time required to find solutions and it allows the search process to be biased towards more desirable solutions. Learning genetic representation involves the bottom-up creation of evolved genes from either original (basic) genes or from other evolved genes, and the introduction of those into the population. The evolved genes effectively protect combinations of genes that have been found useful from being disturbed by the genetic operations (cross-over, mutation). However, this protection can rapidly lead to situations where evolved genes interlock in such a way that few or no genetic operations are possible on some genotypes. To prevent this interlocking, previous implementations only allow the creation of evolved genes from genes that are direct neighbours on the genotype and therefore form continuous blocks. In this paper it is shown that the notion of dominant and recessive genes can be used to remove this limitation. Using more than one gene at a single location makes it possible to construct genetic operations that can separate interlocking evolved genes. This allows the use of non-continuous evolved genes with only minimal violations of the protection of evolved genes from those operations. As an example, this paper shows how evolved genes with dominant and recessive genes can be used to learn features from a set of Mondrian paintings. The representation can then be used to create new designs that contain features of the examples. The Mondrian paintings can be coded as a tree, where every node represents a rectangle division, with values for direction, position, line-width and colour. The modified evolutionary operations allow the system to create non-continuous evolved genes, for example associating two divisions with thin lines, without specifying other values. Analysis of the behaviour of the system shows that about one in ten genes is a dominant/recessive gene pair. This shows that while dominant and recessive genes are important to allow the use of non-continuous evolved genes, they do not occur often enough to seriously violate the protection of evolved genes from genetic operations.
keywords Evolutionary Systems, Genetic Representations
series other
last changed 2003/04/06 07:24
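
Below is a minimal sketch of the dominant/recessive idea in the record above: a locus may carry two genes, only the dominant one is expressed, and an operator can act on the recessive copy without disturbing a protected, expressed gene. The encoding is an illustrative assumption, not the paper's actual representation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Locus:
    dominant: str                    # expressed in the phenotype
    recessive: Optional[str] = None  # carried, available to operators

def express(genotype: list[Locus]) -> list[str]:
    """The phenotype sees only the dominant gene at each locus."""
    return [locus.dominant for locus in genotype]

def point_mutate(genotype: list[Locus], i: int, new_gene: str) -> None:
    """Mutate the recessive slot where one exists, leaving the expressed
    (possibly evolved and protected) gene intact."""
    if genotype[i].recessive is not None:
        genotype[i].recessive = new_gene
    else:
        genotype[i].dominant = new_gene

g = [Locus("thin-line-division", "thick-line-division"), Locus("red")]
point_mutate(g, 0, "no-division")
print(express(g))   # ['thin-line-division', 'red'], expression unchanged
```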

_id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form: for example, the outline of a mandala is a form. 2. Color scheme: for example, colors selected from autumn leaves of an oak tree. 3. Interpretation: for example, Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but, just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example, if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes simply as closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather, maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons (Figure 1: Mandala bred with an array of regular polygons). I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic (Figure 2: Mandala interpreted with arabesques; Figure 3: Trellis interpreted with "graphic ivy"; Figure 4: Regular dots interpreted as "sparks"). 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as: 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in the future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science-fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
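
The summary above represents a shape's "genes" as the point list of a closed polygon and reports that naive coordinate crossover tended to produce amorphous blobs. Below is a sketch of one such naive operator, resampling both parents to a common length and interpolating coordinate pairs; this is one plausible reading of the methods tried, not the author's actual code.

```python
import math

def resample(points, n):
    """Pick n points spaced evenly along the parent's vertex list."""
    return [points[int(i * len(points) / n)] for i in range(n)]

def cross(parent_a, parent_b, n=100, t=0.5):
    """Child polygon: pointwise interpolation between the two parents."""
    a, b = resample(parent_a, n), resample(parent_b, n)
    return [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
            for (ax, ay), (bx, by) in zip(a, b)]

# A circle as a 100-sided regular polygon, crossed with a square.
circle = [(math.cos(2 * math.pi * i / 100), math.sin(2 * math.pi * i / 100))
          for i in range(100)]
square = [(1, -1), (1, 1), (-1, 1), (-1, -1)]
child = cross(circle, square)   # a rounded-square blend of the parents
print(child[:2])
```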

_id 666e
authors Compagnon, R.
year 1997
title The Radiance simulation software in the architecture teaching context
source Proceedings of the 2nd Florence International conference for Teachers of Architecture. Firenze
summary Two methods of introducing the Radiance lighting and daylighting simulation software to architecture students in a relatively short time are presented. The production of visual teaching material using the same software is also discussed.
series other
last changed 2003/04/23 15:50

_id 6a37
authors Fowler, Thomas and Muller, Brook
year 2002
title Physical and Digital Media Strategies For Exploring ‘Imagined’ Realities of Space, Skin and Light
source Thresholds - Design, Research, Education and Practice, in the Space Between the Physical and the Virtual [Proceedings of the 2002 Annual Conference of the Association for Computer Aided Design In Architecture / ISBN 1-880250-11-X] Pomona (California) 24-27 October 2002, pp. 13-23
doi https://doi.org/10.52842/conf.acadia.2002.013
summary This paper will discuss an unconventional methodology for using physical and digital media strategies in a tightly structured framework for the integration of Environmental Control Systems (ECS) principles into a third-year design studio. An interchangeable use of digital media and physical material enabled architectural explorations of rich tactile and luminous engagement. The principles that provide the foundation for integrative strategies between a design studio and a building technology course spring from the Bauhaus tradition, where a systematic approach to craftsmanship and visual perception is emphasized. Focusing particularly on color, light, texture and materials, Josef Albers explored the assemblage of found objects, transforming these materials into unexpected dynamic compositions. Moholy-Nagy developed a technique called the photogram, or camera-less photograph, to record the temporal movements of light. Wassily Kandinsky developed a method of analytical drawing that breaks a still-life composition into diagrammatic forces to express tension and geometry. These schematic diagrams provide a method for students to examine and analyze the implications of element placements in space (Bermudez, Neiman 1997). Gyorgy Kepes's Language of Vision provides a primer for learning basic design principles. Kepes argued that the perception of a visual image needs a process of organization; according to Kepes, the experience of an image is "a creative act of integration". All of these principles provide the framework for the studio investigation. The quarter started with a series of intense short workshops that used an interchangeable mix of digital and physical media to focus on ECS topics such as daylighting, electric lighting and skin vocabulary, leading students to consider these components as part of their form-making inspiration. In integrating ECS components with the design studio, a nine-step methodology was established to provide students with a compelling and tangible framework for design. Examples of student work will be presented for the two times this course was offered (2001/02) to show how exercises were linked to allow for a clear design progression.
series ACADIA
last changed 2022/06/07 07:51

_id 02e4
authors Groh, Paul H.
year 1997
title Computer Visualization as a Tool for the Conceptual Understanding of Architecture
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 243-248
doi https://doi.org/10.52842/conf.acadia.1997.243
summary A good piece of architecture contains many levels of interrelated complexity. Understanding these levels and their interrelationship is critical to the understanding of a building to both architects and non-architects alike. A building's form, function, structure, materials, and details all relate to and impact one another. By selectively dissecting and taking apart buildings through their representations, one can carefully examine and understand the interrelationship of these building components.

With the recent introduction of computer graphics, much attention has been given to the representation of architecture. Floor plans and elevations have remained relatively unchanged, while digital animation and photorealistic renderings have become exciting new means of representation. A problem with the majority of this work, and especially photorealistic rendering, is that it represents the building as an image and concentrates on how a building looks as opposed to how it works. Oftentimes this "look" is artificial, expressing the incapacity of programs (or their users) to represent the complexities of materials, lighting and perspective. By using digital representation in a descriptive, less realistic way, one can explore the rich complexities and interrelationships of architecture. Instead of representing architecture as a finished product, it is possible to represent the ideas and concepts of the project.

series ACADIA
last changed 2022/06/07 07:51

_id cc8e
authors Richens, P.
year 1997
title Image Processing for Urban Scale Environmental Modelling
source Proceedings Fifth International IBPSA Conference: Building Simulation ’97 (Prague). International Building Performance Simulation Association
summary If a map of a city is encoded as a Digital Elevation Model, it becomes amenable to image-processing software, such as the public-domain NIH Image application. Standard techniques can be used to measure plan areas and volumes and simple macros can be devised to measure perimeter length and wall areas. A macro for calculating shadow volumes is elaborated for the simulation of solar gains and daylight, including indirect lighting, leading to the possibility of an image-based urban-scale environmental model.
series other
more http://www.arct.cam.ac.uk/research/pubs/html/rich97b/
last changed 2000/03/05 19:05
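
The shadow-volume macro described in the record above can be pictured as ray-marching over the height field: a cell is in shadow if the line toward the sun passes below the terrain somewhere along its path. Below is a minimal sketch on an invented grid; the cell size, sun angles and test data are assumptions, not from the paper.

```python
import numpy as np

def shadow_mask(dem: np.ndarray, sun_azimuth_deg: float,
                sun_altitude_deg: float, cell: float = 1.0) -> np.ndarray:
    """True where a DEM cell is shadowed for the given sun position."""
    az, alt = np.radians(sun_azimuth_deg), np.radians(sun_altitude_deg)
    dx, dy = np.sin(az), np.cos(az)   # horizontal step toward the sun
    rise = np.tan(alt) * cell         # height the sun ray gains per step
    rows, cols = dem.shape
    shadow = np.zeros_like(dem, dtype=bool)
    for r in range(rows):
        for c in range(cols):
            h, x, y = dem[r, c], float(c), float(r)
            while True:
                x += dx
                y += dy
                h += rise             # march one cell toward the sun
                ri, ci = int(round(y)), int(round(x))
                if not (0 <= ri < rows and 0 <= ci < cols):
                    break             # ray left the map without blocking: lit
                if dem[ri, ci] > h:
                    shadow[r, c] = True   # terrain blocks the ray: in shadow
                    break
    return shadow

dem = np.zeros((20, 20))
dem[10, 10] = 15.0   # one tall building on flat ground
print(shadow_mask(dem, sun_azimuth_deg=180, sun_altitude_deg=30).sum())
```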

_id 3674
authors Richens, P.
year 1997
title Image Processing for Urban Scale Environmental Modelling
source Proceedings of the International Conference Building Simulation 97 - Prague, 163-171
summary If a map of a city is encoded as a Digital Elevation Model, it becomes amenable to image-processing software, such as the public-domain NIH Image application. Standard techniques can be used to measure plan areas and volumes and simple macros can be devised to measure perimeter length and wall areas. A macro for calculating shadow volumes is elaborated for the simulation of solar gains and daylight, including indirect lighting, leading to the possibility of an image-based urban-scale environmental model.
series other
last changed 2003/04/23 15:50

_id c1ad
authors Cheng, Nancy Yen-wen
year 1997
title Teaching CAD with Language Learning Methods
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 173-188
doi https://doi.org/10.52842/conf.acadia.1997.173
summary By looking at computer aided design as design communication we can use pedagogical methods from the well-developed discipline of language learning. Language learning breaks down a complex field into attainable steps, showing how learning strategies and attitudes can enhance mastery. Balancing the linguistic emphases of organizational analysis, communicative intent and contextual application can address different learning styles. Guiding students in learning approaches from language study will equip them to deal with constantly changing technology.

From overall curriculum planning to specific exercises, language study provides a model for building a learner-centered education. Educating students about the learning process, such as the variety of metacognitive, cognitive and social/affective strategies can improve learning. At an introductory level, providing a conceptual framework and enhancing resource-finding, brainstorming and coping abilities can lead to threshold competence. Using kit-of-parts problems helps students to focus on technique and content in successive steps, with mimetic and generative work appealing to different learning styles.

Practicing learning strategies on realistic projects hones the ability to connect concepts to actual situations, drawing on resource-usage, task management, and problem management skills. Including collaborative aspects in these projects provides the motivation of a real audience while linking academic study to practical concerns. Examples from architectural education illustrate how the approach can be implemented.

series ACADIA
last changed 2022/06/07 07:55

_id 426f
authors Colajanni, Benedetto and Pellitteri, Giuseppe
year 1997
title Image Recognition: from Syntax to Semantics
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
doi https://doi.org/10.52842/conf.ecaade.1997.x.n7x
summary In a previous paper the authors presented an analyser of simple architectural images. It works at the syntactical level inasmuch as it is able to detect the elementary components of the images and to perform on them some analyses regarding their reciprocal positions and their combinations.

Here we present a second step in the development of the analyser: the implementation of some semantic capabilities. The most elementary level of semantics is the simple recognition of each object present in the architectural image, which in turn means attributing to each object the name of the class of similar objects to which it is supposed to pertain. While at the syntactical level pertinence to a class implies the identity of an object with the class prototype, at the semantic level this is not compulsory: objects having approximately the same shape can pertain to the same class, that is, have the same architectural meaning. Consequently, in order to detect the pertinence of an object to a class, that is, to give it an architectural meaning, two things are necessary: a database containing the class prototypes to which the recognized objects are to be assigned, and a tool able to "measure" the difference between two shapes.

keywords Image Analysis, Semantics
series eCAADe
more http://info.tuwien.ac.at/ecaade/proc/pell/pell.htm
last changed 2022/06/07 07:50
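
The record above calls for a database of class prototypes plus a tool able to "measure" the difference between two shapes. Below is a minimal stand-in for such a measure, mean point-to-point distance between resampled outlines, followed by nearest-prototype classification; the analyser's real metric is not specified in the abstract.

```python
import math

def resample(outline, n=64):
    """Pick n points spaced evenly along the outline's vertex list."""
    return [outline[int(i * len(outline) / n)] for i in range(n)]

def shape_distance(a, b, n=64):
    """Mean pointwise distance between two resampled outlines."""
    ra, rb = resample(a, n), resample(b, n)
    return sum(math.dist(p, q) for p, q in zip(ra, rb)) / n

def classify(outline, prototypes: dict) -> str:
    """Return the class whose prototype outline is nearest."""
    return min(prototypes, key=lambda k: shape_distance(outline, prototypes[k]))

prototypes = {
    "square window": [(0, 0), (1, 0), (1, 1), (0, 1)],
    "door":          [(0, 0), (1, 0), (1, 2), (0, 2)],
}
print(classify([(0, 0), (1.1, 0), (1, 1.05), (0, 1)], prototypes))  # square window
```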

_id ddss9829
authors De Hoog, J., Hendriks, N.A. and Rutten, P.G.S.
year 1998
title Evaluating Office Buildings with MOLCA (Model for Office Life Cycle Assessment)
source Timmermans, Harry (Ed.), Fourth Design and Decision Support Systems in Architecture and Urban Planning (Maastricht, the Netherlands), ISBN 90-6814-081-7, July 26-29, 1998
summary MOLCA (Model for Office Life Cycle Assessment) is a project that aims to develop a tool enabling designers and builders to evaluate the environmental impact of their designs of office buildings. The model used is based on guidelines given by ISO 14000, using the so-called Life Cycle Assessment (LCA) method. The MOLCA project started in 1997 and will be finished in 2001, resulting in the aforementioned tool. MOLCA is a module within broader research conducted at the Eindhoven University of Technology aiming to reduce design risks to a minimum in the early design stages. Since the MOLCA project started, two major case studies have been carried out: one into the difference in environmental load caused by using concrete and steel roof systems respectively, and the role of recycling; the second focused on biases in LCA data and how to handle them. For the simulations a computer model named SimaPro was used, applying the widely accepted method developed by CML (Centre for the Environment, Leiden, the Netherlands). With this model different life-cycle scenarios were studied and evaluated. Based on those two case studies and a third one into an office area, a first model has been developed. A bottleneck in this field of study is estimating average recycling and re-use percentages of the total flow of material waste in the building sector and collecting reliable process data. Another problem within LCA studies is estimating the reliability of the input data and modelling uncertainties. All these topics will be the subject of further analysis.
keywords Life-Cycle Assessment, Office Buildings, Uncertainties in LCA
series DDSS
last changed 2003/08/07 16:36
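
The LCA method underlying MOLCA aggregates an impact indicator over life-cycle stages, with recycling entering as an end-of-life credit. Below is a minimal sketch of that accounting; the stage names, figures and credit model are invented for illustration.

```python
def life_cycle_impact(stages: dict, recycled_fraction: float,
                      recycling_credit: float) -> float:
    """Total impact (e.g. kg CO2-eq) = sum over life-cycle stages minus a
    credit for the fraction of material recycled at end of life."""
    total = sum(stages.values())
    return total - recycled_fraction * recycling_credit

# Invented figures for one roof system scenario:
roof = {"production": 120.0, "transport": 8.0, "use": 40.0, "demolition": 5.0}
print(life_cycle_impact(roof, recycled_fraction=0.6, recycling_credit=50.0))
# 173.0 - 30.0 = 143.0
```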

_id cebb
authors Do, Ellen Yi-Luen and Gross, Mark D.
year 1997
title Tools for Visual and Spatial Analysis of CAD Models - Implementing Computer Tools as a Means to Thinking about Architecture
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 189-202
summary The paper describes a suite of spatial analysis programs to support architectural design. Building these computational tools not only supports the task of spatial analysis for designers but also helps us think about spatial perception. We argue that building design software is an important vehicle for understanding architecture, using our efforts to build various visual and spatial analysis tools as examples.
series CAAD Futures
last changed 2004/10/04 07:49
