CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures

Hits 1 to 20 of 482

_id 39fb
authors Langton, C.G.
year 1996
title Artificial Life
source Boden, M. A. (1996). The Philosophy of Artificial Life, 39-94. New York and Oxford: Oxford University Press
summary Artificial Life contains a selection of articles from the first three issues of the journal of the same name, chosen so as to give an overview of the field, its connections with other disciplines, and its philosophical foundations. It is aimed at those with a general background in the sciences: some of the articles assume a mathematical background, or basic biology and computer science. I found it an informative and thought-provoking survey of a field around whose edges I have skirted for years. Many of the articles take biology as their starting point. Charles Taylor and David Jefferson provide a brief overview of the uses of artificial life as a tool in biology. Others look at more specific topics: Kristian Lindgren and Mats G. Nordahl use the iterated Prisoner's Dilemma to model cooperation and community structure in artificial ecosystems; Peter Schuster writes about molecular evolution in simplified test tube systems and its spin-off, evolutionary biotechnology; Przemyslaw Prusinkiewicz presents some examples of visual modelling of morphogenesis, illustrated with colour photographs; and Michael G. Dyer surveys different kinds of cooperative animal behaviour and some of the problems synthesising neural networks which exhibit similar behaviours. Other articles highlight the connections of artificial life with artificial intelligence. A review article by Luc Steels covers the relationship between the two fields, while another by Pattie Maes covers work on adaptive autonomous agents. Thomas S. Ray takes a synthetic approach to artificial life, with the goal of instantiating life rather than simulating it; he manages an awkward compromise between respecting the "physics and chemistry" of the digital medium and transplanting features of biological life. Kunihiko Kaneko looks to the mathematics of chaos theory to help understand the origins of complexity in evolution. In "Beyond Digital Naturalism", Walter Fontana, Guenter Wagner and Leo Buss argue that the test of artificial life is to solve conceptual problems of biology and that "there exists a logical deep structure of which carbon chemistry-based life is a manifestation"; they use lambda calculus to try and build a theory of organisation.
series other
last changed 2003/04/23 15:14

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an "image idea generator". I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was "out of nothing". What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an "easy to use graphical effects program", but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques. Figure 3 Trellis interpreted with "graphic ivy". Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
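The polygon-as-genes representation described in this abstract can be made concrete with a small sketch. The following is a minimal illustration only, assuming a naive vertex-interpolation crossover (one of several ways coordinate "genes" might be combined); the function names are hypothetical and not taken from Gliftic itself.

import math

def circle_genes(n=100, radius=1.0):
    # A circle approximated as a regular n-sided closed polygon.
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def resample(genes, n):
    # Resample a closed polygon to n vertices so two parents can be aligned.
    return [genes[int(i * len(genes) / n)] for i in range(n)]

def crossover(parent_a, parent_b, weight=0.5, n=100):
    # Breed a child shape by interpolating corresponding vertices.
    # As the abstract notes, such naive combinations tend to blur
    # the family characteristics of the parents.
    a, b = resample(parent_a, n), resample(parent_b, n)
    return [(weight * ax + (1 - weight) * bx,
             weight * ay + (1 - weight) * by)
            for (ax, ay), (bx, by) in zip(a, b)]

# Example: cross a circle with a square outline.
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
child = crossover(circle_genes(), square)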
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id acd7
authors Yueh, Shing
year 1996
title Architecture Design as Two Searches - Knowledge of Spatial Organization and Knowledge of Shape in Design Process
doi https://doi.org/10.52842/conf.caadria.1996.217
source CAADRIA ‘96 [Proceedings of The First Conference on Computer Aided Architectural Design Research in Asia / ISBN 9627-75-703-9] Hong Kong (Hong Kong) 25-27 April 1996, pp. 217-221
summary In design research, design thinking has gradually become an important direction of study. Early research on design thinking was held back by the limited state of academic research into human thinking, which prevented deeper investigation of the field. However, with the remarkable development of a variety of subjects such as management science, cognitive psychology and artificial intelligence, researchers engaged in design thinking now have clearer methodologies and a more solid background for studying the design thinking process.
series CAADRIA
last changed 2022/06/07 07:57

_id e29d
authors Arvesen, Liv
year 1996
title LIGHT AS LANGUAGE
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary With the unlimited supply of electric light, our surroundings may very easily be illuminated too strongly. Too much light is unpleasant for our eyes, and a high level of light in many cases disturbs the perception of form. Just as in a forest, we need shadows, contrasts and variation when we compose with light. If we focus on the term compose, it is natural to conceive of our environment as a whole. In fact, this is not only aesthetically important; it is true in a physical context as well. Inspired by old windows, several similar examples have been built in the Trondheim Full-scale Laboratory, where depth is obtained by constructing shelves on each side of the opening. When daylight is fading, indirect artificial light from above gradually lightens the window. The opening is perceived as a space of light both during the day and when it is dark outside.

Another of the built examples at Trondheim University which will be presented is a doctor's waiting room. It is a case study of special interest because such rooms often appear to be neglected areas. Let us start by asking: what do we have in common when we are waiting to see a doctor? We are nervous and we sometimes feel miserable. Analysing the situation, we understand the need for an interior that cares for our state of mind. The level of light is important in this situation; light has to speak softly. Instead of the ordinary strong light in the middle of the ceiling, several spots are selected to light the small tables separating the seats. The separation is intended to give a feeling of privacy. Through the low row of reflected planes we experience an intimate and warming atmosphere in the room. A special place for children contributes to the total impression of calm. In this corner the insides of some shelves are lit by indirect light, an effect which puts emphasis on the small scale suitable for a child. It also demonstrates the good results of variation. The light setting in this room shows how light is "caught" in two different ways.

keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:34

_id d7eb
authors Bharwani, Seraj
year 1996
title The MIT Design Studio of the Future: Virtual Design Review Video Program
source Proceedings of ACM CSCW'96 Conference on Computer-Supported Cooperative Work 1996 p.10
summary The MIT Design Studio of the Future is an interdisciplinary effort to focus on geographically distributed electronic design and work group collaboration issues. The physical elements of this virtual studio comprise networked computer and videoconferencing connections among electronic design studios at MIT in Civil and Environmental Engineering, Architecture and Planning, Mechanical Engineering, the Lab for Computer Science, and the Rapid Prototyping Lab, with WAN and other electronic connections to industry partners and sponsors to take advantage of non-local expertise and to introduce real design and construction and manufacturing problems into the equation. This prototype collaborative design network is known as StudioNet. The project is looking at aspects of the design process to determine how advanced technologies impact the process. The first experiment within the electronic studio setting was the "virtual design review", wherein jurors for the final design review were located in geographically distributed sites. The video captures the results of that project, as does a paper recently published in the journal Architectural Research Quarterly (Cambridge, UK; Vol. 1, No. 2; Dec. 1995).
series other
last changed 2002/07/07 16:01

_id eb87
authors Bhavnani, S.K.
year 1996
title How Architects Draw with Computers: A Cognitive Analysis of Real-World CAD Interactions
source Carnegie Mellon University, School of Architecture and School of Computer Science
summary New media throughout history have passed through a period of transition during which users and technologists took many years to understand and exploit the medium's potential. CAD appears to be passing through a similar period of transition; despite huge investments by vendors and users, CAD productivity remains difficult to achieve. To investigate if history can provide any insights into this problem, this thesis begins with an examination of well-known examples from history. The analysis revealed that, over time, users had developed efficient strategies which were based on powers and limitations of tools; delegation strategies exploited powers provided by tools, and circumvention strategies attempted to overcome their limitations. These insights on efficient strategies were used to investigate the CAD productivity problem based on four research questions:

1. How do architects currently use CAD systems to produce drawings?

2. What are the effects of current CAD usage on product and performance?

3. What are the possible causes of current CAD usage?

4. What are the capabilities of the CAD medium and how can they be used efficiently?

The above four questions were addressed through the qualitative, quantitative, and cognitive analysis of data collected during an ethnographic study of architects working in their natural environment. The qualitative and quantitative analysis revealed that users missed many opportunities to use strategies that delegated iteration to the computer. The cognitive analysis revealed that missed opportunities to use such delegation strategies caused an increase in execution time, and an increase in errors many of which went undetected leading to the production of inaccurate drawings. These analyses pointed to plausible cognitive and contextual explanations for the inefficient use of CAD systems, and to a framework to identify and teach efficient CAD strategies. The above results were found to be neither unique to the CAD domain, nor to the office where the data were collected. The generality of these results motivated the identification of seven claims towards a general theory to explain and identify efficient strategies for a wide range of devices. This thesis contributes to the field of architecture by providing a detailed analysis of real-world CAD usage, and an approach to improve the performance of CAD users. The thesis also contributes to the field of human-computer interaction by demonstrating the generality of these results and by laying the framework for a general theory of efficient strategies which could be used to improve the performance of users of current and future computer applications.

series thesis:PhD
email
last changed 2003/04/15 13:36

_id 0dfb
authors Bovill, C.
year 1996
title Fractal Geometry in Architecture and Design
source Design Science Collection, Harvard University, Boston
summary My intention in this book was to explain the essence of fractal geometry to the design community. Many of the fractals can be drawn by hand and fractal rhythms for use in design can be derived from musical scores. This approach was taken to make the material more approachable. Much of the literature on fractal geometry is hidden behind computer programs or complex mathematical notation systems.
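A worked example of the kind of measure the book makes approachable is the box-counting estimate of fractal dimension. The sketch below is a generic illustration only, not code from the book; it assumes the outline under study is available as a set of 2-D points.

import math

def box_counting_dimension(points, box_sizes=(1.0, 0.5, 0.25, 0.125)):
    # Estimate the fractal dimension D as the slope of log N(s) versus log(1/s),
    # where N(s) is the number of boxes of size s that the outline touches.
    logs = []
    for s in box_sizes:
        occupied = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        logs.append((math.log(1 / s), math.log(len(occupied))))
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))

# Example: points along a straight line give a dimension close to 1,
# while a rougher outline (e.g. a digitised elevation) gives a value between 1 and 2.
line = [(i / 200.0, 0.0) for i in range(200)]
print(box_counting_dimension(line))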
series other
last changed 2003/04/23 15:14

_id 029b
authors Bryson, Steve
year 1996
title Virtual Reality in Scientific Visualization
source Communications of the ACM. Vol.39, No.5. pp. 62-71
summary Immersing the user in the solution, virtual reality reveals the spatially complex structures in computational science in a way that makes them easy to understand and study. But beyond adding a 3D interface, virtual reality also means greater computational complexity.
series journal paper
last changed 2003/04/23 15:50

_id 22fd
authors Chou, Wen Huey
year 1996
title An Empirical Study of 2d Static Computer Art: An Investigation of How Contemporary Computer Art is Affected by Media
doi https://doi.org/10.52842/conf.caadria.1996.081
source CAADRIA ‘96 [Proceedings of The First Conference on Computer Aided Architectural Design Research in Asia / ISBN 9627-75-703-9] Hong Kong (Hong Kong) 25-27 April 1996, pp. 81-89
summary We are in the act of forming the technology and electronics society: a society whose cultural, psychological, social and economic facets take shape according to the development of technology and electronics, especially in the fields of computing and information. The influence of these powerful capabilities, produced by the bit, is felt across the sciences and the social disciplines; in fact, it has already invaded the artistic world. It did not take long after the birth of the computer for it to become a new tool for artistic production; it revolutionized traditional production habits, production procedures, methods of expression and the workplace of artistic creativity, bringing tides of change to the artistic context and to attitudes towards the study of the arts.
series CAADRIA
last changed 2022/06/07 07:56

_id 20ff
id 20ff
authors Derix, Christian
year 2004
title Building a Synthetic Cognizer
source Design Computation Cognition conference 2004, MIT
summary Understanding ‘space’ as a structured and dynamic system can provide us with insight into the central concept in the architectural discourse that so far has proven to withstand theoretical framing (McLuhan 1964). The basis for this theoretical assumption is that space is not a void left by solid matter but instead an emergent quality of action and interaction between individuals and groups with a physical environment (Hillier 1996). In this way it can be described as a parallel distributed system, a self-organising entity. Extrapolating from Luhmann’s theory of social systems (Luhmann 1984), a spatial system is autonomous from its progenitors, people, but remains intangible to a human observer due to its abstract nature and therefore has to be analysed by computed entities, synthetic cognisers, with the capacity to perceive. This poster shows an attempt to use another complex system, a distributed connected algorithm based on Kohonen’s self-organising feature maps – SOM (Kohonen 1997), as a “perceptual aid” for creating geometric mappings of these spatial systems that will shed light on our understanding of space by not representing space through our usual mechanics but by constructing artificial spatial cognisers with abilities to make spatial representations of their own. This allows us to be shown novel representations that can help us to see new differences and similarities in spatial configurations.
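As a rough indication of the mechanism named in this abstract, the following is a minimal self-organising map training loop in the spirit of Kohonen's SOM. It is an illustrative sketch only, not the authors' implementation; the feature vectors, grid size and decay schedules are assumptions.

import random

def train_som(samples, grid_w=10, grid_h=10, dim=2, epochs=2000,
              lr0=0.5, radius0=5.0):
    # Each grid node holds a weight vector; training pulls the weights of the
    # best-matching unit (and its neighbours) towards each presented sample.
    nodes = {(i, j): [random.random() for _ in range(dim)]
             for i in range(grid_w) for j in range(grid_h)}
    for t in range(epochs):
        x = random.choice(samples)
        lr = lr0 * (1 - t / epochs)                    # decaying learning rate
        radius = max(1.0, radius0 * (1 - t / epochs))  # shrinking neighbourhood
        bmu = min(nodes, key=lambda k: sum((w - xi) ** 2
                                           for w, xi in zip(nodes[k], x)))
        for (i, j), w in nodes.items():
            dist2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2
            if dist2 <= radius ** 2:
                influence = lr * (1 - dist2 / (radius ** 2 + 1e-9))
                nodes[(i, j)] = [wk + influence * (xk - wk)
                                 for wk, xk in zip(w, x)]
    return nodes

# Example: let the map organise itself over positions sampled in a 4 m x 6 m room.
samples = [[random.uniform(0, 4), random.uniform(0, 6)] for _ in range(500)]
som = train_som(samples)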
keywords architectural design, neural networks, cognition, representation
series other
type poster
email
more http://www.springer.com/computer/ai/book/978-1-4020-2392-7
last changed 2012/09/17 21:13

_id cc4f
authors Donath, Dirk
year 1996
title University CAAD-Education for Architectural Students - A Report on the Realisation of a User-oriented Computer Education at the Bauhaus University Weimar
doi https://doi.org/10.52842/conf.ecaade.1996.143
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 143-154
summary Practically no other field of human creativity is evolving as fast and as innovatively as the development and integration of the computer into every imaginable area. The computer has today become a natural tool in the fields of architecture and spatial planning. The changing form of professional practice, due to the increasing application of computer-assisted work techniques, creates the need, currently being addressed in the education of future architects and town planners, to bring these new media into the realm between architecture - art - and building - science.
series eCAADe
email
more http://www.uni-weimar.de/architektur/InfAR/
last changed 2022/06/07 07:55

_id f5ee
authors Erhorn, H., De Boer, J. and Dirksmueller, M.
year 1997
title ADELINE, an Integrated Approach to Lighting Simulation
source Proceedings of Right Light 4, 4th European Conference on Energy-Efficient Lighting, pp.99-103
summary The use of daylighting and artificial lighting simulation programs to calculate complex systems and models in design practice is often impeded by the fact that operating these programs, especially entering the model, is extremely complicated and time-consuming. Programs that are easier to use generally do not offer the calculation capabilities required in practice. A second obstacle arises because the lighting calculations often do not allow any statements about interactions with the energetic and thermal performance of the building. Both problems are mainly due to a lack of integration with the design tools of other building design practitioners as well as to insufficient user interfaces. The program package ADELINE (Advanced Daylight and Electric Lighting Integrated New Environment), available since May 1996 as a completely revised version 2.0, presents a promising approach to solving these problems. This contribution describes the approaches and methods used within the international project IEA Task 21 for the further development of the ADELINE system. The aim of this work is a further improvement of the user interfaces, based on the inclusion of new dialogs and on porting the program system from MS-DOS to the Windows NT platform. Additional focus is placed on making pragmatic use of recent developments in information technology and of experience gained in other projects on integrated building design systems, such as EU-COMBINE. An integrated building design system with open standardized interfaces is to be achieved, among other things, by using ISO STEP formats, database technologies and a consistent object-oriented design.
series other
last changed 2003/04/23 15:50

_id db00
authors Espina, Jane J.B.
year 2002
title Base de datos de la arquitectura moderna de la ciudad de Maracaibo 1920-1990 [Database of the Modern Architecture of the City of Maracaibo 1920-1990]
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 133-139
summary The purpose of this report is to present the achievements obtained in using information and communication technologies in architecture, through the construction of a database that registers information on the modern architecture of the city of Maracaibo from 1920 to 1990, with reference to the buildings located in the 5 de Julio sector and to the most outstanding planners and their work, represented in digital format. The objective of this investigation was to elaborate a database for registering information on the modern architecture of Maracaibo in the period 1920-1990, by designing an automated tool to organize the data related to the buildings, parcels and planners of the city. The investigation was carried out in three methodological stages: a) gathering and classification of the information on the buildings and planners of modern architecture in order to elaborate the databases; b) design of the databases for the organization of the information; and c) design of the queries, reports and the start menu. For processing the data, files were generated in programs such as AutoCAD R14 and 2000, Microsoft Word, Microsoft PowerPoint, Microsoft Access 2000, CorelDRAW 9.0 and Corel PHOTO-PAINT 9.0. The investigation is related to the work developed since 1999 in the Graphic Calculation II course, belonging to the Department of Communication of the School of Architecture of the Faculty of Architecture and Design of the University of Zulia (FADLUZ), using part of the information obtained from student work generated with CAD systems for the three-dimensional representation of buildings of historical relevance in the modern architecture of Maracaibo, classified in the work The Other City, and producing different types of isometric views, perspectives, photorealistic representations, plans and facades, among others. Regarding the theme of this investigation, no previous antecedents are known in our environment; this is the first time that digital graphics have been applied to the work carried out by the architects of "The Other City, the genesis of the oil city of Maracaibo" (1994); hence the value of this research for the fields of architecture and computer science. It should be noted that databases already exist in the fields of architecture and design, as do websites with information on architects and architectural works (Montagu, 1999). At the University of Zulia, specifically in the Faculty of Architecture and Design, two works related to this theme were carried out, in 1995 and 1996: in the first, a system was designed to visualize, classify and analyze, from the architectural point of view, some historical buildings of Maracaibo; in the second, an automated documentary information system was created for the built properties within the urban area of Maracaibo. Internationally, the first such database was developed in Argentina: the database of Modern and Contemporary Architecture "Datarq 2000", elaborated by Prof. Arturo Montagú of the University of Buenos Aires, whose general objective was the use of new technologies for data processing in architecture and design (Montagú, op. cit.).
With the database, the aim is to incorporate a complementary and alternative methodology for using the information that is habitually employed in the teaching of architecture. On concluding this investigation, the following was achieved: 1) analysis of projects of modern architecture, some of which form part of the historical patrimony of Maracaibo; 2) organized records of textual data (historical, formal, spatial and technical) and graphic material (plans, facades, perspectives and photographs, among others) of the moments of modern architecture in the city, together with general data and the most relevant characteristics of the buildings, general data on the planners with their most important works, and information on the parcels where the buildings are located; 3) construction in digital format and development of photorealistic representations of architectural projects already built. It is important to highlight the use of information and communication technologies in this investigation, since it makes it possible to transfer to digital media part of the information on the modern buildings that characterized the city of Maracaibo at the end of the 20th century, buildings which in recent decades have undergone changes and in some cases have disappeared, destroying part of the modern historical patrimony of the city; hence the need to register and systematize the graphic information on these buildings in digital format. The work also demonstrates the importance of the computer and of computer science in the representation and comprehension of the buildings of modern architecture, linking texts, images, mapping, 3D models and information organized in databases, and its relevance from the pedagogical point of view, since it can be used in the teaching of computing and history at university level, supporting learning through new forms of knowledge transmission based on visual information, with students elaborating three-dimensional models or electronic scale models of modern architecture, and in the future serving as support material for virtual reconstructions of buildings that no longer exist or are almost destroyed. In synthesis, the investigation will make it possible to know and register the architecture of Maracaibo of this period, which arose under the parameters of modernity, and through its organization and visualization in digital format it will allow students, professors and other interested parties to access it more quickly and efficiently, constituting a contribution to teaching in the areas of history and graphic calculation. It can also be very useful for the development of future research projects related to this theme and to the restoration of buildings of modernity in Maracaibo.
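The buildings-parcels-planners organisation described above can be pictured as a small relational schema. The sketch below uses SQLite through Python purely as a self-contained illustration; the original work used Microsoft Access 2000, and all table and field names here are hypothetical.

import sqlite3

conn = sqlite3.connect("maracaibo_moderna.db")  # hypothetical file name
conn.executescript("""
CREATE TABLE IF NOT EXISTS planner (
    planner_id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    notable_works TEXT
);
CREATE TABLE IF NOT EXISTS parcel (
    parcel_id INTEGER PRIMARY KEY,
    sector TEXT,            -- e.g. '5 de Julio'
    address TEXT
);
CREATE TABLE IF NOT EXISTS building (
    building_id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    year_built INTEGER,     -- within the 1920-1990 period
    planner_id INTEGER REFERENCES planner(planner_id),
    parcel_id INTEGER REFERENCES parcel(parcel_id),
    historical_notes TEXT,
    image_path TEXT         -- plans, facades, photorealistic renderings
);
""")

# Example query: all registered buildings by a given planner.
rows = conn.execute("""
    SELECT b.title, b.year_built
    FROM building b JOIN planner p ON p.planner_id = b.planner_id
    WHERE p.name = ?""", ("Example Planner",)).fetchall()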
keywords database, digital format, modern architecture, model, mapping
series SIGRADI
email
last changed 2016/03/10 09:51

_id ga0024
id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence in 1991, at the age of 8, being the direct consequence of years of attempts by a group of computer science professionals to use digital computer technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of "Littérature potentielle" deployed by Oulipo in France and Oplepo in Italy (see "La Littérature potentielle (Créations Re-créations Récréations)", published in France by Gallimard in 1973). During the last decade, TEAnO has been involved in the generation of "artistic goods" in aesthetic domains such as literature, music, theatre and painting. In all those artefacts the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets or to generate antonymic music) and sometimes it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered as "manufactured" goods. A great part of such creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the use of the digital computer's massive combinatory power: S+n, edge extraction, phonetic manipulation, re-writing of well-known masterpieces, random generation of plots, etc. Regardless of these apparently simple underlying generation mechanisms, the systematic use of computer-based tools, as well as the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer-based generation of aesthetic goods: (1) the deep structure of an aesthetic work persists even through the most "destructive" manipulations (such as the antonymic transformation of the melody and lyrics of a musical work) and becomes evident as a sort of profound, earliest and distinctive constraint; (2) the intensive flux of computer-generated "raw" material seems to confirm and to bring to our attention the existence of what Walter Benjamin indicated as the different way in which nature talks to a camera and to our eye, and what Franco Vaccari called "technological unconsciousness". Essential references: R. Campagnoli, Y. Hersant, "Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)", 1985; R. Campagnoli, "Oupiliana", 1995; TEAnO, "Quaderno n. 2 Antologia di letteratura potenziale", 1996; W. Benjamin, "Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit", 1936; F. Vaccari, "Fotografia e inconscio tecnologico", 1994.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 0c78
authors Flood, I. and Christophilos, P.
year 1996
title Modeling construction processes using artificial neural networks
source Automation in Construction 4 (4) (1996) pp. 307-320
summary The paper evaluates a neural network approach to modeling the dynamics of construction processes that exhibit both discrete and stochastic behavior, providing an alternative to the more conventional method of discrete-event simulation. The incentive for developing the technique is its potential for (i) facilitating model development in situations where there is limited theory describing the dependence between component processes; and (ii) rapid execution of a simulation through parallel processing. The alternative ways in which neural networks can be used to model construction processes are reviewed and their relative merits are identified. The most promising approach, a recursive method of dynamic modeling, is examined in a series of experiments. These involve the application of the technique to two classes of earthmoving system, the first comprising a push-dozer and a fleet of scrapers, and the second a loader and fleet of haul trucks. The viability of the neural network approach is demonstrated in terms of its ability to model the discrete and stochastic behavior of these classes of construction processes. The paper concludes with an indication of some areas for further development of the technique.
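The recursive style of dynamic modelling described above (predicting the next process state from the current one and feeding the prediction back as input) can be sketched as follows. This is a generic illustration with a tiny linear one-step model standing in for the trained neural network; the state variables and weights are hypothetical.

import random

def step_model(state, weights):
    # One-step predictor: here a small linear map stands in for the trained network.
    return [sum(w * s for w, s in zip(row, state)) for row in weights]

def simulate(initial_state, weights, steps, noise=0.05):
    # Recursive dynamic modelling: each predicted state is fed back as the next
    # input, with Gaussian noise standing in for the stochastic process behaviour.
    trajectory = [initial_state]
    state = initial_state
    for _ in range(steps):
        state = [v + random.gauss(0, noise) for v in step_model(state, weights)]
        trajectory.append(state)
    return trajectory

# Example: two state variables, e.g. the queue at the push-dozer and the number
# of scrapers hauling (values chosen only for illustration).
weights = [[0.9, 0.1], [0.05, 0.95]]
trace = simulate([3.0, 4.0], weights, steps=50)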
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id ab45
authors Gu, Jingwen
year 1996
title Natural Results from Advances in Computer Techniques - CAAD Teaching in China Yesterday, Today and Tomorrow
doi https://doi.org/10.52842/conf.caadria.1996.021
source CAADRIA ‘96 [Proceedings of The First Conference on Computer Aided Architectural Design Research in Asia / ISBN 9627-75-703-9] Hong Kong (Hong Kong) 25-27 April 1996, pp. 21-26
summary Computer science has been one of the most rapidly developing scientific areas in the world since the 1970s. Many new and powerful solutions to engineering and scientific problems are based on computers. The application and teaching of computer techniques are now quickly spreading to almost all fields, including architecture and urban planning. Of course, the advances in the application of computers in particular fields and in teaching differ considerably, for various reasons. CAAD is one of the few fields in which the state of teaching, the ways of teaching and the level reached differ markedly from university to university and from one area or country to another. In this paper the history of CAD and CAAD applications in China is first briefly reviewed. Then the CAAD activities, including teaching and research work at Tongji University, are introduced, and the social, economic, functional, technical and physical factors that affect CAAD teaching are discussed. What is currently included in our CAAD program is also described. Finally, in view of further advances in computer technology, both software and hardware, we outline what CAAD will include, how it will be taught, and how collaborative CAAD research projects will be carried out remotely.
series CAADRIA
email
last changed 2022/06/07 07:51

_id a115
authors Hanna, R.
year 1996
title A Computer-based Approach for Teaching Daylighting at the Early Design Stage
doi https://doi.org/10.52842/conf.ecaade.1996.181
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 181-190
summary This paper has reviewed the literature on the teaching of daylight systems design in architectural education, and found that traditionally such teaching has revolved around the prediction of the Daylight Factor (DF%), i.e. illuminance, via two methods, one studio-based and the other laboratory-based. The former relies on graphical and/or mathematical techniques, e.g. the BRE Protractors, the BRE Tables, Waldram Diagrams, the Pepper-pot diagrams and the BRE formula. The latter tests scale models of buildings under artificial sky conditions (CIE sky). The paper lists the advantages and disadvantages of both methods in terms of compatibility with the design process, time required, accuracy, energy-consumption factors, and visual information.

This paper outlines a proposal for an alternative method for teaching daylight and artificial lighting design for both architectural students and practitioners. It is based on photorealistic images as well as numbers, and employs the Lumen Micro 6.0 programme. This software package is a complete indoor lighting design and analysis programme which generates perspective renderings and animated walk-throughs of the space lighted naturally and artificially.

The paper also presents the findings of an empirical case study to validate Lumen Micro 6.0 by comparing simulated output with field monitoring of horizontal and vertical illuminance and luminance inside the highly acclaimed GSA building in Glasgow. The monitoring station was masterminded by the author and uses the Megatron lighting sensors, Luscar dataloggers and the Easylog analysis software. In addition photographs of a selected design studio inside the GSA building were contrasted with computer generated perspective images of the same space.
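For reference, the Daylight Factor mentioned above is conventionally the ratio of the horizontal illuminance at a point indoors to the simultaneous unobstructed outdoor illuminance under an overcast (CIE) sky, expressed as a percentage. A minimal calculation sketch of this general definition (not the BRE formula or the Lumen Micro method):

def daylight_factor(indoor_lux, outdoor_lux):
    # DF% = (E_indoor / E_outdoor) * 100, both measured under the same overcast sky.
    if outdoor_lux <= 0:
        raise ValueError("outdoor illuminance must be positive")
    return 100.0 * indoor_lux / outdoor_lux

# Example: 200 lux on the working plane against 10,000 lux outdoors gives DF = 2%.
print(daylight_factor(200, 10_000))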

series eCAADe
email
last changed 2022/06/07 07:50

_id 4fc4
authors Jakimowicz, Adam
year 1996
title Towards Affective Architectural Computing: An Additional Element in CAAD
source CAD Creativeness [Conference Proceedings / ISBN 83-905377-0-2] Bialystock (Poland), 25-27 April 1996 pp. 121-135
summary The sphere of computing, in general, is a sphere of confusion. First, computers, thanks to (or because of) the indirect way of communicating "with" them, have not yet become the obvious and natural extension of human abilities - as TV sets, radios or cars already have. This is probably because of the feeling that they are, more or less, for specialists and that they require special knowledge or skills. In a way this is true, but it will surely change within a few years, when they become everyday tools of education at schools or just toys for children. Second, there is also the feeling or wish that every computer is able to do everything we want - from, let us say, writing a letter or washing the dishes to very complex things such as designing architecture. This is the dream of universal artificial intelligence, which should be a perfect servant that not only listens to but also predicts our wishes.
series plCAD
email
last changed 2003/05/17 10:01

_id b6a7
authors Jensen, K.
year 1996
title Coloured Petri Nets: Basic Concepts
source 2nd ed., Springer Verlag, Berlin
summary This book presents a coherent description of the theoretical and practical aspects of Coloured Petri Nets (CP-nets or CPN). It shows how CP-nets have been developed - from being a promising theoretical model to being a full-fledged language for the design, specification, simulation, validation and implementation of large software systems (and other systems in which human beings and/or computers communicate by means of some more or less formal rules). The book contains the formal definition of CP-nets and the mathematical theory behind their analysis methods. However, it has been the intention to write the book in such a way that it also becomes attractive to readers who are more interested in applications than the underlying mathematics. This means that a large part of the book is written in a style which is closer to an engineering textbook (or a users' manual) than it is to a typical textbook in theoretical computer science. The book consists of three separate volumes. The first volume defines the net model (i.e., hierarchical CP-nets) and the basic concepts (e.g., the different behavioural properties such as deadlocks, fairness and home markings). It gives a detailed presentation of many small examples and a brief overview of some industrial applications. It introduces the formal analysis methods. Finally, it contains a description of a set of CPN tools which support the practical use of CP-nets. Most of the material in this volume is application oriented. The purpose of the volume is to teach the reader how to construct CPN models and how to analyse these by means of simulation. The second volume contains a detailed presentation of the theory behind the formal analysis methods - in particular occurrence graphs with equivalence classes and place/transition invariants. It also describes how these analysis methods are supported by computer tools. Parts of this volume are rather theoretical while other parts are application oriented. The purpose of the volume is to teach the reader how to use the formal analysis methods. This will not necessarily require a deep understanding of the underlying mathematical theory (although such knowledge will of course be a help). The third volume contains a detailed description of a selection of industrial applications. The purpose is to document the most important ideas and experiences from the projects - in a way which is useful for readers who do not yet have personal experience with the construction and analysis of large CPN diagrams. Another purpose is to demonstrate the feasibility of using CP-nets and the CPN tools for such projects. Together the three volumes present the theory behind CP-nets, the supporting CPN tools and some of the practical experiences with CP-nets and the tools. In our opinion it is extremely important that these three research areas have been developed simultaneously. The three areas influence each other and none of them could be adequately developed without the other two. As an example, we think it would have been totally impossible to develop the hierarchy concepts of CP-nets without simultaneously having a solid background in the theory of CP-nets, a good idea for a tool to support the hierarchy concepts, and a thorough knowledge of the typical application areas.
series other
last changed 2003/04/23 15:14

_id ab3c
authors Kramer, G.
year 1996
title Mapping a Single Data Stream to Multiple Auditory Variables: A Subjective Approach to Creating a Compelling Design
source Proceedings of the Third International Conference on Auditory Display, Santa Fe Institute
summary Representing a single data variable changing in time via sonification, or using that data to control a sound in some way appears to be a simple problem but actually involves a significant degree of subjectivity. This paper is a response to my own focus on specific sonification tasks (Kramer 1990, 1993) (Fitch & Kramer, 1994), on broad theoretical concerns in auditory display (Kramer 1994a, 1994b, 1995), and on the representation of high-dimensional data sets (Kramer 1991a & Kramer & Ellison, 1991b). The design focus of this paper is partly a response to the others who, like myself, have primarily employed single fundamental acoustic variables such as pitch or loudness to represent single data streams. These simple representations have framed three challenges: Behavioral and Cognitive Science-Can sonifications created with complex sounds changing simultaneously in several dimensions facilitate the formation of a stronger internal auditory image, or audiation, than would be produced by simpler sonifications? Human Factors and Applications-Would such a stronger internal image of the data prove to be more useful from the standpoint of conveying information? Technology and Design-How might these richer displays be constructed? This final question serves as a starting point for this paper. After years of cautious sonification research I wanted to explore the creation of more interesting and compelling representations.
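The core idea of the paper (one data stream driving several auditory variables at once) can be sketched as a simple mapping function. The parameter ranges below are illustrative assumptions, not Kramer's design.

def sonify(value):
    # Map one normalised data value in [0, 1] to several auditory variables at once.
    v = max(0.0, min(1.0, value))
    return {
        "pitch_hz": 220.0 * (2 ** (2 * v)),   # sweep two octaves upward from A3
        "loudness_db": -30.0 + 24.0 * v,      # from quiet to moderately loud
        "brightness": 0.2 + 0.8 * v,          # e.g. a low-pass filter opening up
    }

# A stream of data values becomes a stream of multi-parameter sound controls.
stream = [0.1, 0.4, 0.8, 0.6]
controls = [sonify(v) for v in stream]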
series other
last changed 2003/04/23 15:50
