CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 515

_id 58f4
authors Barequet, G. and Kumar, S.
year 1997
title Repairing CAD models
source Proceedings of IEEE Visualization '97, pp. 363-370
summary We describe an algorithm for repairing polyhedral CAD models that have errors in their B-REP. Errors like cracks, degeneracies, duplication, holes and overlaps are usually introduced in solid models due to imprecise arithmetic, model transformations, designer's fault, programming bugs, etc. Such errors often hamper further processing like finite element analysis, radiosity computation and rapid prototyping. Our fault-repair algorithm converts an unordered collection of polygons to a shared-vertex representation to help eliminate errors. This is done by choosing, for each polygon edge, the most appropriate edge to unify it with. The two edges are then geometrically merged into one, by moving vertices. At the end of this process, each polygon edge is either coincident with another or is a boundary edge for a polygonal hole or a dangling wall and may be appropriately repaired. Finally, in order to allow user inspection of the automatic corrections, we produce a visualization of the repair and let the user mark the corrections that conflict with the original design intent. A second iteration of the correction algorithm then produces a repair that is commensurate with the intent. Thus, by involving the users in a feedback loop, we are able to refine the correction to their satisfaction.
series other
email
last changed 2003/04/23 15:14
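The abstract above describes converting a "polygon soup" to a shared-vertex representation by matching and geometrically merging edges, with leftover boundary edges marking holes or dangling walls. The sketch below is a much-simplified illustration of that idea, not the authors' algorithm: it unifies vertices by a distance tolerance (a crude stand-in for their per-edge matching) and reports edges used by only one polygon as repair candidates; the data layout and tolerance are assumptions.

```python
import numpy as np

def to_shared_vertex(polygons, tol=1e-5):
    """Convert a polygon soup to a shared-vertex (indexed) mesh.

    polygons: list of polygons, each a sequence of (x, y, z) vertices.
    Vertices closer than `tol` are unified, closing small cracks caused by
    imprecise arithmetic; edges referenced by only one polygon are returned
    as boundary edges, i.e. candidates for holes or dangling walls.
    """
    vertices, faces = [], []
    for poly in polygons:
        face = []
        for p in np.asarray(poly, dtype=float):
            for i, v in enumerate(vertices):        # reuse a vertex within tolerance
                if np.linalg.norm(v - p) < tol:
                    face.append(i)
                    break
            else:
                vertices.append(p)
                face.append(len(vertices) - 1)
        faces.append(face)

    edge_count = {}
    for face in faces:
        for a, b in zip(face, face[1:] + face[:1]):  # undirected edges of the polygon
            key = tuple(sorted((a, b)))
            edge_count[key] = edge_count.get(key, 0) + 1
    boundary_edges = [e for e, n in edge_count.items() if n == 1]
    return np.array(vertices), faces, boundary_edges

# Two triangles that should share an edge but differ by a tiny numerical gap.
soup = [[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
        [(1, 0, 1e-7), (1, 1, 0), (0, 1, -1e-7)]]
verts, faces, boundary = to_shared_vertex(soup)
```

A real repair pass would then close or patch the reported boundary edges and, as in the paper, let the user veto corrections that conflict with the design intent.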

_id 20ff
id 20ff
authors Derix, Christian
year 2004
title Building a Synthetic Cognizer
source Design Computation Cognition conference 2004, MIT
summary Understanding ‘space’ as a structured and dynamic system can provide us with insight into the central concept in the architectural discourse that so far has proven to withstand theoretical framing (McLuhan 1964). The basis for this theoretical assumption is that space is not a void left by solid matter but instead an emergent quality of action and interaction between individuals and groups with a physical environment (Hillier 1996). In this way it can be described as a parallel distributed system, a self-organising entity. Extrapolating from Luhmann’s theory of social systems (Luhmann 1984), a spatial system is autonomous from its progenitors, people, but remains intangible to a human observer due to its abstract nature and therefore has to be analysed by computed entities, synthetic cognisers, with the capacity to perceive. This poster shows an attempt to use another complex system, a distributed connected algorithm based on Kohonen’s self-organising feature maps – SOM (Kohonen 1997), as a “perceptual aid” for creating geometric mappings of these spatial systems that will shed light on our understanding of space by not representing space through our usual mechanics but by constructing artificial spatial cognisers with abilities to make spatial representations of their own. This allows us to be shown novel representations that can help us to see new differences and similarities in spatial configurations.
keywords architectural design, neural networks, cognition, representation
series other
type poster
email
more http://www.springer.com/computer/ai/book/978-1-4020-2392-7
last changed 2012/09/17 21:13
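The poster above employs Kohonen self-organising feature maps (SOMs) as "synthetic cognisers" that build their own spatial representations. Purely as an illustration of the underlying machinery, here is a minimal SOM training loop; the map size, learning schedule, and the random "spatial descriptor" data are all assumptions, not Derix's model.

```python
import numpy as np

def train_som(data, rows=10, cols=10, epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen SOM: map high-dimensional samples onto a 2D grid."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3    # shrinking neighbourhood radius
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: grid node whose weight vector is closest to the sample.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its grid neighbours towards the sample.
            h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

# Example: map 200 random 5-dimensional "spatial descriptors" onto a 10x10 grid.
som = train_som(np.random.default_rng(1).random((200, 5)))
```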

_id diss_marsh
id diss_marsh
authors Marsh, A.J.
year 1997
title Performance Analysis and Conceptual Design
source School of Architecture and Fine Arts, University of Western Australia
summary A significant amount of the research referred to by Manning has been directed into the development of computer software for building simulation and performance analysis. A wide range of computational tools are now available and see relatively widespread use in both research and commercial applications. The focus of development in this area has long been on the accurate simulation of fundamental physical processes, such as the mechanisms of heat flow through materials, turbulent air movement and the inter-reflection of light. The adequate description of boundary conditions for such calculations usually requires a very detailed mathematical model. This has tended to produce tools with a very engineering-oriented and solution-based approach. Whilst this technology has become increasingly popular amongst building services engineers, the response amongst architects has been relatively slow. There are some areas of the world, particularly the UK and Germany, where the use of such tools on larger projects is routine. However, this is almost exclusively during the latter stages of a project and usually for purposes of plant sizing or final design validation. The original conceptual work, the building form and the selection of materials remain the result of an aesthetic and intuitive process, sometimes based solely on precedent. There is no argument that an experienced designer is capable of producing an excellent design in this way. However, not all building designers are experienced, and even fewer have a complete understanding of the fundamental physical processes involved in building performance. These processes can be complex, highly inter-related and often counter-intuitive. It is the central argument of this thesis that the needs of the building designer are quite different from the needs of the building services engineer, and that existing building design and performance analysis tools poorly serve these needs. It will be argued that the extensive quantitative input requirement in such tools acts to produce a psychological separation between the act of design and the act of analysis. At the conceptual stage, building geometry is fluid and subject to constant change, with solid quantitative information relatively scarce. Having to measure off surface areas or search out the emissivity of a particular material forces the designer to think mathematically at a time when they are thinking intuitively. It is, however, at this intuitive stage that the greatest potential exists for performance efficiencies and environmental economies. The right orientation and fenestration choice can halve the air-conditioning requirement. Incorporating passive solar elements and natural ventilation pathways can eliminate it altogether. The building form can even be designed to provide shading using its own fabric, without any need for additional structure or applied shading. It is significantly more difficult and costly to retrofit these features at a later stage in a project's development. If the role of the design tool is to serve the design process, then a new approach is required to accommodate the conceptual phase. This thesis presents a number of ideas on what that approach may be, accompanied by some example software that demonstrates their implementation.
series thesis:PhD
more http://www.squ1.com/site.html
last changed 2003/11/28 07:33

_id diss_ruhl
id diss_ruhl
authors Ruhl, Volker R.
year 1997
title Computer-Aided Design and Manufacturing of Complex Shaped Concrete Formwork
source Doctor of Design Thesis, Graduate School of Design, Harvard University, Cambridge, MA
summary The research presented in this thesis challenges the appropriateness of existing, conventional forming practices in the building construction industry--both in situ and in prefabrication--for building concrete "freeforms," as they are characterized by impracticality and limitations in achieved geometric/formal quality. The author's theory proposes the application of alternative, non-traditional construction methods derived from the integration of information technology, in the form of Computer-Aided Design (CAD), Engineering (CAE) and Manufacturing (CAM), into the concrete tooling and placing process. This concept relies on a descriptive shape model of a physically non-existent building element which serves as a central database containing all the geometric data necessary to completely and accurately inform design development activities as well as the construction process. For this purpose, the thesis orients itself towards existing, functioning models in manufacturing engineering and explores the broad spectrum of computer-aided manufacturing techniques applied in this industry. A two-phase, combined method study is applied to support the theory. Part I introduces the phenomenon of "complexity" in the architectural field, defines the goal of the thesis research and gives examples of complex shape. It also presents the two analyzed technologies: concrete tooling and automation technology. For both, it establishes terminology and classifications, gives insight into the state of the art, and describes limitations. For concrete tooling it develops a set of quality criteria. Part II develops a theory in the form of a series of proposed "non-traditional" forming processes and concepts that are derived through a synthesis of state-of-the-art automation with current concrete forming and placing techniques, and describes them in varying depth, in both text and graphics, on the basis of their geometric versatility and their appropriateness for the proposed task. Emphasis is given to the newly emerging and most promising Solid Freeform Fabrication processes, and within this area, to laser-curing technology. The feasibility of using computer-aided formwork design and computer-aided formwork fabrication in today's standard building practices is evaluated for this particular technology on the basis of case studies. Performance in the categories of process, material, product, lead time and economy is analyzed over the complete tooling cycle and is compared to the performance of existing, conventional forming systems for steel, wood, plywood veneer and glassfiber reinforced plastic; values added to the construction process and/or to the formwork product through information technology are pointed out and become part of the evaluation. For this purpose, an analytical framework was developed for testing the performance of various Solid Freeform Fabrication processes as well as the "sensitivity," or the impact of various influencing processes and/or product parameters on lead time and economy. This tool allows us to make various suggestions for optimization as well as to formulate recommendations and guidelines for the implementation of this technology. The primary objective of this research is to offer architects and engineers unprecedented independence from planar, orthogonal building geometry, in the realization of design ideas and/or design requirements for concrete structures and/or their components.
The interplay between process-oriented design and innovative implementation technology may ultimately lead to an architecture conceived on a different level of complexity, with an extended form-vocabulary and of high quality.
series thesis:PhD
last changed 2005/09/09 12:58

_id 75a8
authors Achten, Henri H.
year 1997
title Generic representations : an approach for modelling procedural and declarative knowledge of building types in architectural design
source Eindhoven University of Technology
summary The building type is a knowledge structure that is recognised as an important element in the architectural design process. For an architect, the type provides information about norms, layout, appearance, etc. of the kind of building that is being designed. Questions that seem unresolved about (computational) approaches to building types are the relationship between the many kinds of instances that are generally recognised as belonging to a particular building type, the way a type can deal with varying briefs (or with mixed use), and how a type can accommodate different sites. Approaches that aim to model building types as data structures of interrelated variables (so-called ‘prototypes’) face problems clarifying these questions. The research work at hand proposes to investigate the role of knowledge associated with building types in the design process. Knowledge of the building type must be represented during the design process. Therefore, it is necessary to find a representation which supports design decisions, supports the changes and transformations of the design during the design process, encompasses knowledge of the design task, and which relates to the way architects design. It is proposed in the research work that graphic representations can be used as a medium to encode knowledge of the building type. This is possible if they consistently encode the things they represent; if their knowledge content can be derived, and if they are versatile enough to support a design process of a building belonging to a type. A graphic representation consists of graphic entities such as vertices, lines, planes, shapes, symbols, etc. Establishing a graphic representation implies making design decisions with respect to these entities. Therefore it is necessary to identify the elements of the graphic representation that play a role in decision making. An approach based on the concept of ‘graphic units’ is developed. A graphic unit is a particular set of graphic entities that has some constant meaning. Examples are: zone, circulation scheme, axial system, and contour. Each graphic unit implies a particular kind of design decision (e.g. functional areas, system of circulation, spatial organisation, and layout of the building). By differentiating between appearance and meaning, it is possible to define the graphic unit relatively shape-independent. If a number of graphic representations have the same graphic units, they deal with the same kind of design decisions. Graphic representations that have such a specifically defined knowledge content are called ‘generic representations.’ An analysis of over 220 graphic representations in the literature on architecture results in 24 graphic units and 50 generic representations. For each generic representation the design decisions are identified. These decisions are informed by the nature of the design task at hand. If the design task is a building belonging to a building type, then knowledge of the building type is required. In a single generic representation knowledge of norms, rules, and principles associated with the building type are used. Therefore, a single generic representation encodes declarative knowledge of the building type. A sequence of generic representations encodes a series of design decisions which are informed by the design task. If the design task is a building type, then procedural knowledge of the building type is used. 
By means of the graphic unit and generic representation, it is possible to identify a number of relations that determine sequences of generic representations. These relations are: additional graphic units, themes of generic representations, and successive graphic units. Additional graphic units defines subsequent generic representations by adding a new graphic unit. Themes of generic representations defines groups of generic representations that deal with the same kind of design decisions. Successive graphic units defines preconditions for subsequent or previous generic representations. On the basis of themes it is possible to define six general sequences of generic representations. On the basis of additional and successive graphic units it is possible to define sequences of generic representations in themes. On the basis of these sequences, one particular sequence of 23 generic representations is defined. The particular sequence of generic representations structures the decision process of a building type. In order to test this assertion, the particular sequence is applied to the office building type. For each generic representation, it is possible to establish a graphic representation that follows the definition of the graphic units and to apply the required statements from the office building knowledge base. The application results in a sequence of graphic representations that particularises an office building design. Implementation of seven generic representations in a computer aided design system demonstrates the use of generic representations for design support. The set is large enough to provide additional weight to the conclusion that generic representations map declarative and procedural knowledge of the building type.
series thesis:PhD
email
more http://alexandria.tue.nl/extra2/9703788.pdf
last changed 2003/11/21 15:15
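The thesis above defines graphic units (sets of graphic entities with a constant meaning) and generic representations (graphic representations with a specified set of units), and derives sequences of generic representations by adding units. A minimal data-structure sketch of that idea is shown below; the field names and example units are illustrative assumptions, not Achten's formalism.

```python
from dataclasses import dataclass, field

@dataclass
class GraphicUnit:
    """A set of graphic entities with a constant meaning (e.g. zone, contour)."""
    name: str
    design_decision: str  # the kind of design decision the unit implies

@dataclass
class GenericRepresentation:
    """A graphic representation with a specifically defined knowledge content."""
    name: str
    units: list = field(default_factory=list)  # the graphic units it contains

def additional_unit(prev: GenericRepresentation, unit: GraphicUnit, name: str):
    """Derive a subsequent generic representation by adding one graphic unit."""
    return GenericRepresentation(name, prev.units + [unit])

# Illustrative two-step sequence: functional zoning, then circulation on top of it.
zone = GraphicUnit("zone", "layout of functional areas")
circulation = GraphicUnit("circulation scheme", "system of circulation")
step1 = GenericRepresentation("zoning diagram", [zone])
step2 = additional_unit(step1, circulation, "zoning + circulation diagram")
```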

_id eea1
authors Achten, Henri
year 1997
title Generic Representations - Typical Design without the Use of Types
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 117-133
summary The building type is a (knowledge) structure that is both recognised as a constitutive cognitive element of human thought and as a constitutive computational element in CAAD systems. Questions that seem unresolved up to now about computational approaches to building types are the relationship between the various instances that are generally recognised as belonging to a particular building type, the way a type can deal with varying briefs (or with mixed functional use), and how a type can accommodate different sites. Approaches that aim to model building types as data structures of interrelated variables (so-called 'prototypes') face problems clarifying these questions. It is proposed in this research not to focus on a definition of 'type,' but rather to investigate the role of knowledge connected to building types in the design process. The basic proposition is that the graphic representations used to represent the state of the design object throughout the design process can be used as a medium to encode knowledge of the building type. This proposition claims that graphic representations consistently encode the things they represent, that it is possible to derive the knowledge content of graphic representations, and that there is enough diversity within graphic representations to support a design process of a building belonging to a type. In order to substantiate these claims, it is necessary to analyse graphic representations. In the research work, an approach based on the notion of 'graphic units' is developed. The graphic unit is defined and the analysis of graphic representations on the basis of the graphic unit is demonstrated. This analysis brings forward the knowledge content of single graphic representations. Such knowledge content is declarative knowledge. The graphic unit also provides the means to articulate the transition from one graphic representation to another graphic representation. Such transitions encode procedural knowledge. The principles of a sequence of generic representations are discussed and it is demonstrated how a particular type - the office building type - is implemented in the theoretical work. Computational work on the implementation of part of a sequence of generic representations of the office building type is discussed. The paper ends with a summary and future work.
series CAAD Futures
email
last changed 2003/11/21 15:15

_id debf
authors Bertol, D.
year 1997
title Designing Digital Space - An Architect's Guide to Virtual Reality
source John Wiley & Sons, New York
summary The first in-depth book on virtual reality (VR) aimed specifically at architecture and design professionals, Designing Digital Space steers you skillfully through the learning curve of this exciting new technology. Beginning with a historical overview of the evolution of architectural representations, this unique resource explains what VR is, how it is being applied today, and how it promises to revolutionize not only the design process, but the form and function of the built environment itself. Vividly illustrating how VR fits alongside traditional methods of architectural representation, this comprehensive guide prepares you to make optimum practical use of this powerful interactive tool, and embrace the new role of the architect in a virtually designed world. Offers in-depth coverage of the virtual universe: data representation and information management, static and dynamic worlds, tracking and visual display systems, control devices, and more. Examines a wide range of current VR architectural applications, from walkthroughs, simulations, and evaluations to reconstructions and networked environments. Includes insightful essays by leading VR developers covering some of today's most innovative projects. Integrates VR into the historical framework of architectural development, with detailed sections on the past, present, and future. Features a dazzling array of virtual world images and sequential displays. Explores the potential impact of digital architecture on the built environment of the future.
series other
last changed 2003/04/23 15:14

_id 0bc0
authors Kellett, R., Brown, G.Z., Dietrich, K., Girling, C., Duncan, J., Larsen, K. and Hendrickson, E.
year 1997
title THE ELEMENTS OF DESIGN INFORMATION FOR PARTICIPATION IN NEIGHBORHOOD-SCALE PLANNING
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 295-304
doi https://doi.org/10.52842/conf.acadia.1997.295
summary Neighborhood scale planning and design in many communities has been evolving from a rule-based process of prescriptive codes and regulation toward a principle- and performance-based process of negotiated priorities and agreements. Much of this negotiation takes place in highly focused and interactive workshop or 'charrette' settings, the best of which are characterized by a fluid and lively exchange of ideas, images and agendas among a diverse mix of citizens, land owners, developers, consultants and public officials. Crucial to the quality and effectiveness of the exchange are techniques and tools that facilitate a greater degree of understanding, communication and collaboration among these participants.

Digital media have a significant and strategic role to play toward this end. Of particular value are representational strategies that help disentangle issues, clarify alternatives and evaluate consequences of very complex and often emotional issues of land use, planning and design. This paper reports on the ELEMENTS OF NEIGHBORHOOD, a prototype 'electronic notebook' (relational database) tool developed to bring design information and examples 'to the table' of a public workshop. Elements are examples of the building blocks of neighborhood (open spaces, housing, commercial, industrial, civic and network land uses) derived from built examples, and illustrated with graphic, narrative and numeric representations relevant to planning, design, energy, environmental and economic performance. Quantitative data associated with the elements can be linked to Geographic Information-based maps and spreadsheet-based evaluation models.

series ACADIA
type normal paper
email
last changed 2022/06/07 07:52

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred-forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. 
I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 Kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations" simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator however the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques   Figure 3 Trellis interpreted with "graphic ivy"   Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. 
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper, furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong". Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind" Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
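The summary above treats the "genes" of a form as the list of points of a closed polygon and reports that naive crossover of such genes tends towards amorphous blobs. The sketch below shows one simple crossover of this kind (resampling both parents to a common point count and blending coordinates); the scheme is an assumption for illustration, not one of Ransen's actual breeding methods.

```python
import numpy as np

def resample_polygon(points, n):
    """Resample a closed 2D polygon to n points evenly spaced along its perimeter."""
    pts = np.asarray(points, dtype=float)
    closed = np.vstack([pts, pts[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n, endpoint=False)
    x = np.interp(targets, cum, closed[:, 0])
    y = np.interp(targets, cum, closed[:, 1])
    return np.column_stack([x, y])

def crossover(parent_a, parent_b, n=100, weight=0.5):
    """Blend two closed polygons point by point after resampling both to n points."""
    return (1 - weight) * resample_polygon(parent_a, n) + weight * resample_polygon(parent_b, n)

# Crossing a circle with a square gives a rounded square; repeated averaging
# over many generations is exactly what drifts towards featureless blobs.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
square = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
child = crossover(circle, square)
```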

_id d60a
authors Casti, J.C.
year 1997
title Would be Worlds: How simulation is changing the frontiers of science
source John Wiley & Sons, Inc., New York.
summary Five Golden Rules is caviar for the inquiring reader. Anyone who enjoyed solving math problems in high school will be able to follow the author's explanations, even if high school was a long time ago. There is joy here in watching the unfolding of these intricate and beautiful techniques. Casti's gift is to be able to let the nonmathematical reader share in his understanding of the beauty of a good theory.-Christian Science Monitor "[Five Golden Rules] ranges into exotic fields such as game theory (which played a role in the Cuban Missile Crisis) and topology (which explains how to turn a doughnut into a coffee cup, or vice versa). If you'd like to have fun while giving your brain a first-class workout, then check this book out."-San Francisco Examiner "Unlike many popularizations, [this book] is more than a tour d'horizon: it has the power to change the way you think. Merely knowing about the existence of some of these golden rules may spark new, interesting-maybe even revolutionary-ideas in your mind. And what more could you ask from a book?"-New Scientist "This book has meat! It is solid fare, food for thought . . . makes math less forbidding, and much more interesting."-Ben Bova, The Hartford Courant "This book turns math into beauty."-Colorado Daily "John Casti is one of the great science writers of the 1990s."-San Francisco Examiner In the ever-changing world of science, new instruments often lead to momentous discoveries that dramatically transform our understanding. Today, with the aid of a bold new instrument, scientists are embarking on a scientific revolution as profound as that inspired by Galileo's telescope. Out of the bits and bytes of computer memory, researchers are fashioning silicon surrogates of the real world-elaborate "artificial worlds"-that allow them to perform experiments that are too impractical, too costly, or, in some cases, too dangerous to do "in the flesh." From simulated tests of new drugs to models of the birth of planetary systems and galaxies to computerized petri dishes growing digital life forms, these laboratories of the future are the essential tools of a controversial new scientific method. This new method is founded not on direct observation and experiment but on the mapping of the universe from real space into cyberspace. There is a whole new science happening here-the science of simulation. The most exciting territory being mapped by artificial worlds is the exotic new frontier of "complex, adaptive systems." These systems involve living "agents" that continuously change their behavior in ways that make prediction and measurement by the old rules of science impossible-from environmental ecosystems to the system of a marketplace economy. Their exploration represents the horizon for discovery in the twenty-first century, and simulated worlds are charting the course. In Would-Be Worlds, acclaimed author John Casti takes readers on a fascinating excursion through a number of remarkable silicon microworlds and shows us how they are being used to formulate important new theories and to solve a host of practical problems. We visit Tierra, a "computerized terrarium" in which artificial life forms known as biomorphs grow and mutate, revealing new insights into natural selection and evolution. We play a game of Balance of Power, a simulation of the complex forces shaping geopolitics. And we take a drive through TRANSIMS, a model of the city of Albuquerque, New Mexico, to discover the root causes of events like traffic jams and accidents. 
Along the way, Casti probes the answers to a host of profound questions these "would-be worlds" raise about the new science of simulation. If we can create worlds inside our computers at will, how real can we say they are? Will they unlock the most intractable secrets of our universe? Or will they reveal instead only the laws of an alternate reality? How "real" do these models need to be? And how real can they be? The answers to these questions are likely to change the face of scientific research forever.
series other
last changed 2003/04/23 15:14

_id ed09
authors Chang, Teng Wen and Woodbury, Robert F.
year 1997
title Efficient Design Spaces of Non-Manifold Solids
source CAADRIA ‘97 [Proceedings of the Second Conference on Computer Aided Architectural Design Research in Asia / ISBN 957-575-057-8] Taiwan 17-19 April 1997, pp. 335-344
doi https://doi.org/10.52842/conf.caadria.1997.335
summary One widely accepted metaphor in design research is search or, equivalently, exploration which likens design to intelligent movement through a possibly infinite space of alternatives. In this metaphor, designers search design spaces, explore possibilities, discover new designs, and recall and adapt existing designs. We give the name design space explorers to computer programs that support exploration. This paper describes an efficient representation of states comprising three-dimensional non-manifold solid models and of design spaces made from such states.
series CAADRIA
email
last changed 2022/06/07 07:56

_id ga9921
id ga9921
authors Coates, P.S. and Hazarika, L.
year 1999
title The use of genetic programming for applications in the field of spatial composition
source International Conference on Generative Art
summary Architectural design teaching using computers has been a preoccupation of CECA since 1991. All design tutors provide their students with a set of models and ways to form, and we have explored a set of approaches including cellular automata, genetic programming, agent-based modelling and shape grammars as additional tools with which to explore architectural (and architectonic) ideas. This paper discusses the use of genetic programming (G.P.) for applications in the field of spatial composition. CECA has been developing the use of genetic programming for some time (see references) and has covered the evolution of L-system production rules (Coates 1997, 1999b) and the evolution of generative grammars of form (Coates 1998, 1999a). The G.P. was used to generate three-dimensional spatial forms from a set of geometrical structures. The approach uses genetic programming with a genetic library (G.Lib); G.P. provides a way to genetically breed a computer program to solve a problem, and G.Lib enables genetic programming to define potentially useful subroutines dynamically during a run. The work covers: exploring a shape grammar consisting of simple solid primitives and transformations; applying a simple fitness function to the solid-breeding G.P.; exploring a shape grammar of composite surface objects; developing grammars for existing buildings, and creating hybrids; and exploring the shape grammar of a building within a G.P. We will report on new work using a range of different morphologies (boolean operations, surface operations and grammars of style) and describe the use of objective functions (natural selection) and the "eyeball test" (artificial selection) as ways of controlling and exploring the design spaces thus defined.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
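The abstract above evolves programs that generate three-dimensional spatial forms from solid primitives, judged by a fitness function ("natural selection") or by eye. The toy evolutionary loop below uses a flat list of grammar moves rather than the tree-structured programs and genetic library of real genetic programming; the move set, fitness measure, and parameters are all assumptions for illustration.

```python
import random

MOVES = ["+x", "-x", "+y", "-y", "+z", "-z"]   # toy grammar: shift and drop a unit cell
STEP = {"+x": (1, 0, 0), "-x": (-1, 0, 0), "+y": (0, 1, 0),
        "-y": (0, -1, 0), "+z": (0, 0, 1), "-z": (0, 0, -1)}

def execute(program):
    """Interpret a move list: start at the origin and drop a unit cell after each move."""
    x, y, z = 0, 0, 0
    cells = {(x, y, z)}
    for move in program:
        dx, dy, dz = STEP[move]
        x, y, z = x + dx, y + dy, z + dz
        cells.add((x, y, z))
    return cells

def fitness(program):
    """Toy objective ('natural selection'): reward compositions covering many distinct cells."""
    return len(execute(program))

def evolve(pop_size=30, length=12, generations=40, seed=1):
    random.seed(seed)
    population = [[random.choice(MOVES) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < 0.2:                  # point mutation
                child[random.randrange(length)] = random.choice(MOVES)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Coates's "eyeball test" (artificial selection) would replace the automatic fitness call with a human choosing among rendered candidate forms.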

_id d4b1
authors Eggli, L., Hsu, C.-Y., Brüderlin, B. and Elber, G.
year 1997
title Inferring 3D models from freehand sketches and constraints
source Computer-Aided Design, Vol. 29 (2) (1997) pp. 101-112
summary This paper describes 'Quick-sketch', a 2D and 3D modelling tool for pen-based computers. Users of this system define a model by simple pen strokes, drawn directly on the screen of a pen-based PC. Exact shapes and geometric relationships are interpreted from the sketch. The system can also be used to sketch 3D solid objects and B-spline surfaces. These objects may be refined by defining 2D and 3D geometric constraints. A novel graph-based constraint solver is used to establish the geometric relationships, or to maintain them when manipulating the objects interactively. The approach presented here is a first step towards a conceptual design system. Quick-sketch can be used as a hand-sketching front-end to more sophisticated modelling, rendering or animation systems.
keywords Geometric Constraints, Conceptual Design, Free-Hand Sketch Interpretation
series journal paper
last changed 2003/05/15 21:33
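Quick-sketch, as described above, interprets exact shapes and geometric relationships from rough pen strokes and maintains them with a graph-based constraint solver. The snippet below illustrates only the simplest part of such interpretation (snapping a stroke's direction and endpoints); the snapping rules and tolerances are assumptions, not the paper's solver.

```python
import math

def beautify_stroke(p0, p1, snap_angle_deg=15, endpoint_tol=0.05, anchors=()):
    """Interpret a rough pen stroke as an exact line segment.

    Snaps the stroke direction to the nearest multiple of `snap_angle_deg`
    and snaps endpoints onto nearby anchor points (e.g. ends of earlier lines).
    """
    def snap_point(p):
        for a in anchors:
            if math.dist(p, a) < endpoint_tol:
                return a
        return p

    p0, p1 = snap_point(p0), snap_point(p1)
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx))
    snapped = math.radians(round(angle / snap_angle_deg) * snap_angle_deg)
    return p0, (p0[0] + length * math.cos(snapped), p0[1] + length * math.sin(snapped))

# A nearly horizontal stroke becomes exactly horizontal and joins the previous line's end.
line = beautify_stroke((0.02, 0.01), (3.0, 0.12), anchors=[(0.0, 0.0)])
```

A constraint solver like the one in the paper would go further, keeping such relationships (perpendicularity, coincidence, parallelism) satisfied while the user drags the geometry.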

_id 8569
authors Kurmann, D., Elte, N. and Engeli, M.
year 1997
title Real-Time Modeling with Architectural Space
source CAAD Futures 1997 [Conference Proceedings / ISBN 0-7923-4726-9] München (Germany), 4-6 August 1997, pp. 809-819
summary Space as an architectural theme has been explored in many ways over many centuries; designing the architectural space is a major issue in both architectural education and the design process. Based on these observations, it follows that computer tools should be available that help architects manipulate and explore space and spatial configurations directly and interactively. Therefore, we have created and extended the computer tool Sculptor. This tool enables the architect to design interactively with the computer, directly in real-time and in three dimensions. We developed the concept of 'space as an element' and integrated it into Sculptor. These combinations of solid and void elements - positive and negative volumes - enable the architect to use the computer at an early design stage for conceptual design and spatial studies. Similar to solid modeling, but much simpler, more intuitive and in real time, this allows the creation of complex spatial compositions in 3D space. Additionally, several concepts, operations and functions are defined inherently. Windows and doors, for example, are negative volumes that connect other voids inside positive ones. Based on buildings composed with these spaces we developed agents to calculate sound atmosphere and estimate cost, and creatures to test buildings for fire-escape purposes, etc. The paper will look at the way to design with space from both an architect's point of view and a computer scientist's. Techniques, possibilities and consequences of this direct void modeling will be explained. It will elaborate on the principle of human-machine interaction brought up by our research and used in Sculptor. It will present the possibility of creating VRML models directly for the web and show some of the designs done by students using the tool in our CAAD courses.
series CAAD Futures
email
last changed 1999/04/06 09:19
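Sculptor's "space as an element" concept above composes positive (solid) and negative (void) volumes, with doors and windows modelled as negative volumes connecting voids. A minimal point-membership sketch of that composition follows; the axis-aligned boxes and the ordered evaluation rule are illustrative assumptions, not Sculptor's implementation.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned volume; positive volumes add material, negative ones carve space."""
    lo: tuple
    hi: tuple
    positive: bool = True

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

def is_solid(point, volumes):
    """Evaluate volumes in order: later negative volumes carve out earlier solids."""
    solid = False
    for v in volumes:
        if v.contains(point):
            solid = v.positive
    return solid

# A solid slab with a room (void) carved out and a door void connecting it outwards.
model = [
    Box((0, 0, 0), (10, 10, 3)),                    # building mass
    Box((1, 1, 0), (5, 5, 3), positive=False),      # room (negative volume)
    Box((5, 2, 0), (10, 3, 2.2), positive=False),   # door/corridor void
]
print(is_solid((2, 2, 1), model))   # False: inside the room void
print(is_solid((7, 7, 1), model))   # True: inside the remaining mass
```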

_id 03d0
authors Neiman, Bennett and Bermudez, Julio
year 1997
title Between Digital & Analog Civilizations: The Spatial Manipulation Media Workshop
source Design and Representation [ACADIA ‘97 Conference Proceedings / ISBN 1-880250-06-3] Cincinnati, Ohio (USA) 3-5 October 1997, pp. 131-137
doi https://doi.org/10.52842/conf.acadia.1997.131
summary As the power shift from material culture to media culture accelerates, architecture finds itself in the midst of a clash between centuries-old analog design methods (such as tracing paper, vellum, graphite, ink, chipboard, clay, balsa wood, plastic, metal, etc.) and the new digital systems of production (such as scanning, video capture, image manipulation, visualization, solid modeling, computer aided drafting, animation, rendering, etc.). Moving forward requires a realization that a material interpretation of architecture proves limiting at a time when information and media environments are the major drivers of culture. It means to pro-actively incorporate the emerging digital world into our traditional analog work. It means to change.

This paper presents the results of an intense design workshop that looks, probes, and builds at the very interface that is provoking the cultural and professional shifts. Media space is presented and used as an interpretive playground for design experimentation in which the poetics of representation (and not its technicalities) are the driving force to generate architectural ideas. The work discussed was originally developed as a starting exercise for a digital design course. The exercise was later conducted as a workshop at two schools of architecture by different faculty working in collaboration with its inventor.

The workshop is an effective sketch problem that gives students an immediate start into a non-traditional, hands-on, and integrated use of contemporary media in the design process. In doing so, it establishes a procedural foundation for a design studio dealing with digital media.

series ACADIA
email
last changed 2022/06/07 07:58

_id 14e6
authors Pegna, J.
year 1997
title Exploratory investigation of solid freeform construction
source Automation in Construction 5 (5) (1997) pp. 427-437
summary A radical departure from generally accepted concepts in construction robotics is proposed in this paper. A new process derived from the emerging field of additive manufacturing processes is investigated for its potential effectiveness in construction automation. In essence, complex assemblies of large construction components are substituted with a large number of elemental component assemblies. The massive complexity of information processing required in construction is replaced with a large number of simple elemental operations which lend themselves easily to computer control. This exploratory work is illustrated with sample masonry structures that cannot be obtained by casting. They are manufactured by an incremental deposition of sand and Portland cement akin to Navajo sand painting. A thin layer of sand is deposited, followed by the deposition of a patterned layer of cement. Steam is then applied to the layer to obtain rapid curing. A characterization of the resulting material properties shows rather novel anisotropic properties for mortar. Finally, the potential of this approach for solid freeform fabrication of large structures is assessed.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:23

_id 07ae
authors Sook Lee, Y. and Mi Lee, S.
year 1997
title Analysis of mental maps for ideal apartments to develop and simulate an innovative residential interior space.
source Architectural and Urban Simulation Techniques in Research and Education [3rd EAEA-Conference Proceedings]
summary Even though the results of applied research are ideally expected to be read and used by practitioners, written suggestions have been less persuasive, especially in visual fields such as environmental design, architecture, and interior design. Therefore, visualization of space has frequently been considered an ideal alternative form of suggestion and an effective method of disseminating research results and helping decision makers. In order to make the visualized target space solid and mundane, a scientific research process to define the characteristics of the space should come first. This presentation consists of two parts: first, a research part; second, a design and simulation part. The purpose of the research was to identify ideal residential interior characteristics on the basis of people's mental maps of ideal apartments. To achieve this goal, quantitative content analysis was applied to an existing data set of floor plans drawn by housewives. 2,215 floor plans were randomly selected from 3,012 floor plans collected through a nation-wide housing design competition for ideal residential apartments. 213 selected variables were used to analyze the floor plans. The major contents were the presentational characteristics of the mental maps and the characteristics of design preference, such as layout, composition, furnishing, etc. As a result, current and possible future trends of the ideal residence were identified. On the basis of the results, design guidelines were generated. An interior spatial model for a small-size unit was developed with CAD according to the guidelines. To present it in a more effective way, computer-simulated images were made using 3DS. This paper is expected to generate a comparison of various methods for presenting research results, such as written documents, drawings, simulated images, small-scale models for endoscopy and full-scale modeling.
keywords Architectural Endoscopy, Endoscopy, Simulation, Visualisation, Visualization, Real Environments
series EAEA
email
more http://www.bk.tudelft.nl/media/eaea/eaea97.html
last changed 2005/09/09 10:43

_id d46d
authors Takahashi, S., Shinagawa, Y. and Kunii, T.L.
year 1997
title A Feature-Based Approach for Smooth Surfaces
source Proceedings of Fourth Symposium on Solid Modeling, pp. 97-110
summary Feature-based representation has become a topic of interest in shape modeling techniques. Such feature-based techniques are, however, still restricted to polyhedral shapes, and no such work has been done on smooth surfaces. This paper presents a new feature-based approach for smooth surfaces. Here, the smooth surfaces are assumed to be 2-dimensional differentiable manifolds within a theoretical framework. As the shape features, critical points such as peaks, pits, and passes are used. We also use a critical point graph called the Reeb graph to represent the topological skeletons of a smooth object. Since the critical points have close relations with the entities of B-reps, the framework of the B-reps can easily be applied to our approach. In our method, the shape design process begins with specifying the topological skeletons using the Reeb graph. The Reeb graph is edited by pasting the entities called cells that have one-to-one correspondences with the critical points. In addition to the topological skeletons, users also design the geometry of the objects with smooth surfaces by specifying the flow curves that run on the object surface. From these flow curves, the system automatically creates a control network that encloses the object shape. The surfaces are interpolated from the control network by minimizing the energy function subject to the deformation of the surfaces using variational optimization.
series other
last changed 2003/04/23 15:50
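The approach above edits topological skeletons through a Reeb graph whose nodes correspond to critical points (peaks, passes, pits). The sketch below shows one possible encoding of such a graph together with the Morse-theoretic consistency check that the alternating count of critical points equals the Euler characteristic of the closed surface (2 for a genus-0 surface, 0 for a torus); the class layout is an assumption, not the paper's cell-pasting interface.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalPoint:
    """A node of the Reeb graph: a peak, pass (saddle) or pit with a height value."""
    name: str
    kind: str      # "peak", "pass" or "pit"
    height: float

@dataclass
class ReebGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # pairs of node names

    def add(self, node):
        self.nodes.append(node)
        return node

    def connect(self, a, b):
        self.edges.append((a.name, b.name))

    def euler_characteristic(self):
        """Morse-theory count: peaks - passes + pits for a closed surface."""
        count = {"peak": 0, "pass": 0, "pit": 0}
        for n in self.nodes:
            count[n.kind] += 1
        return count["peak"] - count["pass"] + count["pit"]

# A torus-like skeleton: one peak, two passes, one pit -> characteristic 0.
g = ReebGraph()
top = g.add(CriticalPoint("top", "peak", 3.0))
s1 = g.add(CriticalPoint("outer saddle", "pass", 2.0))
s2 = g.add(CriticalPoint("inner saddle", "pass", 1.0))
bottom = g.add(CriticalPoint("bottom", "pit", 0.0))
g.connect(top, s1); g.connect(s1, s2); g.connect(s2, bottom)
g.connect(s1, s2)   # the second s1-s2 edge closes the loop around the torus hole
print(g.euler_characteristic())   # 0, consistent with a torus
```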

_id 6f61
authors Turkiyyah, G.M., Storti, D.W., Ganter, M., Hao, C. and Vimawala, M.
year 1997
title An accelerated triangulation method for computing the skeletons of free-form solid models
source Computer-Aided Design, Vol. 29 (1) (1997) pp. 5-19
summary Shape skeletons are powerful geometric abstractions that provide useful intermediate representations for a number of geometric operations on solid models including feature recognition, shape decomposition, finite element mesh generation, and shape design. As a result there has been significant interest in the development of effective methods for skeleton generation of general free-form solids. In this paper we describe a method that combines Delaunay triangulation with local numerical optimization schemes for the generation of accurate skeletons of 3D implicit solid models. The proposed method accelerates the slow convergence of Voronoi diagrams to the skeleton, which, without optimization, would require impractically large sample point sets and resulting meshes to attain acceptable accuracy. The Delaunay triangulation forms the basis for generating the topological structure of the skeleton. The optimization step of the process generates the geometry of the skeleton patches by moving the vertices of Delaunay tetrahedra and relocating their centres to form maximally inscribed spheres. The computational advantage of the optimization scheme is that it involves the solution of one small optimization problem per tetrahedron and its complexity is therefore only linear (O(n)) in the number of points used for the skeleton approximation. We demonstrate the effectiveness of the method on a number of representative solid models.
keywords Skeleton Generation, Medial Axis, Delaunay Triangulation, Surface Curvature, Implicit Solid Models
series journal paper
last changed 2003/05/15 21:33
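The method above seeds the skeleton from the Delaunay triangulation of sample points and then optimizes tetrahedron centres into maximally inscribed spheres. The sketch below computes only the unoptimized seed (circumcentres of Delaunay tetrahedra, i.e. Voronoi vertices) with scipy; the sampling, the circumradius filter and its threshold are assumptions, and the paper's per-tetrahedron optimization step is omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcentre(tet):
    """Circumcentre of a tetrahedron given as a (4, 3) array of vertices."""
    a = 2.0 * (tet[1:] - tet[0])
    b = np.sum(tet[1:] ** 2, axis=1) - np.sum(tet[0] ** 2)
    return np.linalg.solve(a, b)

def approximate_skeleton(points, keep_fraction=0.5):
    """Unoptimized skeleton seed: circumcentres of the 'fattest' Delaunay tetrahedra.

    Keeps the centres with the largest circumradii (crudely filtering out thin
    boundary tetrahedra); the paper then refines each centre so that it becomes
    the centre of a maximally inscribed sphere.
    """
    tri = Delaunay(points)
    centres, radii = [], []
    for simplex in tri.simplices:
        tet = points[simplex]
        c = circumcentre(tet)
        centres.append(c)
        radii.append(np.linalg.norm(c - tet[0]))
    order = np.argsort(radii)[::-1]
    keep = order[: max(1, int(keep_fraction * len(order)))]
    return np.asarray(centres)[keep]

# Example: skeleton seed points for a random sample of a unit cube.
rng = np.random.default_rng(0)
samples = rng.random((400, 3))
seed = approximate_skeleton(samples)
```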

_id a35a
authors Arponen, Matti
year 2002
title From 2D Base Map To 3D City Model
source UMDS '02 Proceedings, Prague (Czech Republic) 2-4 October 2002, I.17-I.28
summary Since 1997 the Helsinki City Survey Division has been experimenting with and developing methods for converting and supplementing its current digital 2D base maps at the scale 1:500 into a 3D city model. Project areas have in fact been produced in 3D for city planning and construction projects since 1986, but work on the whole map database started in 1997 because of customer demands and competing 3D projects. A 3D map database needs new data modelling and structures, map update processes need new working orders, and the draftsmen need to learn a new profession: that of the 3D modeller. Laser scanning and digital photogrammetry have been used to collect 3D information on the map objects. During the years 1999-2000, laser-scanning experiments covering 45 km2 were carried out using the Swedish TopEye system. Simultaneous digital photography produces material for ortho-photo mosaics. These have been applied in mapping outdated map features and in vectorizing 3D buildings manually, semi-automatically and automatically. In modelling we use the TerraScan, TerraPhoto and TerraModeler software, which is developed in Finland. The 3D city model project is at the same time partly a software development project. An accuracy and feasibility study was also completed and will be presented briefly. The three scales of 3D models are also presented in this paper. Some new 3D products and some practical uses of 3D city models will be demonstrated in the actual presentation.
keywords 3D City modeling
series other
email
more www.udms.net
last changed 2003/11/21 15:16
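The workflow above vectorizes building outlines from the 2D base map and lifts them to 3D with heights taken from laser scanning. The sketch below shows the simplest form of that lifting step, extruding a footprint between a ground and a roof height; the flat-roof assumption and the data layout are illustrative, not the Helsinki production pipeline.

```python
def extrude_footprint(footprint, ground_z, roof_z):
    """Turn a 2D building footprint into a simple 3D block model.

    footprint: list of (x, y) vertices of the outline from the base map.
    ground_z:  terrain height, e.g. from a laser-scanned ground model.
    roof_z:    roof height, e.g. the highest laser return inside the footprint.
    Returns (vertices, faces) for a flat-roofed prism.
    """
    n = len(footprint)
    vertices = [(x, y, ground_z) for x, y in footprint] + \
               [(x, y, roof_z) for x, y in footprint]
    walls = [[i, (i + 1) % n, n + (i + 1) % n, n + i] for i in range(n)]
    floor = list(range(n))
    roof = list(range(n, 2 * n))
    return vertices, walls + [floor, roof]

# Example: a 10 m x 6 m footprint extruded from terrain at +4.0 m to a roof at +16.5 m.
verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 4.0, 16.5)
```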
