CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 489

_id ebd6
authors Dobson, Adrian
year 1996
title Teaching Architectural Composition Through the Medium of Virtual Reality Modelling
source Approaches to Computer Aided Architectural Composition [ISBN 83-905377-1-0] 1996, pp. 91-102
summary This paper describes an experimental teaching programme to enable architectural students in the early years of their undergraduate study to explore their understanding of the principles of architectural composition, by the creation and experience of architectural form and space in simple virtual reality environments. Principles of architectural composition, based upon the ordering and organisation of typological architectural elements according to established rules of composition, are introduced to the students, through the study of recognised works of architectural design theory. Virtual reality modelling is then used as a tool by the students for the testing and exploration of these theoretical concepts. Compositional exercises involving the creation and manipulation of a family of architectural elements to create form and space within a three dimensional virtual reality environment are carried out using Superscape VRT, a PC based virtual reality modelling system. The project seeks to bring intuitive and immersive computer based design techniques directly into the context of design theory teaching and studio practice, at an early stage in the architectural education process.
series other
last changed 1999/04/08 17:16

_id ddssar9608
id ddssar9608
authors Emdanat, S.S. and Vakalo, E.-G.
year 1996
title Shape grammars: a critical review and some thoughts
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary Shape grammars are generative formalisms that produce shapes in specified styles. Little critical work has been done to examine the assumptions that shape grammar researchers make about architectural form and its generation, the methodology they employ, the underlying formalism they use, and consequently the adequacy of this formalism to describe architectural form. After establishing the criteria for evaluating the adequacy of a given generative formalism, this paper applies them to the evaluation of the shape grammar formalism. The paper demonstrates that, in its present state, shape grammar leaves a great deal to be desired in terms of its descriptive power and its generalizability. The paper concludes by exploring some of the desired characteristics for languages of architectural form.
series DDSS
last changed 2003/08/07 16:36
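
For readers unfamiliar with the formalism this abstract evaluates, the sketch below is a hypothetical, heavily simplified illustration of a single shape-rule application, with the shape stored as a set of line segments. It is not the notation used in the shape grammar literature, and it deliberately sidesteps maximal lines and emergent subshapes, the very features whose treatment the paper calls into question.

    # Toy rule application on a shape stored as a set of directed unit segments.

    def translate(shape, dx, dy):
        return {((x1 + dx, y1 + dy), (x2 + dx, y2 + dy)) for ((x1, y1), (x2, y2)) in shape}

    def apply_rule(shape, lhs, rhs):
        """Find a translated copy of lhs inside shape and replace it with rhs."""
        for (ax, ay), _ in shape:                 # candidate anchor points in the design
            for (bx, by), _ in lhs:               # candidate anchor points in the rule
                dx, dy = ax - bx, ay - by
                placed = translate(lhs, dx, dy)
                if placed <= shape:               # lhs occurs under this translation
                    return (shape - placed) | translate(rhs, dx, dy)
        return shape                              # rule not applicable

    # Rule: a unit square cell grows a copy of itself to its right.
    unit = {((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))}
    rule_lhs = unit
    rule_rhs = unit | translate(unit, 1, 0)
    design = translate(unit, 3, 3)
    design = apply_rule(design, rule_lhs, rule_rhs)
    print(len(design))   # 8 directed segments: the shared edge is stored twice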

_id ecaade2024_230
id ecaade2024_230
authors Fekar, Hugo; Novák, Jan; Míča, Jakub; Žigmundová, Viktória; Suleimanova, Diana; Tsikoliya, Shota; Vasko, Imrich
year 2024
title Fabrication with Residual Wood through Scanning Optimization and Robotic Milling
source Kontovourkis, O, Phocas, MC and Wurzer, G (eds.), Data-Driven Intelligence - Proceedings of the 42nd Conference on Education and Research in Computer Aided Architectural Design in Europe (eCAADe 2024), Nicosia, 11-13 September 2024, Volume 1, pp. 25–34
doi https://doi.org/10.52842/conf.ecaade.2024.1.025
summary The project deals with the use of residual wood of tree stumps and roots through scanning, optimization and robotic milling. Wood logging residue makes up to 50 percent of the harvested tree biomass (Hakkila and Parikka 2002). Among the prevailing strategies are leaving residue on site and recovering residue for bioenergy (Perlack and others 2005). The project explores a third strategy, using parts of the logging residue for fabrication, which may reduce the overall wood logging volume. Furthermore, the approach aims to apply residue in its natural form and to take advantage of the specific local characteristics of wood (Desch and Dinwoodie 1996). The project applies the strategy to working with the stump and roots of an oak tree. Due to considerations of scale, available milling techniques and available resources, the chosen goal of the approach is to create a functioning chair prototype. Among the problems of the approach are the complex shape of the residue, the uneven quality of the wood, varying humidity and contamination with soil. After cleaning and drying, the stump is scanned and a 3D model is created. The 3D model of the stump is confronted with the 3D modelled limits of the goal typology (height, width, length, sitting surface area and overall volume of a chair), and a topological optimization algorithm is used to iteratively reach the desired geometry. Unlike the established topological optimization process, which aims for a minimal volume, the project attempts to achieve the required qualities while removing a minimal amount of wood. Due to the geometric complexity of both the stump and the goal object, milling with a 6-axis industrial robotic arm and a rotary table was chosen as the fabrication method. The object was clamped to a board (then connected to a rotary table) in order to provide a precise location and orientation in 3D space. The milling of the object was divided into two parts, with the seating area milled in higher detail. The overall process of working with residual wood has the potential to be both effective and to present an aesthetic quality based on the individual characteristics of the wood. Further development can integrate a generative tool which would streamline the design and fabrication process further.
keywords Robotic arm milling, Scanning, Residual wood
series eCAADe
email
last changed 2024/11/17 22:05
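
The optimization step described in the abstract — confronting the scanned stump with the chair's modelled limits while removing as little wood as possible — might be pictured with the following Python sketch. It is an illustrative greedy voxel carve, not the project's actual topology optimization; the grids `stump`, `envelope` and `seat_zone` are invented placeholders for the scan, the typology limits and the seating surface.

    import numpy as np

    # Illustrative greedy carve: start from a voxelised stump scan and remove as few
    # voxels as possible until the part fits the chair envelope and exposes a flat seat.

    def carve(stump, envelope, seat_zone):
        part = stump.copy()
        part &= envelope                          # drop material outside the allowed chair volume
        zs = np.nonzero(seat_zone.any(axis=(0, 1)))[0]
        if zs.size:                               # mill the seat zone down to one flat top layer
            seat_level = zs.min()
            above = np.zeros_like(part)
            above[:, :, seat_level + 1:] = True
            part &= ~(above & seat_zone)
        removed = int(stump.sum() - part.sum())
        return part, removed

    rng = np.random.default_rng(0)
    stump = rng.random((40, 40, 40)) > 0.3            # fake scan occupancy
    envelope = np.ones((40, 40, 40), dtype=bool)      # height/width/length limits of the chair
    envelope[:, :, 30:] = False
    seat_zone = np.zeros_like(envelope)
    seat_zone[10:30, 10:30, 20:] = True               # where the sitting surface must appear
    part, removed = carve(stump, envelope, seat_zone)
    print(part.sum(), "voxels kept,", removed, "removed")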

_id 3386
authors Gavin, L., Keuppers, S., Mottram, C. and Penn, A.
year 2001
title Awareness Space in Distributed Social Networks
source Proceedings of the Ninth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-7023-6] Eindhoven, 8-11 July 2001, pp. 615-628
summary In the real work environment we are constantly aware of the presence and activity of others. We know when people are away from their desks, whether they are doing concentrated work, or whether they are available for interaction. We use this peripheral awareness of others to guide our interactions and social behaviour. However, when teams of workers are spatially separated we lose 'awareness' information and this severely inhibits interaction and information flow. The Theatre of Work (TOWER) aims to develop a virtual space to help create a sense of social awareness and presence to support distributed working. Presence, status and activity of other people are made visible in the theatre of work and allow one to build peripheral awareness of the current activity patterns of those with whom we do not share space in reality. TOWER is developing a construction set to augment the workplace with synchronous as well as asynchronous awareness. Current, synchronous activity patterns and statuses are played out in a 3D virtual space through the use of symbolic acting. The environment itself, however, is automatically constructed on the basis of the organisation's information resources and is in effect an information space. The location of the symbolic actor in the environment can therefore represent the focus of that person's current activity. The environment itself evolves to reflect historic patterns of information use and exchange, and becomes an asynchronous representation of the past history of the organisation. A module that records specific episodes from the synchronous event cycle as a Docudrama forms an asynchronous information resource to give a history of team work and decision taking. The TOWER environment is displayed using a number of screen-based and ambient display devices. Current status and activity events are supplied to the system using a range of sensors both in the real environment and in the information systems. The methodology has been established as a two-stage process. The 3D spatial environment will be automatically constructed or generated from some aspect of the pre-existing organisational structure or its information resources or usage patterns. The methodology must be extended to provide means for that structure to grow and evolve in the light of patterns of actual user behaviour in the TOWER space. We have developed a generative algorithm that uses a cell aggregation process to transcribe the information space into a 3D space. In stage 2 that space was analysed using space syntax methods (Hillier & Hanson, 1984; Hillier 1996) to allow the properties of permeability and intelligibility to be measured, and these were then fed back into the generative algorithm. Finally, these same measures have been used to evaluate the spatialised behaviour that users of the TOWER space show, and will be used to feed this back into the evolution of the space. The stage of transcription from information structure to 3D space through a generative algorithm is critical, since it is this stage that allows neighbourhood relations to be created that are not present in the original information structure. It is these relations that could be expected to help increase social density.
keywords Algorithmic Form Generation, Distributed Workgroups, Space Syntax
series CAAD Futures
email
last changed 2006/11/07 07:22
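
The cell-aggregation step the abstract mentions — transcribing an information structure into a 3D space so that information links become spatial neighbourhoods — could look roughly like the Python sketch below. This is one plausible reading under the assumption that the information space is a graph; the space syntax analysis and the feedback loop are not shown.

    from collections import deque

    # Each information item claims a grid cell adjacent to an already placed item it
    # is linked to, so links in the information structure become spatial adjacencies.

    NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def aggregate(info_graph, seed):
        """info_graph: {item: [linked items]}; returns {item: (x, y, z) cell}."""
        placed = {seed: (0, 0, 0)}
        occupied = {(0, 0, 0)}
        queue = deque([seed])
        while queue:
            current = queue.popleft()
            cx, cy, cz = placed[current]
            for neighbour in info_graph.get(current, []):
                if neighbour in placed:
                    continue
                for dx, dy, dz in NEIGHBOURS:     # take the first free adjacent cell
                    cell = (cx + dx, cy + dy, cz + dz)
                    if cell not in occupied:
                        placed[neighbour] = cell
                        occupied.add(cell)
                        queue.append(neighbour)
                        break
        return placed

    docs = {"intranet": ["projectA", "projectB"], "projectA": ["reportA1"], "projectB": []}
    print(aggregate(docs, "intranet"))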

_id 1aa5
authors Huang, X., Gu, P. and Zernicke, R.
year 1996
title Localization and comparison of two free-form surfaces
source Computer-Aided Design, Vol. 28 (12) (1996) pp. 1017-1022
summary Comparison of two free-form surfaces based on discrete data points is of paramount importance for reverse engineering. It can be used to assess the accuracy of the reconstructed surfaces and to quantify the difference between two such surfaces. The entire process involves three main steps: data acquisition, 3D feature localization and quantitative comparison. This paper presents models and algorithms for 3D feature localization and quantitative comparison. Complex free-form surfaces are represented by bicubic parametric spline surfaces using discrete points. A simple yet effective pseudoinverse algorithm was developed and implemented for localization. It consists of two iterative operations, namely, constructing a pseudo transformation matrix and point matching. A computing algorithm was developed to compare two such surfaces using optimization techniques. Since this approach does not involve solving non-linear equations for the parameters of positions and orientations, it is fast and robust. The algorithm was implemented and tested with several examples. It is effective and can be used in industry for sculptured surface comparison.
keywords Free-Form Sculptured Surface, Localization, Point Matching, Surface Comparison
series journal paper
last changed 2003/05/15 21:33
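
The localization procedure summarised above alternates point matching with the construction of a transformation, then reports deviations between the two surfaces. The Python sketch below illustrates that localize-then-compare idea with a standard SVD-based rigid fit in place of the paper's pseudo-inverse formulation; the point sets are synthetic.

    import numpy as np

    # Alternate nearest-point matching with a least-squares rigid fit, then report
    # residual deviations between the two surfaces' sample points.

    def best_rigid_fit(P, Q):
        """Rotation R and translation t minimising ||(P @ R.T + t) - Q||."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cQ - R @ cP

    def localize(measured, reference, iterations=20):
        pts = measured.copy()
        for _ in range(iterations):
            idx = np.argmin(((pts[:, None, :] - reference[None, :, :]) ** 2).sum(-1), axis=1)
            R, t = best_rigid_fit(pts, reference[idx])
            pts = pts @ R.T + t
        idx = np.argmin(((pts[:, None, :] - reference[None, :, :]) ** 2).sum(-1), axis=1)
        return pts, np.linalg.norm(pts - reference[idx], axis=1)

    rng = np.random.default_rng(1)
    reference = rng.random((200, 3))                      # points sampled on the nominal surface
    angle = 0.15                                          # gentle misalignment for the test
    true_R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    measured = reference @ true_R.T + np.array([0.05, -0.04, 0.03])
    aligned, dev = localize(measured, reference)
    print("max deviation after localization:", dev.max())  # should be small here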

_id a026
authors Nagakura, Takehiko
year 1996
title Form Processing: A System for Architectural Design
source Harvard University
summary This thesis introduces a new approach to developing software for formal synthesis in architectural design. It presents theoretical foundations, describes prototype specifications for computable implementation, and illustrates some examples. The approach derives from the observation that architects explore ideas through the use of sequences of drawings. Architects derive each drawing in a sequence from its predecessor by executing some transformation on a portion of the drawing. Thus, a formal design state is established by a sequence of drawings with historical information about their construction through progressive transformations. The proposed system allows an architect to develop a design in three ways. First, a new transformation can be added to a current sequence of drawings. Second, existing sequences can be edited by exchanging their subset sequences. Third, an existing sequence can be revised parametrically by assigning new values to its design variables. The system implements scripts that specify categories of shapes and transformations between any two shape categories. When an instance of a shape category is found in a design, a transformation can replace it with an instance of another shape category. Recursive application of a given set of transformations to an initial shape instance produces a sequence of drawings that represents a formal design state. The system encodes this formal design state as an assembly of all the shape instances used and their relationships (nesting, emergent and replacement). Furthermore, this assembly, called a construction graph, allows the existing sequences to be edited efficiently by exchanging subsets and to be revised parametrically. The advantage of this approach as demonstrated in the examples is that it allows intuitive, rapid and interactive construction of complex designs. Moreover, design knowledge can be captured by scripts that depict heuristic shapes and transformations as well as by assembled construction graphs which depict cases of formal design. Such a reusable and expandable knowledge base is essential for assisting disciplined and creative architectural design.
keywords Computer Software Development; Architectural Design; Data Processing
series thesis:PhD
email
last changed 2003/02/12 22:37
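
The "construction graph" described in the abstract records shape instances together with their nesting, emergent and replacement relationships. A hypothetical minimal encoding in Python, with invented categories and parameters, might be:

    from dataclasses import dataclass, field

    # Shape instances as nodes, with typed edges for the three relationships named
    # in the abstract (nesting, emergent, replacement).

    @dataclass
    class ShapeInstance:
        category: str                 # the shape category (script) the instance belongs to
        params: dict = field(default_factory=dict)

    @dataclass
    class ConstructionGraph:
        nodes: list = field(default_factory=list)
        edges: list = field(default_factory=list)   # (kind, from_index, to_index)

        def add(self, instance):
            self.nodes.append(instance)
            return len(self.nodes) - 1

        def relate(self, kind, src, dst):
            assert kind in {"nesting", "emergent", "replacement"}
            self.edges.append((kind, src, dst))

        def history(self, index):
            """Trace the chain of replacements that produced a given instance."""
            chain = [index]
            while True:
                prev = [s for k, s, d in self.edges if k == "replacement" and d == chain[-1]]
                if not prev:
                    return list(reversed(chain))
                chain.append(prev[0])

    g = ConstructionGraph()
    court = g.add(ShapeInstance("courtyard", {"w": 12, "d": 12}))
    wing = g.add(ShapeInstance("L-wing"))
    g.relate("replacement", court, wing)   # the second drawing was derived from the first
    print(g.history(wing))                 # [0, 1]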

_id sigradi2023_108
id sigradi2023_108
authors Passos, Aderson, Jorge, Luna, Cavalcante, Ana, Sampaio, Hugo, Moreira, Eugenio and Cardoso, Daniel
year 2023
title Urban Morphology and Solar Incidence in Public Spaces - an Exploratory Correlation Analysis Through a CIM System
source García Amen, F, Goni Fitipaldo, A L and Armagno Gentile, Á (eds.), Accelerated Landscapes - Proceedings of the XXVII International Conference of the Ibero-American Society of Digital Graphics (SIGraDi 2023), Punta del Este, Maldonado, Uruguay, 29 November - 1 December 2023, pp. 1655–1666
summary The walkability of open spaces has been highlighted in current discussions about the production of designed environments in urban contexts (Matan, 2011). To contribute to this theme, this work selects the environmental comfort of open spaces as its element of study. The production of urban space was investigated, specifically in regard to urban morphology, understanding that city design directly influences environmental comfort (Jacobs, 1996). This work addresses the geographic context of low latitudes, specifically in hot and humid climate zones of Brazil, and, in this context, according to NBR 15220 (national performance standards), shading is one of the main comfort strategies, so solar incidence was the approached environmental phenomenon. Thus, this work presents a digital system that performs exploratory analysis on the correlations between urban form indicators and environmental performance indicators, specifically solar incidence. The method consists of three steps: urban form modeling (1), indicator measurement (2) and correlation analysis (3). In the first stage, different spatial sections of a city in Brazil were represented in the digital environment (1). This work’s implementation instrument is based on a City Information Modeling framework (Beirao et al., 2012). Visual Programming Interface (VPI) and Geographic Information Systems (GIS) tools were used, in addition to a Relational Database Management System (RDBMS). Then, for each urban clipping, the values of morphological indicators and the incidence of solar radiation were measured (2). Based on the values of the indicators, an exploration of their correlation was carried out by statistical methods (3). The results of the correlation analysis and their correspondent scatter plots are presented. Finally, possible applications of the results for the creation of prescriptive urban planning systems are discussed, seeking to promote a sustainable urban environment.
keywords Urban planning, Environmental comfort, Walkability, Urban morphology, Statistical methods.
series SIGraDi
email
last changed 2024/03/08 14:09
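
Step (3) of the abstract is an exploratory correlation analysis between urban form indicators and solar incidence. In miniature, and with invented indicator names and values standing in for the GIS/VPI/RDBMS pipeline, that step amounts to something like:

    import math

    # Pearson's r between each (invented) urban-form indicator and an (invented)
    # solar-incidence indicator, one value per modelled urban clipping.

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    clippings = [
        {"built_density": 0.62, "mean_height": 18.0, "street_ratio": 0.21, "solar_kwh_m2": 4.1},
        {"built_density": 0.44, "mean_height": 9.0,  "street_ratio": 0.30, "solar_kwh_m2": 5.0},
        {"built_density": 0.71, "mean_height": 24.0, "street_ratio": 0.18, "solar_kwh_m2": 3.6},
        {"built_density": 0.35, "mean_height": 6.0,  "street_ratio": 0.35, "solar_kwh_m2": 5.4},
    ]
    solar = [c["solar_kwh_m2"] for c in clippings]
    for indicator in ("built_density", "mean_height", "street_ratio"):
        r = pearson([c[indicator] for c in clippings], solar)
        print(f"{indicator}: r = {r:+.2f}")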

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example, if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes simply as closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometres along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And many of the methods I used, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons (Figure 1: Mandala bred with an array of regular polygons). I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic (Figure 2: Mandala interpreted with arabesques; Figure 3: Trellis interpreted with "graphic ivy"; Figure 4: Regular dots interpreted as "sparks"). 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating web graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in the future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character that others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his web site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise its users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
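
The breeding experiments described in section 1.2 of the abstract treat a closed polygon's point list as its "genes". The Python sketch below reproduces the general kind of scheme the author reports trying — resample both parent outlines to a common length and blend the matched points — not the exact method used in Gliftic; as he notes, such blends tend to lose distinct family characteristics.

    import math

    # "Breed" two closed outlines by resampling each parent to the same number of
    # boundary points and blending the matched points.

    def resample(poly, n):
        """Return n points evenly spaced along the closed polygon's perimeter."""
        pts = poly + [poly[0]]
        seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
        total = sum(seg)
        out, i, acc = [], 0, 0.0
        for k in range(n):
            target = total * k / n
            while acc + seg[i] < target:
                acc += seg[i]
                i += 1
            t = (target - acc) / seg[i]
            (x1, y1), (x2, y2) = pts[i], pts[i + 1]
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
        return out

    def breed(parent_a, parent_b, weight=0.5, n=100):
        a, b = resample(parent_a, n), resample(parent_b, n)
        return [((1 - weight) * ax + weight * bx, (1 - weight) * ay + weight * by)
                for (ax, ay), (bx, by) in zip(a, b)]

    circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100)) for k in range(100)]
    square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
    child = breed(circle, square)        # a rounded-square child outline
    print(len(child), child[0])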

_id ddssar9633
id ddssar9633
authors Szalapaj, Peter and Kane, Andrew
year 1996
title Techniques of Superimposition
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary This paper addresses the issues of 2-D and 3-D image manipulation in the context of a Computational Design Formulation System. The central feature of such a system is the ability to bring together two or more design objects in the same reference space for the purpose of analysis. Studies of traditional design methods have revealed the effectiveness of this technique of superimposition. This paper describes ways in which superimposition can be achieved, and, in particular, focuses on a range of domain-independent knowledge-based graphical operators that enable the decomposition of complex design forms into simpler aspects (secondary models) that can then be superimposed and/or analysed from a design-theoretic point of view. Examples of domain-independent knowledge-based graphical operators include object selection, planar bisection, 2-D closure (the grouping of lines into regions), aggregation (the decomposition of 2-D regions into aggregations of lines), spatial bisection, 3-D closure (the grouping of 2-D regions into volumes), and 3-D aggregation (the decomposition of volumes into aggregations of 2-D regions). The representation of these operators is dependent upon the notion of a parameterisable volume, thus avoiding the need for translations between multiple representations of graphical objects by providing a common representation form for all objects. Secondary models can therefore subsequently be manipulated either through subtractive procedures (e.g. carving voids from solids), or by additive ones (e.g. assembling given design elements), or by other means such as transformation or distortion. The same techniques of superimposition can also be used to support the visualisation of design forms in two ways: by the juxtaposition of plans and sections with the 3-D form, and by the multiple superimposition of alternative design representations, e.g. structural schematic, parti schematic, volumetric schematic and architectural model.
keywords Design Formulation, Superimposition, Primary Model, Secondary Model, Parameterisable Volume
series DDSS
last changed 2003/08/07 16:36

_id ga0024
id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence, in 1991, at the age of 8, being the direct consequence of years of attempts by a group of computer science professionals to use digital computer technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of “Littérature potentielle” deployed by Oulipo in France and Oplepo in Italy (see “La Littérature potentielle (Créations Re-créations Récréations)”, published in France by Gallimard in 1973). During the last decade, TEAnO has been involved in the generation of “artistic goods” in aesthetic domains such as literature, music, theatre and painting. In all those artefacts the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets or to generate antonymic music) and, sometimes, it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered as “manufactured” goods. A great part of such creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the use of the digital computer's massive combinatory power: S+n, edge extraction, phonetic manipulation, re-writing of well known masterpieces, random generation of plots, etc. Regardless of these apparently simple underlying generation mechanisms, the systematic use of computer-based tools, as well as the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer-based generation of aesthetic goods: (1) the deep structure of an aesthetic work persists even through the more “destructive” manipulations (such as the antonymic transformation of the melody and lyrics of a music work) and becomes evident as a sort of profound, earliest and distinctive constraint; (2) the intensive flux of computer-generated “raw” material seems to confirm and to bring to our attention the existence of what Walter Benjamin indicated as the different way in which nature talks to a camera and to our eye, and Franco Vaccari called “technological unconsciousness”. Essential references: R. Campagnoli, Y. Hersant, “Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)”, 1985; R. Campagnoli, “Oupiliana”, 1995; TEAnO, “Quaderno n. 2 Antologia di letteratura potenziale”, 1996; W. Benjamin, “Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit”, 1936; F. Vaccari, “Fotografia e inconscio tecnologico”, 1994
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id ddssar9620
id ddssar9620
authors Koutamanis, Alexander
year 1996
title Elements and coordinating devices in architecture: An initial formulation
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary Design representations of the built environment are essentially atomistic. A design is represented by its atomic components which may vary according to abstraction level, their properties and, if possible, their relationships. The utility of such representations has been amply demonstrated in academic research. However, the transition to practice means a substantial growth of the size of these representations in order to cover the many abstraction levels and the multiple aspects involved in the design and the management of the built environment. In most cases the complexity of larger representations renders them unmanageable for both computers and humans. The paper outlines an approach which enriches the atomistic basis of the representation with connected but independent coordinating devices. This facilitates the transformation of the basic relational representations into multilevel structures where each level corresponds to different aspects and abstraction scales. Coordinating devices are instrumental for the representation of multilateral relationships and abstract spatial schemata which precede or supersede the placement and arrangement of elements.
series DDSS
last changed 2003/08/07 16:36

_id ddssar9601
id ddssar9601
authors Achten, H.H., Bax, M.F.Th. and Oxman, R.M.
year 1996
title Generic Representations and the Generic Grid: Knowledge Interface, Organisation and Support of the (early) Design Process
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary Computer Aided Design requires the implementation of architectural issues in order to support the architectural design process. These issues consist of elements, knowledge structures, and design processes that are typical for architectural design. The paper introduces two concepts that aim to define and model some of such architectural issues: building types and design processes. The first concept, the Generic grid, will be shown to structure the description of designs, provide a form-based hierarchical decomposition of design elements, and to provide conditions to accommodate concurrent design processes. The second concept, the Generic representation, models generic and typological knowledge of building types through the use of graphic representations with specific knowledge contents. The paper discusses both concepts and will show the potential of implementing Generic representations on the basis of the Generic grid in CAAD systems.
series DDSS
last changed 2003/11/21 15:15

_id c204
authors Aleksander Asanowicz
year 1996
title Teaching and Learning - Full Brainwash
source Education for Practice [14th eCAADe Conference Proceedings / ISBN 0-9523687-2-2] Lund (Sweden) 12-14 September 1996, pp. 51-54
doi https://doi.org/10.52842/conf.ecaade.1996.051
summary We often speak of changes in the design process due to the application of computers, but in my opinion we more often speak of a lack of changes. Let us hope that some day we will be able to witness full integrity and compatibility between the design process and the tools applied in it. Quite possibly such integrity may occur in cyberspace. Nevertheless, before that can happen, some changes are needed in the teaching methods at faculties of architecture, where, despite the great amount of computer equipment in use, students are still being taught as in the XIX century. In terms of achieved results this proves ineffective, because an application of chalk and blackboard alone will always lose to new media, which allow the visual perception of dinosaurs in Jurassic Park. Our civilisation is an iconographic one, and that is why teaching methods are about to change. An application of the computer as simply a slide projector seems far too expensive. New media demand a new process, and a new process demands new media. Let us hope that this can be achieved in cyberspace as a combination of classic ways of teaching, hypertext, multimedia, virtual reality and a new teaching methodology (as used in the Berlitz English School - full brainwash). Several years ago at our faculty we experimentally undertook and applied an Integrated Design Teaching Method. During the design process of an object, a student simultaneously learnt all aspects and functions of the object being designed, i.e. its structure, piping and wiring, material cost, and even the historic evolution of its form and function. Unfortunately that concept was too extravagant for the seventies in our reality. At present, due to the wide implementation of new media and tools in the design process, we have come to consider reimplementing the IDTM.
series eCAADe
email
last changed 2022/06/07 07:54

_id 6ec6
authors Alsayyad, Nezar, Elliott, Ame and Kalay, Yehuda
year 1996
title Narrative Models: A Database Approach to Modeling Medieval Cairo
source Design Computation: Collaboration, Reasoning, Pedagogy [ACADIA Conference Proceedings / ISBN 1-880250-05-5] Tucson (Arizona / USA) October 31 - November 2, 1996, pp. 247-254
doi https://doi.org/10.52842/conf.acadia.1996.247
summary This paper explores the use of three-dimensional simulations to investigate transformations of urban form in medieval Cairo, and lessons about using computers to support historical visualization. Our first attempt to create a single extremely detailed model of Cairo proved unworkable. From this experience we developed a database approach to organizing modeling projects of complex urban environments. The database consists of several complete models at different levels of abstraction. This approach has three advantages over the earlier one: the model is never viewed as incomplete, the framework supports both additive and subtractive chronological studies, and finally, the database is viewed as infinitely expandable. Using modeling software as a tool for inquiry into architectural history becomes more feasible with this new approach.
series ACADIA
email
last changed 2022/06/07 07:54

_id d5c8
authors Angelo, C.V., Bueno, A.P., Ludvig, C., Reis, A.F. and Trezub, D.
year 1999
title Image and Shape: Two Distinct Approaches
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 410-415
summary This paper is the result of two research projects carried out in the district of Campeche, Florianópolis, by the Grupo PET/ARQ/UFSC/CAPES. Different aspects and conceptual approaches were used to study the spatial attributes of this district located in the southern part of Santa Catarina Island. The readings and analyses of the two projects were based on graphic pictures built with the use of Corel 7.0 and AutoCAD R14. The first project - "Urban Development in the Island of Santa Catarina: Public Space Study" - examined the urban structures of Campeche based on the Space Syntax theory developed by Hillier and Hanson (1984), which relates form and the social appropriation of public spaces. The second project - "Topoceptive Characterisation of Campeche: The Image of a Locality in Expansion in the Island of Santa Catarina" - based on the methodology developed by Kohlsdorf (1996) and also on the visual analysis proposed by Lynch (1960), identified characteristics of this locality with the specific goal of selecting attributes that contributed to the ideas its population held of the place. The paper consists of an initial exercise of linking these two methods in order to test the complementarity of their analytical tools. Exemplifying the analytical procedures undertaken in the two approaches, the readings done - global (of the locality as a whole) and partial (of parts of the settlement) - are presented and compared.
series SIGRADI
email
last changed 2016/03/10 09:47

_id ddssup9602
id ddssup9602
authors Arentze, T.A., Borgers, A.W.J. and Timmermans, H.J.P.
year 1996
title A knowledge-based model for developing location strategies in a DSS for retail planning
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part two: Urban Planning Proceedings (Spa, Belgium), August 18-21, 1996
summary Most DSS for retail planning are based on impact assessment models to support the evaluation of plan scenarios. This paper introduces a complementary knowledge-based model to also support the earlier stage of formulating plan scenarios. An analysis of the retail planning problem reveals the main lines of the strategies adopted by most Dutch planners and retailers to achieve their goals. A basic strategy that seems to be appropriate in most problem contexts is formulated in the form of a set of decision tables. Each decision table, or system of decision tables, specifies for a problem area the decision rules to identify and analyse problems and to formulate possible actions. The model is implemented in a DSS where it is used in combination with quantitative impact assessment models. A case study in the area of daily goods facilities demonstrates the approach. The major conclusion is that the knowledge-based approach, and in particular the decision table technique, provides interesting possibilities for implementing planning task structures in a DSS environment.
series DDSS
last changed 2003/11/21 15:16
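
The core device of the knowledge-based model is the decision table: condition outcomes mapped, rule by rule, to planning actions. The sketch below shows only the mechanics, with invented conditions and actions rather than the model's actual planning knowledge.

    # A decision table in the abstract's sense: each rule maps a combination of
    # condition outcomes to recommended actions. All names are illustrative.

    DECISION_TABLE = {
        "conditions": ["floorspace_per_capita_low", "vacancy_rate_high"],
        "rules": [   # (condition outcomes) -> recommended actions
            ((True,  False), ["propose_new_centre"]),
            ((True,  True),  ["consolidate_existing_centres"]),
            ((False, True),  ["consolidate_existing_centres"]),
            ((False, False), ["no_action"]),
        ],
    }

    def evaluate(table, facts):
        outcomes = tuple(facts[c] for c in table["conditions"])
        for pattern, actions in table["rules"]:
            if pattern == outcomes:
                return actions
        return []

    print(evaluate(DECISION_TABLE, {"floorspace_per_capita_low": True, "vacancy_rate_high": False}))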

_id e29d
authors Arvesen, Liv
year 1996
title LIGHT AS LANGUAGE
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary With the unlimited supply of electric light, our surroundings may very easily be illuminated too strongly. Too much light is unpleasant for our eyes, and a high level of light in many cases disturbs the conception of form. Just as in a forest, we need shadows, contrasts and variation when we compose with light. If we focus on the term compose, it is natural to conceive our environment as a wholeness. In fact, this is not only aesthetically important, it is true in a physical context. Inspired by old windows, several similar examples have been built in the Trondheim Full-scale Laboratory, where depth is obtained by constructing shelves on each side of the opening. When daylight is fading, indirect artificial light from above gradually lightens the window. The opening is perceived as a space of light both during the day and when it is dark outside.

Another of the built examples at Trondheim University which will be presented is a doctor's waiting room. It is a case study of special interest because it often appears to be a neglected area. Let us start by asking: what do we have in common when we are waiting to go in to a doctor? We are nervous and we sometimes feel miserable. Analysing the situation, we understand the need for an interior that cares for our state of mind. The level of light is important in this situation. Light has to speak softly. Instead of the ordinary strong light in the middle of the ceiling, several spots are selected to light the small tables separating the seats. The separation is supposed to give a feeling of privacy. Through the low row of reflected planes we experience an intimate and warming atmosphere in the room. A special place for children contributes to the total impression of calm. In this corner the insides of some shelves are lit by indirect light, an effect which puts emphasis on the small scale suitable for a child. It also demonstrates the good results of variation. The light setting in this room shows how light is “caught” in two different ways.

keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:34

_id 6c97
authors Asanowicz, Aleksander
year 1996
title Using the Computer in Analysis of Architectural Form
source Approaches to Computer Aided Architectural Composition [ISBN 83-905377-1-0] 1996, pp. 25-34
summary One of the most important aspects of the design process is that design activity is usually conducted with incomplete information. Another important aspect is that design activity is usually based on past experience. As a matter of fact, looking at designers in the early conceptual phases, one thing that appears clear is that, instead of starting from scratch, they spend part of their time thinking about existing design experience, reviewing the literature, and so on. That is why an explicit representation of design knowledge is needed if computers are to be used as an aid in design education and practice. A composition knowledge database will also be helpful during the process of analysing architectural form. It makes it possible to provide answers and explanations, as well as allowing the user to view tutorials illustrating a particular problem. On its basic level such a program will present analyses of architectural objects and abstract forms based on subjective criteria. On its upper level it allows further exploration of various attributes of architectural composition, as well as their influence on the emotional-aesthetic judgements formed during the analysis of architectural form.
series other
last changed 1999/04/08 17:16

_id 63a7
id 63a7
authors Ataman, Osman and Lonnman, Bruce
year 1996
title Introduction to Concept and Form in Architecture: An Experimental Design Studio Using the Digital Media
source Design Computation: Collaboration, Reasoning, Pedagogy [ACADIA Conference Proceedings / ISBN 1-880250-05-5] Tucson (Arizona / USA) October 31 - November 2, 1996, pp. 3-9
doi https://doi.org/10.52842/conf.acadia.1996.003
summary This paper describes the use of digital media in a first year undergraduate architectural design studio. It attempts to address the importance of developing a design process that is redefined by the use of computing, integrating concept and perception. Furthermore, it describes the theoretical foundations and quasi-experiments of a series of exercises developed for beginning design students.
series ACADIA
email
last changed 2022/06/07 07:54

_id ddssar9638
id ddssar9638
authors Bax, M.F.Th. and Trum, H.M.G.J.
year 1996
title A Conceptual Model for Concurrent Engineering in Building Design according to Domain Theory
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary Concurrent engineering is a design strategy in which various designers participate in a co-ordinated parallel process. In this process series of functions are simultaneously integrated into a common form. Processes of this type ask for the identification, definition and specification of relatively independent design fields. They also ask for specific design knowledge designers should master in order to participate in these processes. The paper presents a conceptual model of co-ordinated parallel design processes in which architectural space is simultaneously defined in the intersection of three systems: a morphological or level-bound system, a functional or domain-bound system and a procedural or phase-bound system. Design strategies for concurrent engineering are concerned with process design, a design task which is comparable to the design of objects. For successfully accomplishing this task, knowledge is needed of the structural properties of objects and systems; more specifically of the morphological, functional and procedural levels which condition the design fields from which these objects emerge, of the series of generic forms which condition their appearance and of the typological knowledge which conditions their coherence in the overall process.
series DDSS
last changed 2003/11/21 15:16
