CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 32

_id ddssar9638
id ddssar9638
authors Bax, M.F.Th. and Trum, H.M.G.J.
year 1996
title A Conceptual Model for Concurrent Engineering in Building Design according to Domain Theory
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary Concurrent engineering is a design strategy in which various designers participate in a co-ordinated parallel process. In this process, series of functions are simultaneously integrated into a common form. Processes of this type ask for the identification, definition and specification of relatively independent design fields. They also ask for specific design knowledge that designers should master in order to participate in these processes. The paper presents a conceptual model of co-ordinated parallel design processes in which architectural space is simultaneously defined in the intersection of three systems: a morphological or level-bound system, a functional or domain-bound system and a procedural or phase-bound system. Design strategies for concurrent engineering are concerned with process design, a design task which is comparable to the design of objects. For successfully accomplishing this task, knowledge is needed of the structural properties of objects and systems; more specifically, of the morphological, functional and procedural levels which condition the design fields from which these objects emerge, of the series of generic forms which condition their appearance, and of the typological knowledge which conditions their coherence in the overall process.
series DDSS
last changed 2003/11/21 15:16

_id 8e02
authors Brown, A.G.P. and Coenen, F.P.
year 2000
title Spatial reasoning: improving computational efficiency
source Automation in Construction 9 (4) (2000) pp. 361-367
summary When spatial data is analysed, the computation involved is often very intensive: even by the standards of contemporary technologies, the machine power needed is great and the processing times are significant. This is particularly so in 3-D and 4-D scenarios. What we describe here is a technique which tackles this and associated problems. The technique is founded in the idea of quad-tesseral addressing, a technique which was originally applied to the analysis of atomic structures. It is based on ideas concerning hierarchical clustering developed in the 1960s and 1970s to improve data access time [G.M. Morton, A computer oriented geodetic data base and a new technique in file sequencing, IBM Canada, 1966.], and on atomic isohedral (same-shape) tiling strategies developed in the 1970s and 1980s concerned with group theory [B. Grunbaum, G.C. Shephard, Tilings and Patterns, Freeman, New York, 1987.]. The technique was first suggested as a suitable representation for GIS in the early 1980s, when the two strands were brought together and a tesseral arithmetic applied [F.C. Holroyd, The Geometry of Tiling Hierarchies, Ars Combinatoria 16B (1983) 211–244.; S.B.M. Bell, B.M. Diaz, F.C. Holroyd, M.J.J. Jackson, Spatially referenced methods of processing raster and vector data, Image and Vision Computing 1 (4) (1983) 211–220.; B.M. Diaz, S.B.M. Bell, Spatial Data Processing Using Tesseral Methods, Natural Environment Research Council, Swindon, 1986.]. Here, we describe how that technique can equally be applied to the analysis of environmental interaction with built forms. The way in which the technique deals with the problems described is first to linearise the three-dimensional (3-D) space being investigated. The reasoning applied to that space is then carried out within the same environment as the definition of the problem data. We show, with an illustrative example, how the technique can be applied.
The problem then remains of how to visualise the results of the analysis so undertaken. We show how this has been accomplished so that the 3-D space and the results are represented in a way that facilitates rapid interpretation of the analysis carried out.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id ffe2
authors Carrar, G., Luna, F. and Rajchman, A.
year 1999
title Cúpulas Telefónicas - Mobiliario Urbano, Diseño Industrial aplicado a una empresa de servicios (Telephone Cupolas - Urban Furniture, Industrial Design Applied to a Company of Services)
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 426-409
summary In November 1996, the state telecommunications company called for a national booth design contest. The idea was to use the awarded design shortly afterwards as part of the renovation of the public phone service. Gruppo MDM won the design contest and was contracted to produce the manufacturing technical drawings and a prototype, which was tested during 1997. In 1997, an international bid was held, including the awarded project. Gruppo MDM was contracted for the follow-up of the manufacturing process, including research of suppliers worldwide, ensuring materials arrived on time with the required quality, and verifying local suppliers against deadlines and quality controls according to the specifications.
series SIGRADI
email
last changed 2016/03/10 09:48

_id ga0024
id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence, in 1991, at the age of 8, being the direct consequence of years of attempts by a group of computer science professionals to use digital computer technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of “Littérature potentielle” deployed by Oulipo in France and Oplepo in Italy (see “La Littérature potentielle (Créations Re-créations Récréations)”, published in France by Gallimard in 1973). During the last decade, TEAnO has been involved in the generation of “artistic goods” in aesthetic domains such as literature, music, theatre and painting. In all those artefacts the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets or to generate antonymic music) and, sometimes, it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered as “manufactured” goods. A great part of such creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the use of the digital computer's massive combinatory power: S+n, edge extraction, phonetic manipulation, re-writing of well-known masterpieces, random generation of plots, etc. Regardless of these apparently simple underlying generation mechanisms, the systematic use of computer-based tools, as well as the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer-based generation of aesthetic goods:
(1) the deep structure of an aesthetic work persists even through the more “destructive” manipulations (such as the antonymic transformation of the melody and lyrics of a music work) and becomes evident as a sort of profound, earliest and distinctive constraint; (2) the intensive flux of computer-generated “raw” material seems to confirm and to bring to our attention the existence of what Walter Benjamin indicated as the different way in which nature talks to a camera and to our eye, and what Franco Vaccari called “technological unconsciousness”. Essential references: R. Campagnoli, Y. Hersant, “Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)”, 1985; R. Campagnoli, “Oupiliana”, 1995; TEAnO, “Quaderno n. 2 Antologia di letteratura potenziale”, 1996; W. Benjamin, “Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit”, 1936; F. Vaccari, “Fotografia e inconscio tecnologico”, 1994
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id cf2017_567
id cf2017_567
authors Kim, Ikhwan; Lee, Injung; Lee, Ji-Hyun
year 2017
title The Expansion of Virtual Landscape in Digital Games: Classification of Virtual Landscapes Through Five principles
source Gülen Çagdas, Mine Özkar, Leman F. Gül and Ethem Gürer (Eds.) Future Trajectories of Computation in Design [17th International Conference, CAAD Futures 2017, Proceedings / ISBN 978-975-561-482-3] Istanbul, Turkey, July 12-14, 2017, pp. 567-584.
summary This research established a classification system containing five principles and associated variables to classify the types of virtual landscape in digital games. The principles of the classification are Story, Space Shape, Space and Action Dimension, User Complexity and Interaction Level. With this classification system, our research group identified the most representative types of virtual landscape in the digital game market from 1996 to 2016. Although mathematically there can be 288 types of virtual landscape, only 68 types have been used in the game market in the past twenty years. Among the 68 types, we defined 3 types of virtual landscape as the most representative, based on the growth curve and the number of cases. Those three representative types of virtual landscape are Generating / Face / 3D-3D / Single / Partial, Providing / Chain / 3D-3D / Single / Partial and Providing / Linear / 2D-2D / Single / Partial. With this result, researchers will be able to establish a virtual landscape design framework for future research.
keywords Digital Game, Virtual Landscape, Game Design, Game Classification
series CAAD Futures
email
last changed 2017/12/01 14:38

_id 3105
authors Novak, T.P., Hoffman, D.L., and Yung, Y.-F.
year 1996
title Modeling the structure of the flow experience
source INFORMS Marketing Science and the Internet Mini-Conference, MIT
summary The flow construct (Csikszentmihalyi 1977) has recently been proposed by Hoffman and Novak (1996) as essential to understanding consumer navigation behavior in online environments such as the World Wide Web. Previous researchers (e.g. Csikszentmihalyi 1990; Ghani, Supnick and Rooney 1991; Trevino and Webster 1992; Webster, Trevino and Ryan 1993) have noted that flow is a useful construct for describing more general human-computer interactions. Hoffman and Novak define flow as the state occurring during network navigation which is: 1) characterized by a seamless sequence of responses facilitated by machine interactivity, 2) intrinsically enjoyable, 3) accompanied by a loss of self-consciousness, and 4) self-reinforcing. To experience flow while engaged in an activity, consumers must perceive a balance between their skills and the challenges of the activity, and both their skills and challenges must be above a critical threshold. Hoffman and Novak (1996) propose that flow has a number of positive consequences from a marketing perspective, including increased consumer learning, exploratory behavior, and positive affect.
series other
last changed 2003/04/23 15:50

_id 3125
authors Peyret, F. Bétaille, D. and Hintzy, G.
year 2000
title High-precision application of GPS in the field of real-time equipment positioning
source Automation in Construction 9 (3) (2000) pp. 299-314
summary In the framework of its research concerning real-time positioning and control of road construction equipment, the Laboratoire Central des Ponts et Chaussées carried out a study in 1996 to learn more about the actual vertical accuracy that a real-time kinematic (RTK) global positioning system (GPS) sensor could reach under work-site conditions. This study made wide use of the dedicated testing facility called SESSYL, built to perform high-accuracy, real-time evaluation tests on positioning systems. It was performed in collaboration with the French road contractor COLAS and the Ecole Supérieure des Géomètres et Topographes (ESGT). First, the paper presents the proposed adapted geodetic transformation procedure, compatible with the high accuracy requirements. Then, the main results of a special SESSYL test programme are presented, in which the impacts of several influencing parameters on the vertical accuracy have been carefully examined. The core part of the paper is the analysis of a typical RTK GPS set of data, from which we have tried to extract two different components: a high-frequency noise, rather easy to filter, and a low-frequency bias. This bias, given its good repeatability, can be modelled and used in prediction to improve in real time the raw accuracy of the data. As a full-scale validation of our study, a site experiment is finally described, carried out this time on a real piece of equipment (an asphalt paver) during real roadwork.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:23

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and, briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 
3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometres along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. 
coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques. Figure 3 Trellis interpreted with "graphic ivy". Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. 
Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as: 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. 
There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. 
Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. 
It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 8c51
authors Schmitt, G., Wenz, F., Kurmann, D., Van der Mark, E.
year 1996
title Toward Virtual Reality in Architecture: Concepts and Scenarios from Architectural Space Laboratory
source Presence, Vol. 4, No. 3, July, pp. 267-285
summary Contributed by Bharat Dave (b.dave@architecture.unimelb.edu.au)
keywords 3D City modeling
series other
last changed 2001/06/04 20:23

_id ddssar9601
id ddssar9601
authors Achten, H.H., Bax, M.F.Th. and Oxman, R.M.
year 1996
title Generic Representations and the Generic Grid: Knowledge Interface, Organisation and Support of the (early) Design Process
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary Computer Aided Design requires the implementation of architectural issues in order to support the architectural design process. These issues consist of elements, knowledge structures, and design processes that are typical for architectural design. The paper introduces two concepts that aim to define and model two such architectural issues: building types and design processes. The first concept, the Generic grid, will be shown to structure the description of designs, to provide a form-based hierarchical decomposition of design elements, and to provide conditions to accommodate concurrent design processes. The second concept, the Generic representation, models generic and typological knowledge of building types through the use of graphic representations with specific knowledge contents. The paper discusses both concepts and shows the potential of implementing Generic representations on the basis of the Generic grid in CAAD systems.
series DDSS
last changed 2003/11/21 15:15

_id d9bf
authors Goodchild, N.F., Steyaert, L.T., Parks, B.O., Johnson, C., Maidment, D., Crane, M. and Glendinning, S. (Eds.)
year 1996
title GIS and Environmental Modeling: Progress and Research Issues
source Fort Collins, CO: GIS World Books, pp.451-454
summary With growing pressure on natural resources and landscapes there is an increasing need to predict the consequences of any changes to the environment. Modelling plays an important role in this by helping our understanding of the environment and by forecasting likely impacts. In recent years moves have been made to link models to Geographical Information Systems to provide a means of analysing changes over an area as well as over time. GIS and Environmental Modeling explores the progress made to date in integrating these two software systems. Approaches to the subject are made from theoretical and technical as well as data standpoints. The existing capabilities of current systems are described, along with the important issues of data availability, accuracy and error. Various case studies illustrate this and highlight the common concepts and issues that exist between researchers in different environmental fields. The future needs and prospects for integrating GIS and environmental models are also explored, with developments in both data handling and modelling discussed. The book brings together the knowledge and experience of over 100 researchers from academic, commercial and government backgrounds who work in a wide range of disciplines. The themes followed in the text provide a fund of knowledge and guidance for those involved in environmental modelling and GIS. The book is easily accessible for readers with a basic GIS knowledge, and the ideas and results of the research are clearly illustrated with both colour and black-and-white graphics.
series other
last changed 2003/04/23 15:14

_id 2ca1
authors Montagu, A. and Bermudez, J.
year 1998
title Datarq: The Development of a Website of Modern Contemporary Architecture
doi https://doi.org/10.52842/conf.ecaade.1998.x.p7a
source Computerised Craftsmanship [eCAADe Conference Proceedings] Paris (France) 24-26 September 1998
summary The pedagogic approach in the architectural field is undergoing a deep change, taking into consideration the impact produced mainly by CAD and multimedia procedures. An additional view to be taken into consideration is the challenge produced by the influence of advanced IT, which since 1990-92 has positively affected the exchange of information among people in the academic environment. Several studies confirm this hypothesis, from the wide cultural spectrum when the digitalization process was emerging as an alternative way of data processing (Bateson 1976) to the pedagogical-computational side analysed by Papert (1996). One of the main characteristics indicated by S. Papert (op. cit.) is the idea of "self-teaching", which students everywhere rely on due to the constant growth of "friendly" software and the decreasing costs of hardware. Another consequence pointed out by S. Papert (op. cit.) is that students at home will most probably have more up-to-date equipment than most school computer labs in general. Therefore, the main hypothesis of this paper is: "if we are able to combine usual tutorial design methods with the concept of "self-teaching" regarding the paradigmatic architectural models that are used in practically all the schools of architecture (Le Corbusier, F.L. Wright, M. van der Rohe, M. Botta, T. Ando, etc.) using a Web site available to everybody, what we are doing is expanding the existing knowledge in the libraries and fulfilling the future requirements of the new generations of students".
series eCAADe
email
more http://www.paris-valdemarne.archi.fr/archive/ecaade98/html/35montagu/index.htm
last changed 2022/06/07 07:50

_id c37f
authors Resnick, M., Bruckman, A. and Martin, F.
year 1996
title Pianos Not Stereos: Creating Computational Construction Kits
source Interactions, 3 (6)
summary The stereo has many attractions: it is easier to play and it provides immediate access to a wide range of music. But "ease of use" should not be the only criterion. Playing the piano can be a much richer experience. By learning to play the piano, you can become a creator (not just a consumer) of music, expressing yourself musically in ever-more complex ways. As a result, you can develop a much deeper relationship with (and deeper understanding of) music. So too with computers. In the field of educational technology, there has been too much emphasis on the equivalent of stereos and CDs, and not enough emphasis on computational pianos. In our research group at the MIT Media Lab, we are developing a new generation of "computational construction kits" that, like pianos, enable people to express themselves in ever-more complex ways, deepening their relationships with new domains of knowledge.
series journal paper
last changed 2003/04/23 15:50

_id 149d
authors Rosenman, M.A.
year 1996
title The generation of form using an evolutionary approach
source J.S. Gero and F. Sudweeks (eds), Artificial Intelligence in Design '96, 643-662
summary Design is a purposeful knowledge-based human activity whose aim is to create form which, when realized, satisfies the given intended purposes. Design may be categorized as routine or non-routine, with the latter further categorized as innovative or creative. The less knowledge there is about the relationships between the requirements and the form that satisfies them, the more a design problem tends towards creative design. Thus, for non-routine design, a knowledge-lean methodology is necessary. Natural evolution has produced a large variety of forms well-suited to their environment, suggesting that an evolutionary approach could provide meaningful design solutions in a non-routine design environment. This work investigates the possibilities of using an evolutionary approach based on a genotype which represents design grammar rules for instructions on locating appropriate building blocks. A decomposition/aggregation hierarchical organization of the design object is used to overcome combinatorial problems and to maximize parallelism in implementation.
series other
last changed 2003/04/23 15:50
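The evolutionary mechanics Rosenman describes can be illustrated with a deliberately simplified sketch: here the genotype is a raw list of grid cells where unit blocks are placed, not the design-grammar rules the paper uses, and all names and parameters are hypothetical.

```python
import random

def evolve_form(target, grid=4, genes=5, pop_size=30, generations=80, seed=3):
    """Toy evolutionary form generator (hypothetical simplification of the
    paper's approach): a genotype is a list of grid-cell indices where unit
    blocks sit; fitness rewards covering the target footprint and penalises
    blocks that spill outside it."""
    rng = random.Random(seed)
    cells = grid * grid
    target = set(target)

    def fitness(genotype):
        placed = set(genotype)
        return len(placed & target) - len(placed - target)

    def mutate(genotype):
        child = list(genotype)
        child[rng.randrange(genes)] = rng.randrange(cells)  # relocate one block
        return child

    population = [[rng.randrange(cells) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]  # elitist truncation selection
        population = survivors + [mutate(rng.choice(survivors)) for _ in survivors]
    best = max(population, key=fitness)
    return best, fitness(best)
```

Because selection is elitist, the best fitness is monotone over generations; a grammar-rule genotype would replace the cell list with rule choices decoded into block placements.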

_id d5c8
authors Angelo, C.V., Bueno, A.P., Ludvig, C., Reis, A.F. and Trezub, D.
year 1999
title Image and Shape: Two Distinct Approaches
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 410-415
summary This paper is the result of two research projects carried out in the district of Campeche, Florianópolis, by the Grupo PET/ARQ/UFSC/CAPES. Different aspects and conceptual approaches were used to study the spatial attributes of this district, located in the southern part of Santa Catarina Island. The readings and analyses of the two projects were based on graphic pictures built with the use of Corel 7.0 and AutoCAD R14. The first project – "Urban Development in the Island of Santa Catarina: Public Space Study" – examined the urban structures of Campeche based on the Spatial Syntax Theory developed by Hillier and Hanson (1984), which relates form and social appropriation of public spaces. The second – "Topoceptive Characterisation of Campeche: The Image of a Locality in Expansion in the Island of Santa Catarina" – based on the methodology developed by Kohlsdorf (1996) and also on the visual analysis proposed by Lynch (1960), identified characteristics of this locality with the specific goal of selecting attributes that contributed to the image of the place held by its population. The paper consists of an initial exercise of linking these two methods in order to test the complementarity of their analytical tools. Exemplifying the analytical procedures undertaken in the two approaches, the readings done – global (of the locality as a whole) and partial (of parts of the settlement) – are presented and compared.
series SIGRADI
email
last changed 2016/03/10 09:47
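The core measure behind the Hillier and Hanson analysis cited above is topological depth in a connectivity graph. A minimal sketch (function name and graph encoding are ours; real space-syntax integration adds a normalisation step omitted here):

```python
from collections import deque

def mean_depth(graph, root):
    """Mean topological depth of all spaces from `root` in a connectivity
    graph. `graph` maps each space to the spaces it connects to directly;
    a lower mean depth means the space is more "integrated" in Hillier and
    Hanson's sense."""
    depth = {root: 0}
    queue = deque([root])
    while queue:            # breadth-first search assigns shortest depths
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    others = [d for node, d in depth.items() if node != root]
    return sum(others) / len(others)
```

On a simple chain of five spaces A-B-C-D-E, the central space C has mean depth 1.5 while the end space A has 2.5, reproducing the intuition that central spaces are the most integrated.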

_id 7135
authors Arumi-Noe, F.
year 1996
title Algorithm for the geometric construction of an optimum shading device
source Automation in Construction 5 (3) (1996) pp. 211-217
summary Given that there is a need to shade a window from the summer sun and also a need to expose it to the winter sun, this article describes an algorithm to automatically design a geometric construct that satisfies both requirements. The construct obtained represents the minimum solution to the simultaneous requirements. The window may be described by an arbitrary convex polygon, oriented in any direction, and placed at any chosen latitude. The algorithm consists of two sequential steps: first, to find a winter solar funnel surface; and second, to clip the surface subject to the summer shading conditions. The article introduces the design problem, illustrates the results through two examples, outlines the logic of the algorithm and includes the derivation of the mathematical relations required to implement the algorithm. This work is part of the MUSES project, a long-term research effort to integrate Energy Consciousness with Computer Graphics in Architectural Design.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 9e3d
authors Cheng, F.F., Patel, P. and Bancroft, S.
year 1996
title Development of an Integrated Facilities Information System Based on STEP - A Generic Product Data Model
source The Int. Journal of Construction IT 4(2), pp.1-13
summary A facility management system must be able to accommodate dynamic change and be based on a set of generic tools. The next generation of facility management systems should be STEP-conforming if they are to lay the foundation for the fully integrated information management and data knowledge engineering that will be demanded in the near future in the new era of advanced site management. This paper describes an attempt to meet such a specification for an in-house system. The proposed system incorporates the latest technological advances in information management and processing. It pioneered an exchange architecture which represents a new class of system, in which the end-user has, for the first time, total flexibility and control over data never before automated in this way.
series journal paper
last changed 2003/05/15 21:45

_id 78f0
authors Cotton, J.F.
year 1996
title Solid modeling as a tool for constructing solar envelopes
source Automation in Construction 5 (3) (1996) pp. 185-192
summary This paper presents a method for constructing solar envelopes in site planning using a 3D solid-modeling program. The solar envelope for a site is a mechanism for ensuring that planning regulations on the solar access rights of others are observed. In this application, solid modeling offers the advantage of being a general-purpose tool having the capability to handle sets of site conditions that are complex. The paper reviews the concept of solar envelopes and demonstrates the method of application of solar envelope construction to a site. Techniques for displaying the constraints on building sections imposed by a solar envelope are presented.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
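The solar envelope concept Cotton applies with a solid modeller can be reduced to a one-sun-angle sketch: each point on the site may rise only as high as it can without casting a shadow across a protected boundary. The following toy grid version is our simplification (a real envelope intersects the constraints from many sun positions), and all names are illustrative:

```python
import math

def solar_envelope(rows, cols, cell, sun_altitude_deg, boundary_row=0):
    """Maximum allowed height per grid cell so that no cell shades the
    protected site boundary at the given sun altitude.

    Simplifying assumption: the sun lies directly beyond `boundary_row`,
    so a cell d metres from the boundary may rise to d * tan(altitude)
    without its shadow reaching across it."""
    tan_alt = math.tan(math.radians(sun_altitude_deg))
    return [[max(0.0, (r - boundary_row) * cell) * tan_alt
             for _ in range(cols)]
            for r in range(rows)]
```

With a 45 degree sun, allowed height simply equals distance from the boundary; lower winter sun angles flatten the whole envelope, which is why envelopes are computed from the worst-case sun positions the regulation protects.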

_id ab1e
authors Coyne, R., McLaughlin, S., Newton, S., Sudweeks, F., Haynes, D. and Jumani, A.
year 1996
title Report on Computers in Practice: A survey of computers in architectural practice
source UK: University of Edinburgh
summary This is a report on the dynamic relationship between information technology (IT) and architectural practice. The report summarises the attitudes and opinions of practitioners gathered through extensive recorded interviews, and compares these attitudes and opinions with the findings of other studies. The report is compiled from the point of view that appropriation precedes theory as the model for understanding. We thereby connect what is going on in IT with concepts currently under discussion in postmodern thought and in the tradition of philosophical pragmatism. We identify several of the major options open to practitioners in their use of IT, including practising without computers, substituting computers for traditional tasks, delivering traditional services in an innovative way through IT, and developing new services with IT. We also demonstrate how firms are changing and are being shaped by the market for architectural services. One of the major areas of change is in how IT and related resources are managed. We also consider how the role of the practitioner as an individual in a firm is changing along with changes in IT, and how different prognoses about the future of IT in practice are influenced by certain dominant metaphors. Our conclusion is that IT is best understood and appropriated when it is seen as fitting into a dynamic field or constellation of technologies and practices. Such an orientation enables the reflective practitioner to confront what is really going on as IT interacts with practice.
series report
last changed 2003/04/23 15:50

_id ddssar9613
id ddssar9613
authors de Groot, E.H. and Louwers, F.H.
year 1996
title The TIE-system, a KBS for the Evaluation of Thermal Indoor office Environments
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary A Knowledge-Based System [KBS] for the evaluation of Thermal Indoor office Environments [TIE] (in the Netherlands) was the product of a one-year project, undertaken by researchers of the Physical Aspects of the Built Environment group [FAGO] in cooperation with the Knowledge-Based System Section of the TNO-Building & Construction research Institute in Delft. The objective of the project was to develop a KBS capable of evaluating thermal indoor environments of existing or proposed office building designs. The approach used in this study was based on a traditional method of predicting thermal sensation by calculating Fanger's 'Predicted Mean Vote' [PMV]. PMV is influenced by four environmental parameters of a room: air temperature, radiant temperature, air velocity and relative humidity, and by two personal parameters of the employees: metabolic rate and clothing insulation. The knowledge required to determine these six parameters was placed in KBS databases and tables using a KBS-building tool called Advanced Knowledge Transfer System [AKTS]. By questioning the user, the TIE-system is capable of determining the PMV for a particular office room. The system also provides conclusions and advice on improving the thermal comfort. The TIE-system was a pilot study for the long-term Building Evaluation research project, being undertaken at FAGO, that examines all aspects of office building performance, and in which KBS may play a major role.
series DDSS
last changed 2003/08/07 16:36
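The six-parameter PMV calculation at the heart of the TIE-system follows Fanger's heat-balance model. A sketch of the standard ISO 7730 formulation (argument names and defaults are ours; the constants are the published ones, but treat this as illustrative rather than a validated comfort tool):

```python
import math

def pmv(ta, tr, vel, rh, met=1.2, clo=0.5):
    """Fanger's Predicted Mean Vote per the ISO 7730 formulation.

    ta, tr: air / mean radiant temperature (deg C); vel: air speed (m/s);
    rh: relative humidity (%); met: metabolic rate (met); clo: clothing
    insulation (clo). External work is assumed to be zero."""
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo       # clothing insulation, m2.K/W
    m = met * 58.15         # metabolic rate, W/m2
    fcl = 1.0 + 1.29 * icl if icl <= 0.078 else 1.05 + 0.645 * icl
    hcf = 12.1 * math.sqrt(vel)  # forced-convection coefficient
    taa, tra = ta + 273.0, tr + 273.0

    # Iterate for the clothing surface temperature (damped fixed point)
    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * m + p2 * (tra / 100.0) ** 4
    xn = (taa + (35.5 - ta) / (3.5 * icl + 0.1)) / 100.0
    xf = xn
    hc = hcf
    for _ in range(150):
        xf = (xf + xn) / 2.0
        hc = max(hcf, 2.38 * abs(100.0 * xf - taa) ** 0.25)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
        if abs(xn - xf) < 1e-5:
            break
    tcl = 100.0 * xn - 273.0

    # Heat-loss terms of the comfort equation
    hl1 = 3.05e-3 * (5733.0 - 6.99 * m - pa)            # skin diffusion
    hl2 = 0.42 * (m - 58.15) if m > 58.15 else 0.0      # sweat evaporation
    hl3 = 1.7e-5 * m * (5867.0 - pa)                    # latent respiration
    hl4 = 0.0014 * m * (34.0 - ta)                      # dry respiration
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)   # radiation
    hl6 = fcl * hc * (tcl - ta)                         # convection

    sensation = 0.303 * math.exp(-0.036 * m) + 0.028
    return sensation * (m - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)
```

A KBS such as TIE wraps a calculation like this in question-driven lookups that fill in the six inputs from descriptions of the room and its occupants, then maps the resulting PMV onto advice.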
