CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 475

_id ae1b
authors Zarnowiecka, Jadwiga C.
year 1998
title Chaos, Databases and Fractal Dimension of Regional Architecture
doi https://doi.org/10.52842/conf.ecaade.1998.267
source Computerised Craftsmanship [eCAADe Conference Proceedings] Paris (France) 24-26 September 1998, pp. 267-270
summary Modern research on chaos started in the 1960s with the surprising finding that simple mathematical equations can model systems as complicated as waterfalls. In the 1970s scientists in the USA and in Europe began to find their way through chaos, working in many different spheres of science: mathematics, physics, biology, chemistry, physiology, ecology, economics. Within the next ten years the term 'chaos' had become generally known in science, and scientists gathered in research groups according to their interest in chaos first and their scientific specialities second (Gleick 1996). The objects that described chaos were irregular and ragged in shape; in 1975 Benoit Mandelbrot called them fractals. The fractal dimension that describes fractal objects was also his invention: it is a way to measure quality, the degree of roughness, unevenness and irregularity of a given object. Carl Bovill (1996) showed how fractal geometry can be used in architecture and design, and this prompted me to try to use fractal geometry to deal with regional architecture. Who, or what, is to assess the degree of regionality of a given object? A specially qualified person can state it almost automatically; regionality, however, is in some sense an unmeasurable feature. When working with databases or checking particular projects, procedures for automatically acquiring information about regionality become a necessity.
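The fractal measure Bovill applies to buildings is typically the box-counting dimension. A minimal sketch of the idea (illustrative only, not from the paper; the function names, grid scales and sample outline are assumptions):

```python
import math

def box_count(points, s):
    """Number of s-by-s grid boxes containing at least one point."""
    return len({(int(x // s), int(y // s)) for x, y in points})

def box_dimension(points, s1, s2):
    """Box-counting dimension estimated from counts at two grid scales:
    D = (log N2 - log N1) / (log(1/s2) - log(1/s1))."""
    n1, n2 = box_count(points, s1), box_count(points, s2)
    return (math.log(n2) - math.log(n1)) / (math.log(1 / s2) - math.log(1 / s1))

# A smooth (straight) outline should give D near 1; the rougher an
# elevation outline is, the closer D climbs toward 2.
smooth = [(i / 1000, 0.5) for i in range(1000)]
print(round(box_dimension(smooth, 0.1, 0.01), 2))  # → 1.0
```

Applied to a digitised facade or plan outline, D could serve as the kind of automatically acquired "regionality" indicator the summary calls for.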
series eCAADe
email
more http://www.paris-valdemarne.archi.fr/archive/ecaade98/html/20zarnowiecka/index.htm
last changed 2022/06/07 07:57

_id b27f
authors Campbell, Dace A.
year 1996
title Design in virtual environments using architectural metaphor : a HIT lab gallery
source University of Washington
summary This thesis explores the application and limitations of architectural metaphor in the design of virtual environments. Architecture, whether physical or virtual, is the expression of a society realized as meaningful space. Physical and virtual architecture have their own constraints and context, yet both use architectural organization as a way to order forms and spaces in the environment. Both strive to create meaningful place by defining space, and both must allow the participant to develop a cognitive map to orient and navigate in the space. The lack of physics of time and space in the virtual realm requires special attention and expression of its architecture in order for the participant to cope with transitions. These issues are exemplified by the development of an on-line gallery of virtual environments. Conclusions reached by the development of this design are discussed in the context of orientation, navigation, transition, enclosure, and scale.
keywords Virtual Reality; Human-Computer Interaction
series thesis:MSc
email
more http://www.hitl.washington.edu/publications/campbell/
last changed 2003/02/12 22:37

_id 39fb
authors Langton, C.G.
year 1996
title Artificial Life
source Boden, M. A. (1996). The Philosophy of Artificial Life, 39-94. New York and Oxford: Oxford University Press
summary Artificial Life contains a selection of articles from the first three issues of the journal of the same name, chosen so as to give an overview of the field, its connections with other disciplines, and its philosophical foundations. It is aimed at those with a general background in the sciences: some of the articles assume a mathematical background, or basic biology and computer science. I found it an informative and thought-provoking survey of a field around whose edges I have skirted for years. Many of the articles take biology as their starting point. Charles Taylor and David Jefferson provide a brief overview of the uses of artificial life as a tool in biology. Others look at more specific topics: Kristian Lindgren and Mats G. Nordahl use the iterated Prisoner's Dilemma to model cooperation and community structure in artificial ecosystems; Peter Schuster writes about molecular evolution in simplified test tube systems and its spin-off, evolutionary biotechnology; Przemyslaw Prusinkiewicz presents some examples of visual modelling of morphogenesis, illustrated with colour photographs; and Michael G. Dyer surveys different kinds of cooperative animal behaviour and some of the problems synthesising neural networks which exhibit similar behaviours. Other articles highlight the connections of artificial life with artificial intelligence. A review article by Luc Steels covers the relationship between the two fields, while another by Pattie Maes covers work on adaptive autonomous agents. Thomas S. Ray takes a synthetic approach to artificial life, with the goal of instantiating life rather than simulating it; he manages an awkward compromise between respecting the "physics and chemistry" of the digital medium and transplanting features of biological life. Kunihiko Kaneko looks to the mathematics of chaos theory to help understand the origins of complexity in evolution. 
In "Beyond Digital Naturalism", Walter Fontana, Guenter Wagner and Leo Buss argue that the test of artificial life is to solve conceptual problems of biology and that "there exists a logical deep structure of which carbon chemistry-based life is a manifestation"; they use the lambda calculus to try to build a theory of organisation.
series other
last changed 2003/04/23 15:14

_id ddssup9603
id ddssup9603
authors Bach, Boudewijn and MacGillivray, Trina
year 1996
title Semi-manual design support for increasing railwaystation catchment & sustainable traffic routing
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part two: Urban Planning Proceedings (Spa, Belgium), August 18-21, 1996
summary The shape ('configuration'), location and direction of the pattern of potential trips by foot or bicycle can help decision makers and designers: the shape of such a pattern indicates the potential size of a traffic-calming area (such as a 30 km/h zone); the location of such a pattern refers to the user groups and specific destinations that an urban network should bring within safe reach of designated groups; and the direction of such a pattern, together with its shape and location, points to the best routing to raise the sustainable-traffic modal split or to improve the reach of destinations such as a railway station. The patterns can be generated from the zip codes of user groups with obvious, daily destinations (school children, rail passengers). The next step confronts the theoretical pattern with the layout of streets and the traffic flow, mapping or listing (potential) confrontations between cars and the non-motorised modes, a basis for economical investment in traffic safety. A design can 'model' the analysed pattern(s) into an economical, direct and safe basic (cycle or pedestrian) network. The Dutch traffic consultant "Verkeersadviesbureau Diepens & Okkema" and the Faculty of Architecture, Delft University of Technology, both in Delft, The Netherlands, co-operated to develop the semi-manual design & decision support system "STAR-Analysis".
series DDSS
last changed 2003/11/21 15:16

_id 59c3
authors Bruckman, Amy
year 1996
title Finding One's Own Space in Cyberspace
source MIT Technology Review. January 1996, p. 50
summary The week the last Internet porn scandal broke, my phone didn't stop ringing: "Are women comfortable on the Net?" "Should women use gender-neutral names on the Net?" "Are women harassed on the Net?" Reporters called from all over the country with basically the same question. I told them all: your question is ill-formed. "The Net" is not one thing. It's like asking: "Are women comfortable in bars?" That's a silly question. Which woman? Which bar? The summer I was 18, I was the computer counselor at a summer camp. After the campers were asleep, the counselors were allowed out, and would go bar hopping. First everyone would go to Maria's, an Italian restaurant with red-and-white-checked table cloths. Maria welcomed everyone from behind the bar, greeting regular customers by name. She always brought us free garlic bread. Next we'd go to the Sandpiper, a disco with good dance music. The Sandpiper seemed excitingly adult--it was a little scary at first, but then I loved it. Next, we went to the Sportsman, a leather motorcycle bar that I found absolutely terrifying. Huge, bearded men bulging out of their leather vests and pants leered at me. I hid in the corner and tried not to make eye contact with anyone, hoping my friends would get tired soon and give me a ride back to camp.
series other
last changed 2003/04/23 15:50

_id 9e3d
authors Cheng, F.F., Patel, P. and Bancroft, S.
year 1996
title Development of an Integrated Facilities Information System Based on STEP - A Generic Product Data Model
source The Int. Journal of Construction IT 4(2), pp.1-13
summary A facility management system must be able to accommodate dynamic change and be based on a set of generic tools. The next generation of facility management systems should be STEP-conforming if they are to lay the foundation for the fully integrated information management and data/knowledge engineering that will be demanded in the near future, in the new era of advanced site management. This paper describes an attempt to meet such a specification for an in-house system. The proposed system incorporates the latest technological advances in information management and processing, and pioneers an exchange architecture that represents a new class of system, in which the end-user has, for the first time, total flexibility and control over data never before automated in this way.
series journal paper
last changed 2003/05/15 21:45

_id ae9f
authors Damer, B.
year 1996
title Inhabited Virtual Worlds: A New Frontier for Interaction Design
source Interactions, Vol.3, No.5 ACM
summary In April of 1995 the Internet took a step into the third dimension with the introduction of the Virtual Reality Modeling Language (VRML) as a commercial standard. Another event that month caused fewer headlines but in retrospect was just as significant. A small company from San Francisco, Worlds Incorporated, launched WorldsChat, a three dimensional environment allowing any Internet user to don a digital costume, or avatar, and travel about and converse with other people inhabiting the space. WorldsChat was appropriately modeled on a space station complete with a central hub, hallways, sliding doors, windows, and escalators to outlying pods.
series journal paper
last changed 2003/04/23 15:50

_id ddssup9609
id ddssup9609
authors Hall, A.C.
year 1996
title Assessing the Role of Computer Visualisation in Planning Control: a recent case study
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part two: Urban Planning Proceedings (Spa, Belgium), August 18-21, 1996
summary In papers to previous DDSS conferences, and elsewhere, the author has developed an argument concerning the use of computer visualisation in the planning process. In essence, it proposes that: • visualisation can enable lay persons to play a more effective role, and this can result in different and more effective decisions; • the level of realism employed should follow from the basic requirements necessary to resolve the issue, minimising the cost of producing the images. These points have been tested in repeated examples. The latest concerns a new site that Anglia Polytechnic University has established in the centre of Chelmsford, UK. A computer model of the new campus showing both the existing and proposed buildings was commissioned from the author by the University for a visit by HM the Queen in June 1995. This model was subsequently adapted for use in the process of obtaining planning consent and in marketing floorspace for the next building to be constructed. For this purpose, a higher level of realism was requested. The experience of achieving it confirmed the results of the previous research, indicating the strong link between realism and cost. It also contributed new insights into the varying expectations of different professionals concerning the role of such a visualisation. The architect's requirement to demonstrate all aspects of the design demanded a higher level of realism than that needed for planning and marketing purposes, and was considerably more expensive. The low cost of use for planning purposes should be stressed; surprisingly, the lower level of realism implied may be easier for the lay person than for the professional to accept.
series DDSS
last changed 2003/08/07 16:36

_id dba1
authors Hirschberg, Urs and Wenz, Florian
year 2000
title Phase(x) - memetic engineering for architecture
source Automation in Construction 9 (4) (2000) pp. 387-392
summary Phase(x) was a successful teaching experiment we made in our entry-level CAAD course in the winter semester 1996/1997. The course was entirely organized by means of a central database that managed all the students' works through different learning phases. This set-up allowed the results of one phase, by one author, to be taken as the starting point for the work in the next phase by a different author. As students could choose which model they wanted to work with, the whole of Phase(x) could be viewed as an organism where, as in a genetic system, only the "fittest" works survived. While some discussion of the technical set-up is necessary as background, the main topics addressed in this paper are the structuring of the course in phases, the experiences we had with collective authorship, and the observations we made about the memes that developed and spread in the students' works. Finally, we draw some conclusions on how far Phase(x) is relevant in a larger context not limited to teaching CAAD. Since this paper was first published in 1997, we have continued to explore the issues described here in various projects together with a growing number of other interested institutions worldwide. While leaving the paper essentially in its original form, we have added a section at the end in which we outline some of these recent developments.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 6153
authors Korbel, Wojciech
year 1996
title The Present and Future, Development of CAD Exploration in the Office of City’s Architect
source CAD Creativeness [Conference Proceedings / ISBN 83-905377-0-2] Bialystock (Poland), 25-27 April 1996 pp. 147-157
summary The use of the computer as a standard tool for an architect has become obvious in the past few years. The late 1990s, with their rapid development of technology and the growing amount of computer hardware on the market (constantly better and cheaper at the same time), brought big changes in the possibilities of project presentation. The lack of the memory necessary to perform proper calculations for high-quality rendered images no longer exists. The question raised most commonly by all leading computer software producers concerns the amount of time in which those calculations can be carried out. The race continues while, once again, the price of already existing hardware drops rapidly. All these facts make the computer more accessible to a potential user such as an architect. Additionally, CAD programmers try to make programs as friendly as possible, constantly reducing the amount of time required to learn a program, at least in its basics. As a result, in the next few years the computer may become a standard, at least in some aspects of project presentation. Once again we may face the problem of everyday life going far beyond expectations. The question arises: how can architectural authorities of all kinds be prepared for constant changes in this field?
series plCAD
last changed 1999/04/09 15:30

_id 4a62
authors Leake, D.B. (ed.)
year 1996
title Case-Based Reasoning: Experiences, Lessons, and Future Directions
source The MIT Press
summary Case-based reasoning (CBR) is now a mature subfield of artificial intelligence. The fundamental principles of case-based reasoning have been established, and numerous applications have demonstrated its role as a useful technology. Recent progress has also revealed new opportunities and challenges for the field. This book presents experiences in CBR that illustrate the state of the art, the lessons learned from those experiences, and directions for the future. True to the spirit of CBR, this book examines the field in a primarily case-based way. Its chapters provide concrete examples of how key issues---including indexing and retrieval, case adaptation, evaluation, and application of CBR methods---are being addressed in the context of a range of tasks and domains. These issue-oriented case studies of experiences with particular projects provide a view of the principles of CBR, what CBR can do, how to attack problems with case-based reasoning, and how new challenges are being addressed. The case studies are supplemented with commentaries from leaders in the field providing individual perspectives on the state of CBR and its future impact. This book provides experienced CBR practitioners with a reference to recent progress in case-based reasoning research and applications. It also provides an introduction to CBR methods and the state of the art for students, AI researchers in other areas, and developers starting to build case-based reasoning systems. It presents experts and non-experts alike with visions of the most promising directions for new progress and for the roles of the next generation of CBR systems.
series other
last changed 2003/04/23 15:14

_id c7e9
authors Maver, T.W.
year 2002
title Predicting the Past, Remembering the Future
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 2-3
summary Charlas Magistrales 2. There has never been such an exciting moment, in the extraordinary 30-year history of our subject area, as NOW, when the philosophical, theoretical and practical issues of virtuality are taking centre stage.
The Past. There have, of course, been other defining moments during these exciting 30 years:
• the first algorithms for generating building layouts (circa 1965);
• the first use of computer graphics for building appraisal (circa 1966);
• the first integrated package for building performance appraisal (circa 1972);
• the first computer-generated perspective drawings (circa 1973);
• the first robust drafting systems (circa 1975);
• the first dynamic energy models (circa 1982);
• the first photorealistic colour imaging (circa 1986);
• the first animations (circa 1988);
• the first multimedia systems (circa 1995); and
• the first convincing demonstrations of virtual reality (circa 1996).
Whereas the CAAD community has been hugely inventive in the development of ICT applications to building design, it has been woefully remiss in its attempts to evaluate the contribution of those developments to the quality of the built environment or to the efficiency of the design process. In the absence of any real evidence, one can only conjecture regarding the real benefits, which fall, it is suggested, under the following headings:
• Verisimilitude: The extraordinary quality of still and animated images of the formal qualities of the interiors and exteriors of individual buildings and of whole neighbourhoods must surely give great comfort to practitioners and their clients that what is intended, formally, is what will be delivered, i.e. WYSIWYG, what you see is what you get.
• Sustainability: The power of «first-principle» models of the dynamic energetic behaviour of buildings in response to changing diurnal and seasonal conditions has the potential to save millions of dollars and dramatically to reduce the damaging environmental pollution created by badly designed and managed buildings.
• Productivity: CAD is now a multi-billion-dollar business which offers design decision support systems that operate, effectively, across continents, time zones, professions and companies.
• Communication: Multimedia technology, cheap to deliver but high in value, is changing the way in which we can explain and understand the past and envisage and anticipate the future; virtual past and virtual future!
Macromyopia. The late John Lansdown offered the view, in his wonderfully prophetic way, that «the future will be just like the past, only more so». So what can we expect the extraordinary trajectory of our subject area to be? To have any chance of being accurate we must understand the phenomenon of macromyopia: the phenomenon, exhibited by society, of greatly exaggerating the immediate short-term impact of new technologies (particularly the information technologies) but, more importantly, seriously underestimating their sustained long-term impacts, socially, economically and intellectually. Examples of flawed predictions regarding the future application of information technologies include:
• The British Government in 1880 declined to support the idea of a national telephonic system, backed by the argument that there were sufficient small boys in the countryside to run with messages.
• Alexander Bell was modest enough to say: «I am not boasting or exaggerating, but I believe one day there will be a telephone in every American city».
• Tom Watson, in 1943, said: «I think there is a world market for about 5 computers».
• In 1977, Ken Olsen of Digital said: «There is no reason for any individuals to have a computer in their home».
The Future. Just as the ascent of woman/man-kind can be attributed to her/his capacity to discover amplifiers of modest human capability, so we shall discover how best to exploit our most important amplifier, that of the intellect. The more we know the more we can figure; the more we can figure the more we understand; the more we understand the more we can appraise; the more we can appraise the more we can decide; the more we can decide the more we can act; the more we can act the more we can shape; and the more we can shape, the better the chance that we can leave for future generations a truly sustainable built environment which is fit-for-purpose, cost-beneficial, environmentally friendly and culturally significant. Central to this aspiration will be our understanding of the relationship between real and virtual worlds and how to move effortlessly between them. We need to be able to design, from within the virtual world, environments which may be real, may remain virtual or, perhaps, be part real and part virtual. What is certain is that the next 30 years will be every bit as exciting and challenging as the first 30 years.
series SIGRADI
email
last changed 2016/03/10 09:55

_id 096e
authors Papamichael, K., Porta, J.L., Chauvet, H., Collins, D., Trzcinski, T. , Thorpe, J. and Selkowitz, S.
year 1996
title The Building Design Advisor
doi https://doi.org/10.52842/conf.acadia.1996.085
source Design Computation: Collaboration, Reasoning, Pedagogy [ACADIA Conference Proceedings / ISBN 1-880250-05-5] Tucson (Arizona / USA) October 31 - November 2, 1996, pp. 85-97
summary The Building Design Advisor (BDA) is a software environment that supports the integrated use of multiple analysis and visualization tools throughout the building design process, from the initial, schematic design phases to the detailed specification of building components and systems. Based on a comprehensive design theory, the BDA uses an object-oriented representation of the building and its context, and acts as a data manager and process controller to allow building designers to benefit from the capabilities of multiple tools.

The BDA provides a graphical user interface that consists of two main elements: the Building Browser and the Decision Desktop. The Browser allows building designers to quickly navigate through the multitude of descriptive and performance parameters addressed by the analysis and visualization tools linked to the BDA. Through the Browser the user can edit the values of input parameters and select any number of input and/or output parameters for display in the Decision Desktop. The Desktop allows building designers to compare multiple design alternatives with respect to any number of parameters addressed by the tools linked to the BDA.

The BDA is implemented as a Windows-based application for personal computers. Its initial version is linked to a Schematic Graphic Editor (SGE), which allows designers to quickly and easily specify the geometric characteristics of building components and systems. For every object created in the SGE, the BDA supplies “smart” default values from a Prototypical Values Database (PVD) for all non-geometric parameters required as input to the analysis and visualization tools linked to the BDA. In addition to the SGE and the PVD, the initial version of the BDA is linked to a daylight analysis tool, an energy analysis tool, and a multimedia Case Studies Database (CSD). The next version of the BDA will be linked to additional tools, such as a photo-accurate rendering program and a cost analysis program. Future versions will address the whole building life-cycle and will be linked to construction, commissioning and building monitoring tools.

series ACADIA
email
last changed 2022/06/07 08:00

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Easy of use 2) Ease of experimentation "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 
3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystalising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had decribed above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred-forms would bare some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 Kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. 
coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations" simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on an personally pleasing image. Just as in Repligator, pushing the F7 key make the program choose all the options. Unlike Repligator however the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques   Figure 3 Trellis interpreted with "graphic ivy"   Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. 
Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness) . The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best decribed with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. An pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds as it has an option to enable "tiling" of the generated images. 
There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: three possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. 
Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. 
It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. Ransen, Owen. "From Ramon Llull to Image Idea Generation". Proceedings of the 1998 Milan First International Conference on Generative Art. 2. Aleksander, Igor. "How To Build A Mind". Weidenfeld and Nicolson, 1999. 3. Ward, Adrian and Cox, Geof. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 0ef8
authors Völker, H., Sariyildiz, S., Schwenck, M. and Durmisevic, S.
year 1996
title THE NEXT GENERATION OF ARCHITECTURE WITHIN COMPUTER SCIENCES
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary Considering architecture as a mixture of exact sciences and art, we can state that, as in all other sciences, every technical invention and development has resulted in advantages and disadvantages for the well-being and prosperity of mankind. Think about the developments in the fields of nuclear energy or space travel. Besides bringing a lot of improvements in many fields, they also pose dangers to the well-being of mankind. The development of advanced computer techniques has inevitably also influenced architecture. How did computer science influence architecture till now, and what is going to be the future of architecture with these ongoing computer science developments? The future developments will be both in the field of conceptual design (form aspect) and in the area of materialization of the design process.

These all deal with the material world, for which the tools of computer science are highly appropriate. But what will happen to the immaterial world? How can we put these immaterial values into a computer model? Or can the computer be creative like a human being? Early developments of computer science in the field of architecture involved two-dimensional applications, and subsequently the significance of the third dimension became manifest. Nowadays, however, people are already speaking of a fourth dimension, interpreting it as time or as dynamics. And what, for instance, would a fifth, sixth or X-dimension represent?

In the future we will perhaps speak of the fifth dimension, comprising the tangible qualities of the building materials around us. And one day a sixth dimension might be created, when it will be possible to establish direct communication with computers, because direct exchange between the computer and the human brain has been realised. The ideas of designers can then be processed by the computer directly, and we will no longer be hampered by obstacles such as screen and keyboard. There are scientists working to realise bio-chips. If they succeed, perhaps we can realise all these speculations. It is nearly certain that the emergence of new technologies will also affect our subject area, architecture, and this will create fresh challenges, fresh concepts, and new buildings in the 21st century. The responsibility of architects must be to bear in mind that we are dealing with the well-being and the prosperity of mankind.

keywords Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa/
last changed 2004/05/04 14:43

_id 6e46
authors Wenz, Florian and Hirschberg, Urs
year 1997
title Phase(x) - Memetic Engineering for Architecture
doi https://doi.org/10.52842/conf.ecaade.1997.x.b1e
source Challenges of the Future [15th eCAADe Conference Proceedings / ISBN 0-9523687-3-0] Vienna (Austria) 17-20 September 1997
summary Phase(x) was a successful teaching experiment we made in our entry-level CAAD course in the winter semester 1996/97. The course was entirely organized by means of a central database that managed all the students' works through different learning phases. This setup allowed the results of one phase and one author to be taken as the starting point for the work in the next phase by a different author. As students could choose which model they wanted to work with, the whole of Phase(x) could be viewed as an organism where, as in a genetic system, only the "fittest" works survived.

While some discussion of the technical set-up is necessary as background, the main topics addressed in this paper will be the structuring of the course in phases, the experiences we had with collective authorship, and the observations we made about the memes that developed and spread in the students' works. Finally we'll draw some conclusions on how far Phase(x) is relevant in a larger context, one not limited to teaching CAAD.

keywords memetic process, collaborative creative work, collective authorship, caad education
series eCAADe
email
more http://info.tuwien.ac.at/ecaade/proc/wenz/wenz.htm
last changed 2022/06/07 07:50

_id a9ca
authors Abadi Abbo, Isaac
year 1996
title EFFECTIVENESS OF MODELS
source Full-Scale Modeling in the Age of Virtual Reality [6th EFA-Conference Proceedings]
summary Architects use many types of models to simulate space, either in their design process or as final specifications for building. These models have proved useful or effective for specific purposes. This paper evaluates architectural models in terms of five effectiveness components: time of development, cost, complexity, variables simulated and ecological validity. A series of models used regularly in architecture is analysed to finally produce a matrix that shows the effectiveness of the different models for specific purposes in architectural design, research and education. Special emphasis is given to three specific models: 1/10 scale, full-scale and computer generated.
keywords Model Simulation, Real Environments
series other
type normal paper
more http://info.tuwien.ac.at/efa/
last changed 2016/02/17 13:47

_id ascaad2004_paper11
id ascaad2004_paper11
authors Abdelfattah, Hesham Khairy and Ali A. Raouf
year 2004
title No More Fear or Doubt: Electronic Architecture in Architectural Education
source eDesign in Architecture: ASCAAD's First International Conference on Computer Aided Architectural Design, 7-9 December 2004, KFUPM, Saudi Arabia
summary Operating electronic and internetworked tools for architectural education is an important, yet merely prerequisite, step toward creating powerful tele-collaboration and tele-research in our architectural studios. The design studio, as physical place and pedagogical method, is the core of architectural education. The Carnegie Endowment report on architectural education, published in 1996, identified a comparably central role for studios in schools today. Advances in CAD and visualization, combined with technologies to communicate images, data, and "live" action, now enable virtual dimensions of studio experience. Students no longer need to gather at the same time and place to tackle the same design problem. Critics can comment over the network or by e-mail, and distinguished jurors can make virtual visits without being in the same room as the pin-up—if there is a pin-up (or a room). Virtual design studios (VDS) have the potential to support collaboration over competition, diversify student experiences, and redistribute the intellectual resources of architectural education across geographic and socioeconomic divisions. The challenge is to predict whether VDS will isolate students from a sense of place and materiality, or whether it will provide future architects the tools to reconcile communication environments and physical space.
series ASCAAD
email
last changed 2007/04/08 19:47

_id ddssar9601
id ddssar9601
authors Achten, H.H., Bax, M.F.Th. and Oxman, R.M.
year 1996
title Generic Representations and the Generic Grid: Knowledge Interface, Organisation and Support of the (early) Design Process
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary Computer Aided Design requires the implementation of architectural issues in order to support the architectural design process. These issues consist of elements, knowledge structures, and design processes that are typical for architectural design. The paper introduces two concepts that aim to define and model some such architectural issues: building types and design processes. The first concept, the Generic grid, will be shown to structure the description of designs, provide a form-based hierarchical decomposition of design elements, and provide conditions to accommodate concurrent design processes. The second concept, the Generic representation, models generic and typological knowledge of building types through the use of graphic representations with specific knowledge contents. The paper discusses both concepts and will show the potential of implementing Generic representations on the basis of the Generic grid in CAAD systems.
series DDSS
last changed 2003/11/21 15:15

_id 846c
authors Achten, Henri
year 1996
title Generic Representations: Intermediate Structures in Computer Aided Architectural Composition.
source Approaches to Computer Aided Architectural Composition [ISBN 83-905377-1-0] 1996, pp. 9-24
summary The paper discusses research work on typological and generic knowledge in architectural design. Architectural composition occurs predominantly through drawings as a medium. Throughout the process, architects apply knowledge. The paper discusses the question how to accommodate this process in computers bearing in mind the medium of drawings and the application of knowledge. It introduces generic representations as one particular approach and discusses its implications by the concept of intermediate structures. The paper concludes with an evaluation of the presented ideas.
keywords
series other
email
last changed 1999/04/08 17:16
