CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 581

_id ga0007
id ga0007
authors Coates, Paul and Miranda, Pablo
year 2000
title Swarm modelling. The use of Swarm Intelligence to generate architectural form
source International Conference on Generative Art
summary '...neither the human purposes nor the architect's method are fully known in advance. Consequently, if this interpretation of the architectural problem situation is accepted, any problem-solving technique that relies on explicit problem definition, on distinct goal orientation, on data collection, or even on non-adaptive algorithms will distort the design process and the human purposes involved.' Stanford Anderson, "Problem-Solving and Problem-Worrying". The work concentrates on the use of the computer as a perceptive device, a sort of virtual hand or "sense", capable of prompting an environment. From a set of data that constitutes the environment (in this case the geometrical representation of the form of the site) this perceptive device is capable of differentiating and generating distinct patterns in its behavior, patterns that an observer has to interpret as meaningful information. As Nicholas Negroponte explains, referring to the project GROPE in his Architecture Machine: 'In contrast to describing criteria and asking the machine to generate physical form, this exercise focuses on generating criteria from physical form.' 'The onlooking human or architecture machine observes what is "interesting" by observing GROPE's behavior rather than by receiving the testimony that this or that is "interesting".' The swarm as a learning device. In this case the work implements a swarm as a perceptive device. Swarms constitute a paradigm of parallel systems: a multitude of simple individuals aggregate in colonies or groups, giving rise to collaborative behaviors. The individual sensors cannot learn, but the swarm as a system can evolve into more stable states. These states generate distinct patterns, a result of the inner mechanics of the swarm and of the particularities of the environment. The dynamics of the system allow it to learn and adapt to the environment; information is stored in the speed of the sensors (the more collisions, the slower), which acts as a memory. The speed increases in the absence of collisions, providing the system with the ability to forget, which is indispensable for the differentiation of information and the emergence of patterns. The swarm is both a perceptive and a spatial phenomenon. To be able to interact with an environment, an observer requires some sort of embodiment. In the case of the swarm, its algorithms for moving, collision detection and swarm mechanics constitute its perceptive body. The way this body interacts with its environment in the process of learning and differentiation of spatial patterns also constitutes a spatial phenomenon. The enactive space of the swarm. Enaction, a concept developed by Maturana and Varela for the description of perception in biological terms, is the understanding of perception as the result of the structural coupling of an environment and an observer. Enaction does not address cognition in the currently conventional sense as an internal manipulation of extrinsic 'information' or 'signals', but as the relation between environment and observer and the blurring of their identities. Thus, the space generated by the swarm is an enactive space, a space without explicit description, an invention of the swarm-environment structural coupling. If we consider a gestalt as 'some property - such as roundness - common to a set of sense data and appreciated by organisms or artefacts' (Gordon Pask), the swarm is also able to differentiate spatial 'gestalts', or spaces with certain characteristics such as 'narrowness' or 'fluidity'.
Implicit surfaces and the wrapping algorithm. One of the many ways of describing this space is through the use of implicit surfaces. An implicit surface may be imagined as an infinitesimally thin band of some measurable quantity such as color, density, temperature, pressure, etc. Thus, an implicit surface consists of those points in three-space that satisfy some particular requirement. This allows us to wrap the regions of space where a difference of quantity has been produced, enclosing the spaces in which particular events in the history of the swarm have occurred. The wrapping method allows complex topologies, such as manifoldness in one continuous surface. It is possible to transform the information generated by the swarm into a landscape that is the result of the swarm's particular reading of the site. Working in real time. Because of the complex nature of the machine, the only possible way to evaluate the resulting behavior is in real time. For this purpose specific applications had to be developed, using OpenGL in the Windows programming environment. The package consisted of translators from DXF format to a specific format used by these applications and vice versa, the Swarm "engine" (a simulated parallel environment), and the wrapping programs that generate the implicit surfaces. Different versions of each were produced at different stages of development of the work.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
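
A minimal sketch of the speed-as-memory mechanism described in the abstract above, assuming a simple point agent and an abstract collision query against the site geometry; all names and parameters are illustrative, not the authors' implementation:

import random

class SwarmAgent:
    def __init__(self, position):
        self.position = list(position)                       # 3D position
        self.heading = [random.uniform(-1, 1) for _ in range(3)]
        self.speed = 1.0                                      # fast = nothing remembered here

    def step(self, collides, decay=0.5, recovery=0.05):
        # collides(p) -> bool is an assumed environment query (a test against
        # the geometry of the site). A collision slows the agent down (memory);
        # collision-free steps let the speed creep back up (forgetting).
        next_pos = [p + h * self.speed for p, h in zip(self.position, self.heading)]
        if collides(next_pos):
            self.speed *= decay                               # remember: slow down near obstacles
            self.heading = [random.uniform(-1, 1) for _ in range(3)]  # turn away
        else:
            self.position = next_pos
            self.speed = min(1.0, self.speed + recovery)      # forget: speed recovers

# Regions where agents stay slow accumulate "interesting" events; the wrapping
# step described above would then fit an implicit surface around those regions.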

_id ec4d
authors Croser, J.
year 2001
title GDL Object
source The Architect’s Journal, 14 June 2001, pp. 49-50
summary It is all too common for technology companies to seek a new route to solving the same problem, but for the most part the solutions address the effect and not the cause. The good old-fashioned pencil is the perfect example, where inventors have sought to design out the effect of the inherent brittleness of lead. Traditionally different methods of sharpening were suggested, and more recently the propelling pencil has reigned king, the lead being supported by the dispensing sleeve, thus reducing the likelihood of breakage. Developers convinced by the Single Building Model approach to design development have each embarked on a difficult journey to create an easy-to-use, feature-packed application. Unfortunately it seems that the two are not mutually compatible if we are to believe what we see emanating from technology giant Autodesk in the guise of Architectural Desktop 3. The effect of their development is a feature-rich environment, but the cost, and in this case the cause, is a tool which is far from easy to use. However, this is only a small part of a much bigger problem: interoperability. You see, when one designer develops a model with one tool, the information is typically locked in that environment. Of course the geometry can be distributed and shared amongst the team for use with their tools, but the properties, or as often misquoted, the intelligence, is lost along the way. The effect is the technological version of rubble; the cause is the low quality of data translation available to us. Fortunately there is one company which is making rapid advances on the whole issue of collaboration and data sharing. An old timer (Graphisoft - famous for ArchiCAD) has just donned a smart new suit, set up a new company called GDL Technology and stepped into the ring to do battle, with a difference. The difference is that GDL Technology does not rely on conquering the competition; quite the opposite, in fact: their success relies upon the continued success of all the major CAD platforms, including AutoCAD, MicroStation and ArchiCAD (of course). GDL Technology have created a standard data format for manufacturers called GDL Objects. Product manufacturers such as Velux are now able to develop product libraries using GDL Objects, which can then be placed in a CAD model or drawing using almost any CAD tool. The product libraries can be stored on the web or on CD, giving easy download access to any building industry professional. These objects are created using scripts, which makes them tiny to download from the web. Each object contains three important types of information: · parametric, scale-dependent 2D plan symbols · full 3D geometric data · manufacturer's information such as material, colour and price. Whilst manufacturers are racing to GDL Technology's door to sign up, developers and clients are quick to see the benefit too. Porsche are using GDL Objects to manage their brand identity as they build over 300 new showrooms worldwide. Having defined the building style and interior, Porsche, in conjunction with the product suppliers, have produced a CD-ROM with all of the selected building components such as cladding, doors, furniture and finishes. Designing and detailing the various schemes will therefore be as straightforward as using Lego. To ease the process of accessing, sizing and placing the product libraries, GDL Technology have developed a product called GDL Object Explorer, a free-standing application which can be placed on the CD with the product libraries.
Furthermore, whilst the Object Explorer gives access to the GDL Objects, it also enables the user to save the object in one of many file formats including DWG, DGN, DXF, 3DS and even the IAI's IFC. However, if you are an AutoCAD user there is another tool which has been designed especially for you; it is called the Object Adapter and it works inside AutoCAD 14 and 2000. The Object Adapter will dynamically convert all GDL Objects to AutoCAD Blocks during placement, which means that they can be controlled with standard AutoCAD commands. Furthermore, each object can be linked to an online document on the manufacturer's web site, which is ideal for more extensive product information. Other tools which have been developed to make the most of the objects are the Web Plug-in and SalesCAD. The Plug-in enables objects to be dynamically modified and displayed on web pages, and SalesCAD is an easy-to-learn and easy-to-use design tool for sales teams to explore, develop and cost designs on a notebook PC whilst sitting in the architect's office. All sales quotations are directly extracted from the model and presented in HTML format as a mixture of product images, product descriptions and tables identifying quantities and costs. With full lifecycle information stored in each GDL Object, it is no surprise that GDL Technology see their objects as the future of building design. Indeed they are not alone: the IAI have already said that they are going to explore the possibility of associating GDL Objects with their own data-sharing format, the IFC. So down to the dirty stuff: money, and how much does it cost? Well, at the risk of sounding like a market trader in Petticoat Lane, "To you guv? Nuffin". That's right: as a user of this technology it will cost you nothing! Not a penny; it is gratis, free. The product manufacturer pays for the license to host their libraries on the web or on CD, and even then their costs are small, from as little as 50p for each CD filled with objects. GDL Technology has come up trumps with their GDL Objects. They have developed a new way to solve old problems. If CAD were a pencil, then GDL Objects would be ballistic lead, which would never break or lose its point - a much better alternative to the strategy used by many of their competitors, who seek to avoid breaking the pencil by persuading the artist not to press down so hard. If you are still reading, and you have not already dropped the magazine and run off to find out if your favorite product supplier has already signed up, then I suggest you check out the following web sites: www.gdlcentral.com and www.gdltechnology.com. If you do not see them there, pick up the phone and ask them why.
series journal paper
email
last changed 2003/04/23 15:14

_id c6db
authors Heylighen, Ann
year 2000
title In Case of Architectural Design. Critique and Praise of Case-Based Design in Architecture
source Dissertation - Doct. Toegepaste wetenschappen, KU Leuven, Fac. Toegepaste wetenschappen, Dep. architectuur, stedebouw en ruimtelijke ordening (ISBN 90-5682-248-9)
summary Architects are said to learn design by experience. Learning design by experience is the essence of Case-Based Design (CBD), a sub-domain of Artificial Intelligence. Part I critically explores the CBD approach from an architectural point of view, tracing its origins in the Theory of Dynamic Memory and highlighting its potential for architectural design. Seven CBD systems are analysed, experienced architects and design teachers are interviewed, and an experiment is carried out to examine how cases affect the design performance of architecture students. The results of this exploration show that despite its sound view on how architects acquire (design) knowledge, CBD is limited in important respects: it reduces architectural design to problem solving, is difficult to implement and has to contend with prejudices among the target group. With a view to stretching these limits, part II covers the design, implementation and evaluation of DYNAMO (Dynamic Architectural Memory On-line). This Web-based design tool tailors the CBD approach to the complexity of architectural design by effecting three transformations: extending the concern with design products towards design processes, turning static case bases into dynamic memories and upgrading users from passive case consumers to active case-based designers.
keywords Architectural Design; Case-Based Design
series thesis:PhD
email
last changed 2002/12/14 19:29

_id ga0013
id ga0013
authors Annunziato, Mauro and Pierucci, Piero
year 2000
title Artificial Worlds, Virtual Generations
source International Conference on Generative Art
summary The progress in the scientific understanding/simulation of evolutionary mechanisms and the first technological realizations (artificial life environments, robots, intelligent toys, self-reproducing machines, agents on the web) are creating the basis of a new age: the coming of artificial beings and artificial societies. Although this aspect could seem a technological conquest, from our point of view it represents the foundation of a new step in human evolution. The anticipation of this change is the development of a new cultural paradigm inherited from the theories of evolution and complexity: a new way to think of culture, aesthetics and intelligence as emergent self-organizing qualities of a collectivity, evolved over time through genetic and language evolution. For these reasons artificial life is going to be an anticipatory and incredibly creative area for artistic expression and imagination. In this paper we try to correlate some elements of present research in the fields of artificial life, art and technological growth in order to trace a path of development for the creation of digital worlds where artificial beings are able to evolve their own culture, language and aesthetics, and are able to interact with human beings. Finally, we report our experience in the realization of an interactive audio-visual art installation based on two connected virtual worlds realized with artificial life environments. In these worlds, the digital individuals can interact, reproduce and evolve through the mechanisms of genetic mutation. Real people can interact with the artificial individuals, creating a hybrid ecosystem and generating emergent shapes, colors, sound architectures and metaphors for imaginary societies, virtual reflections of the real world.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id a172
authors Brian Jeffrey Palidar
year 2000
title Live and Direct: A Research and Development Facility for Robotics and Artificial Intelligence Applications
source University of Washington, Design Machine Group
summary This thesis proposed a design project focusing on creating a center for the incorporation, assembly, and demonstration of cutting-edge research in AI applications. The project's client is an institute dedicated to developing a platform for general intelligence by assembling current research and technologies into composite prototypes that push the boundaries of artificial beings. The center also proposes an interactive forum in which the general public can experience the results of the research first hand, learn about past projects, attend lectures and presentations, and take part in other activities related to this endeavor and its implications for humanity.
series thesis:MSc
more http://dmg.caup.washington.edu/xmlSiteEngine/browsers/stylin/publications.html
last changed 2004/06/02 19:12

_id 958e
authors Coppola, Carlo and Ceso, Alessandro
year 2000
title Computer Aided Design and Artificial Intelligence in Urban and Architectural Design
doi https://doi.org/10.52842/conf.ecaade.2000.301
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 301-307
summary In general, computer-aided design is still limited to a rather elementary use of the medium, as it is mainly used for the representation/simulation of a design idea which is not computer-generated. The procedures used to date have basically been those of an electronic drawing-table. At the first stage of development the objective was to find a different and better means of communication, to give form to an idea so as to show its quality. The procedures used were 2D design and 3D simulation models, usually used when the design was already defined. The second stage is when solid 3D modelling is used to define the formal design at the conception stage, using virtual models instead of study models in wood, plastic, etc. At the same time, in other connected fields the objective is to evaluate the feasibility of the formal idea by means of structural and technological analysis. The third stage, in my opinion, should aim to develop procedures capable of contributing to both the generation of the formal idea and the simultaneous study of technical feasibility by means of a decision-making support system aided by an Artificial Intelligence procedure, which will lead to what I would describe as the definition of the design in its totality. The approach to architectural and urban design has been strongly influenced by the first two stages, though these have developed independently and with very specific objectives. It is my belief that architectural design is now increasingly the result of a structured and complex process, not a simple act of pure artistic invention. Consequently, I feel that the way forward is a procedure able to virtually represent all the features of the object designed, not only in its definitive configuration but also, and more importantly, in the interactions which determine the design process as it develops. Thus A.I. becomes the means of synthesis for hierarchically subordinated models which together determine the design object in its developmental process, supporting decision-making by applying processing criteria which generative modelling has already identified. This approach is currently being tested, giving rise to interesting results from process design in the field of industrial production.
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:56

_id sigradi2006_e183a
id sigradi2006_e183a
authors Costa Couceiro, Mauro
year 2006
title La Arquitectura como Extensión Fenotípica Humana - Un Acercamiento Basado en Análisis Computacionales [Architecture as human phenotypic extension – An approach based on computational explorations]
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 56-60
summary The study describes some of the aspects tackled within a current Ph.D. research project in which architectural applications of constructive, structural and organizational processes existing in biological systems are considered. The present information-processing capacity of computers and specific software developments have made it possible to create a bridge between two disciplines of a holistic nature: architecture and biology. The crossover between those disciplines entails a methodological paradigm change towards a new one based on the dynamical aspects of forms and compositions. Recent studies on artificial-natural intelligence (Hawkins, 2004) and developmental-evolutionary biology (Maturana, 2004) have added fundamental knowledge about the role of analogy in the creative process and the relationship between forms and functions. The dimensions and restrictions of the Evo-Devo concepts are analyzed, developed and tested by software that combines parametric geometries, L-systems (Lindenmayer, 1990), shape grammars (Stiny and Gips, 1971) and evolutionary algorithms (Holland, 1975) as a way of testing new architectural solutions within computable environments. Lamarck's (1744-1829) and Weismann's (1834-1914) theoretical approaches to evolution, in which significant opposing views can be found, are considered. Lamarck's theory assumes that an individual's effort towards a specific evolutionary goal can cause change in its descendants. Weismann, on the other hand, argued that germ cells are not affected by anything the body learns or any ability it acquires during its life, and cannot pass this information on to the next generation; this is called the Weismann barrier. Lamarck's widely rejected theory has recently found a new place in artificial and natural intelligence research as a valid explanation of some aspects of the evolution of human knowledge, that is, the deliberate change of paradigms in the intentional search for solutions. Just as the analogy between genetics and architecture (Estévez and Shu, 2000) is useful for understanding and programming emergent complexity phenomena (Hopfield, 1982) for architectural solutions, so the consideration of architecture as a product of the human extended phenotype can help us to better understand its cultural dimension.
keywords evolutionary computation; genetic architectures; artificial/natural intelligence
series SIGRADI
email
last changed 2016/03/10 09:49
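
The abstract above mentions software combining parametric geometries, L-systems, shape grammars and evolutionary algorithms. As a small illustration of the L-system component only, here is a minimal string-rewriting sketch; the axiom and rule are generic textbook examples, not taken from the thesis:

def rewrite(axiom, rules, generations):
    # Deterministic L-system: rewrite every symbol in parallel each generation.
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic bracketed branching rule; F = draw forward, +/- = turn, [] = branch.
rules = {"F": "F[+F]F[-F]F"}
print(rewrite("F", rules, 2))

# The resulting string would normally be interpreted by a turtle to produce
# geometry, which an evolutionary algorithm could then vary and evaluate.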

_id 349e
authors Durmisevic, Sanja
year 2002
title Perception Aspects in Underground Spaces using Intelligent Knowledge Modeling
source Delft University of Technology
summary Intensification, combination and transformation are the main strategies for future spatial development of the Netherlands, as stated in the Fifth Bill regarding Spatial Planning. These strategies indicate that in the future space should be utilized in a more compact and more efficient way, requiring at the same time a re-evaluation of the existing built environment and ways to improve it. In this context, the concept of multiple space usage is accentuated, focusing on intensive four-dimensional spatial exploration. Underground space is acknowledged as an important part of multiple space usage. In the document 'Spatial Exploration 2000', underground space is recognized by policy makers as an important new 'frontier' that could provide a significant contribution to future spatial requirements. In a relatively short period, underground space has become an important research area. Although among specialists there is appreciation of what underground space could provide for densely populated urban areas, there are still reserved feelings among the public, which mostly relate to the poor quality of these spaces. Many realized underground projects, namely subways, resulted in poor user satisfaction. Today, there is still a significant knowledge gap related to the perception of underground space, as well as a lack of detailed documentation on actual applications of the theories, followed by research results and applied techniques. This is the case in different areas of architectural design, but for underground spaces it is perhaps most evident due to their infancy in general architectural practice. In order to create better designs, diverse aspects, which are very often of a qualitative nature, should be considered in perspective, with the final goal of improving the quality and image of underground space. In the architectural design process, one has to establish certain relations among design information in advance, to make the design backed by sound rationale. The main difficulty at this point is that such relationships may not be determinable, for various reasons. One example may be the vagueness of architectural design data due to their linguistic qualities; another may be vaguely defined design qualities. In this work, the problem was not only the initial fuzziness of the information but also the determination of relevancy among all the pieces of information given. Presently, determining the existence of such relevancy is more or less a matter of subjective architectural judgement rather than systematic, non-subjective decision-making based on an existing design. This implies that the use of tools dealing with fuzzy information is essential for enhanced design decisions. Efficient methods and tools to deal with qualitative, soft data are scarce, especially in the architectural domain. Traditionally well-established methods, such as statistical analysis, have been used mainly for data analysis focused on types similar to the present research. These methods mainly fall into the category of pattern recognition, with statistical regression being the most common approach. One essential drawback of this method is its inability to deal efficiently with non-linear data: with statistical analysis, linear relationships are established by regression analysis, and dealing with non-linearity is mostly evaded.
Concerning the presence of multi-dimensional data sets, it is evident that the assumption of linear relationships among all pieces of information would be a gross approximation, which one has no basis to assume. A starting point in this research was that there may be both linearity and non-linearity present in the data and that therefore appropriate methods should be used to deal with that non-linearity. Some other commensurate methods were therefore adopted for knowledge modeling. In that respect, soft computing techniques proved to match the quality of the multi-dimensional data set subject to analysis, which is deemed to be 'soft'. There is yet another reason why soft computing techniques were applied, which is related to the automation of knowledge modeling. In this respect, traditional models such as Decision Support Systems and Expert Systems have drawbacks. One important drawback is that the development of these systems is a time-consuming process. The programming part, in which various deliberations are required to form a consistent if-then-rule knowledge-based system, is also a time-consuming activity. For these reasons, methods and tools from other disciplines which also deal with soft data should be integrated into architectural design. With fuzzy logic, the imprecision of data can be dealt with in a way similar to how humans do it. Artificial neural networks are deemed, to some extent, to model the human brain and simulate its functions in the form of parallel information processing. They are considered important components of Artificial Intelligence (AI). With neural networks, it is possible to learn from examples, or more precisely to learn from input-output data samples. The combination of the neural and fuzzy approaches proved to be powerful for dealing with qualitative data. The problem of automated knowledge modeling is efficiently solved by the employment of machine learning techniques. Here, the expertise of prof. dr. Ozer Ciftcioglu in the field of soft computing was crucial for tool development. By combining knowledge from two different disciplines, a unique tool could be developed that enables intelligent modeling of the soft data needed to support the building design process. In this respect, this research is a starting point in that direction. It is multidisciplinary and on the cutting edge between the field of Architecture and the field of Artificial Intelligence. From the architectural viewpoint, the perception of space is considered through the relationship between a human being and the built environment. Techniques from the field of Artificial Intelligence are employed to model that relationship. Such an efficient combination of two disciplines makes it possible to extend our knowledge boundaries in the field of architecture and improve design quality. With additional techniques, meta-knowledge, or in other words "knowledge about knowledge", can be created. Such techniques involve sensitivity analysis, which determines the amount of dependency of the output of a model (comfort and public safety) on the information fed into the model (input). Another technique is functional relationship modeling between aspects, which is the derivation of the dependency of a design parameter as a function of users' perceptions. With this technique, it is possible to determine functional relationships between dependent and independent variables.
This thesis is a contribution to a better understanding of users' perception of underground space, through the prism of public safety and comfort, achieved by means of intelligent knowledge modeling. In this respect, the thesis demonstrates an application of ICT (Information and Communication Technology) as a partner in the building design process by employing advanced modeling techniques. The method explained throughout this work is very generic and can be applied not only to different areas of architectural design, but also to other domains that involve qualitative data.
keywords Underground Space; Perception; Soft Computing
series thesis:PhD
email
last changed 2003/02/12 22:37
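
The thesis described above combines neural and fuzzy techniques for soft data and uses sensitivity analysis to measure how strongly a model output (perceived comfort or safety) depends on each input. A minimal numerical sketch of that sensitivity idea, with a made-up linear placeholder standing in for the trained neuro-fuzzy model; the input names and weights are invented:

def model(inputs):
    # Placeholder for a trained neuro-fuzzy model of perceived comfort/safety.
    # Inputs and weights are illustrative assumptions only.
    lighting, signage, spaciousness = inputs
    return 0.5 * lighting + 0.2 * signage + 0.3 * spaciousness

def sensitivity(model, inputs, eps=1e-3):
    # Finite-difference estimate of d(output)/d(input_i) for each input.
    base = model(inputs)
    grads = []
    for i in range(len(inputs)):
        perturbed = list(inputs)
        perturbed[i] += eps
        grads.append((model(perturbed) - base) / eps)
    return grads

print(sensitivity(model, [0.6, 0.4, 0.7]))   # roughly [0.5, 0.2, 0.3]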

_id ga0008
id ga0008
authors Koutamanis, Alexander
year 2000
title Redirecting design generation in architecture
source International Conference on Generative Art
summary Design generation has been the traditional culmination of computational design theory in architecture. Motivated either by programmatic and functional complexity (as in space allocation) or by the elegance and power of representational analyses (shape grammars, rectangular arrangements), research has produced generative systems capable of producing new designs that satisfy certain conditions or of exhaustively reproducing entire classes (such as all possible Palladian villas) comprising known and plausible new designs. Most generative systems aimed at a complete spatial design (detailing being an unpopular subject), with minimal if any intervention by the human user/designer. The reason for doing so was either to demonstrate the elegance, power and completeness of a system, or simply that the replacement of the designer with the computer was the fundamental purpose of the system. In other words, the problem was deemed either already resolved by the generative system or too complex for the human designer. The ongoing democratization of the computer stimulates reconsideration of the principles underlying existing design generation in architecture. While the domain analysis upon which most systems are based is insightful and interesting, jumping to a generative conclusion was almost always based on a very sketchy understanding of human creativity and of the computer's role in designing and creativity. Our current perception of such matters suggests a different approach, based on the augmentation of intuitive creative capabilities with computational extensions. The paper proposes that architectural generative design systems can be redirected towards design exploration, including the development of alternatives and variations. Human designers are known to follow inconsistent strategies when confronted with conflicts in their designs. These strategies are not made more consistent by the emerging forms of design analysis. The use of analytical means such as simulation, coupled with the necessity of considering a rapidly growing number of aspects, means that the designer is confronted with huge amounts of information that have to be processed and integrated in the design. Generative design exploration that can combine the analysis results in directed and responsive redesigning seems an effective method for the early stages of the design process, as well as for partial (local) problems in later stages. The transformation of generative systems into feedback support and background assistance for the human designer presupposes a re-orientation of design generation with respect to the issues of local intelligence and autonomy. Design generation has made extensive use of local intelligence but has always kept it subservient to global schemes that tended to be holistic, rigid or deterministic. The acceptance of local conditions as largely independent structures (local coordinating devices) affords a more flexible attitude that permits not only the emergence of internal conflicts but also the resolution of such conflicts in a transparent manner. The resulting autonomy of local coordinating devices can be expanded to practically all aspects and abstraction levels.
The ability to have intelligent behaviour built into components of the design representation, as well as into the spatial and building elements they signify, means that we can create the new, sharper tools required by the complexity resulting from the interpretation of the built environment as a dynamic configuration of co-operating yet autonomous parts that have to be considered independently and in conjunction with each other. P.S. The content of the paper will be illustrated by a couple of computer programs that demonstrate the principles of local intelligence and autonomy in redesigning. It is possible that these programs could be presented as independent interactive exhibits, but it all depends upon the time we can make free for the development of self-sufficient, self-running demonstrations until December.
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 03ad
authors Lottaz, C., Smith, I.F.C., Robert-Nicoud, Y. and Faltings, B.V.
year 2000
title Constraint-based support for negotiation in collaborative design
source Artificial Intelligence in Engineering, Vol: 14, Issue: 3, pp. 261-280.
summary Solution spaces are proposed, instead of single solutions only, to support collaborative tasks during design and construction. Currently, partners involved in construction projects typically assign single values for sub-sets of variables and then proceed, often after tedious negotiations with other partners, to integrate these partial solutions into more complete project descriptions. We suggest the use of constraint solving to express possibly large families of acceptable solutions in order to improve the negotiation process in two ways. On one hand, conflict detection can be performed in an automated manner. Through the constraints collaborators impose, they define large unfeasible areas where no solution to the problem at hand can be expected. An empty intersection of the solution spaces can thus point at a conflict of design goals of the different collaborators at an early stage of the design process. On the other hand, important decision support during negotiation is provided. When a solution space is found, collaborators know during negotiation that they are negotiating about feasible solutions. Negotiation is no longer a means to find a solution to the problem but it takes place in order to find a good or the best solution. Since the consistency of the design remains ensured, collaborators are expected to be less restrictive towards innovative ideas during negotiation. Moreover, constraint techniques using explicit representations of solution spaces can provide tools to visualize trade-offs and illustrate the impact of certain decisions on other parameters. Thus decision-making is improved during the negotiation. New algorithms have been developed at EPFL for solving multi-dimensional nonlinear inequality constraints on continuous variables. Together with intuitive user interfaces such constraint-based support leads to better change management and easier implementation of least commitment decision strategies. It is expected that the results of this research can improve both the efficiency of negotiation processes and the quality of the achieved results.
series journal paper
last changed 2003/04/23 15:50
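
The abstract above treats each partner's constraints as a solution space and detects conflicts as an empty intersection of those spaces. A toy sketch of that idea, reduced to interval constraints on a single shared design variable; the real work solves multi-dimensional nonlinear inequality constraints, and the variable and ranges below are invented:

def intersect(spaces):
    # Each space is a (low, high) interval of acceptable values.
    lo = max(low for low, high in spaces)
    hi = min(high for low, high in spaces)
    return (lo, hi) if lo <= hi else None     # None signals a conflict of design goals

architect  = (300, 450)   # e.g. aesthetic constraint on a beam depth in mm (assumed)
engineer   = (400, 600)   # structural constraint (assumed)
contractor = (350, 500)   # fabrication constraint (assumed)

feasible = intersect([architect, engineer, contractor])
print(feasible)           # (400, 450): negotiation now happens inside a known-feasible space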

_id 2ea9
authors Miranda, Pablo and Coates, Paul
year 2000
title Swarm modelling. The use of Swarm Intelligence to generate architectural form
source 4th International Conference on Generative Art, Politecnico di Milano University, Milan, Italy
summary In general the paper discusses the morphogenetic properties of swarm behaviour, and presents an example of mapping trajectories in the space of forms onto 3D flocking boids. This allows the construction of a kind of analogue to the more familiar string-writing genetic algorithms and genetic programming, which have been reported by CECA. Earlier work with autonomous agents at CECA was concerned with the behaviour of agents embedded in an environment, and interactions between perceptive agents and their surrounding form. As elaborated below, the work covered in this paper is a refinement and abstraction of those experiments. This places the swarm back where perhaps it should have belonged, in the realm of abstract computation, where the emergent behaviours (the familiar flocking effect, and other observable morphologies) are used to control any number of alternative lower-level morphological parameters, and to search the space of all possible variants in a directed and parallel way.
keywords Swarm Intelligence; Autonomous agents; Enactive Perception; Structural Coupling; Sensory-motor Perception; Stigmergy
series other
email
last changed 2003/03/24 15:46

_id 39c6
authors Miranda, Pablo and Coates, Paul
year 2000
title Swarm modelling. The use of Swarm Intelligence to generate architectural form
source 3rd International Conference on Generative Art, Politecnico di Milano University, Milan, Italy
summary In general the paper discusses the morphogenetic properties of swarm behaviour, and presents an example of mapping trajectories in the space of forms onto 3D flocking boids. This allows the construction of a kind of analogue to the more familiar string-writing genetic algorithms and genetic programming, which have been reported by CECA. Earlier work with autonomous agents at CECA was concerned with the behaviour of agents embedded in an environment, and interactions between perceptive agents and their surrounding form. As elaborated below, the work covered in this paper is a refinement and abstraction of those experiments. This places the swarm back where perhaps it should have belonged, in the realm of abstract computation, where the emergent behaviours (the familiar flocking effect, and other observable morphologies) are used to control any number of alternative lower-level morphological parameters, and to search the space of all possible variants in a directed and parallel way.
keywords Swarm Intelligence; Autonomous agents; Enactive Perception; Structural Coupling; Sensory-motor Perception; Stigmergy
series other
email
last changed 2003/03/24 17:13

_id 60a7
authors Monedero, Javier
year 2000
title Parametric design: a review and some experiences
source Automation in Construction 9 (4) (2000) pp. 369-377
summary During the last few years there has been an extraordinary development of computer-aided tools intended to present or communicate the results of architectural projects, but there has not been comparable progress in the development of tools intended to assist designers in generating architectural forms in an easy and interactive way. Even worse, architects who use the powerful means provided by computers as a direct tool to create architectural forms are still an exception. Architecture continues to be produced by traditional means, using the computer as little more than a drafting tool. The main reasons that may explain this situation can be identified rather easily, although there will be significant differences of opinion. In my opinion, it is a mistake to try to advance too rapidly, for instance by proposing integrated design methods using expert systems and artificial intelligence while no adequate tools to generate and modify simple 3D models are available. The modeling tools we have at the present moment are unsatisfactory. Their principal limitation is the lack of appropriate instruments to modify the model interactively once it has been created. This is a fundamental aspect of any design activity, where the designer is constantly going forwards and backwards, re-elaborating again and again some particular aspect of the model, or its general layout, or even coming back to a previous solution that had been temporarily abandoned. This paper presents a general summary of the current situation and recent developments that may be incorporated into architectural design tools in the near future, together with some critical remarks about their relevance to architecture.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id ga0010
id ga0010
authors Moroni, A., Zuben, F. Von and Manzolli, J.
year 2000
title ArTbitrariness in Music
source International Conference on Generative Art
summary Evolution is now considered not only powerful enough to bring about biological entities as complex as humans and consciousness, but also useful in simulation for creating algorithms and structures of higher levels of complexity than could easily be built by design. In the context of artistic domains, the process of human-machine interaction is analyzed as a good framework to explore creativity and to produce results that could not be obtained without this interaction. When evolutionary computation and other computational intelligence methodologies are involved, we denote every attempt to improve aesthetic judgement as ArTbitrariness, interpreted as an interactive iterative optimization process. ArTbitrariness is also suggested as an effective way to produce art through an efficient manipulation of information and a proper use of computational creativity to increase the complexity of the results without neglecting the aesthetic aspects [Moroni et al., 2000]. Our emphasis is on an approach to interactive music composition. The problem of computer generation of musical material has received extensive attention, and a subclass of the field of algorithmic composition includes those applications which use the computer as something in between an instrument, which a user "plays" through the application's interface, and a compositional aid, which a user experiments with in order to generate stimulating and varied musical material. This approach was adopted in Vox Populi, a hybrid made up of an instrument and a compositional environment. Differently from other systems based on genetic algorithms or evolutionary computation, in which people have to listen to and judge the musical items, Vox Populi uses the computer and the mouse as real-time music controllers, acting as a new interactive computer-based musical instrument. The interface is designed to be flexible for the user to modify the music being generated. It explores evolutionary computation in the context of algorithmic composition and provides a graphical interface that allows the user to modify the tonal center and the voice range, changing the evolution of the music by using the mouse [Moroni et al., 1999]. A piece of music consists of several sets of musical material manipulated and exposed to the listener, for example pitches, harmonies, rhythms, timbres, etc. They are composed of a finite number of elements and, basically, the aim of a composer is to organize those elements in an aesthetic way. Modeling a piece as a dynamic system implies a view in which the composer draws trajectories or orbits using the elements of each set [Manzolli, 1991]. Nonlinear iterative mappings are associated with interface controls. On the next page two examples of nonlinear iterative mappings with their resulting musical pieces are shown. The mappings may give rise to attractors, defined as geometric figures that represent the set of stationary states of a nonlinear dynamic system, or simply trajectories to which the system is attracted. The relevance of this approach goes beyond music applications per se. Computer music systems that are built on the basis of a solid theory can be coherently embedded into multimedia environments. The richness and specialty of the music domain are likely to initiate new thinking and ideas, which will have an impact on areas such as knowledge representation and planning, and on the design of visual formalisms and human-computer interfaces in general.
Above and below, the Vox Populi interface is depicted, showing two nonlinear iterative mappings with their resulting musical pieces. References: [Manzolli, 1991] J. Manzolli. Harmonic Strange Attractors, CEM BULLETIN, Vol. 2, No. 2, 4-7, 1991. [Moroni et al., 1999] Moroni, J. Manzolli, F. Von Zuben, R. Gudwin. Evolutionary Computation applied to Algorithmic Composition, Proceedings of CEC99 - IEEE International Conference on Evolutionary Computation, Washington D.C., pp. 807-811, 1999. [Moroni et al., 2000] Moroni, A., Von Zuben, F. and Manzolli, J. ArTbitration, Las Vegas, USA: Proceedings of the 2000 Genetic and Evolutionary Computation Conference Workshop Program - GECCO, pp. 143-145, 2000.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
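
The abstract above associates nonlinear iterative mappings, and the attractors they give rise to, with interface controls that steer the generated music. A minimal sketch of driving a pitch sequence with such a mapping; the logistic map and the MIDI-like pitch range are illustrative choices, not the Vox Populi mapping itself:

def logistic_orbit(x0, r, steps):
    # Iterate the nonlinear mapping x -> r*x*(1-x) and record the orbit.
    x, orbit = x0, []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

def to_pitches(orbit, low=48, high=84):
    # Map orbit values in [0, 1] onto a MIDI-like pitch range.
    return [low + round(v * (high - low)) for v in orbit]

print(to_pitches(logistic_orbit(0.4, 3.7, 16)))
# Different values of r settle onto different attractors, hence different
# trajectories through the available pitch material.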

_id d8df
authors Naticchia, Berardo
year 1999
title Physical Knowledge in Patterns: Bayesian Network Models for Preliminary Design
doi https://doi.org/10.52842/conf.ecaade.1999.611
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 611-619
summary Computer applications in design have pursued two main development directions: analytical modelling and information technology. The former has produced a large number of tools for reality simulation (e.g. finite element models); the latter is producing an equally large number of advances in conceptual design support (e.g. artificial intelligence tools). Nevertheless, we can trace only rare interactions between the computational models related to these different approaches. This lack of integration is the main reason for the difficulty of applying CAAD to the preliminary stage of design, where logical and quantitative reasoning are closely related in a process that we often call 'qualitative evaluation'. This paper briefly surveys the current development of qualitative physical models applied in design and proposes a general approach for modelling physical behaviour by means of Bayesian networks, which we are employing to develop a tutoring and coaching system for the preliminary design of natural ventilation in halls, called VENTPad. This tool explores the possibility of modelling the causal mechanisms that operate in real systems in order to allow a number of integrated logical and quantitative inferences about the fluid-dynamic behaviour of a hall. This application could be an interesting connection tool between logical and analytical procedures in preliminary design aiding, able to help students or unskilled architects, both by guiding them through the analysis of numerical data (e.g. obtained with sophisticated Computational Fluid Dynamics software) or experimental data (e.g. obtained with laboratory test models) and by suggesting improvements to the design.
keywords Qualitative Physical Modelling, Preliminary Design, Bayesian Networks
series eCAADe
email
last changed 2022/06/07 07:59
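
The paper above models causal physical knowledge with Bayesian networks so that logical and quantitative inference can be combined. A tiny hand-coded two-variable example of the kind of inference involved; the variables and probabilities are invented for illustration and are not taken from VENTPad:

# Model: OpeningSize (small/large) -> AdequateVentilation (yes/no)
p_opening = {"small": 0.5, "large": 0.5}                    # prior over the cause (assumed)
p_vent_yes_given_opening = {"small": 0.2, "large": 0.8}     # P(ventilation adequate | opening), assumed

# Posterior P(opening | ventilation adequate) by Bayes' rule.
joint = {o: p_opening[o] * p_vent_yes_given_opening[o] for o in p_opening}
evidence = sum(joint.values())
posterior = {o: joint[o] / evidence for o in joint}
print(posterior)   # {'small': 0.2, 'large': 0.8}

# A real network would chain many such conditional tables, letting a student
# reason backwards from observed behaviour to plausible design causes.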

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and, briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an "image idea generator". I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was "out of nothing". What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an "easy to use graphical effects program", but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but, just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simple closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And many of the methods I used, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather, maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques. Figure 3 Trellis interpreted with "graphic ivy". Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating Web graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in the future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his Web site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic. This is a wild science-fiction idea which comes into my head regularly. Gliftic manages to surprise its users with the images it makes, but it is currently limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to become really usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could in the future become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming: presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's first law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's second law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's ten commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric."

3. References
1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan 1st International Conference on Generative Art.
2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999.
3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
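The abstract leaves the shape-breeding step informal. Purely as an illustrative aside (this is not the author's code; the resampling step and the single blend-ratio crossover are assumptions of this sketch), one naive way to cross two point-list "genes" is to resample both parents to the same number of vertices and interpolate corresponding points:

```python
import math
import random

def resample(polygon, n):
    """Resample a closed polygon (list of (x, y) points) to n points
    spaced evenly along its perimeter."""
    pts = polygon + [polygon[0]]                      # close the loop
    edges = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(edges)
    out = []
    for k in range(n):
        target = total * k / n                        # arc length of the k-th sample
        i = 0
        while i < len(edges) - 1 and target > edges[i]:
            target -= edges[i]
            i += 1
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        t = target / edges[i] if edges[i] else 0.0
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def breed(parent_a, parent_b, n=100):
    """Cross two shape 'genes' by blending corresponding resampled points."""
    a, b = resample(parent_a, n), resample(parent_b, n)
    w = random.random()                               # 0 = all parent A, 1 = all parent B
    return [(ax + w * (bx - ax), ay + w * (by - ay))
            for (ax, ay), (bx, by) in zip(a, b)]

# "What do you get when you cross a circle with a square?"
circle = [(math.cos(2 * math.pi * i / 100), math.sin(2 * math.pi * i / 100))
          for i in range(100)]
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
child = breed(circle, square)
```

Because every vertex is blended by the same ratio, repeated generations drift toward averaged outlines, which is consistent with the author's observation that such schemes tend to produce amorphous blobs without distinct family characteristics.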
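Similarly, the HSV color scheme of section 1.5 can be pictured as jitter around a base hue, saturation and value. A minimal sketch with hypothetical parameter names (Gliftic's actual interface is not documented in the abstract):

```python
import colorsys
import random

def hsv_scheme(base_hue, base_sat, base_val, variation, count=8):
    """Return `count` RGB colors jittered around an HSV base.
    variation = 0.0 gives (nearly) a single color, 1.0 a wide spread."""
    colors = []
    for _ in range(count):
        h = (base_hue + random.uniform(-variation, variation) * 0.5) % 1.0
        s = min(1.0, max(0.0, base_sat + random.uniform(-variation, variation)))
        v = min(1.0, max(0.0, base_val + random.uniform(-variation, variation)))
        colors.append(colorsys.hsv_to_rgb(h, s, v))
    return colors

# A mostly-red scheme with a small amount of variation.
palette = hsv_scheme(base_hue=0.0, base_sat=0.8, base_val=0.9, variation=0.15)
```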
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 1bb0
authors Russell, S. and Norvig, P.
year 1995
title Artificial Intelligence: A Modern Approach
source Prentice Hall, Englewood Cliffs, NJ
summary Humankind has given itself the scientific name homo sapiens--man the wise--because our mental capacities are so important to our everyday lives and our sense of self. The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. But unlike philosophy and psychology, which are also concerned with intelligence, AI strives to build intelligent entities as well as understand them. Another reason to study AI is that these constructed intelligent entities are interesting and useful in their own right. AI has produced many significant and impressive products even at this early stage in its development. Although no one can predict the future in detail, it is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization. AI addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain, whether biological or electronic, to perceive, understand, predict, and manipulate a world far larger and more complicated than itself? How do we go about making something with those properties? These are hard questions, but unlike the search for faster-than-light travel or an antigravity device, the researcher in AI has solid evidence that the quest is possible. All the researcher has to do is look in the mirror to see an example of an intelligent system. AI is one of the newest disciplines. It was formally initiated in 1956, when the name was coined, although at that point work had been under way for about five years. Along with modern genetics, it is regularly cited as the "field I would most like to be in" by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study before one can contribute new ideas. AI, on the other hand, still has openings for a full-time Einstein. The study of intelligence is also one of the oldest disciplines. For over 2000 years, philosophers have tried to understand how seeing, learning, remembering, and reasoning could, or should, be done. The advent of usable computers in the early 1950s turned the learned but armchair speculation concerning these mental faculties into a real experimental and theoretical discipline. Many felt that the new "Electronic Super-Brains" had unlimited potential for intelligence. "Faster Than Einstein" was a typical headline. But as well as providing a vehicle for creating artificially intelligent entities, the computer provides a tool for testing theories of intelligence, and many theories failed to withstand the test--a case of "out of the armchair, into the fire." AI has turned out to be more difficult than many at first imagined, and modern ideas are much richer, more subtle, and more interesting as a result. AI currently encompasses a huge variety of subfields, from general-purpose areas such as perception and logical reasoning, to specific tasks such as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases. Often, scientists in other fields move gradually into artificial intelligence, where they find the tools and vocabulary to systematize and automate the intellectual tasks on which they have been working all their lives. Similarly, workers in AI can choose to apply their methods to any area of human intellectual endeavor.
In this sense, it is truly a universal field.
series other
last changed 2003/04/23 15:14

_id 735b
authors Tolone, W.J.
year 2000
title Virtual situation rooms: connecting people across enterprises for supply-chain agility
source Computer-Aided Design, Vol. 32 (2) (2000) pp. 109-117
summary Agility and time-based manufacturing are critical success factors for today's manufacturing enterprise. To be competitive, enterprises must integrate their supply chains more effectively and forge close memberships with customers and suppliers more quickly. Consequently, technologies must be developed that enable enterprises to respond to consumer demand more quickly, integrate with suppliers more effectively, adapt to market variations more efficiently and evolve product designs with manufacturing practices more seamlessly. The mission of the Extended-Enterprise Coalition for Integrated Collaborative Manufacturing Systems is to research, develop, and demonstrate technologies to enable the integration of manufacturing applications in a multi-company supply chain planning and execution environment. We believe real-time and asynchronous collaboration technology will play a critical role in allowing manufacturers to increase their supply chain agility. We are realizing our efforts through our Virtual Situation Room (VSR) technology. The primary goal of the VSR technology is to enhance current ad-hoc, limited methods and mechanisms for spontaneous, real-time communication using feature-rich, industry standards-based building blocks and network protocols. VSR technology is being designed to find and engage quickly all relevant members of a problem solving team supported by highly interactive, conversational access to information and control and enabled by business processes, security policies and technologies, intelligence, and integration tools.
keywords Collaborative Systems, Supply Chain Integration, Real-Time Conferencing
series journal paper
email
last changed 2003/05/15 21:33

_id f91f
authors Elezkurtaj, Tomor and Franck, Georg
year 2000
title Geometry and Topology. A User-Interface to Artificial Evolution in Architectural Design
doi https://doi.org/10.52842/conf.ecaade.2000.309
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 309-312
summary The paper presents a system that supports architectural floor plan design interactively. The method of problem solving implemented is a combination of an evolutionary strategy (ES) and a genetic algorithm (GA). The problem to be solved consists of fitting a number of rooms (n) into an outline while observing functional requirements. The rooms themselves are specified in terms of size, function and preferred proportion. The functional requirements entering the fitness functions are expressed in terms of the proportions of the rooms and the neighbourhood relations between them. The system is designed to deal with one of the core problems of computer supported creativity in architecture. For architecture, not only form but also function is relevant. Without specifying the function that a piece of architecture is supposed to fulfil, it is hard to support its design by computerised methods of problem solving and optimisation. In architecture, however, function relates to comfort, ease of use, and aesthetics as well. Since it is extraordinarily hard, if not impossible, to operationalise aesthetics, computer aided support of creative architectural design is still in its infancy.
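The abstract says the fitness terms are expressed through room proportions and neighbourhood relations but does not spell them out. A minimal sketch of what such a fitness function might look like, with assumed data structures and unit penalty weights (not the authors' implementation):

```python
def proportion_penalty(room):
    """Penalty grows as a room's aspect ratio departs from its preferred one."""
    aspect = max(room["w"], room["h"]) / min(room["w"], room["h"])
    return abs(aspect - room["preferred_aspect"])

def touches(a, b):
    """True if two axis-aligned rooms share an edge (not just a corner)."""
    overlap_x = min(a["x"] + a["w"], b["x"] + b["w"]) - max(a["x"], b["x"])
    overlap_y = min(a["y"] + a["h"], b["y"] + b["h"]) - max(a["y"], b["y"])
    return ((overlap_x == 0 and overlap_y > 0) or
            (overlap_y == 0 and overlap_x > 0))

def fitness(rooms, required_neighbours):
    """Lower is better: proportion penalties plus one unit per missing adjacency."""
    penalty = sum(proportion_penalty(r) for r in rooms.values())
    penalty += sum(1 for i, j in required_neighbours
                   if not touches(rooms[i], rooms[j]))
    return penalty

rooms = {
    "kitchen": {"x": 0, "y": 0, "w": 4, "h": 3, "preferred_aspect": 1.3},
    "dining":  {"x": 4, "y": 0, "w": 4, "h": 3, "preferred_aspect": 1.3},
}
print(fitness(rooms, required_neighbours=[("kitchen", "dining")]))
```

An ES/GA loop of the kind the paper describes would mutate and recombine candidate layouts and keep those with lower penalty scores.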
keywords New AI, Genetic Algorithms, Artificial Evolution, creative Architectural Design, Interactive Design, Topology
series eCAADe
email
more http://www.uni-weimar.de/ecaade/
last changed 2022/06/07 07:55

_id fa1b
authors Haapasalo, H.
year 2000
title Creative computer aided architectural design: An internal approach to the design process
source University of Oulu (Finland)
summary This survey can be seen as quite multidisciplinary research. The basis for this study has been the inapplicability of different CAD user interfaces in architectural design. The objective of this research is to improve architectural design from the creative problem-solving viewpoint, where the main goal is to intensify architectural design by using information technology. The research is linked to theory of methods, where an internal approach to the design process means studying the actions and thinking of architects in the design process. The research approach has been inspired by hermeneutics. The human thinking process is divided into subconscious and conscious thinking. The subconscious plays a crucial role in creative work. The opposite of creative work is systematic work, which attempts to find solutions by means of logical inference. Both creative and systematic problem solving have had periods of predominance in the history of Finnish architecture. The perceptions in the present study indicate that neither method alone can produce optimal results. Logic is one of the tools of creativity, since the analysis and implementation of creative solutions require logical thinking. The creative process cannot be controlled directly, but it can be enhanced by creating favourable working conditions for creativity. Present user interfaces can make draughting and the creation of alternatives quicker and more effective in the final stages of designing. Only two thirds of the architects use computers in working design, even though CAD systems are being acquired by a greater number of offices. User interfaces are at present inflexible for sketching. Draughting and sketching are the basic methods of creative work for architects. When working with the mouse, keyboard and screen, the natural communication channel is impaired, since there is only a weak connection between the hand and the line being drawn on the screen. There is no direct correspondence between hand movements and the lines that appear on the screen, and important items cannot be emphasized by, for example, pressing the pencil more heavily than normal. In traditional sketching the pen is a natural extension of the hand, as sketching can sometimes be controlled entirely by the unconscious. Conscious effort in using the computer shifts attention away from the actual design process. However, some architects have reached a sufficiently high level of skill in the use of computer applications to be able to use them effectively in designing without any harmful effect on the creative process. There are several possibilities for developing CAD systems aimed at architectural design, but the practical creative design process has developed over a long period of time, and changing it in a short period of time would be very difficult. Although CAD has had, and will have, some evolutionary influence on the design process of architects as an entity, the future CAD user interface should adopt its features from the architect's practical and creative design process, and not vice versa.
keywords Creativity, Systematicism, Sketching
series thesis:PhD
email
more http://herkules.oulu.fi/isbn9514257545/
last changed 2003/02/12 22:37
