CumInCAD is a Cumulative Index about publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 623

_id 5a4e
authors Jeng, Taysheng
year 1999
title Design coordination modeling: A distributed computer environment for managing design activities
source College of Architecture, Georgia Institute of Technology
summary The objective of this thesis is to develop an effective multi-user computer environment supporting design collaboration. This research takes a knowledge-based approach to capturing meaningful process semantics specified by designers in order to realize work effectively. It concentrates on establishing a process infrastructure and tools for managing the activities of a building design team, with emphasis on remote collaboration and distributed coordination. The results of this research include a design coordination model (DCM) and the prototype of a future generation of distributed coordination environments. DCM provides a digital representation of design processes and supports visibility of coordination logic within a CAD environment. Some extended features of distributed coordination are explored in DCM, which is equipped with a model server developed using a web-based three-tier computing architecture.
keywords Data Processing
series thesis:PhD
email
last changed 2003/02/12 22:37

_id 170f
authors Mora Padrón, Víctor Manuel
year 1999
title Integration and Application of Technologies CAD in a Regional Reality - Methodological and Formative Experience in Industrial Design and Products Development
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 295-297
summary The experience to present is begun and developed during the academic year 1998, together to the course of IV pupils level of the Industrial Design career in the Universidad del Bío-Bío, labor that I have continued assuming during the present year, with a new youths generation. We have accomplished our academic work taking as original of study and base, the industrial and economic situation of the VIII Region, context in the one which we outline and we commit our needs formative as well as methodological to the teaching of the discipline of the Industrial Design. Consequently, we have defined a high-priority factor among pupils and teachers to reach the objectives and activities program of the course, the one which envisages first of all a commitment of attitude and integrative reflection among our academic activity and the territorial human context in the one which we inhabit. In Chile the activity of the industrial designer, his knowledge and by so much his capacity of producing innovation, it has been something practically unknown in the industrial productive area. However, the current national development challenges and the search by widening our markets, they have created and established a conscience of the fact that the Chilean industrial product must have a modern and effective competitiveness if wants be made participates in segments of the international marketing. It is in this new vision where the design provides in decisive form to consider and add a commercial and cultural value in our products. To the university corresponds the role of transmitting the knowledge generated in his classrooms toward the society, for thus to promote a development in the widest sense of the word. Under this prism the small and median regional industry in their various areas, have not integrated in the national arrangement in what concerns to the design and development of new and integral products. 
The design and the innovation as motor concept for a competitiveness and permanency in new markets, it has not entered yet in the entrepreneurial culture. If we want to save this situation, it is necessary that the regional entrepreneur knows the importance of the Design with new models development and examples of application, through concrete cases and with demands, that serve of base to demonstrate that the alliance among Designer and Industry, opens new perspectives of growth upon offering innovation and value added factors as new competitiveness tools. Today the communication and the managing of the information is a strategic weapon, to the moment of making changes in a social dynamics, so much at local level as global. It is with this look that our efforts and objective are centered in forming to our pupils with an integration speech and direct application toward the industrial community of our region, using the communication and the technological information as a tool validates and effective to solve the receipt in the visualization of our projects, designs and solutions of products. As complement to the development of the proposed topic will be exhibited a series of projects accomplished by the pupils for some regional industries, in which the three dimensional modeling and the use of programs vectoriales demonstrate the efficiency of communication and comprehension of the proposals, its complexity and constructive possibilities.
series SIGRADI
email
last changed 2016/03/10 09:55

_id ecaade03_059_29_russel
id ecaade03_059_29_russel
authors Russell, P., Stachelhaus, T. and Elger, D.
year 2003
title CSNCW: Computer Supported Non-Cooperative Work - Barriers to Successful Virtual Design Studios
source Digital Design [21st eCAADe Conference Proceedings / ISBN 0-9541183-1-6] Graz (Austria) 17-20 September 2003, pp. 59-66
doi https://doi.org/10.52842/conf.ecaade.2003.059
summary The paper describes a design studio jointly undertaken by four Universities. With due respect given to the groundbreaking work carried out by Wojtowicz and Butelski (1998) and Donath et al. (1999), and some of the problems described therein, the studio partners had all had positive, if not exemplary, experiences with co-operative studio projects carried out over the internet. The positive experience and development of concepts have been well documented in numerous publications over the last 5 years. A platform developed by one of the partners for this type of collaboration is in its third generation and has hosted well over 1000 students from 12 different universities in over 40 projects. With this amount of experience, the four partners entered into the joint studio project with high expectations and little fear of failure. The experimental aspect of the studio, combined with the "well trodden" path of previous virtual design studios, lent an air of exploration to an otherwise well-worn format. Everything looked good, or so we thought. This is not to say that previous experiments were without tribulations, but the problems encountered earlier were usually spread over the studio partners and thus the levels and distribution of frustration were more or less balanced. This raised a (theoretically) well-founded expectation of success. In execution, it was quite the opposite. In this case, the difficulties tended to be concentrated towards one or two of the partners. The partners spoke the same language, but came with different sets of goals and hence interpreted the agreements to suit those goals. This was not done maliciously; however, the results were devastating to the project and, most importantly, to the student groups. The differing pedagogical methods of the various institutes played a strong role in steering the design critique at each school.
Alongside these difficulties, the flexibility (or lack thereof) of each university's calendar as well as national and university level holidays led to additional problems in coordination. And of course (as if all this were not enough), the technical infrastructure, local capabilities and willingness to tackle technological problems were heterogeneous (to put it mildly).
keywords CSCW: Virtual Design Studio; Mistakes in Pedagogy
series eCAADe
email
more http://caad.arch.rwth-aachen.de
last changed 2022/06/07 07:56

_id 4a1a
authors Laird, J.E.
year 2001
title Using a Computer Game to Develop Advanced AI
source Computer, 34 (7), July pp. 70-75
summary Although computer and video games have existed for fewer than 40 years, they are already serious business. Entertainment software, the entertainment industry's fastest growing segment, currently generates sales surpassing the film industry's gross revenues. Computer games have significantly affected personal computer sales, providing the initial application for CD-ROMs, driving advancements in graphics technology, and motivating the purchase of ever faster machines. Next-generation computer game consoles are extending this trend, with Sony and Toshiba spending $2 billion to develop the Playstation 2 and Microsoft planning to spend more than $500 million just to market its Xbox console [1]. These investments have paid off. In the past five years, the quality and complexity of computer games have advanced significantly. Computer graphics have shown the most noticeable improvement, with the number of polygons rendered in a scene increasing almost exponentially each year, significantly enhancing the games' realism. For example, the original Playstation, released in 1995, renders 300,000 polygons per second, while Sega's Dreamcast, released in 1999, renders 3 million polygons per second. The Playstation 2 sets the current standard, rendering 66 million polygons per second, while projections indicate the Xbox will render more than 100 million polygons per second. Thus, the images on today's $300 game consoles rival or surpass those available on the previous decade's $50,000 computers. The impact of these improvements is evident in the complexity and realism of the environments underlying today's games, from detailed indoor rooms and corridors to vast outdoor landscapes. These games populate the environments with both human and computer controlled characters, making them a rich laboratory for artificial intelligence research into developing intelligent and social autonomous agents.
Indeed, computer games offer a fitting subject for serious academic study, undergraduate education, and graduate student and faculty research. Creating and efficiently rendering these environments touches on every topic in a computer science curriculum. The "Teaching Game Design" sidebar describes the benefits and challenges of developing computer game design courses, an increasingly popular field of study.
series journal paper
last changed 2003/04/23 15:50

_id ga0010
id ga0010
authors Moroni, A., Zuben, F. Von and Manzolli, J.
year 2000
title ArTbitrariness in Music
source International Conference on Generative Art
summary Evolution is now considered not only powerful enough to bring about biological entities as complex as humans and consciousness, but also useful in simulation for creating algorithms and structures of higher levels of complexity than could easily be built by design. In the context of artistic domains, the process of human-machine interaction is analyzed as a good framework in which to explore creativity and to produce results that could not be obtained without this interaction. When evolutionary computation and other computational intelligence methodologies are involved, we denote every attempt to improve aesthetic judgement as ArTbitrariness, interpreted as an interactive, iterative optimization process. ArTbitrariness is also suggested as an effective way to produce art through an efficient manipulation of information and a proper use of computational creativity to increase the complexity of the results without neglecting the aesthetic aspects [Moroni et al., 2000]. Our emphasis is on an approach to interactive music composition. The problem of computer generation of musical material has received extensive attention, and a subclass of the field of algorithmic composition includes those applications which use the computer as something in between an instrument, which a user "plays" through the application's interface, and a compositional aid, which a user experiments with in order to generate stimulating and varied musical material. This approach was adopted in Vox Populi, a hybrid made up of an instrument and a compositional environment. Unlike other systems based on genetic algorithms or evolutionary computation, in which people have to listen to and judge the musical items, Vox Populi uses the computer and the mouse as real-time music controllers, acting as a new interactive computer-based musical instrument. The interface is designed to be flexible enough for the user to modify the music being generated.
It explores evolutionary computation in the context of algorithmic composition and provides a graphical interface that allows the user to modify the tonal center and the voice range, changing the evolution of the music with the mouse [Moroni et al., 1999]. A piece of music consists of several sets of musical material manipulated and exposed to the listener, for example pitches, harmonies, rhythms, timbres, etc. These are composed of a finite number of elements and, basically, the aim of a composer is to organize those elements in an aesthetic way. Modeling a piece as a dynamic system implies a view in which the composer draws trajectories or orbits using the elements of each set [Manzolli, 1991]. Nonlinear iterative mappings are associated with interface controls. The mappings may give rise to attractors, defined as geometric figures that represent the set of stationary states of a nonlinear dynamic system, or simply trajectories to which the system is attracted. The relevance of this approach goes beyond music applications per se. Computer music systems that are built on the basis of a solid theory can be coherently embedded into multimedia environments. The richness and specialty of the music domain are likely to initiate new thinking and ideas, which will have an impact on areas such as knowledge representation and planning, and on the design of visual formalisms and human-computer interfaces in general. The Vox Populi interface is depicted with two examples of nonlinear iterative mappings and their resulting musical pieces. References: [Manzolli, 1991] J. Manzolli. Harmonic Strange Attractors, CEM BULLETIN, Vol. 2, No. 2, pp. 4-7, 1991. [Moroni et al., 1999] A. Moroni, J. Manzolli, F. Von Zuben and R. Gudwin. Evolutionary Computation Applied to Algorithmic Composition, Proceedings of CEC99 - IEEE International Conference on Evolutionary Computation, Washington D.C., pp. 807-811, 1999. [Moroni et al., 2000] A. Moroni, F. Von Zuben and J. Manzolli. ArTbitration, Proceedings of the 2000 Genetic and Evolutionary Computation Conference Workshop Program (GECCO), Las Vegas, USA, pp. 143-145, 2000.
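The abstract's idea of driving pitch material from a nonlinear iterative mapping can be sketched minimally as follows. This is a hypothetical illustration, not the actual Vox Populi code: it iterates the classic logistic map and quantizes the orbit to MIDI pitches inside a voice range around a tonal center (the function names and parameters are illustrative).

```python
# Hypothetical sketch: a nonlinear iterative mapping whose orbit is
# quantized to MIDI pitches inside a voice range (not the Vox Populi code).

def logistic_orbit(x0, r, steps):
    """Iterate the logistic map x -> r*x*(1-x), a simple nonlinear mapping."""
    x = x0
    orbit = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

def orbit_to_pitches(orbit, center=60, voice_range=12):
    """Map orbit values in [0, 1] to MIDI pitches around a tonal center."""
    low = center - voice_range // 2
    return [low + round(v * voice_range) for v in orbit]

# For r near 4 the orbit is chaotic, so the pitch sequence never settles.
pitches = orbit_to_pitches(logistic_orbit(0.3, 3.9, 8))
```

Changing `center` and `voice_range` plays the role of the tonal-center and voice-range controls the abstract describes; steering them with the mouse in real time would reshape the orbit's musical rendering.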
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree.
3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout. And the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e.
coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator however the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques Figure 3 Trellis interpreted with "graphic ivy" Figure 4 Regular dots interpreted as "sparks" 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions.
Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: Colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: The user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: The user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images.
There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic, three possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines.
Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks.
It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art" by Adrian Ward and Geof Cox. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
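The shape-breeding idea described in the abstract (a form's "genes" are its polygon vertices, and two parents are crossed to produce a child) can be sketched as follows. This is a hypothetical illustration of one crossover scheme, not Ransen's actual code: it breeds two equal-length vertex lists by weighted interpolation, which preserves rotational symmetry when both parents share it (all names are illustrative).

```python
# Hypothetical sketch of breeding polygon "genes": a child shape is a
# weighted interpolation of corresponding vertices of two parent shapes.
import math

def regular_polygon(sides, radius=1.0):
    """Vertex list; with many sides this approximates a circle."""
    return [(radius * math.cos(2 * math.pi * i / sides),
             radius * math.sin(2 * math.pi * i / sides))
            for i in range(sides)]

def breed(parent_a, parent_b, weight=0.5):
    """Cross two shapes with equal vertex counts; weight=0 gives parent_a."""
    assert len(parent_a) == len(parent_b)
    return [((1 - weight) * ax + weight * bx,
             (1 - weight) * ay + weight * by)
            for (ax, ay), (bx, by) in zip(parent_a, parent_b)]

# Breed a unit "circle" (100-gon) with a larger one; the child lies between.
child = breed(regular_polygon(100), regular_polygon(100, radius=1.5),
              weight=0.25)
```

As the abstract notes, simple coordinate blends like this tend toward amorphous blobs over many generations; the sketch only makes concrete why the genotype-as-point-list representation is easy to implement yet hard to breed well.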
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id cf4d
authors Zamanian, M.K. and Pittman, J.H.
year 1999
title A software industry perspective on AEC information models for distributed collaboration
source Automation in Construction 8 (3) (1999) pp. 237-248
summary This paper focuses on information modeling and computing technologies that are most relevant to the emerging software for the Architecture, Engineering, and Construction (AEC) industry. After a brief introduction to the AEC industry and its present state of computer-based information sharing and collaboration, a set of requirements for AEC information models is identified. Next, a number of key information modeling and standards initiatives for the AEC domain are briefly discussed, followed by a review of the emerging object and Internet technologies. The paper then presents our perspective on the challenges and potential directions for using object-based information models in a new generation of AEC software systems intended to offer a component-based open architecture for distributed collaboration.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:23

_id cf2011_p109
id cf2011_p109
authors Abdelmohsen, Sherif; Lee Jinkook, Eastman Chuck
year 2011
title Automated Cost Analysis of Concept Design BIM Models
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 403-418.
summary This paper introduces the automated cost analysis developed for the General Services Administration (GSA) and the analysis results of a case study involving a concept design courthouse BIM model. The purpose of this study is to investigate interoperability issues related to integrating design and analysis tools, specifically BIM models and cost models. Previous efforts to generate cost estimates from BIM models have focused on developing two necessary but disjoint processes: 1) extracting accurate quantity takeoff data from BIM models, and 2) manipulating cost analysis results to provide informative feedback. Some recent efforts involve developing detailed definitions, enhanced IFC-based formats and in-house standards for assemblies that encompass building models (e.g. US Corps of Engineers). Some commercial applications enhance the level of detail associated with BIM objects with assembly descriptions to produce lightweight BIM models that can be used by different applications for various purposes (e.g. Autodesk for design review, Navisworks for scheduling, Innovaya for visual estimating, etc.). This study suggests the integration of design and analysis tools by means of managing all building data in one shared repository accessible to multiple domains in the AEC industry (Eastman, 1999; Eastman et al., 2008; authors, 2010). Our approach aims at providing an integrated platform that incorporates a quantity takeoff extraction method from IFC models, a cost analysis model, and a comprehensive cost reporting scheme, using the Solibri Model Checker (SMC) development environment. Approach: As part of the effort to improve the performance of federal buildings, GSA evaluates concept design alternatives based on their compliance with specific requirements, including cost analysis.
Two basic challenges emerge in the process of automating cost analysis for BIM models: 1) at this early concept design stage, only minimal information is available to produce a reliable analysis, such as space names and areas, and building gross area; 2) design alternatives share a lot of programmatic requirements such as location, functional spaces and other data. It is thus crucial to integrate other factors that contribute to substantial cost differences, such as perimeter, and exterior wall and roof areas. These are extracted from BIM models using IFC data and input through XML into the Parametric Cost Engineering System (PACES, 2010) software to generate cost analysis reports. PACES uses this limited dataset at a conceptual stage, together with RSMeans (2010) data, to infer cost assemblies at different levels of detail. Functionalities: Cost model import module: The cost model import module has three main functionalities: generating the input dataset necessary for the cost model, performing a semantic mapping between building-type-specific names and name aggregation structures in PACES known as functional space areas (FSAs), and managing cost data external to the BIM model, such as location and construction duration. The module computes building data such as footprint, gross area, perimeter, exterior wall and roof area and building space areas. This data is generated through SMC in the form of an XML file and imported into PACES. Reporting module: The reporting module uses the cost report generated by PACES to develop a comprehensive report in the form of an Excel spreadsheet. This report consists of a systems-elemental estimate that shows the main systems of the building in terms of UniFormat categories, escalation, markups, overhead and conditions, a UniFormat Level III report, and a cost breakdown that provides a summary of material, equipment, labor and total costs.
Building parameters are integrated in the report to provide insight on the variations among design alternatives.
keywords building information modeling, interoperability, cost analysis, IFC
series CAAD Futures
email
last changed 2012/02/11 19:21
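As a loose illustration of the cost-model import module described in the record above, the sketch below aggregates space data into building metrics and serializes them to XML, roughly in the spirit of the SMC-to-PACES hand-off. It is a minimal sketch only: the plain dictionaries stand in for real IFC IfcSpace data, and the element names (CostModelInput, FSA, etc.) are invented for illustration, not PACES's actual import schema.

```python
import xml.etree.ElementTree as ET

def build_cost_model_input(spaces, perimeter, roof_area, ext_wall_area):
    """Aggregate simplified space data into an XML dataset of the kind a
    cost model might import. `spaces` is a list of dicts with 'name' and
    'area' keys (a stand-in for data extracted from IfcSpace objects)."""
    gross_area = sum(s["area"] for s in spaces)
    root = ET.Element("CostModelInput")
    bldg = ET.SubElement(root, "Building")
    ET.SubElement(bldg, "GrossArea").text = f"{gross_area:.1f}"
    ET.SubElement(bldg, "Perimeter").text = f"{perimeter:.1f}"
    ET.SubElement(bldg, "RoofArea").text = f"{roof_area:.1f}"
    ET.SubElement(bldg, "ExteriorWallArea").text = f"{ext_wall_area:.1f}"
    # one entry per functional space area (FSA), keyed by space name
    fsas = ET.SubElement(root, "FunctionalSpaceAreas")
    for s in spaces:
        fsa = ET.SubElement(fsas, "FSA", name=s["name"])
        fsa.text = f'{s["area"]:.1f}'
    return ET.tostring(root, encoding="unicode")

spaces = [{"name": "Courtroom", "area": 240.0},
          {"name": "Judges Chambers", "area": 85.5}]
xml_out = build_cost_model_input(spaces, perimeter=120.0,
                                 roof_area=900.0, ext_wall_area=480.0)
```

The semantic mapping step described in the abstract (building-type-specific names to FSA aggregation structures) would sit between extraction and serialization; it is omitted here for brevity.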

_id 4b48
authors Dourish, P.
year 1999
title Where the Footprints Lead: Tracking down other roles for social navigation
source Social Navigation of Information Space, eds. A. Munro, K. H. and D Benyon. London: Springer-Verlag, pp 15-34
summary Collaborative filtering was proposed in the early 1990s as a means of managing access to large information spaces by capturing and exploiting aspects of the experiences of previous users of the same information. Social navigation is a more general form of this style of interaction, and with the widening scope of the Internet as an information provider, systems of this sort have rapidly moved from early research prototypes to deployed services in everyday use. On the other hand, to most of the HCI community the term "social navigation" is largely synonymous with "recommendation systems": systems that match your interests to those of others and, on that basis, provide recommendations about such things as music, books, articles and films that you might enjoy. The challenge for social navigation, as an area of research and development endeavour, is to move beyond this rather limited view of its role; and to do this, we must take a broader view of both our remit and our opportunities. This chapter revisits the original motivations and charts something of the path that recent developments have taken. Based on reflections on the original concerns that motivated research into social navigation, it explores some new avenues of research, focusing on two in particular. The first is social navigation within the framework of "awareness" provisions in collaborative systems generally; the second is the relationship of social navigation systems to spatial models and the ideas of "space" and "place" in collaborative settings. Exploring these two ideas serves two related goals: to draw attention to ways in which current research into social navigation can be made relevant to other areas of research endeavour, and to re-motivate the idea of "social navigation" as a fundamental model for collaboration in information-seeking.
series other
last changed 2003/04/23 15:50

_id 54a6
authors Eastman, C. and Jeng, T.S.
year 1999
title A database supporting evolutionary product model development for design
source Automation in Construction 8 (3) (1999) pp. 305-323
summary This paper presents the facilities in the EDM-2 product modeling and database language that support model evolution. It reviews the need for model evolution as a system and/or language requirement to support product modeling. Four types of model evolution are considered: (1) translation between distinct models, (2) deriving views from a central model, (3) modification of an existing model, and (4) model evolution based on writable views associated with each application. While the facilities described support all four types of evolution, the last type is emphasized. The language-based modeling capabilities described in EDM-2 include: (a) mapping facilities for defining derivations and views within a single model or between different models; (b) procedural language capabilities supporting model addition, deletion and modification; (c) support for object instance migration so as to partition the set of class instances into multiple classes; (d) support for managing practical deletion of portions of a model; (e) explicit specification and automatic management of integrity between a building model and various views. The rationale and language features, and in some cases the implementation strategy for the features, are presented.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22
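The writable-view mechanism the abstract above attributes to EDM-2 can be illustrated, very loosely, with the language-agnostic sketch below: a per-application view renames and filters attributes of a shared central model and propagates writes back to it, keeping other views of the same model consistent. The class and attribute names are invented for illustration and bear no relation to EDM-2's actual syntax.

```python
class CentralModel:
    """A toy central product model: objects keyed by id."""
    def __init__(self):
        self.objects = {}

class WritableView:
    """A per-application view that renames and filters attributes of
    the central model and propagates writes back to it, loosely in the
    spirit of a writable view (the mapping scheme here is invented)."""
    def __init__(self, model, attr_map):
        self.model = model
        self.attr_map = attr_map  # view attribute name -> central name

    def get(self, obj_id, view_attr):
        return self.model.objects[obj_id][self.attr_map[view_attr]]

    def set(self, obj_id, view_attr, value):
        # writes go back to the shared model, so every other view
        # derived from it sees the change immediately
        self.model.objects[obj_id][self.attr_map[view_attr]] = value

m = CentralModel()
m.objects["wall-1"] = {"thickness_mm": 200, "fire_rating": "EI60"}
structural = WritableView(m, {"t": "thickness_mm"})
structural.set("wall-1", "t", 250)
```

A real product-model database would add integrity constraints between model and views, which this sketch omits.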

_id 803c
authors Gottfried, A., Angelis, E. De and Trani, M.L.
year 1999
title Results from the application of a performance-based housing regulation in Cadoneghe, Italy
source Automation in Construction 8 (4) (1999) pp. 445-453
summary The article reports the experience of Cadoneghe, a small town in the suburbs of Padua, northern Italy, in managing a performance-based building code. Although pressed by a high housing demand, Cadoneghe asked a design team and a research team for help in defining new basic rules and control tools, to avoid the most common failures of Italian mass-housing projects. The administration pursued the application of these rules in four stages:
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 9088
authors Hartkopf, V. and Loftness, V.
year 1999
title Global relevance of total building performance
source Automation in Construction 8 (4) (1999) pp. 377-393
summary Global population and environmental trends demand a radical departure from current building and development processes. Applying total building performance thinking can reduce energy consumption, pollution and waste in existing and new construction by a factor of 4, and can simultaneously improve quality of life within buildings––measured through occupant satisfaction, health and productivity. The further development of advanced energy and water systems, and the application of appropriate technology and systems-integration concepts, will help to enable the elimination of `waste-streams', avoiding obsolescence, as well as managing industrial and agricultural nutrient streams. Instead of treating buildings and their contents as `pre-garbage', or worse `pre-toxic-waste', all material flows can be considered within life cycles for `cradle to cradle' use. These concepts can make major contributions towards the creation of more sustainable lifestyles with even greater quality in the industrialized countries, and the development and implementation of sustainable urban and building infrastructures in rapidly emerging economies. Rather than the continued export of non-sustainable building solutions, this paper argues for the development and demonstration of such practices in the industrialized countries, creating a progressive 'pull' to enable the appropriate implementation of new practices.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 687c
authors Kosco, Igor
year 1999
title How the World Became Smaller
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 230-237
doi https://doi.org/10.52842/conf.ecaade.1999.230
summary The world of computers became fruitful and independent before the new millennium started. New technologies and methods give us new tools and possibilities every day, as well as the challenge of how to use them. The advantage of architecture, and in particular of architects teaching at universities or schools, is remarkable: new techniques reflect education, research and practice - and, importantly, in one person. The links between practice and university - how computer technologies and CAAD influence methods of designing, managing and collaborating - are very important in both directions. They grow with the number of students who leave university with good computer skills on one side, and the number of architectural and engineering offices using computers on the other. Networks and the Internet enable the exchange not only of data but also of experiences. The Internet itself is not merely a tool for surfing and entertainment or a source of information, but rather a powerful tool for collaboration, workgroups, virtual studios and long-distance education. This paper describes experiences from research and educational projects between the Slovak Technical University, IUG Grenoble, the University of Newcastle and others, and their influence on architectural education and practice.
keywords Long-Distance Education, Research, Practice
series eCAADe
email
last changed 2022/06/07 07:51

_id 422f
authors Morozumi, M., Shounai, Y., Homma, R., Iki, K. and Murakami, Y.
year 1999
title A Group Ware for Asynchronous Design Communication and Project Management
source CAADRIA '99 [Proceedings of The Fourth Conference on Computer Aided Architectural Design Research in Asia / ISBN 7-5439-1233-3] Shanghai (China) 5-7 May 1999, pp. 171-180
doi https://doi.org/10.52842/conf.caadria.1999.171
summary The number of Virtual Design Studio experiments that use the WWW (a Digital Pin-up Board) and e-mail for asynchronous communication is rapidly increasing. There is no doubt that those media are quite helpful, but it has also become clear that writing and managing the pages of a DPB requires extra work from designers and technical staff to proceed with collaborative design. To make VDS a popular approach to collaborative design, developing convenient tools to support writing and managing DPB pages has become essential. This paper discusses a prototype of groupware that supports asynchronous design communication with a DPB: GW-Notebook, which can be used with common web browsers on net-PCs.
series CAADRIA
email
last changed 2022/06/07 07:59

_id 3d23
authors Sellgren, Ulf
year 1999
title Simulation-driven Design
source KTH Stockholm
summary Efficiency and innovative problem solving are contradictory requirements for product development (PD), and both requirements must be satisfied in companies that strive to remain or to become competitive. Efficiency is strongly related to "doing things right", whereas innovative problem solving and creativity are focused on "doing the right things". Engineering design, which is a sub-process within PD, can be viewed as a problem-solving or decision-making process. New technologies in computer science and new software tools open the way to new approaches for the solution of mechanical problems. Product data management (PDM) technology and tools can enable concurrent engineering (CE) by managing the formal product data, the relations between the individual data objects, and their relation to the PD process. Many engineering activities deal with the relation between behavior and shape. Modern CAD systems are highly productive tools for concept embodiment and detailing. The finite element (FE) method is a general tool used to study the physical behavior of objects with arbitrary shapes. Since modern CAD technology enables design modification and change, it can support the innovative dimension of engineering as well as the verification of physical properties and behavior. Concepts and detailed solutions have traditionally been evaluated and verified with physical testing. Numerical modeling and simulation is in many cases a far more time-efficient method than testing to verify the properties of an artifact. Numerical modeling can also support the innovative dimension of problem solving by enabling parameter studies and observations of real and synthetic behavior. Simulation-driven design is defined as a design process where decisions related to the behavior and performance of the artifact are significantly supported by computer-based product modeling and simulation. 
A framework for product modeling based on a modern CAD system with fully integrated FE modeling and simulation functionality provides the engineer with tools capable of supporting a number of engineering steps in all life-cycle phases of a product. Such a conceptual framework, based on a moderately coupled approach to integrating commercial PDM, CAD, and FE software, is presented. An object model and a supporting modular modeling methodology are also presented. Two industrial cases are used to illustrate the possibilities and some of the opportunities offered by simulation-driven design with the presented methodology and framework.
keywords CAE; FE Method; Metamodel; Object Model; PDM; Physical Behavior, System
series thesis:PhD
email
last changed 2003/02/12 22:37

_id 29c6
authors Shaw, N. and Kimber, W.E.
year 1999
title STEP and SGML/XML: what it means, how it works
source XML Europe ‘99 Conference Proceedings, Graphic Communication Association, 1999, pp. 267-70
summary The STEP standard, ISO 10303, is the primary standard for data representation and interchange in the product design and manufacturing world. Originally designed to enable the interchange of 3-D CAD models between different systems, STEP, like SGML, defines and uses a general mechanism for representing and managing complex data of any type. Increasingly, products are defined as solid models that are stored in product databases. These databases are not limited to shape but contain a considerable wealth of other information, such as materials, failure modes, task descriptions, product-related meta-data such as approvals, and much more. The product world is of course also replete with documents, from requirements through specifications to user manuals. These documents act both as input to the product development processes and as output. Indeed, in some cases documents form part of the product and are given part numbers. It is therefore not surprising to find many companies with very real requirements to interact and interoperate between product data and documents, specifically in the form of SGML-based data. This paper reports on work in progress to bring the two worlds together. This is primarily being done through the SGML and Industrial Data Preliminary Work Item under ISO TC184/SC4. The need for common capabilities for using STEP and SGML together has been obvious for a long time, as can be seen from the inclusion of product data and SGML-based data within initiatives such as CALS. However, until recently, this requirement was never satisfied, for various reasons. For the last year or more, a small group has been actively pursuing this area and gaining the necessary understanding across the different standards. It is this work that is reported here. The basic thrust of the work is to answer the questions: Can STEP and SGML be used together and, if so, how?
series other
last changed 2003/04/23 15:50

_id avocaad_2001_17
id avocaad_2001_17
authors Ying-Hsiu Huang, Yu-Tung Liu, Cheng-Yuan Lin, Yi-Ting Cheng, Yu-Chen Chiu
year 2001
title The comparison of animation, virtual reality, and scenario scripting in design process
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary Design media are fundamental tools, which can incubate concrete ideas from ambiguous concepts. Evolving from freehand sketches and physical models to computerized drafting, modeling (Dave, 2000), animations (Woo, et al., 1999), and virtual reality (Chiu, 1999; Klercker, 1999; Emdanat, 1999), different media are used to communicate with designers or users at different conceptual levels during the design process. Extensively employed in the design process, physical models help designers manage forms and spaces more precisely and more freely (Millon, 1994; Liu, 1996). Computerized drafting, models, animations, and VR have gradually replaced the conventional media of freehand sketches and physical models. Diversely used in the design process, computerized media allow designers to handle more divergent levels of space than conventional media do. The rapid emergence of computers in the design process has ushered in efforts to study the visual impact of these media in particular (Rahman, 1992). Rahman also emphasized the use of computerized media: modeling and animations. Moreover, based on Rahman's study, Bai and Liu (1998) applied a new design medium, virtual reality, to the design process. In doing so, they proposed an evaluation process to examine the visual impact of this new medium in the design process. That same investigation pointed towards the facilitative role of computerized media in enhancing topical comprehension, concept realization, and development of ideas. Computer technology fosters the growth of emerging media. A new computerized medium, scenario scripting (Sasada, 2000; Jozen, 2000), markedly enhances computer animations and, in doing so, positively impacts design processes. For the three latest media, i.e., computerized animation, virtual reality, and scenario scripting, the following questions arise: What role does visual impact play in the different design phases of these media, and what is the origin of such an impact? 
Furthermore, what are the similarities and differences in computing techniques, principles of interaction, and practical applications among these computerized media? This study investigates the similarities and differences among computing techniques, interaction principles, and their applications in the above three media. Different computerized media are also adopted in the design process to explore related phenomena, by using the three media in two projects. First, a renewal planning project for the old district of Hsinchu City is inspected, in which animations and scenario scripting are used. Second, the renewal project is compared with a progressive design project for the Hsinchu Digital Museum, as designed by Peter Eisenman. Finally, similarities and differences among these computerized media are discussed. This study also examines the visual impact of the three computerized media in the design process. In computerized animation, although other designers can realize the spatial concept of a design, users cannot fully comprehend the concept. On the other hand, media such as virtual reality and scenario scripting enable users to comprehend more directly what the designer presents. Future studies should more closely examine how these three media impact the design process. This study not only provides further insight into the fundamental characteristics of the three computerized media discussed herein, but also enables designers to adopt different media at different design stages. Both designers and users can thus more fully understand design-related concepts.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 5cba
authors Anders, Peter
year 1999
title Beyond Y2k: A Look at Acadia's Present and Future
source ACADIA Quarterly, vol. 18, no. 1, p. 10
doi https://doi.org/10.52842/conf.acadia.1999.x.o3r
summary The sky may not be falling, but it sure is getting closer. Where will you be when the last three zeros of our millennial odometer click into place? Computer scientists tell us that Y2K will bring the world's computer infrastructure to its knees. Maybe, maybe not. But it is interesting that Y2K is an issue at all. Speculating on the future is simultaneously a magnifying glass for examining our technologies and a looking glass for what we become through them. "The future" is nothing new. Orwell's vision of totalitarian mass media did come true, if only as Madison Avenue rather than Big Brother. Future boosters of the '50s were convinced that each garage would house a private airplane by the year 2000. But world citizens of the '60s and '70s feared a nuclear catastrophe that would replace the earth with a smoking crater. Others - perhaps more optimistically - predicted that computers were going to drive all our activities by the year 2000. And, in fact, they may not be far off... The year 2000 is a symbolic marker, a point of reflection and assessment. And - as this date is approaching rapidly - this may be a good time to come to grips with who we are and where we want to be.
series ACADIA
email
last changed 2022/06/07 07:49

_id a8f2
authors Becker, R.
year 1999
title Research and development needs for better implementation of the performance concept in building
source Automation in Construction 8 (4) (1999) pp. 525-532
summary Gaps in basic knowledge, inadequacies in the procedural infrastructure and a lack of working tools, which still prevent a more systematic application of the performance concept throughout the building process, are identified. One of the main conclusions is that, despite the vast knowledge accumulated over the years in the fields of ergonometrics, human needs, human-factors engineering, architectural design, structural analysis, building physics, building materials and durability analysis, this knowledge is not applied systematically during the building process. The situation is attributed to a lack of tools for some of the decision-making phases in the process, and to the lack of a common, preferably computerized, design platform that would ensure a comprehensive and quantitative approach to all the relevant performance attributes, link smoothly between the various phases of project development, and minimize bias caused by human experts.
series journal paper
more http://www.elsevier.com/locate/autcon
last changed 2003/05/15 21:22

_id 3017
authors Carson, J. and Clark, A.
year 1999
title Multicast Shared Virtual Worlds Using VRML 97
source Proceedings of VRML 99 Fourth Symposium on the Virtual Reality Modeling language, The Association for Computing Machinery, Inc. New York, pp. 133-140
summary This paper describes a system for authoring and executing shared virtual worlds within existing VRML97 viewers such as Cosmo Player. As VRML97 does not contain any direct support for the construction of virtual worlds containing multiple users, extensions are presented to provide support for shared behaviours, avatars and objects that can be manipulated and carried by participants in the world; these extensions are pre-processed into standard VRML97 and Java. A system infrastructure is described which allows worlds to be authored and executed within the context of the World Wide Web and the MBone. CR Categories and Subject Descriptors: C.2.2 [Computer Communication Networks]: Network Protocols - Applications; C.2.4 [Computer Communication Networks]: Distributed Systems - Distributed Applications; H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems - Artificial, Augmented and Virtual Realities; I.3.2 [Computer Graphics]: Graphics Systems - Distributed/network graphics; I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Virtual Reality.
series other
last changed 2003/04/23 15:50
