CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design, supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 801

_id 7ccd
authors Augenbroe, Godfried and Eastman, Chuck
year 1999
title Computers in Building: Proceedings of the CAADfutures '99 Conference
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, 398 p.
summary This is the eighth CAAD Futures conference. Each of these biennial conferences identifies the state of the art in computer applications in architecture. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. Early conferences, for example, addressed project work, either for real construction or done in academic studios, that approached the teaching or use of CAD tools in innovative ways. By the early 1990s, such project-based examples of CAD use had disappeared from the conferences, as this area was no longer considered a research contribution; computer-based design had become a basic way of doing business. This conference is marked by a similar evolutionary change. More papers were submitted about Web-based applications than about any other area. Rather than having multiple sessions on Web-based applications and communications, we came to the conclusion that the Web is now an integral part of digital computing, as are CAD applications. Using the conference as a sample, Web-based projects have been integrated into most research areas. This does not mean that the application of the Web is not a research area, but rather that the Web itself is an integral tool in almost all areas of CAAD research.
series CAAD Futures
email
last changed 2006/11/07 07:22

_id ab77
authors Iki, K., Shimoda, S., Kumadaki, N. and Homma, R.
year 1999
title Development and Use of Intranet-Based CAFM System
doi https://doi.org/10.52842/conf.caadria.1999.383
source CAADRIA '99 [Proceedings of The Fourth Conference on Computer Aided Architectural Design Research in Asia / ISBN 7-5439-1233-3] Shanghai (China) 5-7 May 1999, pp. 383-392
summary In a previous CAFM study, we proposed a system for supporting data processing and plan drafting, on the assumption that it would be used at different stages of building construction, interior spatial planning and maintenance. Building on that work, we developed a CAFM system using a DBMS (Database Management System), CAD (Computer Aided Design) and a spreadsheet as analysis tools, and proposed a management system with FM-related data-editing functions such as input, modification and deletion. To run the FM business smoothly, information should be shared among the departments concerned, and an administrative framework for that information should be organized. Here we propose a prototype intranet-based CAFM system, developed for general users, that permits browsing and downloading of the system database.
series CAADRIA
last changed 2022/06/07 07:50

_id b57c
authors Kvan, Thomas
year 1999
title Designing Together Apart
source Open University, Milton Keynes
summary The design of computer tools to assist in work has often attempted to replicate manual methods. This replication has been proven to fail in a diversity of fields such as business management, Computer-Aided Design (CAD) and Computer-Supported Collaborative Work (CSCW). To avoid such a failure being repeated in the field of Computer-Supported Collaborative Design (CSCD), this thesis explores the postulation that CSCD does not have to be supported by tools which replicate the face-to-face design context in order to support distal architectural design. The thesis closely examines the prevailing position that collaborative design is a social and situated act which must therefore be supported by high-bandwidth tools. This formulation of architectural collaboration is rejected in favour of the formulation of a collaborative expert act. This proposal is tested experimentally, and the results are presented. Supporting expert behaviour requires different tools than the support of situated acts. Surveying research in computer-supported collaborative work (CSCW), the thesis identifies tools that support expert work. The results of the research are transferred to two contexts: teaching and practice. The applications in these two contexts illustrate how CSCD can be applied in a variety of bandwidth and technological conditions. The conclusion is that supporting collaborative design as an expert and knowledge-based act can be beneficially implemented in the teaching and practice of architecture.
series thesis:PhD
email
last changed 2003/02/12 22:37

_id 4a1a
authors Laird, J.E.
year 2001
title Using a Computer Game to Develop Advanced AI
source Computer, 34 (7), July pp. 70-75
summary Although computer and video games have existed for fewer than 40 years, they are already serious business. Entertainment software, the entertainment industry's fastest growing segment, currently generates sales surpassing the film industry's gross revenues. Computer games have significantly affected personal computer sales, providing the initial application for CD-ROMs, driving advancements in graphics technology, and motivating the purchase of ever faster machines. Next-generation computer game consoles are extending this trend, with Sony and Toshiba spending $2 billion to develop the Playstation 2 and Microsoft planning to spend more than $500 million just to market its Xbox console [1]. These investments have paid off. In the past five years, the quality and complexity of computer games have advanced significantly. Computer graphics have shown the most noticeable improvement, with the number of polygons rendered in a scene increasing almost exponentially each year, significantly enhancing the games' realism. For example, the original Playstation, released in 1995, renders 300,000 polygons per second, while Sega's Dreamcast, released in 1999, renders 3 million polygons per second. The Playstation 2 sets the current standard, rendering 66 million polygons per second, while projections indicate the Xbox will render more than 100 million polygons per second. Thus, the images on today's $300 game consoles rival or surpass those available on the previous decade's $50,000 computers. The impact of these improvements is evident in the complexity and realism of the environments underlying today's games, from detailed indoor rooms and corridors to vast outdoor landscapes. These games populate the environments with both human- and computer-controlled characters, making them a rich laboratory for artificial intelligence research into developing intelligent and social autonomous agents. Indeed, computer games offer a fitting subject for serious academic study, undergraduate education, and graduate student and faculty research. Creating and efficiently rendering these environments touches on every topic in a computer science curriculum. The "Teaching Game Design" sidebar describes the benefits and challenges of developing computer game design courses, an increasingly popular field of study.
series journal paper
last changed 2003/04/23 15:50

_id f8b5
authors Oswald, Daniel and Pittioni, Gernot
year 1999
title AVOCAAD Exercises: Facility Management Training on the Web - A Facility Management Survey - Relevance for the Architect's Business
source AVOCAAD Second International Conference [AVOCAAD Conference Proceedings / ISBN 90-76101-02-07] Brussels (Belgium) 8-10 April 1999, pp. 81-87
summary Facilities Management (FM) cannot be seen as a subject with a specific area of knowledge and exactly defined borders relative to other subjects. Analysing the economic aspects of FM leads to the realisation that building management is experiencing a process of increasing specialisation and professionalism. It is possible to define FM from a variety of different points of origin. One possible approach views FM as an integral solution for the administration of buildings, their commercial activities, and technical maintenance from an economic perspective, during the whole life of a building. FM covers all strategies to efficiently provide, adequately operate, and adapt buildings, their contents and systems to changing organisational demands. The current practice of limited analysis of specific administrative aspects, e.g. maintenance, is replaced by consideration of all factors that affect costs. Since all costs can be traced directly to space, the ideal procedure requires that FM is practised during the whole life-cycle, starting with the definition of the construction programme and ending on the day of conversion or demolition. Through successful FM, real estate can contribute decisively to the improvement of productivity and the quality of life.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 7a81
authors Pinet, Céline
year 1999
title ACADIA'S Browser: Crossing Centuries, Blurring Boundaries
doi https://doi.org/10.52842/conf.acadia.1999.024.4
source ACADIA Quarterly, vol. 18, no. 4, pp. 24-25
summary New years are inspiring; they are times for new beginnings. As we are now starting a new century, I am inspired… and it looks like I am not the only one: two graduates from Columbia University have recently launched a web business and people are taking notice.
series ACADIA
email
last changed 2022/06/07 08:00

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and, briefly, how it could possibly happen.
1. The history of Repligator and Gliftic
1.1 Repligator
In 1996 I had the idea of creating an "image idea generator". I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was "out of nothing". What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an "easy to use graphical effects program", but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) ease of use, and 2) ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol; there is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful.
1.2 Getting to Gliftic
Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally but, just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved: for example, if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simple closed polygons, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And many of the methods I used, applied over several "generations", simply resulted in amorphous blobs with no distinct family characteristics. Or rather, maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons (Figure 1: Mandala bred with array of regular polygons). I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation).
1.3 Gliftic today
Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic (Figure 2: Mandala interpreted with arabesques; Figure 3: Trellis interpreted with "graphic ivy"; Figure 4: Regular dots interpreted as "sparks").
1.4 Forms in Gliftic V1
Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons.
1.5 Color Schemes in Gliftic V1
When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings; a smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph taken while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image.
1.6 Interpretations in Gliftic V1
Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, or 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, or 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag.
1.7 Applications of Gliftic
Currently Gliftic is mostly used for creating web graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later.
2. The future of Gliftic: three possibilities
Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them.
2.1 Continue the current development "linearly"
Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files); the user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations.
2.2 Allow the artist to program Gliftic
It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his web site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical; learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic
This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes but is currently limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence"; the program may perform well (but often, in practice, doesn't), any learning which is done is simply statistical and pre-programmed, and there is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming; presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric."
3. References
1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art.
2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999.
3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
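
The Ransen abstract above describes encoding a form's "genes" as the vertex list of a closed polygon and crossing two such lists to breed a child shape. Purely as an illustrative aside (this is not Ransen's code; the class and method names are invented), the sketch below shows one naive way to do that in Java: resample both parent polygons to a common vertex count and interpolate corresponding vertices. As the abstract notes, schemes of this kind tend to average shapes toward "amorphous blobs" rather than preserve family traits.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: a polygon "genotype" is just its list of vertices. */
public class PolygonBreeder {

    /** Simple 2D point. */
    record Point(double x, double y) {}

    /**
     * Cross two closed polygons by resampling each to the same number of vertices
     * and interpolating corresponding vertex pairs. This is one of the simplest
     * "gene combination" schemes the abstract alludes to; it averages shapes
     * rather than mixing their features.
     */
    static List<Point> crossover(List<Point> a, List<Point> b, int vertices, double weight) {
        List<Point> ra = resample(a, vertices);
        List<Point> rb = resample(b, vertices);
        List<Point> child = new ArrayList<>(vertices);
        for (int i = 0; i < vertices; i++) {
            Point p = ra.get(i), q = rb.get(i);
            child.add(new Point(p.x() * (1 - weight) + q.x() * weight,
                                p.y() * (1 - weight) + q.y() * weight));
        }
        return child;
    }

    /** Resample a closed polygon to n vertices, evenly spaced along its perimeter. */
    static List<Point> resample(List<Point> poly, int n) {
        double perimeter = 0;
        double[] edge = new double[poly.size()];
        for (int i = 0; i < poly.size(); i++) {
            Point p = poly.get(i), q = poly.get((i + 1) % poly.size());
            edge[i] = Math.hypot(q.x() - p.x(), q.y() - p.y());
            perimeter += edge[i];
        }
        List<Point> out = new ArrayList<>(n);
        int seg = 0;
        double walked = 0;  // perimeter length covered before the current segment
        for (int i = 0; i < n; i++) {
            double target = perimeter * i / n;
            while (walked + edge[seg] < target) {  // advance to the segment containing target
                walked += edge[seg];
                seg = (seg + 1) % poly.size();
            }
            double t = edge[seg] == 0 ? 0 : (target - walked) / edge[seg];
            Point p = poly.get(seg), q = poly.get((seg + 1) % poly.size());
            out.add(new Point(p.x() + t * (q.x() - p.x()), p.y() + t * (q.y() - p.y())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Point> square = List.of(new Point(0, 0), new Point(1, 0), new Point(1, 1), new Point(0, 1));
        List<Point> triangle = List.of(new Point(0, 0), new Point(2, 0), new Point(1, 2));
        List<Point> child = crossover(square, triangle, 32, 0.5);
        System.out.println("Child has " + child.size() + " vertices, first vertex: " + child.get(0));
    }
}
```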

_id cf6b
authors Smith, S.
year 1999
title Document Management: Solutions for AutoCAD Workgroups
source Cadalyst, vol. 16, no. 4, April, pp. 42-50
summary The volume of documents handled by companies today is about twice what it was ten years ago. Electronic files, plus the sea of paper you wade through on a daily basis, reach Mt. Everest proportions for everyone in a company. Productivity suffers from this overload: documents are often overwritten or lost simply because there are so many of them. Also, when companies downsize to become more efficient, they may lose the key people who manage the ever-growing mountain of information. Business cycles are shorter, technology keeps changing, and all this is managed by a smaller staff.
series journal paper
last changed 2003/04/23 15:14

_id 1570
authors Sowizral, H.A. and Deering, M.F.
year 1999
title The Java 3D API and Virtual Reality
source IEEE Computer Graphics and Applications, May/June
summary Java 3D proves a natural choice for any Java programmer wanting to write an interactive 3D graphics program. A programmer constructs a scene graph containing graphic objects, lights, sounds, environmental effects objects, and behavior objects that handle interactions or modify other objects in the scene graph. The programmer then hands that scene graph to Java 3D for execution. Java 3D starts rendering objects and executing behaviors in the scene graph. Virtual reality applications go through an identical writing process. However, before a user can use such an application, Java 3D must additionally know about the user's physical characteristics (height, eye separation, and so forth) and physical environment (number of displays, their location, trackers, and so on). Not surprisingly, such information varies from installation to installation and from user to user. So Java 3D lets application developers separate their application's operation from the vagaries of the user's final display environment. The Java 3D application programmer's interface (API) provides a very flexible platform for building a broad range of graphics applications. Developers have already used Java 3D to build applications in a variety of domains including mechanical CAD, molecular visualization, scientific visualization, animation previews, geographic information systems, business graphics, 3D logos, and educational offerings. Virtual reality applications have included immersive workbench applications, headtracked shutter-glass-based desktop applications, and portals (a cave-like room with multiple back-projected walls).
series journal paper
last changed 2003/04/23 15:50
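
The Sowizral and Deering abstract above outlines the Java 3D programming model: build a scene graph of geometry, lights and behavior objects, hand it to Java 3D, and let the runtime render it. A minimal "hello scene graph" along those lines might look roughly like the sketch below (a hedged example using the classic Java 3D 1.x utility classes; it assumes the j3dcore, j3dutils and vecmath libraries are on the classpath, and later Java 3D distributions use different package names).

```java
import javax.media.j3d.Alpha;
import javax.media.j3d.BoundingSphere;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.RotationInterpolator;
import javax.media.j3d.TransformGroup;
import javax.vecmath.Point3d;
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.universe.SimpleUniverse;

/** Minimal Java 3D scene graph: one spinning cube handed to the runtime for rendering. */
public class HelloSceneGraph {
    public static void main(String[] args) {
        // Content branch: a transform group holding geometry plus a behavior that animates it.
        BranchGroup scene = new BranchGroup();
        TransformGroup spin = new TransformGroup();
        spin.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        spin.addChild(new ColorCube(0.3));

        // Behavior object: rotate the transform group continuously about the Y axis.
        RotationInterpolator rotator = new RotationInterpolator(new Alpha(-1, 4000), spin);
        rotator.setSchedulingBounds(new BoundingSphere(new Point3d(), 100.0));
        spin.addChild(rotator);

        scene.addChild(spin);
        scene.compile();  // let Java 3D optimize the subgraph before it goes live

        // View branch: SimpleUniverse builds a default window and viewing platform.
        SimpleUniverse universe = new SimpleUniverse();
        universe.getViewingPlatform().setNominalViewingTransform();
        universe.addBranchGraph(scene);  // hand the scene graph to Java 3D; rendering starts here
    }
}
```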

_id e978
authors [Zupancic] Strojan, Tadeja Z.
year 1999
title CyberUniversity
doi https://doi.org/10.52842/conf.ecaade.1999.196
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 196-200
summary The study of a cyberuniversity derives from an analogy between real urban space and its virtual "substitution". It represents an attempt to balance views which seem contrary and mutually exclusive, but which are in fact parts of the same whole. In particular, the notion of a cyber society has lately been treated with such exaggeration that it becomes possible to forget the meaning of real-life experience and interaction, which are already under threat. One should remain aware that what is used in such a comparison is "just" an analogy, not a real similarity. At the same time it is possible to point out some limitations of cyberspace and to indicate a more realistic view of the meaning of cyber communities. Awareness of these development processes could help to find a balance between reality and virtuality, using cyberfacilities not to destroy us (our identity) but to improve the quality of our (real) life.
keywords University, Cyberuniversity, Space, Cyberspace
series eCAADe
type normal paper
email
last changed 2022/06/07 07:54

_id bacd
authors Abadí Abbo, Isaac
year 1999
title Application of Spatial Design Ability in a Postgraduate Course
source Full-scale Modeling and the Simulation of Light [Proceedings of the 7th European Full-scale Modeling Association Conference / ISBN 3-85437-167-5] Florence (Italy) 18-20 February 1999, pp. 75-82
summary Spatial Design Ability (SDA) has been defined by the author (1983) as the capacity to anticipate the effects (psychological impressions) that architectural spaces or their components produce in observers or users. This concept, which requires the evaluation of spaces by the people who use them, was proposed as a guideline for a Masters Degree Course in Architectural Design at the Universidad Autonoma de Aguascalientes in Mexico. The theory and the exercises required for the experience needed a model that could simulate spaces in terms of all the variables involved. Full-scale modeling, as tested in previous research, offered the most effective means to experiment with space. A simple, primitive model was designed and built: an articulated ceiling that allows variation in height and shape, and a series of wooden panels for the walls and structure. Several exercises were carried out, mainly to experience cause-effect relationships between space and the psychological impressions it produces. Students researched spatial taxonomy, intentional sequences of space and spatial character. Results showed that students achieved the expected anticipation of space and that full-scale modeling, even with a simple model, proved to be an effective tool for this purpose. The low cost of the model and the short time it took to build open up an important possibility for institutions involved in architectural studies, both as a research and as a learning tool.
keywords Spatial Design Ability, Architectural Space, User Evaluation, Learning, Model Simulation, Real Environments
series other
type normal paper
email
more http://info.tuwien.ac.at/efa
last changed 2004/05/04 11:27

_id cf2011_p109
id cf2011_p109
authors Abdelmohsen, Sherif; Lee, Jinkook; Eastman, Chuck
year 2011
title Automated Cost Analysis of Concept Design BIM Models
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 403-418.
summary Interoperability - BIM models and cost models: This paper introduces the automated cost analysis developed for the General Services Administration (GSA) and the analysis results of a case study involving a concept design courthouse BIM model. The purpose of this study is to investigate interoperability issues related to integrating design and analysis tools, specifically BIM models and cost models. Previous efforts to generate cost estimates from BIM models have focused on developing two necessary but disjoint processes: 1) extracting accurate quantity take-off data from BIM models, and 2) manipulating cost analysis results to provide informative feedback. Some recent efforts involve developing detailed definitions, enhanced IFC-based formats and in-house standards for assemblies that encompass building models (e.g. US Corps of Engineers). Some commercial applications enhance the level of detail associated with BIM objects with assembly descriptions to produce lightweight BIM models that can be used by different applications for various purposes (e.g. Autodesk for design review, Navisworks for scheduling, Innovaya for visual estimating, etc.). This study suggests the integration of design and analysis tools by means of managing all building data in one shared repository accessible to multiple domains in the AEC industry (Eastman, 1999; Eastman et al., 2008; authors, 2010). Our approach aims at providing an integrated platform that incorporates a quantity take-off extraction method for IFC models, a cost analysis model, and a comprehensive cost reporting scheme, using the Solibri Model Checker (SMC) development environment.
Approach: As part of the effort to improve the performance of federal buildings, GSA evaluates concept design alternatives based on their compliance with specific requirements, including cost analysis. Two basic challenges emerge in the process of automating cost analysis for BIM models: 1) at this early concept design stage, only minimal information is available to produce a reliable analysis, such as space names and areas, and building gross area; 2) design alternatives share many programmatic requirements such as location, functional spaces and other data. It is thus crucial to integrate other factors that contribute to substantial cost differences, such as perimeter and exterior wall and roof areas. These are extracted from BIM models using IFC data and input through XML into the Parametric Cost Engineering System (PACES, 2010) software to generate cost analysis reports. PACES uses this limited dataset at a conceptual stage, together with RSMeans (2010) data, to infer cost assemblies at different levels of detail.
Cost model import module: The cost model import module has three main functionalities: generating the input dataset necessary for the cost model, performing a semantic mapping between building-type-specific names and name aggregation structures in PACES known as functional space areas (FSAs), and managing cost data external to the BIM model, such as location and construction duration. The module computes building data such as footprint, gross area, perimeter, external wall and roof area, and building space areas. This data is generated through SMC in the form of an XML file and imported into PACES.
Reporting module: The reporting module uses the cost report generated by PACES to develop a comprehensive report in the form of an Excel spreadsheet. This report consists of a systems-elemental estimate that shows the main systems of the building in terms of UniFormat categories, escalation, markups, overhead and conditions, a UniFormat Level III report, and a cost breakdown that provides a summary of material, equipment, labor and total costs. Building parameters are integrated in the report to provide insight into the variations among design alternatives.
keywords building information modeling, interoperability, cost analysis, IFC
series CAAD Futures
email
last changed 2012/02/11 19:21
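
The cost model import module described in the Abdelmohsen et al. abstract computes building-level quantities (footprint, gross area, perimeter, exterior wall and roof area, space areas) and passes them to the cost engine as XML. The fragment below is only a schematic illustration of that kind of hand-off, not the authors' implementation: the BuildingData record, the element names and the area arithmetic are all invented for the example.

```java
import java.util.Map;

/**
 * Schematic illustration only: derive the concept-stage quantities named in the
 * abstract and serialize them as a small XML dataset for a cost engine.
 * All element names and the arithmetic below are hypothetical.
 */
public class ConceptQuantityTakeoff {

    record BuildingData(double footprint, double perimeter, int floors,
                        double floorHeight, Map<String, Double> spaceAreas) {}

    static String toCostModelXml(BuildingData b) {
        double grossArea = b.footprint() * b.floors();
        double exteriorWallArea = b.perimeter() * b.floorHeight() * b.floors();
        double roofArea = b.footprint();  // flat-roof simplification

        StringBuilder xml = new StringBuilder();
        xml.append("<costModelInput>\n");
        xml.append(String.format("  <grossArea unit=\"m2\">%.1f</grossArea>%n", grossArea));
        xml.append(String.format("  <perimeter unit=\"m\">%.1f</perimeter>%n", b.perimeter()));
        xml.append(String.format("  <exteriorWallArea unit=\"m2\">%.1f</exteriorWallArea>%n", exteriorWallArea));
        xml.append(String.format("  <roofArea unit=\"m2\">%.1f</roofArea>%n", roofArea));
        xml.append("  <functionalSpaceAreas>\n");
        for (Map.Entry<String, Double> space : b.spaceAreas().entrySet()) {
            xml.append(String.format("    <space name=\"%s\" area=\"%.1f\"/>%n",
                    space.getKey(), space.getValue()));
        }
        xml.append("  </functionalSpaceAreas>\n");
        xml.append("</costModelInput>\n");
        return xml.toString();
    }

    public static void main(String[] args) {
        BuildingData courthouse = new BuildingData(
                2500.0, 220.0, 4, 4.2,
                Map.of("Courtroom", 3200.0, "Office", 4100.0, "Circulation", 1900.0));
        System.out.print(toCostModelXml(courthouse));
    }
}
```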

_id e336
authors Achten, H., Roelen, W., Boekholt, J.-Th., Turksma, A. and Jessurun, J.
year 1999
title Virtual Reality in the Design Studio: The Eindhoven Perspective
doi https://doi.org/10.52842/conf.ecaade.1999.169
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 169-177
summary Since 1991 Virtual Reality has been used in student projects in the Building Information Technology group. It started as an experimental tool to assess the impact of VR technology on design, using the environment of the associated Calibre Institute. The technology was further developed in Calibre to become an important presentation tool for assessing design variants and final design solutions. However, it was only sporadically used in student projects. A major shift occurred in 1997 with a number of student projects in which various computer technologies, including VR, were used throughout the design process. In 1998, the new Design Systems group started a design studio with the explicit aim of integrating VR in the whole design process. The teaching effort was combined with the research programme that investigates VR as a design support environment. This has led to an increasing number of innovative student projects. The paper describes the context and history of VR in Eindhoven and presents the current set-up of the studio. It discusses the impact of the technology on the design process and outlines pedagogical issues in the studio work.
keywords Virtual Reality, Design Studio, Student Projects
series eCAADe
email
last changed 2022/06/07 07:54

_id e719
authors Achten, Henri and Turksma, Arthur
year 1999
title Virtual Reality in Early Design: the Design Studio Experiences
source AVOCAAD Second International Conference [AVOCAAD Conference Proceedings / ISBN 90-76101-02-07] Brussels (Belgium) 8-10 April 1999, pp. 327-335
summary The Design Systems group of the Eindhoven University of Technology has started a new kind of design studio teaching. With the use of high-end equipment, students use Virtual Reality from the very start of the design process. Until now, Virtual Reality technology was primarily used for giving presentations. We use the same technology in the design process itself by reducing the time span in which one gets results in Virtual Reality. The method is based on a very brief cycle of modelling in AutoCAD, assigning materials in 3DStudio Viz, and then making a walkthrough in Virtual Reality in a standard landscape. Due to this cycle, which takes about 15 seconds, the student gets immediate feedback on design decisions, which facilitates evaluating the design in three dimensions much faster than usual. Usually the learning curve of this kind of software is quite steep, but with the use of templates the number of steps required to achieve results is reduced significantly. In this way, the potential of Virtual Reality is explored not only in research projects but also in education. This paper discusses the general set-up of the design studio and shows how, via short workshops, students acquire knowledge of the cycle in a short time. The paper focuses on the added value of using Virtual Reality technology in this manner: improved spatial reasoning, translation from two-dimensional to three-dimensional representations, and VR feedback on design decisions. It discusses the need for new design representations in this design environment, and shows how fast feedback in Virtual Reality can improve the spatial design at an early stage of the design process.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 6d88
authors Achten, Henri H. and Van Leeuwen, Jos P.
year 1999
title Feature-Based High Level Design Tools - A Classification
source Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures [ISBN 0-7923-8536-5] Atlanta, 7-8 June 1999, pp. 275-290
summary The VR-DIS project aims to provide design support in the early design stage using a Virtual Reality environment. The initial brief of the design system is based on an analysis of a design case. The paper describes the process of analysis and extraction of design knowledge and design concepts in terms of Features. It is demonstrated how the analysis has led to a classification of design concepts. This classification forms one of the main specifications for the VR-based design aid system that is being developed in the VR-DIS programme. The paper concludes by discussing the particular approach used in the case analysis and future work in the VR-DIS research programme.
keywords Features, Feature-Based modelling, Architectural Design, Design Process, Design Support
series CAAD Futures
email
last changed 2006/11/07 07:22

_id acadia21_530
id acadia21_530
authors Adel, Arash; Augustynowicz, Edyta; Wehrle, Thomas
year 2021
title Robotic Timber Construction
doi https://doi.org/10.52842/conf.acadia.2021.530
source ACADIA 2021: Realignments: Toward Critical Computation [Proceedings of the 41st Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 979-8-986-08056-7]. Online and Global. 3-6 November 2021. edited by S. Parascho, J. Scott, and K. Dörfler. 530-537.
summary Several research projects (Gramazio et al. 2014; Willmann et al. 2015; Helm et al. 2017; Adel et al. 2018; Adel Ahmadian 2020) have investigated the use of automated assembly technologies (e.g., industrial robotic arms) for the fabrication of nonstandard timber structures. Building on these projects, we present a novel and transferable process for the robotic fabrication of bespoke timber subassemblies made of off-the-shelf standard timber elements. A nonstandard timber structure (Figure 2), consisting of four bespoke subassemblies: three vertical supports and a Zollinger (Allen 1999) roof structure, acts as the case study for the research and validates the feasibility of the proposed process.
series ACADIA
type project
email
last changed 2023/10/22 12:06

_id ae61
authors Af Klercker, Jonas
year 1999
title CAAD - Integrated with the First Steps into Architecture
doi https://doi.org/10.52842/conf.ecaade.1999.266
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 266-272
summary How and when should CAAD be introduced in the curriculum of a school of architecture? This paper begins with some arguments for starting CAAD education at the very beginning. At the School of Architecture in Lund, teachers in the first-year courses have tried to integrate CAAD with the introduction to architectural concepts and techniques. Traditionally the first year is divided among several subjects running courses separately, without any contact for coordination. From the academic year 96/97 the teachers of Applied Aesthetics, Building Science, Architectural Design and CAAD decided to collaborate as much as possible to make the role of our different fields as clear as possible to the students. Integrating CAAD was therefore a natural step in the academic year 98/99. The computer techniques were taught one step in advance so that the students could practise their understanding of the programs in their tasks in the other subjects. The results were surprisingly good! The students quickly learned to mix manual and computer techniques to make expressive and interesting visual presentations of their ideas. Some students with an antipathy to computers have overcome this handicap. Some interesting observations are discussed.
keywords Curriculum, First Year Studies, Integration, CAAD, Modelling
series eCAADe
email
last changed 2022/06/07 07:54

_id 36d3
authors Af Klercker, Jonas
year 1999
title A CAVE-Interface in CAAD-Education?
doi https://doi.org/10.52842/conf.caadria.1999.313
source CAADRIA '99 [Proceedings of The Fourth Conference on Computer Aided Architectural Design Research in Asia / ISBN 7-5439-1233-3] Shanghai (China) 5-7 May 1999, pp. 313-323
summary The so-called "CAVE interface" is a very interesting and thrilling development for architects! It supports a better illusion of space by exposing an almost 270° view of a computer model, compared with the 60° which can be viewed on an ordinary computer screen. At Lund University we have had the opportunity to experiment with a CAVE installation, using it in research and in the education of CAAD. The technique and three experiments are described. The possibilities are discussed, and some problems and questions are put forward.
series CAADRIA
email
last changed 2022/06/07 07:54

_id 37c2
authors Ahmad Rafi, M.E.
year 1999
title Visualisation of Design Using Animation for Virtual Prototyping
doi https://doi.org/10.52842/conf.ecaade.1999.519
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 519-525
summary Although recent technology in time-based representation has vastly improved, animation in the virtual prototype design field remains the same. Some designers invest a huge amount of money in the latest visualisation and multimedia technology and yet may create even worse animation. They often cram sequences together, so that many viewers fail to interpret the design positively because they miss a lot of the vital information that explains the design. This paper reports on the importance of an understanding of film-making for producing good virtual prototype animation. It is based on part of a research project on the use of time-based media in architectural practices. It also includes an empirical analysis of several architecture-based documentary films (including an interview with the film director) and of past and present computer animation. The paper concludes with recommendations of good techniques for making animated visualisations appropriate to the stage at which the animation is produced, for better design decisions.
keywords Virtual Prototype, Animation, Time-Based, Film-Making
series eCAADe
email
last changed 2022/06/07 07:54

_id alqawasmi
id alqawasmi
authors Al-Qawasmi, J., Clayton, M.J., Tassinary, L.G. and Johnson, R.
year 1999
title Observations on Collaborative Design and Multimedia Usage in Virtual Design Studio
source J. Woosely and T. Adair (eds.), Learning virtually: Proceedings of the 6th annual distance education conference, San Antonio, Texas, pp. 1-9
summary The virtual design studio (VDS) points to a new way of practicing and teaching architectural design. As a new phenomenon, little research has been done to evaluate design collaboration and multimedia usage in a distributed workplace like the virtual design studio. Our research provides empirical data on how students actually use multiple media during architectural collaborative design.
series other
email
last changed 2003/12/06 09:55
