CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 620

_id acadia21_530
id acadia21_530
authors Adel, Arash; Augustynowicz, Edyta; Wehrle, Thomas
year 2021
title Robotic Timber Construction
source ACADIA 2021: Realignments: Toward Critical Computation [Proceedings of the 41st Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 979-8-986-08056-7]. Online and Global. 3-6 November 2021. edited by S. Parascho, J. Scott, and K. Dörfler. 530-537.
doi https://doi.org/10.52842/conf.acadia.2021.530
summary Several research projects (Gramazio et al. 2014; Willmann et al. 2015; Helm et al. 2017; Adel et al. 2018; Adel Ahmadian 2020) have investigated the use of automated assembly technologies (e.g., industrial robotic arms) for the fabrication of nonstandard timber structures. Building on these projects, we present a novel and transferable process for the robotic fabrication of bespoke timber subassemblies made of off-the-shelf standard timber elements. A nonstandard timber structure (Figure 2), consisting of four bespoke subassemblies: three vertical supports and a Zollinger (Allen 1999) roof structure, acts as the case study for the research and validates the feasibility of the proposed process.
series ACADIA
type project
email
last changed 2023/10/22 12:06

_id 7082
authors Dawood, N.
year 1999
title A proposed system for integrating design and production in the precast building industry
source The Int. Journal of Construction IT 7(1), pp. 72-83
summary The UK construction industry is going through a major re-appraisal, with the objective of reducing construction costs by at least 30% by the end of the millennium. Precast and off-site construction are set to play a major role in improving construction productivity, reducing costs and improving working conditions. In a survey of current practices in the prefabrication industry, it was concluded that the industry is far behind other manufacturing-based industries in terms of the utilisation of IT in production planning and scheduling and other technical and managerial operations. It is suggested that a systematic, integrated, computer-aided approach to presenting and processing information is needed. The objective of this paper is to introduce and discuss the specifications of an integrated intelligent computer-based information system for the precast concrete industry. The system should facilitate: the integration of design and manufacturing operations; automatic generation of production schedules directly from design data and factory attributes; and generation of erection schedules from site information, factory attributes and design data. It is hypothesised that the introduction of such a system would reduce the total cost of precasting by 10% and encourage clients to choose precast components more often.
series journal paper
last changed 2003/05/15 21:45

_id cf2011_p109
id cf2011_p109
authors Abdelmohsen, Sherif; Lee, Jinkook; Eastman, Chuck
year 2011
title Automated Cost Analysis of Concept Design BIM Models
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 403-418.
summary This paper introduces the automated cost analysis developed for the General Services Administration (GSA) and the analysis results of a case study involving a concept design courthouse BIM model. The purpose of this study is to investigate interoperability issues related to integrating design and analysis tools, specifically BIM models and cost models. Previous efforts to generate cost estimates from BIM models have focused on developing two necessary but disjoint processes: 1) extracting accurate quantity take-off data from BIM models, and 2) manipulating cost analysis results to provide informative feedback. Some recent efforts involve developing detailed definitions, enhanced IFC-based formats and in-house standards for assemblies that encompass building models (e.g. US Corps of Engineers). Some commercial applications enhance the level of detail associated with BIM objects with assembly descriptions to produce lightweight BIM models that can be used by different applications for various purposes (e.g. Autodesk for design review, Navisworks for scheduling, Innovaya for visual estimating, etc.). This study suggests the integration of design and analysis tools by means of managing all building data in one shared repository accessible to multiple domains in the AEC industry (Eastman, 1999; Eastman et al., 2008; authors, 2010). Our approach aims at providing an integrated platform that incorporates a quantity take-off extraction method for IFC models, a cost analysis model, and a comprehensive cost reporting scheme, using the Solibri Model Checker (SMC) development environment. Approach: As part of the effort to improve the performance of federal buildings, GSA evaluates concept design alternatives based on their compliance with specific requirements, including cost analysis. Two basic challenges emerge in the process of automating cost analysis for BIM models: 1) at this early concept design stage, only minimal information is available to produce a reliable analysis, such as space names and areas, and building gross area; 2) design alternatives share many programmatic requirements such as location, functional spaces and other data. It is thus crucial to integrate other factors that contribute to substantial cost differences, such as perimeter and exterior wall and roof areas. These are extracted from BIM models using IFC data and input through XML into the Parametric Cost Engineering System (PACES, 2010) software to generate cost analysis reports. PACES uses this limited dataset at a conceptual stage and RSMeans (2010) data to infer cost assemblies at different levels of detail. Cost model import module: The cost model import module has three main functionalities: generating the input dataset necessary for the cost model, performing a semantic mapping between building-type-specific names and name aggregation structures in PACES known as functional space areas (FSAs), and managing cost data external to the BIM model, such as location and construction duration. The module computes building data such as footprint, gross area, perimeter, external wall and roof area, and building space areas. This data is generated through SMC in the form of an XML file and imported into PACES. Reporting module: The reporting module uses the cost report generated by PACES to develop a comprehensive report in the form of an Excel spreadsheet.
This report consists of a systems-elemental estimate that shows the main systems of the building in terms of UniFormat categories, escalation, markups, overhead and conditions, a UniFormat Level III report, and a cost breakdown that provides a summary of material, equipment, labor and total costs. Building parameters are integrated in the report to provide insight into the variations among design alternatives. (A minimal sketch of the IFC quantity take-off export described here follows this record.)
keywords building information modeling, interoperability, cost analysis, IFC
series CAAD Futures
email
last changed 2012/02/11 19:21
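
The quantity take-off step described in the record above (reading space data out of an IFC model and handing it to the cost tool as XML) could be sketched roughly as follows. This is a minimal illustration only, assuming the open-source ifcopenshell library as a stand-in for the Solibri Model Checker environment used by the authors; the XML element names are hypothetical and do not reflect the actual PACES input schema.

```python
# Minimal quantity take-off export sketch (assumption: ifcopenshell stands in for
# SMC, and the XML element names below are hypothetical, not the PACES schema).
import xml.etree.ElementTree as ET

import ifcopenshell  # pip install ifcopenshell


def export_space_list(ifc_path: str, xml_path: str) -> None:
    model = ifcopenshell.open(ifc_path)

    root = ET.Element("CostModelInput")
    spaces_el = ET.SubElement(root, "Spaces")

    for space in model.by_type("IfcSpace"):
        # Only identifiers and names are read here; areas, perimeters and
        # envelope quantities would normally come from IfcElementQuantity
        # records or a geometry engine, which this sketch deliberately omits.
        ET.SubElement(
            spaces_el,
            "Space",
            id=space.GlobalId,
            name=space.LongName or space.Name or "unnamed",
        )

    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)


# export_space_list("courthouse_concept.ifc", "paces_input.xml")
```

The pipeline the summary describes additionally maps space names to PACES functional space areas and adds footprint, gross area, perimeter and wall/roof areas before the cost inference step.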

_id f11d
authors Brown, K. and Petersen, D.
year 1999
title Ready-to-Run Java 3D
source Wiley Computer Publishing
summary Written for the intermediate Java programmer and Web site designer, Ready-to-Run Java 3D provides sample Java applets and code using Sun's new Java 3D API. This book provides a worthy jump-start for Java 3D that goes well beyond the documentation provided by Sun. Coverage includes downloading the Java 2 plug-in (needed by Java 3D) and basic Java 3D classes for storing shapes, matrices, and scenes. A listing of all Java 3D classes shows off its considerable richness. Generally, this book tries to cover basic 3D concepts and how they are implemented in Java 3D. (It assumes a certain knowledge of math, particularly with matrices, which are a staple of 3D graphics.) Well-commented source code is printed throughout (though there is little additional commentary). An applet for orbiting planets provides an entertaining demonstration of transforming objects onscreen. You'll learn to add processing for fog effects and texture mapping and get material on 3D sound effects and several public domain tools for working with 3D artwork (including converting VRML [Virtual Reality Modeling Language] files for use with Java 3D). In all, this book largely succeeds at being accessible for HTML designers while being useful to Java programmers. With Java 3D, Sun is betting that 3D graphics shouldn't require a degree in computer science. This book reflects that philosophy, though advanced Java developers will probably want more detail on this exciting new graphics package. --Richard Dragan. Topics covered: individual applets for morphing, translation, rotation, and scaling; support for light and transparency; adding motion and interaction to 3D objects (with Java 3D classes for behaviors and interpolators); and Java 3D classes used for event handling.
series other
last changed 2003/04/23 15:14

_id 0beb
authors Koch, Volker and Russell, Peter
year 2000
title VuuA.Org: The Virtual Upperrhine University of Architecture
source Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process [18th eCAADe Conference Proceedings / ISBN 0-9523687-6-5] Weimar (Germany) 22-24 June 2000, pp. 23-25
doi https://doi.org/10.52842/conf.ecaade.2000.023
summary In 1998, architecture schools in the three-nation region of the upper Rhine came together to undertake a joint design studio. With the support of the Center for Entrepreneurship in Colmar, France, the schools worked on the reuse of the Kuenzer Mill situated near Herbolzheim, Germany. The students met jointly three times during the semester and then worked on the project at their home universities using conventional methods. This project was essential to generating closer ties between the participating students, tutors and institutions and, as such, the results were quite positive. So much so, that the organisers decided to repeat the exercise one year later. However, it became clear that although the students had met three times in large groups, the real success of a co-operative design studio would require mechanisms which allow far more intimate interaction among the participants, be they students, teachers or outside experts. The experiences from the Netzentwurf at the Institut für Industrielle Bauproduktion (ifib) showed the potential in a web-based studio, and the addition of ifib to the three-nation group led to the development of the VuuA platform. The first project served to illuminate the differences in teaching concepts among the partner institutions and their teaching staff, as well as problems related to the integration of students from three countries with two languages and four different faculties: landscape architecture, interior design, architecture and urban planning. The project for the Fall of 1999 was the reuse of Fort Kléber in Wolfisheim near Strasbourg, France. The students again met on site to kick off the semester but were also instructed to continue their cooperation and criticism using the VuuA platform.
keywords Virtual Design Studio, CSCW, International Cooperation, Planning Platform
series eCAADe
email
more http://www.vuua.org
last changed 2022/06/07 07:51

_id c0c4
authors Smith, Timothy M.
year 1999
title Suisse Telekom Headquarters Norton, Virginia
source ACADIA Quarterly, vol. 18, no. 3, p. 6
doi https://doi.org/10.52842/conf.acadia.1999.x.v8t
summary The design problem called for a mixed-use facility housing a bookstore, a secure telecommunications relay facility with training and conference areas, and a private employee fitness center. The site is at the end of the main street in Norton just off the main highway, and is where a four-story hotel project was abandoned twenty years prior. The structural steel frame for the hotel was erected and construction halted at this stage, leaving the skeletal frame and an empty lot at the end of the axis of the main street in Norton. Norton began as a coalmining town but has recently gained attention as a telecommunications hub after a national telecommunications firm located their TDD headquarters in Norton, making use of the fiber optic lines available in the area.
series ACADIA
email
last changed 2022/06/07 07:49

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and, briefly, how it could possibly happen. 1. The history of Repligator and Gliftic. 1.1 Repligator. In 1996 I had the idea of creating an "image idea generator". I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was "out of nothing". What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an "easy to use graphical effects program", but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) ease of use, and 2) ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic. Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example, if two symmetrical objects were bred then their children should be symmetrical.
I decided to represent shapes as simple closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs, with no distinct family characteristics. Or rather maybe I should say that no single method of breeding shapes gave decent results for all types of images (one possible crossover of this kind is sketched after this record). Figure 1 shows an example of breeding a mandala with an array of six regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today. Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic (Figure 2: a mandala interpreted with arabesques; Figure 3: a trellis interpreted with "graphic ivy"; Figure 4: regular dots interpreted as "sparks"). 1.4 Forms in Gliftic V1. Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1. When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings. A smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image.
Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1. Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as: 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic. Currently Gliftic is mostly used for creating web graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: three possibilities. Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly". Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic. It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his web site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.
2.3 Add an artificial consciousness to Gliftic. This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References. 1. Ransen, Owen. "From Ramon Llull to Image Idea Generation." Proceedings of the 1998 Milan First International Conference on Generative Art. 2. Aleksander, Igor. "How To Build A Mind." Weidenfeld and Nicolson, 1999. 3. Ward, Adrian and Geof Cox. "How I Drew One of My Pictures: or, The Authorship of Generative Art." Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
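
The shape-breeding experiment described in the record above treats the point list of a closed polygon as that shape's "genes" and asks how two such gene lists might be crossed. As a rough illustration only, the sketch below implements one possible crossover operator, point-wise interpolation of two outlines resampled to the same number of vertices; this combination rule is an assumption for illustration, not necessarily any of the methods the author actually tried.

```python
# One possible "gene crossing" for closed polygonal shapes: resample both parents
# to the same vertex count, then blend corresponding points. This operator is an
# assumption made for illustration; the record does not specify the rules tried.
import math

Point = tuple[float, float]


def resample(polygon: list[Point], n: int) -> list[Point]:
    """Resample a closed polygon to n points spaced evenly along its perimeter."""
    pts = polygon + [polygon[0]]  # close the loop
    seg_lengths = [math.dist(a, b) for a, b in zip(pts, pts[1:])]
    total = sum(seg_lengths)
    result, acc, i = [], 0.0, 0
    for k in range(n):
        target = total * k / n
        while i < len(seg_lengths) - 1 and acc + seg_lengths[i] < target:
            acc += seg_lengths[i]
            i += 1
        a, b = pts[i], pts[i + 1]
        t = 0.0 if seg_lengths[i] == 0 else (target - acc) / seg_lengths[i]
        result.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return result


def cross(parent_a: list[Point], parent_b: list[Point], weight: float = 0.5) -> list[Point]:
    """Blend two shapes point by point; weight=0.0 returns A, weight=1.0 returns B."""
    n = max(len(parent_a), len(parent_b))
    a, b = resample(parent_a, n), resample(parent_b, n)
    return [(ax + weight * (bx - ax), ay + weight * (by - ay))
            for (ax, ay), (bx, by) in zip(a, b)]


# Example: a 100-sided "circle" crossed with a square (the circle-versus-UK-outline
# question in the record would work the same way, given a list of coastline points).
circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100)) for k in range(100)]
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
child = cross(circle, square)
```

As the record notes, simple blends of this kind tend to drift toward amorphous averages over several generations, which is one reason the author abandoned the breeding model.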

_id acadia06_426
id acadia06_426
authors Garber, R., Robertson, N.
year 2006
title The Pleated Cape: From the Mass-Standardization of Levittown to Mass Customization Today
source Synthetic Landscapes [Proceedings of the 25th Annual Conference of the Association for Computer-Aided Design in Architecture] pp. 426-439
doi https://doi.org/10.52842/conf.acadia.2006.426
summary In the 1950’s, the Levitts put mass-production and the reverse assembly line into use in the building of thousands of single-family houses. However, the lack of variation that made their construction process so successful ultimately produced a mundane suburban landscape of sameness. While there were many attempts to differentiate these Levitt Cape Cods, none matched the ingenuity of their original construction process. The notion of mass-customization has been heavily theorized since the 1990’s, first appearing in the field of management and ultimately finding its way into the field of architecture. Greg Lynn used mass-customization in his design for the Embryological House in which thousands of unique houses could be generated using biological rules of differentiation (Lynn 1999). Other industries have embraced the premise that computer-numerically-controlled technologies allow for the production of variation, though it has not been thoroughly studied in architecture. While digital fabrication has been integral in the realization of several high-profile projects, the notion of large-scale mass-customization in the spec-housing market has yet to become a reality. Through the execution of an addition to a Cape Cod-style house, we examine the intersection between prefabricated standardized panels and digital fabrication to produce a mass-customized approach to housing design. Through illustrations and a detailed description of our design process, we will show how digital fabrication technologies allow for customization of mass produced products.
series ACADIA
email
last changed 2022/06/07 07:50

_id 5cba
authors Anders, Peter
year 1999
title Beyond Y2k: A Look at Acadia's Present and Future
source ACADIA Quarterly, vol. 18, no. 1, p. 10
doi https://doi.org/10.52842/conf.acadia.1999.x.o3r
summary The sky may not be falling, but it sure is getting closer. Where will you be when the last three zeros of our millennial odometer click into place? Computer scientists tell us that Y2K will bring the world's computer infrastructure to its knees. Maybe, maybe not. But it is interesting that Y2K is an issue at all. Speculating on the future is simultaneously a magnifying glass for examining our technologies and a looking glass for what we become through them. "The future" is nothing new. Orwell's vision of totalitarian mass media did come true, if only as Madison Avenue rather than Big Brother. Future boosters of the '50s were convinced that each garage would house a private airplane by the year 2000. But world citizens of the '60s and '70s feared a nuclear catastrophe that would replace the earth with a smoking crater. Others - perhaps more optimistically - predicted that computers were going to drive all our activities by the year 2000. And, in fact, they may not be far off... The year 2000 is a symbolic marker, a point of reflection and assessment. And - as this date is approaching rapidly - this may be a good time to come to grips with who we are and where we want to be.
series ACADIA
email
last changed 2022/06/07 07:49

_id 48a7
authors Brooks
year 1999
title What's Real About Virtual Reality?
source IEEE Computer Graphics and Applications, Vol. 19, no. 6, Nov/Dec, 27
summary As is usual with infant technologies, the realization of the early dreams for VR and harnessing it to real work has taken longer than the wild hype predicted, but it is now happening. I assess the current state of the art, addressing the perennial questions of technology and applications. By 1994, one could honestly say that VR "almost works." Many workers at many centers could do quite exciting demos. Nevertheless, the enabling technologies had limitations that seriously impeded building VR systems for any real work except entertainment and vehicle simulators. Some of the worst problems were end-to-end system latencies, low-resolution head-mounted displays, limited tracker range and accuracy, and costs. The technologies have made great strides. Today one can get satisfying VR experiences with commercial off-the-shelf equipment. Moreover, technical advances have been accompanied by dropping costs, so it is both technically and economically feasible to do significant applications. VR really works. That is not to say that all the technological problems and limitations have been solved. VR technology today "barely works." Nevertheless, coming over the mountain pass from "almost works" to "barely works" is a major transition for the discipline. I have sought out applications that are now in daily productive use, in order to find out exactly what is real. Separating these from prototype systems and feasibility demos is not always easy. People doing daily production applications have been forthcoming about lessons learned and surprises encountered. As one would expect, the initial production applications are those offering high value over alternate approaches. These applications fall into a few classes. I estimate that there are about a hundred installations in daily productive use worldwide.
series journal paper
email
last changed 2003/04/23 15:14

_id 8735
authors James, Stephen
year 1999
title An Allegorical Architecture: A Proposed Interpretive Center for the Bonneville Salt Flats
source ACADIA Quarterly, vol. 18, no. 1, pp. 18-19
doi https://doi.org/10.52842/conf.acadia.1999.018
summary Architecture is the physical expression of man's relationship to the landscape - an emblem of our heritage. Such a noble statement sounds silly in today's context, because civilized society has largely disassociated itself from raw nature. We have tamed the elements with our environmental controls and turned the deserts into pasture. I find much of the built environment distracting. Current architecture is trite, compared to geologic form and order. I visited the Bonneville Salt Flats (Utah's anti-landscape) in the summer of 1997. The experience of arriving at the flats exceeded my expectations. I was overpowered by a sense of personal insignificance - a small spot floating on a sea of salt. The horizon seemed to swallow up the sky. Off in the distance I noticed a dark fleck. It looked as foreign as I felt on this pure white plane. I drove across the sticky salt toward it, only to discover an old rusty oil barrel half submerged in salt. In my mind, the barrel has a history. It tells the story of a man's attempt at achieving a goal, or maybe it represents a broken dream left to corrode in the alkali flats. The barrel remains planted in the salt as a relic for those who venture into the white wilderness. This experience left me to ponder whether or not architecture can serve the same purpose - telling the story of a place through its relationship to a landscape, and connection to events.
series ACADIA
email
last changed 2022/06/07 07:52

_id 4a1a
authors Laird, J.E.
year 2001
title Using a Computer Game to Develop Advanced AI
source Computer, 34 (7), July pp. 70-75
summary Although computer and video games have existed for fewer than 40 years, they are already serious business. Entertainment software, the entertainment industry's fastest growing segment, currently generates sales surpassing the film industry's gross revenues. Computer games have significantly affected personal computer sales, providing the initial application for CD-ROMs, driving advancements in graphics technology, and motivating the purchase of ever faster machines. Next-generation computer game consoles are extending this trend, with Sony and Toshiba spending $2 billion to develop the Playstation 2 and Microsoft planning to spend more than $500 million just to market its Xbox console [1]. These investments have paid off. In the past five years, the quality and complexity of computer games have advanced significantly. Computer graphics have shown the most noticeable improvement, with the number of polygons rendered in a scene increasing almost exponentially each year, significantly enhancing the games' realism. For example, the original Playstation, released in 1995, renders 300,000 polygons per second, while Sega's Dreamcast, released in 1999, renders 3 million polygons per second. The Playstation 2 sets the current standard, rendering 66 million polygons per second, while projections indicate the Xbox will render more than 100 million polygons per second. Thus, the images on today's $300 game consoles rival or surpass those available on the previous decade's $50,000 computers. The impact of these improvements is evident in the complexity and realism of the environments underlying today's games, from detailed indoor rooms and corridors to vast outdoor landscapes. These games populate the environments with both human and computer controlled characters, making them a rich laboratory for artificial intelligence research into developing intelligent and social autonomous agents. Indeed, computer games offer a fitting subject for serious academic study, undergraduate education, and graduate student and faculty research. Creating and efficiently rendering these environments touches on every topic in a computer science curriculum. The "Teaching Game Design" sidebar describes the benefits and challenges of developing computer game design courses, an increasingly popular field of study.
series journal paper
last changed 2003/04/23 15:50

_id caadria2005_b_4b_d
id caadria2005_b_4b_d
authors Tamke, Martin
year 2005
title Baking Light: Global Illumination in VR Environments as architectural design tool
source CAADRIA 2005 [Proceedings of the 10th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] New Delhi (India) 28-30 April 2005, vol. 2, pp. 214-228
doi https://doi.org/10.52842/conf.caadria.2005.214
summary As proven in the past, immersive Virtual Environments can be helpful in the process of architectural design (Achten et al. 1999). But still years later, these systems are not common in the architectural design process, neither in architectural education nor in professional work. The reasons might be the high price of e.g. CAVEs, the lack of intuitive navigation and design tools in those environments, the absence of useful and easy to handle design workflows, and the quality constraints of real-time display of 3D models. A great potential for VR in the architectural workflow is the review of design decisions: Display quality, comfortable navigation and realistic illumination are crucial ingredients here. Light is one of the principal elements in architectural design, so design reviews must enable the architect to judge the quality of his design in this respect. Realistic light simulations, e.g. via radiosity algorithms, are no longer the domain of high-end graphic workstations. Today's off-the-shelf hardware and 3D-software provide the architect with high-quality tools to simulate physically correct light distributions. But the quality and impression of light is hard to judge from looking at still renderings. In collaboration with the Institute of Computer Graphics at our university we have established a series of regular design reviews in their immersive virtual environment. This paper describes the workflow that has emerged from this collaboration, the tools that were developed and used, and our practical experiences with global-light-simulations. We share results which we think are helpful to others, and we highlight areas where further research is necessary.
series CAADRIA
email
last changed 2022/06/07 07:59

_id f02b
authors Mitchell, W.
year 1999
title E-topia: Urban Life, Jim - But Not As We Know It
source MIT press
summary The global digital network is not just a delivery system for email, Web pages, and digital television. It is a whole new urban infrastructure--one that will change the forms of our cities as dramatically as railroads, highways, electric power supply, and telephone networks did in the past. In this lucid, invigorating book, William J. Mitchell examines this new infrastructure and its implications for our future daily lives. Picking up where his best-selling City of Bits left off, Mitchell argues that we must extend the definitions of architecture and urban design to encompass virtual places as well as physical ones, and interconnection by means of telecommunication links as well as by pedestrian circulation and mechanized transportation systems. He proposes strategies for the creation of cities that not only will be sustainable but will make economic, social, and cultural sense in an electronically interconnected and global world. The new settlement patterns of the twenty-first century will be characterized by live/work dwellings, 24-hour pedestrian-scale neighborhoods rich in social relationships, and vigorous local community life, complemented by far-flung configurations of electronic meeting places and decentralized production, marketing, and distribution systems. Neither digiphile nor digiphobe, Mitchell advocates the creation of e-topias--cities that work smarter, not harder.
series other
last changed 2003/04/23 15:14

_id bf97
authors Roberts, Andrew and Counsell, John
year 1999
title The BEATL Project: Embedding Appropriate CAL in the Teaching of Architecture
source Architectural Computing from Turing to 2000 [eCAADe Conference Proceedings / ISBN 0-9523687-5-7] Liverpool (UK) 15-17 September 1999, pp. 334-340
doi https://doi.org/10.52842/conf.ecaade.1999.334
summary This paper is based upon the premise that Computer Aided Learning (CAL) has been poorly integrated into schools of Architecture, and it identifies some of the barriers that have prevented this. The Built Environment Appropriate Technology for Learning (BEATL) project aims to promote a climate of change within which these barriers can be crossed. The focus of BEATL is on providing a framework within which technology-assisted teaching can be adopted for particular elements of taught courses through a process of module pairing, and collaboration between Built Environment faculties at three UK universities. The paper discusses the early stages of the project and outlines the methodologies developed for embedding and transferring innovations between institutions, the support of 'Educational Technology Officers' and the evaluation strategies being utilised. Early results indicate the benefits of a focus on an individual element rather than a whole module, and that generic innovations tend to be more successfully transferred than 'off the shelf' Computer Aided Learning products.
keywords CAL, Integration, Transferability, Collaboration
series eCAADe
email
last changed 2022/06/07 07:56

_id ca5e
authors Yamaguchi, Shigeyuki and Toizumi, Kanou
year 1999
title Computer Supported Face-to-Face Meeting Environment for Architectural Design Collaboration
source InterSymp-99 [International Conference on Systems Research, Informatics and Cybernetics / ISBN 0-921836-75-9] Baden-Baden (Germany), August 2-6, 1999, pp. 39-47
summary This paper describes our current work in the development of a collaborative design meeting environment which includes hardware and software. It attempts to support design collaboration in face-to-face meetings, instead of collaboration in cyberspace. Pin-up walls, a meeting table and white boards are the metaphors of the proposed system. Digitized design information (CAD drawings, CG pictures or movies and other documents) could be accessible to members for testing, simulating and evaluating design ideas or concepts on the projected video screen, using installed program modules or off-the-shelf application programs. They could concentrate on discussing design issues, without interruptions caused by looking for lost information or preparing design models or documents at their desks.
keywords Collaborative Design, Design Meeting, Face-to-face Meeting, Interface to design information, Room-ware
series other
email
last changed 2002/09/14 11:26

_id ga9926
id ga9926
authors Antonini, Riccardo
year 1999
title Let's Improvise Together
source International Conference on Generative Art
summary The creators of 'Let's Improvise Together' adhere to the idea that while there is a multitude of online games now available in cyberspace, relatively few are focused on providing a positive, friendly and productive experience for the user. Producing this kind of experience is one of the goals of our Amusement Project. To this end, the creation of 'Let's Improvise Together' has been guided by dedication to the importance of three themes: the importance of cooperation, the importance of creativity, and the importance of emotion. Description of the game: The avatar arrives in a certain area where there are many sound-blocks/objects. He can add new objects at will, or add a sound "property" to existing ones. Each object may represent a different sound, though it does not have to. The avatar walks around and chooses which objects he likes, makes copies of these, adds sounds or changes the sounds on existing ones, and then, with all of the sound-blocks combined, makes his personalized "instrument". Now any player can make sounds on the instrument by approaching or bumping into a sound-block. The way that the avatar makes sounds on the instrument can vary. At the end of the improvising session, the 'composition' will be saved on the instrument site, along with the personalized instrument. In this way, each user of the Amusement Center will leave behind a unique instrumental creation that others who visit the Center later will be able to play and listen to. The fully creative experience of making a new instrument can be had by connecting to the Active Worlds worlds 'Amuse' and 'Amuse2'. Animated, colorful sounding objects can be assembled by the user in the Virtual Environment as a sort of sounding instrument. We deliberately refrain from using the word musical instrument, because the level of control we have over the sound in terms of rhythm and melody, among other parameters, is very limited. It resembles instead, very closely, the primitive instruments used by humans in some civilizations, or the experience of children making sound out of ordinary objects. The dimension of cooperation is of paramount importance in the process of building and using the virtual sounding instrument. The instrument can be built through one's own effort, but preferably by a team of cooperating users. Cooperation has an important corollary: the sharing of the experience. The shared experience finds its permanence in the collective memory of the sounding instruments built. The sounding instrument can also be seen as a virtual sculpture; indeed, this sculpture is a multimedia one. The objects have properties that range from video animation to sound to virtual physical properties like solidity. The role of the user representation in the Virtual World, called the avatar, is important because it conveys, among other things, the user's emotions. It is worth pointing out that the avatar has no emotions of its own but simply expresses the emotions of the user behind it. In a way it could be considered a sort of actor performing the script that the user gives it in real time while playing. The other important element of the integration is related to the memory of the experience left by the user in the Virtual World. The new layout is explored and experienced. The layout is a permanent, editable memory. The generative aspects of Let's Improvise Together are the following. The multimedia virtual sculpture left behind by any participating avatar is not the creation of a single author/artist.
The outcome of the synergic interaction of various authors is neither deterministic nor predictable. The authors can indeed use generative algorithms in order to create the textures to be used on the objects. Usually, in our experience, the visitors of the Amuse worlds use shareware programs in order to generate their textures; in most cases these are simple fractal generators. In principle, it is also possible to generate the shape of the object in a generative way. Taking into account the usual audience of our world, we expected visitors to use very simple algorithms that could generate shapes as .rwx files. Indeed, no one has attempted to do so so far. As far as the music is concerned, the availability of shareware programs that allow simple generation of sound sequences has made it possible for some users to generate sound sequences to be put in our world. In conclusion, the Let's Improvise section of the Amuse worlds could be open for experimentation on generative art as a very simple entry-point platform. We will be very happy to help anybody who, for educational purposes, would try to use our platform in order to create and exhibit generative forms of art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 2fe1
authors Arroyo, Julio and Chiarella, Mauro
year 1999
title Infographic: Its Incorporation and Relativity in Architectural Design Process
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 313-318
summary This paper is about an architectural design workshop regularly held at a public university in Santa Fe, Argentina. The class has about 150 students, with different computing skills and hardware facilities. The design problem of the workshop, which is one year long, is the relationship between the architectural project and the construction of urbanity. This implies both a physical intervention and a cultural expression. The pedagogy seeks to have students overcome the individualism that working on PCs tends to induce, making the design experience a socialized one. A complementary and simultaneous use of graphic and infographic data is one of the main criteria of the workshop. The idea is for students to reach a wide vision by means of the use of different representation systems and means of information. Digital graphics are introduced early in the design process as an electronic model of the urban context. They are considered one among many other graphic resources and are used together with ordinary models, geometric drawings, aerial and regular photography and hand-made sketches. This paper relates the results that have been obtained when students were asked to make an analytic and sensitive approach to the relationship between site and urban situation. This relationship has great importance for the workshop, since its goal is to make students understand the value of designing in and for the city.
series SIGRADI
email
last changed 2016/03/10 09:47

_id 9f35
authors Bhavnani, S. K., Garrett, J.H., Flemming, U. and Shaw, D.S.
year 1999
title Towards Active Assistance
source Bridging the Generations. The Future of Computer-Aided Engineering (eds. J. H. Garrett and D. R. Rehak) Department of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA (1999), 199-203
summary The exploding functionality of current computer-aided engineering (CAE) systems has provided today’s users with a vast, but under-utilized collection of tools and options. For example, MicroStation, a popular CAE system sold by Intergraph, offers more than 1000 commands including 16 ways to construct a line (in different contexts) and 28 ways to manipulate elements using a “fence”. This complex array of functionalities is bewildering and hardly exploited to its full extent even by frequent, experienced users. In a recent site visit to a federal design office, we observed ten architects and three draftsmen using MicroStation.
series other
email
last changed 2003/11/21 15:16

_id 2c1d
authors Castañé, D., Tessier, C., Álvarez, J. and Deho, C.
year 1999
title Patterns for Volumetric Recognition - Guidelines for the Creation of 3D-Models
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 171-175
summary This piece proposes new strategies and pedagogic methodologies applied to the recognition and study of the underlying dimensions of the architectural projects to be created. The proposal is the product of the pedagogic experience of the teaching team of the elective course on three-dimensional electronic models. This course constitutes an elective track for the architecture major at the College of Architecture, Design and Urbanism of the University of Buenos Aires and is housed at the CAO center. One of the requirements that students must complete, after research and analytical experimentation with the knowledge acquired in the course, is to practice the attained skills through exercises proposed by the department; in this case, the student is required to virtually rebuild a paradigmatic architectural work by one of several selected architects. Usually at this point, students experience some difficulties when they analyze the existing documents (plans, views, pictures, details, texts, etc.) that they have obtained from magazines, books and other sources. Afterwards, when they begin to digitally generate the basic dimensions of the architectural work to be modeled, they realize that there are great limitations in their three-dimensional comprehension of the work. This issue has led us to investigate and develop proposals for the volumetric understanding of patterns, through examples of works already analyzed and digitized three-dimensionally in the department. Through a careful study of the existing documentation for a particular work, the paths and basis to adopt are evaluated, using alternative technologies to arrive at a clear reconstruction of the projected architectural work. The study is completed by implementing the proposal at the internet site http://www.datarq.fadu.uba.ar/catedra/dorcas
series SIGRADI
email
last changed 2016/03/10 09:48
