CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 80

_id cf2011_p170
id cf2011_p170
authors Barros, Mário; Duarte, José; Chaparro, Bruno
year 2011
title Thonet Chairs Design Grammar: a Step Towards the Mass Customization of Furniture
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 181-200.
summary The paper presents the first phase of research currently under development that is focused on encoding the Thonet design style into a generative design system using a shape grammar. The ultimate goal of the work is the design and production of customizable chairs using computer-assisted tools, establishing a feasible practical model of the paradigm of mass customization (Davis, 1987). The current research phase encompasses the following three steps: (1) codification of the rules describing the Thonet design style into a shape grammar; (2) implementation of the grammar in a computer tool as a parametric design; and (3) rapid prototyping of customized chair designs within the style. Future phases will address the transformation of the Thonet grammar to create a new style and the production of real chair designs in this style using computer-aided manufacturing. Beginning in the 1830s, Austrian furniture designer Michael Thonet began experimenting with forming steamed beech in order to produce lighter furniture using fewer components than was standard at the time. Using the same construction principles and standardized elements, Thonet produced different chair designs with a strong formal resemblance, creating his own design language. The kit-assembly principle, the reduced number of elements, industrial efficiency, and the modular approach to furniture design as a system of interchangeable elements that may be used to assemble different objects enabled him to become a pioneer of mass production (Noblet, 1993). The most paradigmatic example of this vision of furniture design is chair No. 14, produced in 1858 and composed of six structural elements. Due to its simplicity, lightness, and ability to be stored in flat or cubic packaging for individual or collective transportation, respectively, No. 14 became one of the best-selling chairs worldwide, and it is still in production today. 
Iconic examples of mass production are formally studied to provide insights for mass customization studies. The study of the shape grammar for the generation of Thonet chairs aimed to ensure rules that would make possible the reproduction of the selected corpus, as well as allow the generation of new chairs within the developed grammar. Due to the wide variety of Thonet chairs, six chairs were randomly chosen to infer the grammar, which was then fine-tuned by checking whether it could account for the generation of other designs not in the original corpus. Shape grammars (Stiny and Gips, 1972) have been used with success both in the analysis and in the synthesis of designs at different scales, from product design to building and urban design. In particular, the use of shape grammars has been efficient in the characterization of objects’ styles and in the generation of new designs within the analyzed style, and it makes design rules amenable to computer implementation (Duarte, 2005). The literature includes one other example of a grammar for chair design, by Knight (1980). In the second step of the current research phase, the outlined shape grammar was implemented in a computer program to assist the designer in conceiving and producing customized chairs using a digital design process. This implementation was developed in CATIA by converting the grammar into an equivalent parametric design model. In the third step, physical models of existing and new chair designs were produced using rapid prototyping. The paper describes the grammar, its computer implementation as a parametric model, and the rapid prototyping of physical models. The generative potential of the proposed digital process is discussed in the context of enabling the mass customization of furniture. The role of the furniture designer in the new paradigm and ideas for further work are also discussed.
keywords Thonet; furniture design; chair; digital design process; parametric design; shape grammar
series CAAD Futures
email
last changed 2012/02/11 19:21
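
The encode-then-derive pipeline this abstract describes (style rules captured as a grammar, derivations producing designs within the style) can be sketched with a toy rule system. A hedged illustration only: the rules, part names, and parameters below are hypothetical and far simpler than the Thonet grammar itself, and real shape grammars rewrite labeled shapes rather than dictionaries.

```python
# Toy stand-in for a design grammar: each "rule" checks applicability
# on the current design state and, if it applies, rewrites the state.
def add_legs(design):
    # applicable only once a seat exists and legs are absent
    if "seat" in design["parts"] and "legs" not in design["parts"]:
        design["parts"]["legs"] = {"count": 4,
                                   "height": design["params"]["seat_height"]}
        return True
    return False

def add_backrest(design):
    # applicable only after the legs rule has fired
    if "legs" in design["parts"] and "back" not in design["parts"]:
        design["parts"]["back"] = {"height": design["params"]["back_height"]}
        return True
    return False

def derive(design, rules):
    """Apply rules until none is applicable: one derivation in the grammar."""
    while any(rule(design) for rule in rules):
        pass
    return design

# The parameters play the role of the customization knobs exposed to a user.
chair = derive({"parts": {"seat": {}},
                "params": {"seat_height": 45, "back_height": 40}},
               [add_legs, add_backrest])
```

The ordering constraint encoded in the applicability tests (legs before backrest) is what distinguishes a grammar from a flat parts list: the rules, not the user, control which states are reachable.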

_id a587
authors Cohen, Elaine, Lyche, Tom and Riesenfeld, Richard F.
year 1980
title Discrete B-Splines and Subdivision Techniques in Computer-Aided Geometric Design and Computer Graphics
source Computer Graphics and Image Processing. October, 1980. vol. 14: pp. 87-111 : ill. includes bibliography
summary The relevant theory of discrete B-splines with associated new algorithms is extended to provide a framework for understanding and implementing general subdivision schemes for nonuniform B-splines. The new derived polygon corresponding to an arbitrary refinement of the knot vector for an existing B-spline curve, including multiplicities, is shown to be formed by successive evaluations of the discrete B-spline defined by the original vertices, the original knot vector, and the refined knot vector. Existing subdivision algorithms can be seen as proper special cases. General subdivision has widespread applications in computer-aided geometric design, computer graphics, and numerical analysis. The new algorithms resulting from the new theory lead to a unification of the display model, the analysis model, and other needed models into a single geometric model from which other necessary models are easily derived. New sample algorithms for interference calculation, contouring, surface rendering, and other important calculations are presented
keywords computational geometry, theory, algorithms, computer graphics, B-splines, curved surfaces
series CADline
last changed 2003/06/02 13:58
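
The simplest instance of the refinement this abstract describes is inserting a single knot (Boehm's algorithm), which general subdivision contains as a proper special case. A minimal sketch under that assumption; the function name and tuple-based point layout are illustrative, not from the paper.

```python
def insert_knot(ctrl, knots, degree, t):
    """Insert one knot t into a B-spline (control points `ctrl`, knot
    vector `knots`); returns a refined representation of the SAME curve."""
    p = degree
    # locate the knot span containing t: knots[k] <= t < knots[k+1]
    k = next(i for i in range(len(knots) - 1) if knots[i] <= t < knots[i + 1])
    new_ctrl = []
    for i in range(len(ctrl) + 1):
        if i <= k - p:                      # unaffected leading points
            new_ctrl.append(ctrl[i])
        elif i > k:                         # unaffected trailing points
            new_ctrl.append(ctrl[i - 1])
        else:                               # affine blend of two old points
            a = (t - knots[i]) / (knots[i + p] - knots[i])
            new_ctrl.append(tuple(a * q + (1 - a) * r
                                  for q, r in zip(ctrl[i], ctrl[i - 1])))
    return new_ctrl, sorted(knots + [t])

# Splitting a quadratic Bezier segment at t = 0.5, a familiar special case:
refined, refined_knots = insert_knot([(0, 0), (1, 2), (2, 0)],
                                     [0, 0, 0, 1, 1, 1], 2, 0.5)
```

On this input the blended points coincide with the first level of de Casteljau subdivision, illustrating the abstract's point that existing subdivision algorithms fall out as special cases of knot refinement.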

_id 076e
authors Ennis, G. and Lindsay, M.
year 1999
title VRML Possibilities: The evolution of the Glasgow Model
source Proceedings of International Conference on Virtual Systems and MultiMedia. University of Abertay. Dundee
summary During the 1980s, ABACUS, a research unit at the University of Strathclyde, developed an interest in the ability to model and manipulate large geometrical databases of urban topography. Initially, this interest lay solely in the ability to source, capture and store the relevant data. However, once constructed, these models proved genuinely useful to a wide range of users, and there was soon a demand for more functionality relating to the manipulation not just of the graphics but also of the range of urban attributes. Although a number of improvements were implemented, there were drawbacks to the wide adoption of the software produced. The problems were almost all due to deficiencies in the then-current hardware and software systems available to the professions, and although this strand of research continued to be pursued, most of the development had to be focused on research applications and deployment. However, the recent advent of the Virtual Reality Modelling Language (VRML) standards has rekindled interest in this field, since this language enables many of the issues that have proved problematic in the past to be addressed and solved. The potential now exists to provide wide access to large-scale urban models. This paper focuses on the application of VRML as applied to the 'Glasgow Model'.
series other
email
last changed 2003/04/23 15:50

_id 48db
authors Proctor, George
year 2001
title CADD Curriculum - The Issue of Visual Acuity
source Architectural Information Management [19th eCAADe Conference Proceedings / ISBN 0-9523687-8-1] Helsinki (Finland) 29-31 August 2001, pp. 192-200
doi https://doi.org/10.52842/conf.ecaade.2001.192
summary Design educators attempt to train the eyes and minds of students to see and comprehend the world around them, with the intention of preparing those students to become good designers, critical thinkers and, ultimately, responsible architects. Over the last eight years we have been developing the digital media curriculum of our architecture program with these fundamental values. We have built digital media use and instruction on the foundation of our program, which has historically been based in physical model making. Digital modeling has gradually replaced the capacity of physical models as an analytical and thinking tool, and as a communication and presentation device. The first year of our program provides a foundation and introduction to 2d and 3d design and composition; the second year explores larger buildings and history; the third year explores building systems and structure through design studies of public buildings; the fourth year explores urbanism, theory and technology through topic studios; and during the fifth year students complete a capstone project. Digital media and CADD have been and are being synchronized with the existing NAAB-accredited regimen while also allowing for alternative career options for students. Given our location in the Los Angeles region, many students with a strong background in digital media have gone on to jobs in video game design and the movie industry. Clearly, there is much a student of architecture must learn to attain a level of professional competency. A capacity to think visually is one of those skills, and is arguably a skill that distinguishes members of the visual arts (including architecture) from other disciplines. From a web search of information posted by the American Academy of Ophthalmology, visual acuity is defined as an ability to discriminate fine details when looking at something, and is often measured with the Snellen Eye Chart (the 20/20 eye test). 
In the context of this paper, visual acuity refers to a subject’s capacity to discriminate useful abstractions in a visual field for the purposes of visual thinking: problem solving through seeing (Arnheim 1969; Laseau 1980; Hoffman 1998). The growing use of digital media and the expanding ability to assemble design ideas and images through point-and-click methods make the cultivation and development of visual skills all the more important to today’s crop of young architects. The advent of digital media also brings into question the traditional, static 2d methods used to build visual skills in a design education, instead of promoting active 3d methods for teaching, learning and developing visual skills. Interactive digital movies provide an excellent platform for promoting visual acuity and for correlating the innate mechanisms of visual perception with the abstractions and notational systems used in professional discourse. In the context of this paper, pedagogy for building visual acuity is considered with regard to perception of the real world, for example the visual survey of an environment, a site or a street scene, and how that visual survey works in conjunction with practice.
keywords Curriculum, Seeing, Abstracting, Notation
series eCAADe
email
last changed 2022/06/07 08:00

_id c4b8
authors Lane, Jeffrey M. and Riesenfeld, Richard F.
year 1980
title A Theoretical Development for the Computer Generation and Display of Piecewise Polynomial Surfaces
source IEEE Transactions on Pattern Analysis and Machine Intelligence. January, 1980. Vol. PAMI-2, No. 1: pp. 35-46 : ill.
summary includes a short bibliography. Two algorithms for parametric piecewise polynomial evaluation and generation are described. The mathematical development of these algorithms is shown to generalize to new algorithms for obtaining curve and surface intersections and for the computer display of parametric curves and surfaces
keywords display, algorithms, intersection, CAD, computer graphics, B-splines, curved surfaces
series CADline
last changed 2003/06/02 13:58

_id eb7b
authors Liggett, Robin S.
year 1980
title The Quadratic Assignment problem: an Analysis of Applications and Solution Strategies
source Environment and Planning B. 1980. vol. 7: pp. 141-162 : tables. includes bibliography
summary A wide variety of practical problems in design, planning and management can be formulated as quadratic assignment problems, and this paper discusses this class of problems. Since algorithms for producing optimal solutions to such problems are computationally infeasible for all but small instances, heuristic techniques must usually be employed for the solution of real practical problems. This paper explores and compares a variety of solution techniques found in the literature, considering the trade-offs between computational efficiency and quality of the solutions generated. Recommendations are made about the key factors to be considered in developing and applying heuristic solution procedures
keywords design process, algorithms, graphs, quadratic assignment, operations research, optimization, automation, synthesis, heuristics, space allocation, floor plans, management, planning
series CADline
email
last changed 2003/06/02 13:58
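
The formulation this abstract works with (assign n activities to n locations, minimizing flow-weighted distance) and the class of improvement heuristics it surveys can be sketched as follows. The matrices and starting permutation are made-up toy data; real instances are far larger, which is exactly why heuristics are needed.

```python
from itertools import combinations

def qap_cost(perm, flow, dist):
    """Total cost: flow between activities i and j, weighted by the
    distance between their assigned locations perm[i] and perm[j]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def pairwise_exchange(perm, flow, dist):
    """Greedy improvement heuristic: keep applying cost-reducing swaps
    of two assignments until none remains (a local optimum)."""
    perm = list(perm)
    best = qap_cost(perm, flow, dist)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]
            c = qap_cost(perm, flow, dist)
            if c < best:
                best, improved = c, True
            else:
                perm[i], perm[j] = perm[j], perm[i]  # undo the swap
    return perm, best

# Hypothetical 3-activity example: flow = interaction between activities,
# dist = travel distance between candidate locations.
flow = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
perm, cost = pairwise_exchange([0, 2, 1], flow, dist)
```

The heuristic trades optimality for tractability: it only guarantees a local optimum with respect to pairwise swaps, which is the efficiency-versus-quality trade-off the paper analyzes.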

_id e3c1
authors Rasdorf, William J. and Fenves, Stephen J.
year 1980
title Design Specification Representation and Analysis
source Computing in Civil Engineering Conference Proceedings (2nd : 1980 : Baltimore, MD.). American Society of Civil Engineers, pp. 102-111. CADLINE has abstract only
summary The conventional structures of decision tables, information networks, and outlines define the current methodology for the representation and use of design specifications. This paper explores the relationships at the interfaces between these three representational tools. New analysis strategies are presented that provide flexibility at the lower boundary of the information network by converting decision tables to subnetworks within the information network and by compressing multiple subtables into larger tables representing higher-level nodes in the network. Both generation and compression of the information network provide flexibility in organizing a specification. The ability to both generate and compress nodes and subnodes establishes a means of representing all the relations among the data items of a specification and gives one more direct control over the level of detail of the information network. As a direct consequence of the ability to generate new nodes, new classifiers can be progressively attached to the nodes of the subnetwork, as well as to the nodes in the information network. As a result, specification requirements are more logically identified by the outline, and requirements and data items which were previously hidden within decision-table conditions and actions are now directly accessible from the outline. Conversely, items inconsequential to the outline can be compressed into nodes and removed from the outline. A computer program is presented that implements these network transformations. The program accurately represents the interface between the network and the decision table
keywords civil engineering, decision making, representation, analysis
series CADline
last changed 2003/06/02 13:58
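
The decision tables at the lower boundary of the information network can be given a minimal executable reading: condition rows, rule columns, and the actions a matching rule triggers. The condition names and requirements below are hypothetical, not taken from any actual specification.

```python
def evaluate(table, facts):
    """Return the actions of the first rule whose condition entries all
    match the given facts ('Y' = must hold, 'N' = must not, '-' = don't care)."""
    for entries, actions in table["rules"]:
        if all(want == "-" or facts[cond] == (want == "Y")
               for cond, want in zip(table["conditions"], entries)):
            return actions
    return []

# Hypothetical fragment of a design-specification decision table:
# conditions are yes/no questions about a structural member; actions
# are the specification requirements that then apply.
spec = {
    "conditions": ["is_beam", "span_over_6m"],
    "rules": [
        (["Y", "Y"], ["check_deflection", "check_bending"]),
        (["Y", "N"], ["check_bending"]),
        (["N", "-"], ["no_beam_checks"]),
    ],
}
```

Converting such a table into a subnetwork, as the paper proposes, amounts to promoting each condition and action to a node so the outline can reference items that are hidden here inside the rule columns.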

_id acadia21_76
id acadia21_76
authors Smith, Rebecca
year 2021
title Passive Listening and Evidence Collection
source ACADIA 2021: Realignments: Toward Critical Computation [Proceedings of the 41st Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 979-8-986-08056-7]. Online and Global. 3-6 November 2021. edited by B. Bogosian, K. Dörfler, B. Farahi, J. Garcia del Castillo y López, J. Grant, V. Noel, S. Parascho, and J. Scott. 76-81.
doi https://doi.org/10.52842/conf.acadia.2021.076
summary In this paper, I present the commercial, urban-scale gunshot detection system ShotSpotter in contrast with a range of ecological sensing examples which monitor animal vocalizations. Gunshot detection sensors are used to alert law enforcement that a gunshot has occurred and to collect evidence. They are intertwined with processes of criminalization, in which the individual, rather than the collective, is targeted for punishment. Ecological sensors are used as a “passive” practice of information gathering which seeks to understand the health of a given ecosystem through monitoring population demographics, and to document the collective harms of anthropogenic change (Stowell and Sueur 2020). In both examples, the ability of sensing infrastructures to “join up and speed up” (Gabrys 2019, 1) is increasing with the use of machine learning to identify patterns and objects: a new form of expertise through which the differential agendas of these systems are implemented and made visible. I trace the differential agendas of these systems as they manifest through varied components: the spatial distribution of hardware in the existing urban environment and / or landscape; the software and other informational processes that organize and translate the data; the visualization of acoustical sensing data; the commercial factors surrounding the production of material components; and the apps, platforms, and other forms of media through which information is made available to different stakeholders. I take an interpretive and qualitative approach to the analysis of these systems as cultural artifacts (Winner 1980), to demonstrate how the political and social stakes of the technology are embedded throughout them.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id 452c
authors Vanier, D. J. and Worling, Jamie
year 1986
title Three-dimensional Visualization: A Case Study
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 92-102
summary Three-dimensional computer visualization has intrigued both building designers and computer scientists for decades. Research and conference papers present an extensive list of existing and potential uses of three-dimensional geometric data for the building industry (Baer et al., 1979). Early studies on visualization include urban planning (Rogers, 1980), tree-shading simulation (Schiler and Greenberg, 1980), sun studies (Anon, 1984), finite element analysis (Proulx, 1983), and facade texture rendering (Nizzolese, 1980). With the advent of better interfaces, faster computer processing speeds and better application packages, there has been interest on the part of both researchers and practitioners in three-dimensional models for energy analysis (Pittman and Greenberg, 1980), modelling with transparencies (Hebert, 1982), super-realistic rendering (Greenberg, 1984), visual impact (Bridges, 1983), interference clash checking (Trickett, 1980), and complex object visualization (Haward, 1984). The Division of Building Research is currently investigating the application of geometric modelling in the building delivery process using sophisticated software (Evans, 1985). The first stage of the project (Vanier, 1985), a feasibility study, deals with the aesthetics of the model. It identifies two significant requirements for geometric modelling systems: the need for a comprehensive data structure and the requirement for realistic accuracies and tolerances. This chapter presents the results of the second phase of this geometric modelling project, which is the construction of 'working' and 'presentation' models for a building.
series CAAD Futures
email
last changed 2003/05/16 20:58

_id cf2011_p127
id cf2011_p127
authors Benros, Deborah; Granadeiro, Vasco; Duarte, Jose; Knight, Terry
year 2011
title Integrated Design and Building System for the Provision of Customized Housing: the Case of Post-Earthquake Haiti
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 247-264.
summary The paper proposes integrated design and building systems for the provision of sustainable customized housing. It advances previous work by applying a methodology to generate these systems from vernacular precedents. The methodology is based on the use of shape grammars to derive and encode a contemporary system from the precedents. The combined set of rules can be applied to generate housing solutions tailored to specific user and site contexts. The provision of housing to shelter the population affected by the 2010 Haiti earthquake illustrates the application of the methodology. A computer implementation is currently under development in C# using the BIM platform provided by Revit. The world is experiencing a sharp increase in population and a strong urbanization process. These phenomena call for the development of effective means to solve the resulting housing deficit. The response of the informal sector to the problem, which relies mainly on handcrafted processes, has resulted in an increase of urban slums in many of the big cities, which lack sanitary and spatial conditions. The formal sector has produced monotonous environments based on the mass-production idea that one size fits all, which fails to meet individual and cultural needs. We propose an alternative approach in which mass customization is used to produce planned environments that possess qualities found in historical settlements. Mass customization, a new paradigm emerging due to the technological developments of the last decades, combines the economy of scale of mass production with the aesthetic and functional qualities of customization. Mass customization of housing is defined as the provision of houses that respond to the context in which they are built. 
The conceptual model used for the mass customization of housing departs from the idea of a housing type, which is the combined result of three systems (Habraken, 1988) -- spatial, building, and stylistic -- and it includes a design system, a production system, and a computer system (Duarte, 2001). In previous work, this conceptual model was tested by developing a computer system for existing design and building systems (Benrós and Duarte, 2009). The current work advances it by developing new and original design, building, and computer systems for a particular context. The urgent need to build fast in the aftermath of catastrophes quite often overrides any cultural concerns. As a result, the shelters provided in such circumstances are indistinct and impersonal. However, taking individual and cultural aspects into account might lead to a better identification of the population with their new environment, thereby minimizing the rupture caused in their lives. As the methodology to develop new housing systems is based on the idea of architectural precedents, choosing existing vernacular housing as a precedent permits the incorporation of cultural aspects and facilitates an identification of people with the new housing. In the Haiti case study, we chose as a precedent a housetype called “gingerbread houses”, which includes a wide range of houses, from wealthy to very humble ones. Although the proposed design system was inspired by these houses, it was decided to adopt a contemporary take. The methodology to devise the new type was based on two ideas: precedents and transformations in design. In architecture, the use of precedents provides designers with typical solutions for particular problems and constitutes a point of departure for a new design. In our case, the precedent is an existing housetype. It has been shown (Duarte, 2001) that a particular housetype can be encoded by a shape grammar (Stiny, 1980), forming a design system. 
Studies in shape grammars have shown that the evolution of one style into another can be described as the transformation of one shape grammar into another (Knight, 1994). The methodology used builds on these ideas and comprises the following steps (Duarte, 2008): (1) selection of precedents; (2) derivation of an archetype; (3) listing of rules; (4) derivation of designs; (5) cataloguing of solutions; (6) derivation of tailored solutions.
keywords Mass customization, Housing, Building system, Sustainable construction, Life cycle energy consumption, Shape grammar
series CAAD Futures
email
last changed 2012/02/11 19:21

_id c3f4
authors Joy, William
year 1980
title An Introduction to Display Editing with VI
source September, 1980. 30 p
summary VI (Visual) is a display-oriented interactive text editor. When using VI, the screen of the terminal acts as a window into the file being edited. Changes made to the file are reflected in what is seen. Using VI, the user can insert new text any place in the file quite easily. Most of the commands to VI move the cursor around in the file. There are commands to move the cursor forward and backward in units of characters, words, sentences and paragraphs. A small set of operators, like d for delete and c for change, are combined with the motion commands to form operations such as delete word or change paragraph, in a simple and natural way. This regularity and the mnemonic assignment of commands to keys make the editor command set easy to remember and to use. VI works on a large number of display terminals, and new terminals are easily driven after editing a terminal description file. While it is advantageous to have an intelligent terminal which can locally insert and delete lines and characters from the display, the editor will function quite well on dumb terminals over slow phone lines. The editor makes allowances for the low bandwidth in these situations and uses smaller window sizes and different display-updating algorithms to make the best use of the limited speed available. It is also possible to use the command set of VI on hardcopy terminals, storage tubes and 'glass ttys' using a one-line editing window; thus VI's command set is available on all terminals. The full command set of the more traditional, line-oriented editor ED is available within VI; it is quite simple to switch between the two modes of editing
keywords UNIX, display, word processing, software
series CADline
last changed 1999/02/12 15:08
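
The operator/motion composition this abstract highlights (d + w = delete word) can be sketched in miniature: a motion computes a target position, an operator acts on the span the motion covers. A hedged illustration of the idea only; real vi motions, registers, and buffers are far richer.

```python
def motion_word(text, pos):
    """'w'-style motion: move past the current word and following spaces."""
    while pos < len(text) and not text[pos].isspace():
        pos += 1
    while pos < len(text) and text[pos].isspace():
        pos += 1
    return pos

def op_delete(text, start, end):
    """'d'-style operator: remove the span covered by the motion."""
    return text[:start] + text[end:]

def apply_command(text, pos, operator, motion):
    """Compose an operator with a motion, e.g. 'd' + 'w' = delete word."""
    target = motion(text, pos)
    return operator(text, min(pos, target), max(pos, target))
```

Because any operator composes with any motion, the command set grows multiplicatively while staying mnemonic, which is the regularity the abstract credits for the editor's ease of use.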

_id cdc2008_243
id cdc2008_243
authors Loukissas, Yanni
year 2008
title Keepers of the Geometry: Architects in a Culture of Simulation
source First International Conference on Critical Digital: What Matters(s)? - 18-19 April 2008, Harvard University Graduate School of Design, Cambridge (USA), pp. 243-244
summary “Why do we have to change? We’ve been building buildings for years without CATIA?” Roger Norfleet, a practicing architect in his thirties, poses this question to Tim Quix, a generation older and an expert in CATIA, a computer-aided design tool developed by Dassault Systemes in the early 1980s for use by aerospace engineers. It is 2005 and CATIA has just come into use at Paul Morris Associates, the thirty-person architecture firm where Norfleet works; he is struggling with what it will mean for him, for his firm, for his profession. Computer-aided design is about creativity, but also about jurisdiction, about who controls the design process. In Architecture: The Story of Practice, architectural theorist Dana Cuff writes that each generation of architects is educated to understand what constitutes a creative act and who in the system of their profession is empowered to use it, and at what time. Creativity is socially constructed, and Norfleet is coming of age as an architect in a time of technological but also social transition. He must come to terms with the increasingly complex computer-aided design tools that have changed both creativity and the rules by which it can operate. In today’s practices, architects use computer-aided design software to produce three-dimensional geometric models. Sometimes they use off-the-shelf commercial software like CATIA, sometimes they customize this software through plug-ins and macros, and sometimes they work with software that they have themselves programmed. And yet, conforming to Larson’s idea that they claim the higher ground by identifying with art and not with science, contemporary architects do not often use the term “simulation.” Rather, they have held onto traditional terms such as “modeling” to describe the buzz of new activity with digital technology. 
But whether or not they use the term, simulation is creating new architectural identities and transforming relationships among a range of design collaborators: masters and apprentices, students and teachers, technical experts and virtuoso programmers. These days, constructing an identity as an architect requires that one define oneself in relation to simulation. Case studies, primarily from two architectural firms, illustrate the transformation of traditional relationships, in particular that of master and apprentice, and the emergence of new roles, including a new professional identity, “keeper of the geometry,” defined by the fusion of person and machine. Like any profession, architecture may be seen as a system in flux. However, with their new roles and relationships, architects are learning that the fight for professional jurisdiction is increasingly for jurisdiction over simulation. Computer-aided design is changing professional patterns of production in architecture, the very way in which professionals compete with each other by making new claims to knowledge. Even today, employees at Paul Morris squabble about the role that simulation software should play in the office. Among other things, they fight about the role it should play in promotion and firm hierarchy. They bicker about the selection of new simulation software, knowing that choosing software implies greater power for those who are expert in it. Architects and their collaborators are in a continual struggle to define the creative roles that can bring them professional acceptance and greater control over design. New technologies for computer-aided design do not change this reality, they become players in it.
email
last changed 2009/01/07 08:05

_id 244d
authors Monedero, J., Casaus, A. and Coll, J.
year 1992
title From Barcelona. Chronicle and Provisional Evaluation of a New Course on Architectural Solid Modelling by Computerized Means
source CAAD Instruction: The New Teaching of an Architect? [eCAADe Conference Proceedings] Barcelona (Spain) 12-14 November 1992, pp. 351-362
doi https://doi.org/10.52842/conf.ecaade.1992.351
summary The first step made at the ETSAB in the computer field goes back to 1965, when professors Margarit and Buxade acquired an IBM computer, an electromechanical machine which used perforated cards and which was used to produce an innovative method of structural calculation. This method was incorporated into the academic courses and, at that time, the oft-repeated question "should students learn programming?" was readily answered: the exercises required some knowledge of Fortran, and every student needed this knowledge to do the exercises. This method, well known in Europe at that time, also provided a service for professional practice and marked the beginning of what is now the CC (Centro de Calculo) of our school. In 1980 the school bought a PDP11/34, a computer which had 256 Kb of RAM, two disks of 5 Mb and one of 10 Mb, and a multiplexer of 8 lines. Some time later the general politics of the UPC changed course, and this was related to the purchase of a VAX which is still the base of the CC and carries most of the administrative burden of the school. 1985 was probably the first year in which we can talk of a general policy of the school directed towards computers. A report was made that year which includes a survey addressed to the six departments of the school (Graphic Expression, Projects, Structures, Construction, Composition and Urbanism) and which contains interesting data. According to the report, there were four departments which used computers in their current courses, while the two others (Projects and Composition) did not use them at all. The main user was the Department of Structures, while the incidence of the remaining three was rather sporadic. The kinds of problems detected in this report are very typical: a lack of resources for hardware, software and maintenance of the few computers that the school had at that moment, and a demand (posed by the students) greatly exceeding the supply (computers and teachers). 
The main problem appeared to be the lack of computer graphic devices and proper software.

series eCAADe
email
last changed 2022/06/07 07:58

_id 952f
authors Soloway, E., Guzdial, M. and Hay, K.
year 1994
title Learner-Centered Design: The Challenge for HCI in the 21st Century
source Interactions, no. April (1994): 36-48
summary In the 1980's a major transformation took place in the computing world: attention was finally being paid to making computers easier to use. You know the history: in the 1970's folks at Xerox were exploring so-called personal computers and developing graphical, point-and-click interfaces. The goal was to make using computers less cognitively taxing, thereby permitting the user to focus more mental cycles on getting the job done. For some time people had recognized that there would be benefits if users could interact with computers using visual cues and motor movements instead of textual/linguistic strings. However, computer cycles were costly; they could hardly be wasted on supporting a non-textual interface. There was barely enough zorch (i.e., computer power, measured in your favorite unit) to simply calculate the payroll.
series journal paper
last changed 2003/04/23 15:50

_id acadia19_392
id acadia19_392
authors Steinfeld, Kyle
year 2019
title GAN Loci
source ACADIA 19: UBIQUITY AND AUTONOMY [Proceedings of the 39th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-59179-7] (The University of Texas at Austin School of Architecture, Austin, Texas 21-26 October, 2019) pp. 392-403
doi https://doi.org/10.52842/conf.acadia.2019.392
summary This project applies techniques in machine learning, specifically generative adversarial networks (or GANs), to produce synthetic images intended to capture the predominant visual properties of urban places. We propose that imaging cities in this manner represents the first computational approach to documenting the Genius Loci of a city (Norberg-Schulz, 1980), which is understood to include those forms, textures, colors, and qualities of light that exemplify a particular urban location and that set it apart from similar places. Presented here are methods for the collection of urban image data, for the necessary processing and formatting of this data, and for the training of two known computational statistical models (StyleGAN (Karras et al., 2018) and Pix2Pix (Isola et al., 2016)) that identify visual patterns distinct to a given site and that reproduce these patterns to generate new images. These methods have been applied to image nine distinct urban contexts across six cities in the US and Europe, the results of which are presented here. While the product of this work is not a tool for the design of cities or building forms, but rather a method for the synthetic imaging of existing places, we nevertheless seek to situate the work in terms of computer-assisted design (CAD). In this regard, the project is demonstrative of a new approach to CAD tools. In contrast with existing tools that seek to capture the explicit intention of their user (Aish, Glynn, Sheil 2017), in applying computational statistical methods to the production of images that speak to the implicit qualities that constitute a place, this project demonstrates the unique advantages offered by such methods in capturing and expressing the tacit.
series ACADIA
type normal paper
email
last changed 2022/06/07 07:56

_id fc80
authors Ubbelohde, S. and Humann, C.
year 1998
title Comparative Evaluation of Four Daylighting Software Programs
source 1998 ACEEE Summer Study on Energy Efficiency in Buildings Proceedings. American Council for an Energy-Efficient Economy
summary By the mid-1980's, a number of software packages were under development to predict daylighting performance in buildings, in particular illumination levels in daylighted spaces. An evaluation in 1988 by Ubbelohde et al. demonstrated that none of the software then available was capable of predicting the simplest of real daylighting designs. In the last ten years computer capabilities have evolved rapidly and we have four major packages widely available in the United States. This paper presents a comparative evaluation from the perspective of building and daylighting design practice. A contemporary building completed in 1993 was used as a base case for evaluation. We present the results from field measurements, software predictions and physical modeling as a basis for discussing the capabilities of the software packages in architectural design practice. We found the current software packages far more powerful and nuanced in their ability to predict daylight than previously. Some can accurately predict quantitative daylight performance under varying sky conditions and produce handsome and accurate visualizations of the space. The programs differ significantly, however, in their ease of use, modeling basis and the emphasis between quantitative predictions and visualization in the output.
series other
last changed 2003/04/23 15:50

_id b04c
authors Goerger, S., Darken, R., Boyd, M., Gagnon, T., Liles, S., Sullivan, J. and Lawson, J.
year 1996
title Spatial Knowledge Acquisition from Maps and Virtual Environments in Complex Architectural Space
source Proc. 16th Applied Behavioral Sciences Symposium, 22-23 April, U.S. Air Force Academy, Colorado Springs, CO., 1996, 6-10
summary It has often been suggested that due to its inherent spatial nature, a virtual environment (VE) might be a powerful tool for spatial knowledge acquisition of a real environment, as opposed to the use of maps or some other two-dimensional, symbolic medium. While interesting from a psychological point of view, a study of the use of a VE in lieu of a map seems nonsensical from a practical point of view. Why would the use of a VE preclude the use of a map? The more interesting investigation would be of the value added of the VE when used with a map. If the VE could be shown to substantially improve navigation performance, then there might be a case for its use as a training tool. If not, then we have to assume that maps continue to be the best spatial knowledge acquisition tool available. An experiment was conducted at the Naval Postgraduate School to determine if the use of an interactive, three-dimensional virtual environment would enhance spatial knowledge acquisition of a complex architectural space when used in conjunction with floor plan diagrams. There has been significant interest in this research area of late. Witmer, Bailey, and Knerr (1995) showed that a VE was useful in acquiring route knowledge of a complex building. Route knowledge is defined as the procedural knowledge required to successfully traverse paths between distant locations (Golledge, 1991). Configurational (or survey) knowledge is the highest level of spatial knowledge and represents a map-like internal encoding of the environment (Thorndyke, 1980). The Witmer study could not confirm if configurational knowledge was being acquired. Also, no comparison was made to a map-only condition, which we felt is the most obvious alternative. Comparisons were made only to a real world condition and a symbolic condition where the route is presented verbally.
series other
last changed 2003/04/23 15:50

_id 4580
authors Borgerson, B. R. and Johnson, Robert H.
year 1980
title Beyond CAD to Computer Aided Engineering
source (8) p. : ill. Manufacturing Data Systems Incorporated, 1980? includes bibliography
summary Current CAD systems significantly aid the drafting function and many provide some aid to selected design activities. For the development of mechanical systems, much more can be done. Future systems will aid the interactive engineering process of design, analysis, control, documentation, and manufacturing engineering. Computer based systems which address this broader spectrum of engineering activities are referred to as `Computer Aided Engineering,' or `CAE,' systems. CAE systems will use volumetric techniques to create and evaluate the individual components of a machine design in conjunction with data base management schemas to support the interrelationships of the components of machines. This paper focuses on computer assistance to the engineering of mechanical systems
keywords mechanical engineering, CAE, solid modeling, objects
series CADline
last changed 2003/06/02 13:58

_id 0439
authors Kant, Elaine
year 1980
title A Knowledge-Based Approach to Using Efficiency Estimation in Program Synthesis
source 1980? pp. 457-462. includes bibliography
summary This paper describes a system for using efficiency knowledge in program synthesis. The system, called LIBRA, uses a combination of knowledge-based rules and algebraic cost estimates to compare potential program implementations. Efficiency knowledge is used to control the selection of algorithm and data structure implementations and the application of optimizing transformations. Prototypes of programming constructs and of cost estimation techniques are used to simplify the efficiency analysis process and to assist in the acquisition of efficiency knowledge associated with new coding knowledge. LIBRA has been used to guide the selection of implementations for several programs that classify, retrieve information, sort, and generate prime numbers
keywords knowledge base, systems, programming, performance, synthesis, evaluation
series CADline
last changed 1999/02/12 15:08
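
The abstract above describes LIBRA choosing among candidate implementations by comparing algebraic cost estimates. The sketch below is purely illustrative of that idea, not LIBRA itself: the cost models, structure names, and `choose_implementation` helper are hypothetical stand-ins for the kind of efficiency knowledge the paper discusses.

```python
# Illustrative sketch (not LIBRA): selecting a data-structure implementation
# by comparing simple algebraic cost estimates, in the spirit of the
# knowledge-based efficiency estimation the abstract describes.
import math

# Hypothetical cost models: estimated total cost of n membership queries
# against a structure holding n items.
COST_MODELS = {
    "sorted_list_binary_search": lambda n: n * math.log2(max(n, 2)),
    "hash_set": lambda n: n * 1.5,        # roughly constant per lookup
    "unsorted_list_scan": lambda n: n * n / 2,
}

def choose_implementation(n):
    """Return the candidate implementation with the lowest estimated cost."""
    return min(COST_MODELS, key=lambda name: COST_MODELS[name](n))
```

For large inputs the hash-based candidate wins; for a handful of items the linear scan's small constant makes it cheapest, which is exactly the kind of trade-off such cost estimates are meant to surface.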

_id c5c4
authors Samet, Hanan
year 1980
title Region Representation : Quadtrees from Boundary Codes
source Communications of the ACM. March, 1980. vol. 23: pp. 163-170 : some ill. includes bibliography
summary An algorithm is presented for constructing a quadtree for a region given its boundary in the form of a chain code. Analysis of the algorithm reveals that its execution time is proportional to the product of the perimeter and the log of the diameter of the region
keywords representation, data structures, quadtree, image processing
series CADline
last changed 1999/02/12 15:09
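
The quadtree region representation referenced in the abstract above can be sketched briefly. Note this builds the tree by recursive subdivision of a binary pixel grid, which is the standard construction; it is not Samet's chain-code boundary algorithm, whose point is precisely to avoid scanning the full grid.

```python
# Minimal sketch of a region quadtree (standard recursive subdivision of a
# binary 2^n x 2^n grid) -- NOT the chain-code construction from the paper.

def build_quadtree(grid, x=0, y=0, size=None):
    """Return 0 or 1 for a uniform block, or a dict of four subquadrants."""
    if size is None:
        size = len(grid)
    first = grid[y][x]
    if all(grid[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return first  # uniform block becomes a leaf
    h = size // 2
    return {
        "nw": build_quadtree(grid, x,     y,     h),
        "ne": build_quadtree(grid, x + h, y,     h),
        "sw": build_quadtree(grid, x,     y + h, h),
        "se": build_quadtree(grid, x + h, y + h, h),
    }

grid = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
]
tree = build_quadtree(grid)
```

Here the uniform north-west and north-east quadrants collapse to single leaves, while the mixed south-west quadrant is subdivided further.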
