CumInCAD is a cumulative index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 74

_id cdc2008_243
id cdc2008_243
authors Loukissas, Yanni
year 2008
title Keepers of the Geometry: Architects in a Culture of Simulation
source First International Conference on Critical Digital: What Matter(s)? - 18-19 April 2008, Harvard University Graduate School of Design, Cambridge (USA), pp. 243-244
summary “Why do we have to change? We’ve been building buildings for years without CATIA?” Roger Norfleet, a practicing architect in his thirties, poses this question to Tim Quix, a generation older and an expert in CATIA, a computer-aided design tool developed by Dassault Systemes in the early 1980s for use by aerospace engineers. It is 2005 and CATIA has just come into use at Paul Morris Associates, the thirty-person architecture firm where Norfleet works; he is struggling with what it will mean for him, for his firm, for his profession. Computer-aided design is about creativity, but also about jurisdiction, about who controls the design process. In Architecture: The Story of Practice, architectural theorist Dana Cuff writes that each generation of architects is educated to understand what constitutes a creative act and who in the system of their profession is empowered to use it and at what time. Creativity is socially constructed, and Norfleet is coming of age as an architect in a time of technological but also social transition. He must come to terms with the increasingly complex computer-aided design tools that have changed both creativity and the rules by which it can operate. In today’s practices, architects use computer-aided design software to produce three-dimensional geometric models. Sometimes they use off-the-shelf commercial software like CATIA, sometimes they customize this software through plug-ins and macros, and sometimes they work with software that they have programmed themselves. And yet, conforming to Larson’s idea that they claim the higher ground by identifying with art and not with science, contemporary architects do not often use the term “simulation.” Rather, they have held onto traditional terms such as “modeling” to describe the buzz of new activity with digital technology. But whether or not they use the term, simulation is creating new architectural identities and transforming relationships among a range of design collaborators: masters and apprentices, students and teachers, technical experts and virtuoso programmers. These days, constructing an identity as an architect requires that one define oneself in relation to simulation. Case studies, primarily from two architectural firms, illustrate the transformation of traditional relationships, in particular that of master and apprentice, and the emergence of new roles, including a new professional identity, “keeper of the geometry,” defined by the fusion of person and machine. Like any profession, architecture may be seen as a system in flux. However, with their new roles and relationships, architects are learning that the fight for professional jurisdiction is increasingly a fight for jurisdiction over simulation. Computer-aided design is changing professional patterns of production in architecture, the very way in which professionals compete with each other by making new claims to knowledge. Even today, employees at Paul Morris squabble about the role that simulation software should play in the office. Among other things, they fight about the role it should play in promotion and firm hierarchy. They bicker about the selection of new simulation software, knowing that choosing software implies greater power for those who are expert in it. Architects and their collaborators are in a continual struggle to define the creative roles that can bring them professional acceptance and greater control over design. New technologies for computer-aided design do not change this reality; they become players in it.
email
last changed 2009/01/07 08:05

_id c444
authors Forrest, Robin A.
year 1980
title The Twisted Cubic Curve : A Computer-Aided Geometric Design Approach
source Computer Aided Design. July, 1980. vol. 12: pp. 165-172 : ill. includes bibliography
summary The twisted cubic curve has the attraction of combining both commonly used curve definitions, the conic section and the parametric cubic, in a single form. A definition of the twisted cubic is developed in terms of geometric `handles' convenient for CAD and independent of parametrization, analogous to a well-known definition of conics. Conditions for the occurrence of asymptotes are investigated and shown to be considerably more complex than those for conics. Several more controllable subsets of the general curve are described. The paper concludes that use of the full generality of the twisted cubic is in most cases unjustified. (The parametric form at issue is sketched after this record.)
keywords computational geometry, curves, CAD, parametrization
series CADline
last changed 2003/06/02 13:58
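
For context, the "parametric cubic" named in the abstract above is the space curve

    \mathbf{r}(t) = \mathbf{a}_0 + \mathbf{a}_1 t + \mathbf{a}_2 t^2 + \mathbf{a}_3 t^3,
    \qquad \mathbf{a}_i \in \mathbb{R}^3,\ t \in [0, 1],

whose canonical instance is the twisted cubic \mathbf{r}(t) = (t, t^2, t^3). This is standard background only; Forrest's handle-based, parametrization-independent definition is developed in the paper itself.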

_id ga9809
id ga9809
authors Kälviäinen, Mirja
year 1998
title The ideological basis of generative expression in design
source International Conference on Generative Art
summary This paper will discuss issues concerning the design ideology supporting the use and development of generative design. This design ideology is based on the unique qualities of craft production and on forms or ideas from nature or the natural characteristics of materials. The main ideology presented here is the ideology of 1980s art craft production in Finland. It is connected with the general Finnish design ideology and with the design ideology of other western countries. The ideology for these professions is based on the common background of design principles stated in 19th-century England. The early principles developed through the Arts and Crafts tradition, which had a great impact on design thinking in Europe and in the United States. A strong continuity of this design ideology from 19th-century England to the present computerized age can be detected. The application of these design principles through different eras shows the difference in the interpretations and in the permission of natural decorative forms. The ideology of 1980s art craft in Finland supports the ideas and fulfilment of generative design in many ways. The reasons often given as the basis for making generative design with computers are in very many respects the same as the ideology for art craft. In Finland there is a strong connection between art craft and design ideology. The characteristics of craft have often been seen as the basis for industrial design skills. The main themes in the ideology of 1980s art craft in Finland can be compared to the ideas of generative design. The main issues in which the generative approach reflects a distinctive ideological thinking are: Way of life: the work is the communication of the maker's inner ideas. The concrete relationship with the environment, personality, uniqueness, communication, visionary qualities, and the development and growth of the maker are important. The experiments serve as a medium for learning. Taste and aesthetic education: the real love affair is created by the non-living object with the help of memories and thought. At their best, objects create in their stability and communication the basis for durable human relationships. People have warm relationships especially with handmade products in which they can detect unique qualities and the feeling that the product has been made solely for them. Counter-culture: the aim of the work is to produce alternatives to technobureaucracy and mechanical production and to bring subjective and unique experiences into the customer's monotonous life. This ideology rejects the usual standardized mass production of our times. Mythical character: there is a metamorphosis in the birth of the product. In many ways the design process is about birth and growth. The creative process is a development story of the maker. The complexity of communication is the expression of the moments that have been lived. If you can sense the process of making in the product, it makes the product more real and nearer to life. Each piece of wood has its own beauty. Before you can work with it you must find the deep soul of its quality. The distinctive traits of the material, technique and object are an essential part of the metamorphosis which brings the product to life. The form is not only for form's sake but for other purposes, too. You cannot find loose forms in nature. Products have their beginnings in the material and are a part of nature.
This art craft ideology that supports the ideas of generative design can be applied either to handmade craft production or to production exploiting new technology. The unique characteristics of craft and the expression of material-based development are a way to broaden the expression and forms of industrial products. However, for a craftsperson it is not meaningful to fill the world with objects; in generative, computer-based production this is possible. But perhaps the production of unique pieces is still slower, which in that sense makes the industrial production more ecological. People will be more attached to personal and unique objects, and thus the life cycle of the objects produced will be longer.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id ddss2006-hb-187
id DDSS2006-HB-187
authors Diappi, Lidia and Bolchi, Paola
year 2006
title Gentrification Waves in the Inner-City of Milan - A multi agent / cellular automata model based on Smith's Rent Gap theory
source Van Leeuwen, J.P. and H.J.P. Timmermans (eds.) 2006, Innovations in Design & Decision Support Systems in Architecture and Urban Planning, Dordrecht: Springer, ISBN-10: 1-4020-5059-3, ISBN-13: 978-1-4020-5059-6, p. 187-201
summary The aim of this paper is to investigate the gentrification process by applying an urban spatial model of gentrification based on Smith's (1979; 1987; 1996) Rent Gap theory. The rich sociological literature on the topic mainly assumes gentrification to be a cultural phenomenon, namely the result of demand pressure from the suburban middle and upper class willing to return to the city (Ley, 1980; Lipton, 1977; May, 1996). Little attempt has been made to investigate and build a sound economic explanation of the causes of the process. The Rent Gap theory (RGT) of Neil Smith still represents an important contribution in this direction. At the heart of Smith's argument is the assumption that gentrification takes place because capital returns to the inner city, creating opportunities for residential relocation and profit. This paper illustrates a dynamic model of Smith's theory through a multi-agent/cellular-automata approach (Batty, 2005) developed on the NetLogo platform. A set of behavioural rules is formalised for each agent involved (homeowner, landlord, tenant and developer, plus the passive 'dwelling' agent with its rent and level of decay). The simulations show the surge of neighbourhood degradation or renovation and population turnover, starting from different initial states of decay and estate rent values. Consistent with a Self-Organized Criticality approach, the model shows that non-linear interactions at the local level may produce different configurations of the system at the macro level. This paper represents a further development of a previous version of the model (Diappi and Bolchi, 2005). The model proposed here includes some more realistic factors inspired by the features of housing market dynamics in the city of Milan: the shape of the potential rent according to city form and functions, the subdivision into areal submarkets according to current rents, and their maintenance levels. The model has a more realistic visualisation of the city and its form, and is able to show the different dynamics of the emergent neighbourhoods in Milan over the last ten years. (A toy illustration of the rent-gap mechanism follows this record.)
keywords Multi agent systems, Housing market, Gentrification, Emergent systems
series DDSS
last changed 2006/08/29 12:55
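
The rent-gap mechanism the abstract builds on can be caricatured in a few lines of code. The sketch below is a deliberately naive per-cell toy in Python, with invented thresholds and rates; the authors' NetLogo model has interacting homeowner, landlord, tenant and developer agents and neighbourhood effects that this omits.

    # Toy reading of Smith's Rent Gap theory -- NOT the Diappi/Bolchi model.
    # Each parcel decays until the gap between potential and capitalized
    # rent makes reinvestment profitable, producing waves of renovation.
    import random

    SIZE, STEPS = 20, 50
    GAP_THRESHOLD = 0.4   # invented: gap that triggers reinvestment
    DECAY_RATE = 0.02     # invented: per-step loss of capitalized rent

    # potential rent: what a parcel could earn at its "highest and best use"
    potential = [[random.uniform(0.5, 1.0) for _ in range(SIZE)]
                 for _ in range(SIZE)]
    # capitalized rent: what the parcel earns now; starts below potential
    actual = [[p * random.uniform(0.7, 1.0) for p in row] for row in potential]

    for _ in range(STEPS):
        for i in range(SIZE):
            for j in range(SIZE):
                gap = potential[i][j] - actual[i][j]
                if gap > GAP_THRESHOLD:
                    # gap wide enough for profitable redevelopment:
                    # capital "returns" and the parcel is renovated
                    actual[i][j] = potential[i][j]
                else:
                    # otherwise the building stock keeps depreciating
                    actual[i][j] = max(0.0, actual[i][j] - DECAY_RATE)

    renovated = sum(a == p for ra, rp in zip(actual, potential)
                    for a, p in zip(ra, rp))
    print(f"parcels at potential rent after {STEPS} steps: {renovated}")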

_id caadria2014_102
id caadria2014_102
authors Lopes, João V.; Alexandra C. Paio and José P. Sousa
year 2014
title Parametric Urban Models Based on Frei Otto’s Generative Form-Finding Processes
source Rethinking Comprehensive Design: Speculative Counterculture, Proceedings of the 19th International Conference on Computer-Aided Architectural Design Research in Asia (CAADRIA 2014) / Kyoto 14-16 May 2014, pp. 595–604
doi https://doi.org/10.52842/conf.caadria.2014.595
summary There is at present a progressive tendency to incorporate parametric design strategies in urban planning and design. Although the computational technologies that allow it are recent, the fundamental theories and thinking processes behind it can be traced back to the work conducted at the Institute for Lightweight Structures (IL) in Stuttgart between the 1960s and 1980s. This paper describes an experimental urban research work based on Frei Otto's and Eda Schaur's thoughts on unplanned settlements, and on the form-finding experiments carried out at IL. By exploring the digital development of parametric and algorithmic interactive models, two urban design proposals were developed for a site in the city of Porto. Out of this experience, the paper suggests that the act of design can today benefit from a deeper understanding of the natural processes of occupation and connection.
keywords Parametric urbanism; generative design; form-finding; Frei Otto
series CAADRIA
email
last changed 2022/06/07 07:59

_id acadia21_76
id acadia21_76
authors Smith, Rebecca
year 2021
title Passive Listening and Evidence Collection
source ACADIA 2021: Realignments: Toward Critical Computation [Proceedings of the 41st Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 979-8-986-08056-7]. Online and Global. 3-6 November 2021. edited by B. Bogosian, K. Dörfler, B. Farahi, J. Garcia del Castillo y López, J. Grant, V. Noel, S. Parascho, and J. Scott. 76-81.
doi https://doi.org/10.52842/conf.acadia.2021.076
summary In this paper, I present the commercial, urban-scale gunshot detection system ShotSpotter in contrast with a range of ecological sensing examples which monitor animal vocalizations. Gunshot detection sensors are used to alert law enforcement that a gunshot has occurred and to collect evidence. They are intertwined with processes of criminalization, in which the individual, rather than the collective, is targeted for punishment. Ecological sensors are used as a “passive” practice of information gathering which seeks to understand the health of a given ecosystem through monitoring population demographics, and to document the collective harms of anthropogenic change (Stowell and Sueur 2020). In both examples, the ability of sensing infrastructures to “join up and speed up” (Gabrys 2019, 1) is increasing with the use of machine learning to identify patterns and objects: a new form of expertise through which the differential agendas of these systems are implemented and made visible. I trace the differential agendas of these systems as they manifest through varied components: the spatial distribution of hardware in the existing urban environment and / or landscape; the software and other informational processes that organize and translate the data; the visualization of acoustical sensing data; the commercial factors surrounding the production of material components; and the apps, platforms, and other forms of media through which information is made available to different stakeholders. I take an interpretive and qualitative approach to the analysis of these systems as cultural artifacts (Winner 1980), to demonstrate how the political and social stakes of the technology are embedded throughout them.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id ddss9860
id ddss9860
authors Vakalo, E-G. and Fahmy, A.
year 1998
title A Theoretical Framework for the Analysis and Derivation of Orthogonal Building Plans and Sections
source Timmermans, Harry (Ed.), Fourth Design and Decision Support Systems in Architecture and Urban Planning (Maastricht, the Netherlands), ISBN 90-6814-081-7, July 26-29, 1998
summary Architects are generally perceived as “Formgivers with an extraordinary gift” (Ackerman, 1980:12). Implicit in this statement is the belief that the operations architects employ to compose their designs are the product of a creative faculty that is beyond the reach of rational discourse and thereby cannot be subjected to logical investigation. This view is detrimental to the advancement of knowledge about architectural composition and adversely affects both practice and education in architecture. More specifically, it prevents the architectural community from acquiring a more refined conception of how architects derive their designs. In contrast to this view, this study demonstrates that architectural form-making is amenable to logical analysis. Specifically, this is done through a theoretical and computational framework that describes and explains the tasks involved in the making of orthogonal building plans and sections. In addition to illustrating the susceptibility of architectural form-making to logical analysis, the frameworks proposed in this study overcome the limitations of previously established theories that deal with architectural form-making. These can be divided into two categories: normative and positive theories. Normative theories include architectural treatises and manifestos. A major limitation of normative theories is that they have limited explanatory power. Their concern is with promoting a specific aesthetic ideology and prescribing rules that can be used to derive compositions that conform to it. Therefore, they cannot be used to explain form-making in general. Positive frameworks, such as shape grammars, rely on rules to describe derivation and analysis processes. Nevertheless, they do not provide a comprehensive description of the tasks involved in architectural form-making. This causes the relation between the rules and compositional tasks to be ambiguous. It also adversely affects the ability of these frameworks to provide architects with a complete understanding of the role of compositional rules in derivation or analysis processes.
series DDSS
type normal paper
last changed 2010/05/16 09:11

_id caadria2006_589
id caadria2006_589
authors Yeh, Yu-Nan
year 2006
title Freedom of Form: The Oriental Calligraphy and Aesthetics in Digital Fabrication
source CAADRIA 2006 [Proceedings of the 11th International Conference on Computer Aided Architectural Design Research in Asia] Kumamoto (Japan) March 30th - April 2nd 2006, 589-591
doi https://doi.org/10.52842/conf.caadria.2006.x.v6f
summary Computer-Aided Design (CAD) / Computer-Aided Manufacturing (CAM) related research has been discussed since the 1960s (Ryder et al., 2002; Burry, 2002). Indeed, both Frank O. Gehry and Toyo Ito utilized CAD/CAM to create rich architectural form and in so doing gave birth to a new type of aesthetics. The visualization and liberation of form is the single most important characteristic attributable to the use of computers as a design tool. By the 1980s, laser cutting and rapid prototyping techniques developed from CAM had become important new digital tools for researchers and designers discussing the development of form in architecture.
series CAADRIA
email
last changed 2022/06/07 07:49

_id cf2011_p170
id cf2011_p170
authors Barros, Mário; Duarte, José and Chaparro, Bruno
year 2011
title Thonet Chairs Design Grammar: a Step Towards the Mass Customization of Furniture
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 181-200.
summary The paper presents the first phase of research currently under development that is focused on encoding the Thonet design style into a generative design system using a shape grammar. The ultimate goal of the work is the design and production of customizable chairs using computer-assisted tools, establishing a feasible practical model of the paradigm of mass customization (Davis, 1987). The current phase encompasses three steps: (1) codification of the rules describing the Thonet design style into a shape grammar; (2) implementation of the grammar in a computer tool as a parametric design; and (3) rapid prototyping of customized chair designs within the style. Future phases will address the transformation of Thonet's grammar to create a new style and the production of real chair designs in this style using computer-aided manufacturing. Beginning in the 1830s, Austrian furniture designer Michael Thonet began experimenting with steam-bending beech in order to produce lighter furniture using fewer components, compared with the standards of the time. Using the same construction principles and standardized elements, Thonet produced different chair designs with a strong formal resemblance, creating his own design language. The kit-assembly principle, the reduced number of elements, industrial efficiency, and the modular approach to furniture design as a system of interchangeable elements that may be used to assemble different objects enabled him to become a pioneer of mass production (Noblet, 1993). The most paradigmatic example of this vision of furniture design is chair No. 14, produced in 1858 and composed of six structural elements. Due to its simplicity, lightness, and ability to be stored in flat and cubic packaging for individual or collective transportation, respectively, No. 14 became one of the most sold chairs worldwide, and it is still in production today. Iconic examples of mass production are formally studied to provide insights for mass customization studies. The study of the shape grammar for the generation of Thonet chairs aimed to ensure rules that would make possible the reproduction of the selected corpus, as well as allow for the generation of new chairs within the developed grammar. Due to the wide variety of Thonet chairs, six chairs were randomly chosen to infer the grammar, which was then fine-tuned by checking whether it could account for the generation of other designs not in the original corpus. Shape grammars (Stiny and Gips, 1972) have been used with success both in the analysis and in the synthesis of designs at different scales, from product design to building and urban design. In particular, the use of shape grammars has been efficient in the characterization of objects' styles and in the generation of new designs within the analyzed style, and it makes design rules amenable to computer implementation (Duarte, 2005). The literature includes one other example of a grammar for chair design, by Knight (1980). In the second step of the current research phase, the outlined shape grammar was implemented in a computer program to assist the designer in conceiving and producing customized chairs using a digital design process. This implementation was developed in Catia by converting the grammar into an equivalent parametric design model. In the third step, physical models of existing and new chair designs were produced using rapid prototyping.
The paper describes the grammar, its computer implementation as a parametric model, and the rapid prototyping of physical models. The generative potential of the proposed digital process is discussed in the context of enabling the mass customization of furniture. The role of the furniture designer in the new paradigm and ideas for further work are also discussed.
keywords Thonet; furniture design; chair; digital design process; parametric design; shape grammar
series CAAD Futures
email
last changed 2012/02/11 19:21

_id e825
authors Baybars, Ilker and Eastman, Charles M.
year 1980
title Enumerating Architectural Arrangements by Generating Their Underlying Graphs
source Environment and Planning B. 1980. vol. 7: pp. 289-310 : ill. includes bibliography. -- See also 'Enumerating Architectural Arrangements: Comment on a Recent Paper by Baybars and Eastman' by C.F. Earl
summary One mathematical correspondence to the partitioning of the plane is a Weighted Plane Graph (WPG). This paper first focuses on the systematic generation of WPGs, in a fashion similar to crystal growth. During this process the WPGs are represented by adjacency matrices. The authors thus present a method for embedding a WPG in the plane, given its adjacency matrix. These graphs can then be mapped into floor plans. The common practice here is the use of the `geometric dual' of a WPG. The authors propose instead the use of the `pseudogeometric dual' of a WPG directly, to translate (part of) a design brief into alternative spatial layouts. Also discussed is the ability to create courtyards and/or circulation spaces given a specific WPG, without increasing the size of the problem. (A minimal encoding of such a graph is sketched after this record.)
keywords enumeration, architecture, floor plans, graphs, design process, automation, algorithms, space allocation, CAD
series CADline
email
last changed 2003/05/17 10:15
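
The adjacency-matrix representation mentioned in the abstract is easy to make concrete. The sketch below (Python, with invented room names and areas standing in for the vertex weights) shows only the data structure; the paper's generation and embedding machinery is not reproduced here.

    # A weighted plane graph for a floor plan: vertices are rooms, the
    # 0/1 matrix records required adjacencies, and the weights (target
    # areas) ride along with each vertex.  All contents are hypothetical.
    rooms = ["hall", "kitchen", "living", "bed"]
    area = {"hall": 6.0, "kitchen": 9.0, "living": 20.0, "bed": 12.0}

    A = [
        [0, 1, 1, 1],   # hall touches every other room
        [1, 0, 1, 0],   # kitchen touches hall and living
        [1, 1, 0, 0],   # living touches hall and kitchen
        [1, 0, 0, 0],   # bed touches only the hall
    ]

    for i, r in enumerate(rooms):
        neighbours = [rooms[j] for j, e in enumerate(A[i]) if e]
        print(f"{r} ({area[r]} m2) adjacent to: {', '.join(neighbours)}")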

_id c361
authors Logan, Brian S.
year 1986
title Representing the Structure of Design Problems
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 158-170
summary In recent years several experimental CAD systems have emerged which focus specifically on the structure of design problems rather than on solution generation or appraisal (Sussman and Steele, 1980; McCallum, 1982). However, the development of these systems has been hampered by the lack of an adequate theoretical basis. There is little or no agreement as to what the statements comprising these models actually mean, or on the types of operations that should be provided. This chapter describes an attempt to develop a semantically adequate basis for a model of the structure of design problems and presents a representation of this model in formal logic.
series CAAD Futures
last changed 1999/04/03 17:58

_id e3c1
authors Rasdorf, William J. and Fenves, Stephen J.
year 1980
title Design Specification Representation and Analysis
source Computing in Civil Engineering Conference Proceedings (2nd : 1980 : Baltimore, MD.). American Society of Civil Engineers, pp. 102-111. CADLINE has abstract only
summary The conventional structures of decision tables, information networks, and outlines define the current methodology for the representation and use of design specifications. This paper explores the relationships at the interfaces between these three representational tools. New analysis strategies are presented that provide flexibility at the lower boundary of the information network by converting decision tables to subnetworks within the information network and by compressing multiple subtables into larger tables representing higher-level nodes in the network. Both generation and compression of the information network provide flexibility in organizing a specification. The ability to both generate and compress nodes and subnodes establishes a means of representing all the relations among the data items of a specification and gives more direct control over the level of detail of the information network. As a direct consequence of the ability to generate new nodes, new classifiers can be progressively attached to the nodes of the subnetwork, as well as to the nodes in the information network. As a result, specification requirements are more logically identified by the outline, and requirements and data items which were previously hidden within decision-table conditions and actions are now directly accessible from the outline. Conversely, items inconsequential to the outline can be compressed into nodes and removed from the outline. A computer program that implements these network transformations is presented; it accurately represents the interface between the network and the decision table. (A schematic decision-table expansion is sketched after this record.)
keywords civil engineering, decision making, representation, analysis
series CADline
last changed 2003/06/02 13:58
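
One plausible reading of "converting decision tables to subnetworks" can be sketched with plain dictionaries. The table contents and node-naming scheme below are invented for illustration, and the paper's compression strategies are not shown.

    # Hypothetical design-specification decision table: rules map a tuple
    # of condition truth values to the actions that must be taken.
    decision_table = {
        "conditions": ["span > 6m", "load_class == heavy"],
        "actions": ["use deep beam", "check deflection"],
        "rules": {(True, True): [0, 1], (True, False): [1], (False, True): [1]},
    }

    network = {}  # node name -> list of successor node names

    def expand(table, prefix):
        """Flatten a decision table into condition/action network nodes."""
        conds = [f"{prefix}/cond{k}: {c}"
                 for k, c in enumerate(table["conditions"])]
        for c in conds:
            network[c] = []
        for outcome, action_ids in table["rules"].items():
            sources = [conds[k] for k, truth in enumerate(outcome) if truth]
            for a in action_ids:
                node = f"{prefix}/act{a}: {table['actions'][a]}"
                network.setdefault(node, [])
                for s in sources:
                    if node not in network[s]:
                        network[s].append(node)

    expand(decision_table, "beam_spec")
    for node, successors in network.items():
        print(node, "->", successors)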

_id 40ad
authors Yessios, Chris I.
year 1980
title Generation and Visualization of Architectural Forms with Tekton
source 1980? pp. 68-79 : ill. includes bibliography
summary Tekton is an interactive computer-aided architectural design software system. It incorporates graphic input and 3-D modeling capabilities; a potent notational system based on an algebra-like linguistic model for the representation of transformations and spatial compositions; hidden-face elimination; and shadowing and texture rendering. The latter feature has been specifically designed for the visualization of architectural forms and materials, through renderings of a free-hand drawing quality derived from generative semi-random models included in the system. The Tekton language allows for unlimited interactive editing and modification of previously generated compositions.
keywords CAD, architecture, modeling, computer graphics, rendering
series CADline
last changed 2003/06/02 13:58

_id b190
authors Goldberg, Adele and Robson, David
year 1983
title Smalltalk-80: The language and its implementation
source New York, NY: Addison-Wesley
summary Smalltalk-80 is the classic standard Smalltalk language as described in Smalltalk-80: The Language and Its Implementation by Goldberg and Robson, commonly called "the Blue Book". Squeak implements the dialect of Smalltalk described in this book, but has a different implementation. Smalltalk is a general-purpose, high-level programming language. It was the first original "pure" object-oriented language, though not the first to use the object-oriented concept, which is credited to Simula 67. The explosive growth of object-oriented programming (OOP) technologies began in the early 1980s with Smalltalk's introduction. Behind it was the idea that the individual human user should be the most important component of any computing system, and that programming should be a natural extension of thinking, a dynamic and evolutionary process consistent with the model of human learning activity. In Smalltalk, these ideas are embodied in a framework for human-computer communication. In a sense, Smalltalk is yet another language like C and Pascal, and programs can be written in Smalltalk that have the look and feel of such conventional languages. The difference lies in the amount of code that can be reduced, in less cryptic syntax, and in code that is easier to handle for application maintenance and enhancement. But Smalltalk's most powerful feature is easy code reuse: it makes reuse of programs, routines, and subroutines (methods) far easier. Though procedural languages allow reuse too, it is harder to do and much easier to cheat. It is no surprise that Smalltalk is relatively easy to learn, mainly due to its simple syntax and semantics and its few concepts. Objects, classes, messages, and methods form the basis of programming in Smalltalk. The notion of a human-computer interface also results in Smalltalk promoting the development of safer systems: errors in Smalltalk may be viewed as objects telling users that confusion exists as to how to perform a desired function.
series other
last changed 2003/04/23 15:14

_id c3f4
authors Joy, William
year 1980
title An Introduction to Display Editing with VI
source September, 1980. 30 p
summary VI (Visual) is a display-oriented interactive text editor. When using VI, the screen of the terminal acts as a window into the file being edited; changes made to the file are reflected in what is seen. Using VI the user can insert new text anywhere in the file quite easily. Most of the commands to VI move the cursor around in the file. There are commands to move the cursor forward and backward in units of characters, words, sentences and paragraphs. A small set of operators, like d for delete and c for change, are combined with the motion commands to form operations such as delete word or change paragraph, in a simple and natural way. This regularity and the mnemonic assignment of commands to keys make the editor command set easy to remember and to use. VI works on a large number of display terminals, and new terminals are easily driven after editing a terminal description file. While it is advantageous to have an intelligent terminal which can locally insert and delete lines and characters from the display, the editor functions quite well on dumb terminals over slow phone lines. The editor makes allowances for the low bandwidth in these situations and uses smaller window sizes and different display-updating algorithms to make the best use of the limited speed available. It is also possible to use the command set of VI on hardcopy terminals, storage tubes and 'glass ttys' using a one-line editing window; thus VI's command set is available on all terminals. The full command set of the more traditional, line-oriented editor ED is available within VI; it is quite simple to switch between the two modes of editing. (A few representative operator-motion combinations are listed after this record.)
keywords UNIX, display, word processing, software
series CADline
last changed 1999/02/12 15:08
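
The operator-plus-motion composition described above is easy to illustrate with stock vi bindings (standard vi behaviour, not examples taken from the paper):

    dw     delete from the cursor to the start of the next word
    d)     delete to the end of the current sentence
    d}     delete to the end of the current paragraph
    cw     change the current word (delete it and enter insert mode)
    3dd    delete three whole lines (counts compose with operators too)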

_id c5c4
authors Samet, Hanan
year 1980
title Region Representation : Quadtrees from Boundary Codes
source Communications of the ACM. March, 1980. vol. 23: pp. 163-170 : some ill. includes bibliography
summary An algorithm is presented for constructing a quadtree for a region, given its boundary in the form of a chain code. Analysis of the algorithm reveals that its execution time is proportional to the product of the perimeter and the log of the diameter of the region. (For contrast, a naive array-based construction is sketched after this record.)
keywords representation, data structures, quadtree, image processing
series CADline
last changed 1999/02/12 15:09
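
For contrast with the paper's chain-code approach, the baseline construction subdivides a rasterized binary array recursively. A minimal Python sketch (array contents invented; the grid is assumed square with a power-of-two side):

    # Naive array-to-quadtree recursion -- the baseline for comparison,
    # not Samet's algorithm, which works from the boundary chain code.
    def build(grid, x, y, size):
        """Return 'B'/'W' for uniform blocks, else ('G', four subtrees)."""
        first = grid[y][x]
        if all(grid[y + dy][x + dx] == first
               for dy in range(size) for dx in range(size)):
            return "B" if first else "W"       # uniform black/white leaf
        h = size // 2                          # split into four quadrants
        return ("G",
                build(grid, x, y, h),     build(grid, x + h, y, h),
                build(grid, x, y + h, h), build(grid, x + h, y + h, h))

    region = [[0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 1],
              [1, 1, 1, 0]]
    print(build(region, 0, 0, 4))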

_id 6f57
authors Searle, John R.
year 1980
title Minds, Brains, and Programs
source The Behavioral and Brain Sciences. Cambridge University Press, 1980. vol. 3: pp. 417-457. includes bibliography
summary This article can be viewed as an attempt to explore the consequences of two propositions: (1) Intentionality in human beings (and animals) is a product of causal features of the brain. The author assumes this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does so by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. 'Could a machine think?' On the argument advanced here, only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.
series CADline
last changed 2003/06/02 10:24

_id c46e
authors Fuchs, H., Kedem, Z.M. and Naylor, B.F.
year 1980
title On Visible Surface Generation by a Priori Tree Structures
source SIGGRAPH '80 Conference Proceedings. July, 1980. vol. 14 ; no. 3: pp. 124-133 : ill. includes bibliography
summary This paper describes a new algorithm for solving the hidden surface (or line) problem, to more rapidly generate realistic images of 3-D scenes composed of polygons, and presents the development of theoretical foundations in the area as well as additional related algorithms. In many applications the environment to be displayed consists of polygons, many of whose relative geometric relations are static. The authors attempt to capitalize on this by preprocessing the environment's database so as to decrease the run-time computations required to generate a scene. This preprocessing is based on generating a 'binary space partitioning' tree whose in-order traversal at run-time produces a visibility-priority ordering, dependent upon the viewing position, on (parts of) the polygons; this ordering can then be used to solve the hidden surface problem easily. In the application where the entire environment is static, with only the viewing position changing, as is common in simulation, the results presented are sufficient to solve the hidden surface problem completely. (The traversal idea is sketched after this record.)
keywords hidden lines, hidden surfaces, algorithms, computer graphics, polygons
series CADline
last changed 2003/06/02 14:42
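
The run-time half of the idea, traversing a prebuilt BSP tree keyed on the eye position, can be sketched in two dimensions. Everything below is a toy in Python: tree construction, the splitting of straddling polygons, and true 3-D planes are all omitted.

    # Back-to-front ("painter's") traversal of a BSP tree, 2-D sketch.
    class Node:
        def __init__(self, line, segments, front=None, back=None):
            self.line = line          # splitting line (a, b, c): ax+by+c=0
            self.segments = segments  # scene segments lying on that line
            self.front, self.back = front, back

    def side(line, p):
        a, b, c = line
        return a * p[0] + b * p[1] + c   # >0 front half-plane, <0 back

    def paint(node, eye, out):
        """Append segments to `out` in back-to-front order for `eye`."""
        if node is None:
            return
        if side(node.line, eye) >= 0:    # eye in front: far (back) first
            paint(node.back, eye, out)
            out.extend(node.segments)
            paint(node.front, eye, out)
        else:                            # eye behind: far (front) first
            paint(node.front, eye, out)
            out.extend(node.segments)
            paint(node.back, eye, out)

    # hypothetical scene: a splitter on x=0 with one wall on either side
    tree = Node((1, 0, 0), ["wall on x=0"],
                front=Node((0, 1, -1), ["wall on y=1, x>0 side"]),
                back=Node((0, 1, 1), ["wall on y=-1, x<0 side"]))
    order = []
    paint(tree, (2.0, 0.0), order)
    print(order)   # farthest first; later entries may overdraw earlier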

_id 0830
authors Ball, A. A.
year 1980
title How to Make the Bicubic Patch Work Using Reparametrisation
source 1980 ? 11 p. includes bibliography
summary This paper comprises a series of examples in numerical surface definition, loosely strung together, to show the practical limitations of the bicubic patch and how they can be overcome by reparametrisation. The concept of reparametrisation used here is more general than that common in computer-aided geometric design, insofar as the reparametrisation is modelled in addition to the basic parametric equation. (The bicubic form and the substitution involved are written out after this record.)
keywords CAD, computational geometry, curved surfaces, parametrization
series CADline
last changed 2003/06/02 13:58
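
For context, the bicubic patch under discussion has the algebraic form

    \mathbf{P}(u, v) = \sum_{i=0}^{3} \sum_{j=0}^{3} \mathbf{a}_{ij}\, u^{i} v^{j},
    \qquad 0 \le u, v \le 1,

and a reparametrisation substitutes u = \phi(s), v = \psi(t) for monotone functions with \phi(0) = \psi(0) = 0 and \phi(1) = \psi(1) = 1: the carrier surface is unchanged, but the distribution of parameter lines, and with it anything computed from parameter values, changes. This is the general notion only; Ball's specific modelled reparametrisations are developed in the paper.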

_id 2fdd
authors Barsky, Brian A. and Thomas, Spencer W.
year 1980
title Transpline Curve Representation System
source April, 1980. 19 p. : ill. includes bibliography
summary An interactive curve representation system has been developed, based on the concept of transforming among several parametric spline curve formulations. The available formulations are the interpolatory spline, uniform B-spline, spline under tension, and NU-spline. The system implementation is described in the context of a sample design session. (One of the listed formulations is written out after this record.)
keywords computational geometry, curves, representation, splines
series CADline
last changed 2003/06/02 13:58
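
Of the formulations listed in the abstract, the uniform cubic B-spline is the most compact to state as background (this is the standard basis, not the paper's transformation machinery): segment i of the curve defined by control points \mathbf{P}_k is

    \mathbf{Q}_i(t) = \frac{1}{6} \Big[ (1 - t)^3\, \mathbf{P}_{i-1}
        + (3t^3 - 6t^2 + 4)\, \mathbf{P}_i
        + (-3t^3 + 3t^2 + 3t + 1)\, \mathbf{P}_{i+1}
        + t^3\, \mathbf{P}_{i+2} \Big],
    \qquad t \in [0, 1].

Transforming among formulations then amounts to recomputing the defining handles so that another formulation reproduces, or closely approximates, the same curve.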
