CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 96

_id a6f1
authors Bridges, A.H.
year 1986
title Any Progress in Systematic Design?
source Computer-Aided Architectural Design Futures [CAAD Futures Conference Proceedings / ISBN 0-408-05300-3] Delft (The Netherlands), 18-19 September 1985, pp. 5-15
summary In order to discuss this question it is necessary to reflect awhile on design methods in general. The usual categorization discusses 'generations' of design methods, but Levy (1981) proposes an alternative approach. He identifies five paradigm shifts during the course of the twentieth century which have influenced design methods debate. The first paradigm shift was achieved by 1920, when concern with industrial arts could be seen to have replaced concern with craftsmanship. The second shift, occurring in the early 1930s, resulted in the conception of a design profession. The third happened in the 1950s, when the design methods debate emerged; the fourth took place around 1970 and saw the establishment of 'design research'. Now, in the 1980s, we are going through the fifth paradigm shift, associated with the adoption of a holistic approach to design theory and with the emergence of the concept of design ideology. A major point in Levy's paper was the observation that most of these paradigm shifts were associated with radical social reforms or political upheavals. For instance, we may associate concern about public participation with the 1970s shift and the possible use (or misuse) of knowledge, information and power with the 1980s shift. What has emerged, however, from the work of colleagues engaged since the 1970s in attempting to underpin the practice of design with a coherent body of design theory is increasing evidence of the fundamental nature of a person's engagement with the design activity. This includes evidence of the existence of two distinctive modes of thought, one of which can be described as cognitive modelling and the other as rational thinking. Cognitive modelling is imagining, seeing in the mind's eye. Rational thinking is linguistic thinking, engaging in a form of internal debate. Cognitive modelling is externalized through action, and through the construction of external representations, especially drawings. Rational thinking is externalized through verbal language and, more formally, through mathematical and scientific notations. Cognitive modelling is analogic, presentational, holistic, integrative and based upon pattern recognition and pattern manipulation. Rational thinking is digital, sequential, analytical, explicatory and based upon categorization and logical inference. There is some relationship between the evidence for two distinctive modes of thought and the evidence of specialization in cerebral hemispheres (Cross, 1984). Design methods have tended to focus upon the rational aspects of design and have, therefore, neglected the cognitive aspects. By recognizing that there are peculiar 'designerly' ways of thinking, combining both types of thought process to perceive, construct and comprehend design representations mentally and then transform them into an external manifestation, current work in design theory promises at last to have some relevance to design practice.
series CAAD Futures
email
last changed 2003/11/21 15:16

_id ga9928
id ga9928
authors Goulthorpe
year 1999
title Hyposurface: from Autoplastic to Alloplastic Space
source International Conference on Generative Art
summary By way of immediate qualification to an essay which attempts to orient current technical developments in relation to a series of dECOi projects, I would suggest that the greatest liberation offered by new technology in architecture is not its formal potential as much as the patterns of creativity and practice it engenders. For increasingly in the projects presented here dECOi operates as an extended network of technical expertise: Mark Burry and his research team at Deakin University in Australia as architects and parametric/ programmatic designers; Peter Wood in New Zealand as programmer; Alex Scott in London as mathematician; Chris Glasow in London as systems engineer; and the engineers (structural/services) of David Glover’s team at Ove Arup in London. This reflects how we’re working in a new technical environment - a new form of practice, in a sense - a loose and light network which deploys highly specialist technical skill to suit a particular project. By way of a second disclaimer, I would suggest that the rapid technological development we're witnessing, which we struggle to comprehend given the sheer pace of change that overwhelms us, is somehow of a different order than previous technological revolutions. For the shift from an industrial society to a society of mass communication, which is the essential transformation taking place in the present, seems to be a subliminal and almost inexpressive technological transition - is formless, in a sense - which begs the question of how it may be expressed in form. If one holds that architecture is somehow the crystallization of cultural change in concrete form, one suspects that in the present there is no simple physical equivalent for the burst of communication technologies that colour contemporary life. But I think that one might effectively raise a series of questions apropos technology by briefly looking at 3 or 4 of our current projects, and which suggest a range of possibilities fostered by new technology. By way of a third doubt, we might qualify in advance the apparent optimism of architects for CAD technology by thinking back to Thomas More and his island ‘Utopia’, which marks in some way the advent of Modern rationalism. This was, if not quite a technological utopia, certainly a metaphysical one, More’s vision typically deductive, prognostic, causal. But which by the time of Francis Bacon’s New Atlantis is a technological utopia availing itself of all the possibilities put at humanity’s disposal by the known machines of the time. There’s a sort of implicit sanction within these two accounts which lies in their nature as reality optimized by rational DESIGN as if the very ethos of design were sponsored by Modern rationalist thought and its utopian leanings. The faintly euphoric ‘technological’ discourse of architecture at present - a sort of Neue Bauhaus - then seems curiously misplaced historically given the 20th century’s general anti-, dis-, or counter-utopian discourse. But even this seems to have finally run its course, dissolving into the electronic heterotopia of the present with its diverse opportunities of irony and distortion (as it’s been said) as a liberating potential.1 This would seem to mark the dissolution of design ethos into non-causal process(ing), which begs the question of ‘design’ itself: who 'designs' anymore? Or rather, has 'design' not become uncoupled from its rational, deterministic, tradition? 
The utopianism that attaches to technological discourse in the present seems blind to the counter-finality of technology's own accomplishments - that transparency has, as it were, by its own more and more perfect fulfillment, failed by its own success. For what we seem to have inherited is not the warped utopia depicted in countless visions of a singular and tyrannical technology (such as that in Orwell's 1984), but a rich and diverse heterotopia which has opened the possibility of countless channels of local dialect competing directly with the channels of power. Undoubtedly such multiplicitous and global connectivity has sent creative thought in multiple directions…
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id 653f
authors Hedelman, Harold
year 1984
title A Data Flow Approach to Procedural Modeling
source IEEE Computer Graphics and Applications January, 1984. vol. 4: pp. 16-26 : ill. (some col.). includes bibliography.
summary Computer graphics tasks generally involve either modeling or viewing. Modeling combines primitive building blocks (polygons, patches, etc.) into data structures that represent entire objects and scenes. To visualize a modeled object, its data structure is input to appropriate viewing routines. While a great deal has been done on modeling and viewing with geometric primitives, little has been published on the use of procedural primitives. A procedural model is a step-by-step guide for constructing a representation of an object or process, i.e., a program. It is also a function, a 'black box' with a set of inputs and outputs. Two questions are especially pertinent to the work presented in this article: first, what are the advantages of both data flow methods and procedural modeling? Second, how can such models be used in composition?
keywords computer graphics, modeling, information, management
series CADline
last changed 1999/02/12 15:08
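The entry above treats a procedural model as a function, a 'black box' with inputs and outputs, composed with other such boxes in a data-flow fashion. A minimal, purely illustrative Python sketch of that idea follows; the primitives (grid, jitter, extrude) and their parameters are invented here and do not come from the article.

```python
# Each procedural primitive is a function from inputs to outputs; a model is a
# data-flow composition of such primitives rather than a fixed geometric data structure.
import random

def grid(nx, ny, spacing):
    """Procedural primitive: a rectangular grid of 2D points."""
    return [(i * spacing, j * spacing) for i in range(nx) for j in range(ny)]

def jitter(points, amount, seed=0):
    """Procedural primitive: displace each point pseudo-randomly."""
    rnd = random.Random(seed)
    return [(x + rnd.uniform(-amount, amount), y + rnd.uniform(-amount, amount))
            for x, y in points]

def extrude(points, height):
    """Procedural primitive: lift each 2D point into a vertical 3D segment."""
    return [((x, y, 0.0), (x, y, height)) for x, y in points]

# Data flow: the output of one box feeds the input of the next.
columns = extrude(jitter(grid(4, 4, 2.0), amount=0.3), height=3.0)
print(len(columns))  # 16
```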

_id caadria2004_k-1
id caadria2004_k-1
authors Kalay, Yehuda E.
year 2004
title CONTEXTUALIZATION AND EMBODIMENT IN CYBERSPACE
doi https://doi.org/10.52842/conf.caadria.2004.005
source CAADRIA 2004 [Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia / ISBN 89-7141-648-3] Seoul Korea 28-30 April 2004, pp. 5-14
summary The introduction of VRML (Virtual Reality Modeling Language) in 1994, and other similar web-enabled dynamic modeling software (such as SGI’s Open Inventor and WebSpace), have created a rush to develop on-line 3D virtual environments, with purposes ranging from art, to entertainment, to shopping, to culture and education. Some developers took their cues from the science fiction literature of Gibson (1984), Stephenson (1992), and others. Many were web-extensions to single-player video games. But most were created as a direct extension to our new-found ability to digitally model 3D spaces and to endow them with interactive control and pseudo-inhabitation. Surprisingly, this technologically-driven stampede paid little attention to the core principles of place-making and presence, derived from architecture and cognitive science, respectively: two principles that could and should inform the essence of the virtual place experience and help steer its development. Why are the principles of place-making and presence important for the development of virtual environments? Why not simply be content with our ability to create realistic-looking 3D worlds that we can visit remotely? What could we possibly learn about making these worlds better, had we understood the essence of place and presence? To answer these questions we cannot look at place-making (both physical and virtual) from a 3D space-making point of view alone, because places are not an end unto themselves. Rather, places must be considered a locus of contextualization and embodiment that ground human activities and give them meaning. In doing so, places acquire a meaning of their own, which facilitates, improves, and enriches many aspects of our lives. They provide us with a means to interpret the activities of others and to direct our own actions. Such meaning is comprised of the social and cultural conceptions and behaviors imprinted on the environment by the presence and activities of its inhabitants, who, in turn, ‘read’ them through their own corporeal embodiment of the same environment. This transactional relationship between the physical aspects of an environment, its social/cultural context, and our own embodiment of it, combine to create what is known as a sense of place: the psychological, physical, social, and cultural framework that helps us interpret the world around us, and directs our own behavior in it. In turn, it is our own (as well as others’) presence in that environment that gives it meaning, and shapes its social/cultural character. By understanding the essence of place-ness in general, and in cyberspace in particular, we can create virtual places that can better support Internet-based activities, and make them equal to, or in some cases even better than, their physical counterparts. One of the activities that stands to benefit most from understanding the concept of cyber-places is learning—an interpersonal activity that requires the co-presence of others (a teacher and/or fellow learners), who can point out the difference between what matters and what does not, and produce an emotional involvement that helps students learn. Thus, while many administrators and educators rush to develop web-based remote learning sites, to leverage the economic advantages of one-to-many learning modalities, these sites deprive learners of the contextualization and embodiment inherent in brick-and-mortar learning institutions, and which are needed to support the activity of learning.
Can these qualities be achieved in virtual learning environments? If so, how? These are some of the questions this talk will try to answer by presenting a virtual place-making methodology and its experimental implementation, intended to create a sense of place through contextualization and embodiment in virtual learning environments.
series CAADRIA
type normal paper
last changed 2022/06/07 07:52

_id ceb1
authors Maver, T.
year 1984
title What is eCAADe?
doi https://doi.org/10.52842/conf.ecaade.1984.x.d0s
source The Third European Conference on CAD in the Education of Architecture [eCAADe Conference Proceedings] Helsinki (Finland) 20-22 September 1984.
summary The main interest of the organisation is to improve design teaching. Design remains the core of professional education, while computer science can support a better understanding of design methods. Computers should amplify human capabilities: just as engines allowed us to carry greater loads and radio and television enabled communication over larger distances, computers today should aid human intellectual activities, helping us gain a better insight into design methodology and investigate the design process. Design research should study more extensively how buildings behave, the integration and interaction of the different disciplines which contribute to the optimization of a design, and the design criteria. Computers could increase the possibility to satisfy building regulations, to access and update information, to model the design process and to understand how decisions affect building quality (functional and economical as well as formal aspects). More effort and money should be spent on this research. The organisation has been sponsored by the EEC to put CAAD (Computer Aided Architectural Design) educational material at the disposal of design teachers. The Helsinki conference is the third European meeting (after Delft 1982 and Brussels 1983) which concentrates on information and experience exchange in CAAD education and looks for common interests and collaboration. A specific joint study program works on typical audiovisual material and lecture notes, which will be updated according to teachers' needs. A request has been made to implement an integrated CAAD package. eCAADe aims to integrate computer approaches across country boundaries as well as across disciplinary boundaries, so as to reach a higher quality of design education.
series eCAADe
email
last changed 2022/06/07 07:50

_id ac8b
authors Mitchell, W.
year 1984
title CAD Technology, Its Effects on Practice and the Response of Education - an Overview
doi https://doi.org/10.52842/conf.ecaade.1984.x.b3c
source The Third European Conference on CAD in the Education of Architecture [eCAADe Conference Proceedings] Helsinki (Finland) 20-22 September 1984.
summary Related to the evolution of hardware there is also an evolution of CAD techniques. The very first CAD/CAM packages were developed on mainframes. They moved into practice when 16-bit minicomputers became available. The packages were mainly production drafting applications. The 32-bit super minicomputers give wider possibilities, but at the same time some software problems arise, namely the complexity of CAD databases and the development and maintenance cost of large programs. With VLSI the distribution of intelligence becomes possible and the enthusiasm for CAD increases, but the gap between available hardware and high-quality software remains wide. Concerning CAD teaching there are severe problems. First of all there are not enough really good designers who know CAD in such a way that they can teach it. Second there is a shortage of equipment and a financial problem. Thirdly there is the question of what students need to know about CAD, which is not clear at the moment. At the University of California, Los Angeles, the following five subjects are taught: Computer Support, Computer Literacy, Professional Practice Implications, Exploration of CAD as a Design Medium and Theoretical Foundations of CAD. To use computers as a medium it is necessary to understand architecture, its objects, its operators and its evaluation criteria. The last topic is considered at research level.
series eCAADe
email
more www.ecaade.org
last changed 2022/06/07 07:50

_id ddssar0031
id ddssar0031
authors Witt, Tom
year 2000
title Indecision in quest of design
source Timmermans, Harry (Ed.), Fifth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Nijkerk, the Netherlands)
summary Designers all start with a solution (Darke, 1984), with what is known (Rittel, 1969, 1970). Hans Menghol, Svein Gusrud and Peter Opvik did so with the chair in the 1970s. Not content with the knowledge of the chair, however, they walked backward to the ignorance of the question that has always elicited the solution of chair and asked themselves the improbable question, “What is a chair?” Their answer was the Balans chair. “Until the introduction of the Norwegian Balans (balance) chair, the multi-billion dollar international chair industry had been surprisingly homogeneous. This chair is the most radical of the twentieth century and probably since the invention of the chair-throne itself” (Cranz 1998). Design theorists have tried to understand in a measurable way what is not measurable: the way that designers think. Rather than attempt to analyze something that cannot be taken apart, I attempt to illuminate methods for generating new knowledge through ways of seeing connections that are not logical, and in fact are sometimes ironic. Among the possibilities discussed in this dialogue are the methodological power of language in the form of metaphor, the power of the imagination in mind experiments, the power of mythological story telling, and the power of immeasurable intangibles in the generation of the new knowledge needed to design.
series DDSS
last changed 2003/08/07 16:36

_id 2c1b
authors Woolf, Beverly and McDonald, David D.
year 1984
title Building a Computer Tutor : Design Issues
source IEEE Computer. September, 1984. vol. 17: pp. 61-73 : diagrams. includes bibliography
summary An effective tutor must deal with a fundamental problem of communication: to determine how messages are received and understood and to formulate appropriate answers. This means that a tutor, more than a speaker, must verify that both parties know what information has been covered, what is missing, and which communication might be erroneous. In this article the authors discuss how an understanding of a student can be constructed in an artificial intelligence program and how this understanding, coupled with a facility for language generation, can be used to build a flexible machine tutor.
keywords education, communication, information, learning, AI, systems
series CADline
last changed 2003/06/02 13:58

_id 0ecb
authors Waerum, Jens and Rüdiger Kristiansen, Bjarne
year 1989
title CAAD Education at the School of Architecture Copenhagen
doi https://doi.org/10.52842/conf.ecaade.1989.x.q8k
source CAAD: Education - Research and Practice [eCAADe Conference Proceedings / ISBN 87-982875-2-4] Aarhus (Denmark) 21-23 September 1989, pp. 4.5.1-4.5.9
summary The establishment of Datacentret (the Data Centre) in summer 1985 was preceded by 15 years of slow-moving, arduous work, from the early experiments in what was then the computing laboratory under the supervision of architect Per Jacobi, author of the Danish 3D drawing system MONSTER, until 1984, when a special committee was commissioned to draw up proposals for the introduction of teaching in computing at the Architects School. In spring 1985 the school administrators decided that a central computer workshop should be set up and, in cooperation with the school's institutes, placed jointly in charge of instructing teachers and students, carrying out research and development within the field of architecture and taking steps to work out a curriculum of supplementary training for practising architects. With the aid of a special grant, 12 PCs were successfully acquired in the 2 years that followed, as well as a screen projector and other peripherals.
series eCAADe
last changed 2022/06/07 07:50

_id 409c
authors Akin, Omer, Flemming, Ulrich and Woodbury, Robert F.
year 1984
title Development of Computer Systems for Use in Architectural Education
source 1984. ii, 47 p. includes bibliography
summary Computers have not been used in education in a way that fosters intellectual development of alternate approaches to design. Sufficient theory exists to use computing devices to support other potentially fruitful approaches to design. A proposal is made for the development of a computer system for architectural education which is built upon a particular model for design, that of rational decision making. Within the framework provided by the model, a series of courseware development projects are proposed which together with hardware acquisitions constitute a comprehensive computer system for architectural education
keywords architecture, education, design, decision making
series CADline
email
last changed 2003/06/02 13:58

_id d5c8
authors Angelo, C.V., Bueno, A.P., Ludvig, C., Reis, A.F. and Trezub, D.
year 1999
title Image and Shape: Two Distinct Approaches
source III Congreso Iberoamericano de Grafico Digital [SIGRADI Conference Proceedings] Montevideo (Uruguay) September 29th - October 1st 1999, pp. 410-415
summary This paper is the result of two research projects carried out in the district of Campeche, Florianópolis, by the Grupo PET/ARQ/UFSC/CAPES. Different aspects and conceptual approaches were used to study the spatial attributes of this district located in the Southern part of Santa Catarina Island. The readings and analysis of the two studies were based on graphic pictures built with the use of Corel 7.0 and AutoCAD R14. The first research – "Urban Development in the Island of Santa Catarina: Public Space Study" – examined the urban structures of Campeche based on the Space Syntax theory developed by Hillier and Hanson (1984) that relates form and social appropriation of public spaces. The second research – "Topoceptive Characterisation of Campeche: The Image of a Locality in Expansion in the Island of Santa Catarina" – based on the methodology developed by Kohlsdorf (1996) and also on the visual analysis proposed by Lynch (1960), identified characteristics of this locality with the specific goal of selecting attributes that contributed to the idea of the place held by its population. The paper consists of an initial exercise of linking these two methods in order to test the complementarity of their analytical tools. Exemplifying the analytical procedures undertaken in the two approaches, the readings done - global (of the locality as a whole) and partial (from parts of the settlement) - are presented and compared.
series SIGRADI
email
last changed 2016/03/10 09:47
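The entry above applies Hillier and Hanson's Space Syntax theory, whose basic quantities are graph measures computed on the network of public spaces. A hedged sketch of the simplest such measures, total and mean depth per space, is given below using NetworkX; the five-node axial graph is invented for illustration, and integration proper would require a further normalization (relative asymmetry) that is not shown.

```python
import networkx as nx

# Toy axial map: nodes are axial lines (streets or spaces), edges are intersections.
axial = nx.Graph()
axial.add_edges_from([("a", "b"), ("b", "c"), ("b", "d"), ("d", "e"), ("c", "e")])

n = axial.number_of_nodes()
for node in sorted(axial.nodes):
    # Topological depth to every other space = shortest path length in steps.
    depths = nx.single_source_shortest_path_length(axial, node)
    total_depth = sum(depths.values())
    mean_depth = total_depth / (n - 1)
    # Lower mean depth corresponds to a more "integrated" space in space syntax terms.
    print(node, total_depth, round(mean_depth, 2))
```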

_id 44b1
authors Balas, Egon
year 1984
title On the Facial Structure of Scheduling Polyhedra
source 49 p., 6 p. of appendix : ill. Pittsburgh, PA: Design Research Center, Carnegie Mellon Univ., December, 1984. includes bibliography
summary A well-known job shop scheduling problem can be formulated as follows. Given a graph G with node set N and with directed and undirected arcs, find an orientation of the undirected arcs that minimizes the length of a longest path in G. The author treats the problem as a disjunctive program, without recourse to integer variables, and gives a partial characterization of the scheduling polyhedron P(N), i.e., the convex hull of feasible schedules. In particular, he derives all the facet-inducing inequalities for the scheduling polyhedron P(K) defined on some clique with node set K, and gives a sufficient condition for such inequalities to also induce facets of P(N). One of the results is that any inequality that induces a facet of P(H) for some H ⊂ K also induces a facet of P(K). Another is a recursive formula for deriving a facet-inducing inequality with p positive coefficients from one with p-1 positive coefficients. The author also addresses the constraint identification problem, and gives a procedure for finding an inequality that cuts off a given solution to a subset of the constraints.
keywords polyhedra, graphs, optimization, convex hull
series CADline
last changed 1999/02/12 15:07
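The abstract above describes orienting the undirected (disjunctive) arcs of a graph so that the longest path is minimized. As a hedged reconstruction, not the paper's own notation, the standard disjunctive-programming statement of that job shop problem can be written as:

```latex
\begin{align*}
\min\;\; & C_{\max} \\
\text{s.t.}\;\; & t_j \ge t_i + p_i
  && \text{for each directed (conjunctive) arc } (i,j),\\
& t_j \ge t_i + p_i \;\;\vee\;\; t_i \ge t_j + p_j
  && \text{for each undirected (disjunctive) arc } \{i,j\},\\
& C_{\max} \ge t_i + p_i, \quad t_i \ge 0
  && \text{for each node } i \in N,
\end{align*}
```

where t_i is the start time and p_i the processing time of operation i. Picking one term of each disjunction is exactly the arc orientation the abstract refers to, and the resulting feasible start-time vectors are the schedules whose convex hull P(N) the paper studies.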

_id 4685
authors Barsky, Brian A.
year 1984
title A Description and Evaluation of Various 3-D Models
source IEEE Computer Graphics and Applications. January, 1984. vol. 4: pp. 38-52 : ill. Includes bibliography
summary The use of parametric curves and surfaces for object modeling in computer graphics is becoming increasingly popular. There is sometimes, however, a reluctance to use them because it seems that the added power they give is more than offset by the complexity of their formulations and their computations. The purpose of this article is to clarify their meanings and uses and show how much they have in common behind the diversity of their formulations. The author discusses the properties and benefits of using the parametric Hermite, Coons, Bezier, B-spline, and Beta-spline curve and surface formulations
keywords Hermite, Coons, curved surfaces, Bezier, curves, B-splines, computational geometry, computer graphics
series CADline
last changed 2003/06/02 10:24
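The survey above covers the Hermite, Coons, Bezier, B-spline and Beta-spline formulations, all of which blend control data through parametric basis functions. As a small illustration of that shared idea, here is a cubic Bezier evaluator using de Casteljau's construction; the choice of the Bezier form and all names are mine, not the article's.

```python
def decasteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1].

    control_points: list of (x, y) tuples. De Casteljau's algorithm
    repeatedly interpolates adjacent points until one point remains.
    """
    points = [tuple(p) for p in control_points]
    while len(points) > 1:
        points = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(points, points[1:])
        ]
    return points[0]

# A cubic Bezier interpolates its first and last control points and is
# tangent to the control polygon at both ends.
cubic = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
print(decasteljau(cubic, 0.0))   # (0.0, 0.0)
print(decasteljau(cubic, 0.5))   # point at the middle of the parameter range
print(decasteljau(cubic, 1.0))   # (4.0, 0.0)
```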

_id c9c1
authors Basili, Victor R. and Perricone, Barry T.
year 1984
title Software Errors and Complexity : An Empirical Investigation
source communications of the ACM. January, 1984. vol. 27: pp. 42-52 : ill. includes bibliography
summary The relationships between the frequency and distribution of errors during software development, the maintenance of the developed software, and the influence of a variety of environmental factors on software development were analyzed. These factors include the complexity of the software, the developer's experience with the application, and the reuse of existing design and code. Such relationships can not only provide an insight into the characteristics of computer software development and the effects that the environment can have on the product, but also improve its reliability and quality. The study is based on data derived from a medium-scale software development project.
keywords software, engineering, programming, reliability
series CADline
last changed 2003/06/02 13:58

_id ecaadesigradi2019_449
id ecaadesigradi2019_449
authors Becerra Santacruz, Axel
year 2019
title The Architecture of ScarCity Game - The craft and the digital as an alternative design process
doi https://doi.org/10.52842/conf.ecaade.2019.3.045
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 3, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 45-52
summary The Architecture of ScarCity Game is a board game used as a pedagogical tool that challenges architecture students by involving them in a series of experimental design sessions to understand the design process of scarcity and the actual relation between the craft and the digital. This means "pragmatic delivery processes and material constraints, where the exchange between the artisan of handmade, representing local skills and technology of the digitally conceived is explored" (Huang 2013). The game focuses on understanding the different variables of the crafted design process of traditional communities under conditions of scarcity (Michel and Bevan 1992). This requires first analyzing the spatial environmental model of interaction, available human and natural resources, and the dynamic relationship of these variables in a digital era. In the first stage (Pre-Agency), the game sets out the concept of the craft by limiting students' design exploration to a minimum possible perspective, developing locally available resources and techniques. The key elements of the design process of traditional knowledge communities have to be identified (Preez 1984). In other words, this stage is driven by limited resources + chance + contingency. In the second stage (Post-Agency), students, taking the architect's role within these communities, have to speculate and explore the interface between the craft (local knowledge and low technological tools) and the digital, represented by computational data, newly available technologies and construction. This means the introduction of strategy + opportunity + chance as part of the design process. In this sense, the game has a life beyond its mechanics. This other life challenges the participants to exploit the possibilities of breaking the actual boundaries of design. The result is a tool to challenge conventional methods of teaching and learning that control a prescribed design process. It confronts the rules that professionals in this field take for granted. The game simulates a 'fake' reality by exploring surveyed information in different ways. As a result, participants do not have anything 'real' to lose. Instead, they have all the freedom to innovate and be creative.
keywords Global south, scarcity, low tech, digital-craft, design process and innovation by challenge.
series eCAADeSIGraDi
email
last changed 2022/06/07 07:54

_id 6050
authors Bentley, Jon L.
year 1984
title Algorithm Design Techniques -- Programming Pearls
source communications of the ACM. September, 1984. vol. 27: pp. 865-871 : ill
summary The problem arose in one-dimensional pattern recognition: the input is a vector X of N real numbers; the output is the maximum sum found in any contiguous subvector of the input. The problem becomes nontrivial when some of the numbers are negative. This column is built around that problem, with an emphasis on the algorithms that solve it and the techniques used to design them.
keywords techniques, programming, algorithms, pattern recognition
series CADline
last changed 2003/06/02 13:58
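The column summarized above is built around the maximum contiguous subvector problem. A minimal sketch of the linear-time scanning solution usually associated with it follows; the function name is mine, and the convention that the empty subvector (sum 0) is allowed is an assumption.

```python
def max_subvector_sum(x):
    """Return the maximum sum over all contiguous subvectors of x.

    Linear-time scan: max_ending_here tracks the best sum of a subvector
    ending at the current position; max_so_far tracks the best sum seen
    anywhere. The empty subvector (sum 0) is allowed, which covers the
    case where all inputs are negative.
    """
    max_so_far = 0
    max_ending_here = 0
    for value in x:
        max_ending_here = max(0, max_ending_here + value)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far

# Example: the maximum-sum contiguous subvector is [4, -1, 2, 1] with sum 6.
print(max_subvector_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # -> 6
```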

_id 6118
authors Bentley, Jon L.
year 1984
title Code Tuning -- Programming Pearls
source communications of the ACM. February, 1984. vol. 27: pp. 91-96
summary Efficiency is one of many problems in programming, and there are many ways to achieve it. This column is about a low-level approach: 'code tuning' locates the expensive parts of an existing program and then modifies that code to improve its performance.
keywords programming, search, algorithms, techniques
series CADline
last changed 2003/06/02 13:58
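The column summarized above is about locating the expensive parts of an existing program and rewriting them. A small sketch of that workflow with present-day Python tooling follows; cProfile is my choice of profiler and the hot spot is invented, so this only illustrates the measure-then-tune loop, not the column's own examples.

```python
import cProfile
import math

def slow_norms(vectors):
    # Candidate hot spot: the module attribute lookup is repeated on every iteration.
    return [math.sqrt(x * x + y * y) for x, y in vectors]

def tuned_norms(vectors):
    # Local tuning: bind the function to a local name once, outside the loop body.
    sqrt = math.sqrt
    return [sqrt(x * x + y * y) for x, y in vectors]

data = [(float(i), float(i + 1)) for i in range(200_000)]

# Step 1: measure rather than guess; the profile shows where time actually goes.
cProfile.run("slow_norms(data)")
# Step 2: apply the local rewrite and measure again to confirm the gain.
cProfile.run("tuned_norms(data)")
```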

_id 8087
authors Boehm, Barry W., Penedo, Maria H. and Stuckle, Don E. (et al)
year 1984
title A Software Development Environment for Improving Productivity
source IEEE Computer. June, 1984. pp. 30-44 : ill. includes bibliography
summary The software productivity system (SPS) was developed to support project activities. It involves a set of strategies, including the work environment; the evaluation and procurement of hardware equipment; the provision for immediate access to computing resources through local area networks; the building of an integrated set of tools to support the software development life cycle and all project personnel; and a user support function to transfer new technology. All of these strategies are being accomplished incrementally. The current architecture is VAX-based and uses the Unix operating system, a wideband local network, and a set of software tools. The article describes the steps that led to the creation of the software productivity project and its components and summarizes the requirements analyses on which the SPS was based.
keywords productivity, software, hardware, programming
series CADline
last changed 2003/06/02 10:24

_id 4e4e
authors Boissonnat, Jean-Daniel
year 1984
title Geometric Structures for Three- Dimensional Shape Representation
source ACM Transactions on Graphics. October, 1984. vol. 3: pp. 266-286 : ill. includes bibliography
summary Different geometric structures are investigated in the context of discrete surface representation. It is shown that minimal representations (i.e., polyhedra) can be provided by a surface-based method using nearest neighbors structures or by a volume-based method using the Delaunay triangulation. Both approaches are compared with respect to various criteria, such as space requirements, computation time, constraints on the distribution of the points, facilities for further calculations, and agreement with the actual shape of the object
keywords algorithms, polyhedra, curves, curved surfaces, solids, representation, geometric modeling, data structures
series CADline
last changed 1999/02/12 15:07
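The entry above compares surface-based (nearest-neighbour) and volume-based (Delaunay triangulation) representations built from a discrete point set. A minimal sketch of the volume-based starting point follows, using SciPy's Delaunay triangulation; SciPy is my choice of tool, and the sculpting step that carves the tetrahedra down to the object's actual shape is not shown.

```python
import numpy as np
from scipy.spatial import Delaunay

# A random sample of 3D points stands in for digitized surface data.
rng = np.random.default_rng(0)
points = rng.random((200, 3))

# Volume-based structure: the Delaunay triangulation decomposes the convex hull
# of the samples into tetrahedra; a shape-reconstruction method would then remove
# tetrahedra to recover a polyhedron close to the sampled object.
tri = Delaunay(points)
print(tri.simplices.shape)  # (n_tetrahedra, 4): vertex indices per tetrahedron
```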

_id ea4c
authors Chang, Hsi and Iyengar, S. Sitharama
year 1984
title Efficient Algorithms to Globally Balance a Binary Search Tree
source Communications of the ACM. July, 1984. vol. 27: pp. 695-702. includes bibliography
summary A binary search tree can be globally balanced by readjustment of pointers or with a sorting process in O(n) time, n being the total number of nodes. This paper presents three global balancing algorithms, one of which uses folding with the other two adopting parallel procedures. These algorithms show improvement in time efficiency over some sequential algorithms when applied to large binary search trees. A comparison of various algorithms is presented
keywords techniques, parallel processing, algorithms, search, sorting
series CADline
last changed 2003/06/02 13:58
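The paper summarized above presents O(n) algorithms for globally balancing a binary search tree. The sketch below shows the general idea only, flattening the tree to a sorted key list and rebuilding by recursive median splitting; it is my reconstruction and does not reproduce the paper's folding or parallel procedures.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def inorder(root):
    """Collect the keys of a BST in sorted order (iterative in-order traversal)."""
    keys, stack, node = [], [], root
    while stack or node:
        while node:
            stack.append(node)
            node = node.left
        node = stack.pop()
        keys.append(node.key)
        node = node.right
    return keys

def build_balanced(keys, lo=0, hi=None):
    """Rebuild a height-balanced BST by recursively taking the median key as root."""
    if hi is None:
        hi = len(keys)
    if lo >= hi:
        return None
    mid = (lo + hi) // 2
    return Node(keys[mid],
                build_balanced(keys, lo, mid),
                build_balanced(keys, mid + 1, hi))

def globally_balance(root):
    """Globally balance a BST in O(n) time and O(n) extra space."""
    return build_balanced(inorder(root))
```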
