CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 122

_id ce52
authors Abram, Greg, Westover, Lee and Whitted, Turner
year 1985
title Efficient Alias-Free Rendering using Bit-masks and Look-up Tables
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 53-59 : ill. (some col.). includes bibliography
summary The authors demonstrate methods of rendering alias-free synthetic images using a precomputed convolution integral. The method is based on the observation that a visible polygon fragment's contribution to an image is solely a function of its position and shape, and that, within a reasonable level of accuracy, a limited number of shapes represent the majority of cases encountered in commonly rendered images. The basic technique has been applied to several different rendering algorithms. A version of the new non-uniform sampling technique, implemented in the same program but with different table values, is also introduced. (A sketch of the bitmask/look-up idea follows this record.)
keywords algorithms, computer graphics, anti-aliasing
series CADline
last changed 2003/06/02 13:58
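
To make the bitmask/look-up idea above concrete, here is a minimal sketch (not the authors' code): a pixel is divided into a 4x4 grid of subpixel samples, a convex polygon fragment is clipped against the pixel by intersecting one bitmask per edge, and a precomputed table maps each mask to a filtered coverage weight. The 4x4 resolution, the box filter and all names are illustrative assumptions.

    def edge_mask(a, b, c):
        """Bitmask of 4x4 subpixel centres (x, y) with a*x + b*y + c >= 0."""
        mask = 0
        for i in range(16):
            x = (i % 4 + 0.5) / 4.0   # subpixel centre, pixel = unit square
            y = (i // 4 + 0.5) / 4.0
            if a * x + b * y + c >= 0:
                mask |= 1 << i
        return mask

    # Precomputed table: mask -> fraction of the pixel covered (box filter);
    # a real renderer would store filter-weighted sums per mask instead.
    COVERAGE = [bin(m).count("1") / 16.0 for m in range(1 << 16)]

    def fragment_coverage(edges):
        """Coverage of a convex fragment given as (a, b, c) half-planes."""
        mask = (1 << 16) - 1
        for a, b, c in edges:
            mask &= edge_mask(a, b, c)
        return COVERAGE[mask]

    print(fragment_coverage([(1.0, 0.0, -0.25)]))  # half-plane x >= 0.25 -> 0.75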

_id ga0024
id ga0024
authors Ferrara, Paolo and Foglia, Gabriele
year 2000
title TEAnO or the computer assisted generation of manufactured aesthetic goods seen as a constrained flux of technological unconsciousness
source International Conference on Generative Art
summary TEAnO (Telematica, Elettronica, Analisi nell'Opificio) was born in Florence, in 1991, at the age of 8, being the direct consequence of years of attempts by a group of computer science professionals to use digital computer technology to find a sustainable match among creation, generation (or re-creation) and recreation, the three basic keywords underlying the concept of “Littérature potentielle” deployed by Oulipo in France and Oplepo in Italy (see “La Littérature potentielle (Créations Re-créations Récréations)”, published in France by Gallimard in 1973). During the last decade, TEAnO has been involved in the generation of “artistic goods” in aesthetic domains such as literature, music, theatre and painting. In all those artefacts the computer plays a twofold role: it is often a tool to generate the good (e.g. an editor to compose palindrome sonnets or to generate antonymic music) and, sometimes, it is the medium that makes the fruition of the good possible (e.g. the generator of passages of definition literature). In that sense such artefacts can actually be considered “manufactured” goods. A great part of such creation and re-creation work has been based upon a rather small number of generation constraints borrowed from Oulipo, deeply stressed by the use of the digital computer's massive combinatory power: S+n, edge extraction, phonetic manipulation, re-writing of well-known masterpieces, random generation of plots, etc. (A sketch of the S+n constraint follows this record.) Regardless of these apparently simple underlying generation mechanisms, the systematic use of computer-based tools, as well as the analysis of the produced results, has been the way to highlight two findings which can significantly affect the practice of computer-based generation of aesthetic goods: (1) the deep structure of an aesthetic work persists even through the most “destructive” manipulations (such as the antonymic transformation of the melody and lyrics of a musical work) and becomes evident as a sort of profound, earliest and distinctive constraint; (2) the intensive flux of computer-generated “raw” material seems to confirm, and to bring to our attention, the existence of what Walter Benjamin indicated as the different way in which nature talks to a camera and to our eye, and what Franco Vaccari called “technological unconsciousness”. Essential references: R. Campagnoli, Y. Hersant, “Oulipo La letteratura potenziale (Creazioni Ri-creazioni Ricreazioni)”, 1985; R. Campagnoli, “Oupiliana”, 1995; TEAnO, “Quaderno n. 2 Antologia di letteratura potenziale”, 1996; W. Benjamin, “Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit”, 1936; F. Vaccari, “Fotografia e inconscio tecnologico”, 1994
series other
more http://www.generativeart.com/
last changed 2003/08/07 17:25
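
Among the Oulipian constraints listed in the abstract, S+n is the easiest to show in code: every noun is replaced by the noun appearing n entries later in a dictionary. The sketch below is a toy illustration of that definition, not TEAnO's software; the word list and text are invented.

    NOUNS = sorted(["cat", "dog", "house", "moon", "river", "stone", "tree"])

    def s_plus_n(text, n=3):
        """Replace each known noun by the noun n entries later in NOUNS."""
        out = []
        for word in text.split():
            if word in NOUNS:
                i = NOUNS.index(word)
                word = NOUNS[(i + n) % len(NOUNS)]  # wrap past the last entry
            out.append(word)
        return " ".join(out)

    print(s_plus_n("the cat sat by the river"))  # -> "the moon sat by the cat"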

_id a65f
authors Primrose, P.L., Creamer, G.D. and Leonard, R.
year 1985
title Identifying and Quantifying the Company-Wide Benefits of CAD Within the Structure of a Comprehensive Investment Program
source Computer Aided Design. Butterworth & Co. Pub., February, 1985. vol. 17: pp. 3-8 : ill. flow charts
summary This paper discusses the costs and benefits associated with introducing CAD. It is shown that by suitably defining the terms involved, all the so-called 'intangible benefits' can be quantified and used within a rigorous financial evaluation. Because 45 specific factors must be considered if a genuine investment appraisal of CAD is to be performed, a computer program has been specifically written to overcome the difficulties normally associated with the DCF evaluation of major projects. The results from the program demonstrate that not only are the benefits of CAD company-wide, but that when these benefits are quantified, the economic case for CAD is greatly strengthened. The problem of CAD systems being regarded as nothing more than a 'drawing office tool to make draftsmen redundant' is overcome. In particular, the use of the program within a number of major companies reveals that CAD systems not only give a much greater potential return on investment than has been suggested by previous authors, but that the greatest benefits accrue in areas outside the drawing office. This is illustrated by a case study
keywords CAD, evaluation, business, cost, practice, economics
series CADline
last changed 2003/06/02 13:58
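
The appraisal method the paper argues for rests on standard discounted-cash-flow arithmetic; a minimal net-present-value sketch follows. The outlay, benefit stream and discount rate are invented figures for illustration, not the paper's 45 factors.

    def npv(rate, cash_flows):
        """Present value of cash_flows[t] received at the end of year t+1."""
        return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

    outlay = -250_000                     # year-0 cost of the CAD system
    benefits = [60_000, 90_000, 110_000, 110_000, 110_000]  # quantified benefits
    print(outlay + npv(0.10, benefits))   # positive -> the investment case holds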

_id avocaad_2001_16
id avocaad_2001_16
authors Yu-Ying Chang, Yu-Tung Liu, Chien-Hui Wong
year 2001
title Some Phenomena of Spatial Characteristics of Cyberspace
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary "Space," which has long been an important concept in architecture (Bloomer & Moore, 1977; Mitchell, 1995, 1999), has attracted interest of researchers from various academic disciplines in recent years (Agnew, 1993; Benko & Strohmayer, 1996; Chang, 1999; Foucault, 1982; Gould, 1998). Researchers from disciplines such as anthropology, geography, sociology, philosophy, and linguistics regard it as the basis of the discussion of various theories in social sciences and humanities (Chen, 1999). On the other hand, since the invention of Internet, Internet users have been experiencing a new and magic "world." According to the definitions in traditional architecture theories, "space" is generated whenever people define a finite void by some physical elements (Zevi, 1985). However, although Internet is a virtual, immense, invisible and intangible world, navigating in it, we can still sense the very presence of ourselves and others in a wonderland. This sense could be testified by our naming of Internet as Cyberspace -- an exotic kind of space. Therefore, as people nowadays rely more and more on the Internet in their daily life, and as more and more architectural scholars and designers begin to invest their efforts in the design of virtual places online (e.g., Maher, 1999; Li & Maher, 2000), we cannot help but ask whether there are indeed sensible spaces in Internet. And if yes, these spaces exist in terms of what forms and created by what ways?To join the current interdisciplinary discussion on the issue of space, and to obtain new definition as well as insightful understanding of "space", this study explores the spatial phenomena in Internet. We hope that our findings would ultimately be also useful for contemporary architectural designers and scholars in their designs in the real world.As a preliminary exploration, the main objective of this study is to discover the elements involved in the creation/construction of Internet spaces and to examine the relationship between human participants and Internet spaces. In addition, this study also attempts to investigate whether participants from different academic disciplines define or experience Internet spaces in different ways, and to find what spatial elements of Internet they emphasize the most.In order to achieve a more comprehensive understanding of the spatial phenomena in Internet and to overcome the subjectivity of the members of the research team, the research design of this study was divided into two stages. At the first stage, we conducted literature review to study existing theories of space (which are based on observations and investigations of the physical world). At the second stage of this study, we recruited 8 Internet regular users to approach this topic from different point of views, and to see whether people with different academic training would define and experience Internet spaces differently.The results of this study reveal that the relationship between human participants and Internet spaces is different from that between human participants and physical spaces. In the physical world, physical elements of space must be established first; it then begins to be regarded as a place after interaction between/among human participants or interaction between human participants and the physical environment. In contrast, in Internet, a sense of place is first created through human interactions (or activities), Internet participants then begin to sense the existence of a space. 
Therefore, it seems that, among the many spatial elements of Internet we found, "interaction/reciprocity" Ñ either between/among human participants or between human participants and the computer interface Ð seems to be the most crucial element.In addition, another interesting result of this study is that verbal (linguistic) elements could provoke a sense of space in a degree higher than 2D visual representation and no less than 3D visual simulations. Nevertheless, verbal and 3D visual elements seem to work in different ways in terms of cognitive behaviors: Verbal elements provoke visual imagery and other sensory perceptions by "imagining" and then excite personal experiences of space; visual elements, on the other hand, provoke and excite visual experiences of space directly by "mapping".Finally, it was found that participants with different academic training did experience and define space differently. For example, when experiencing and analyzing Internet spaces, architecture designers, the creators of the physical world, emphasize the design of circulation and orientation, while participants with linguistics training focus more on subtle language usage. Visual designers tend to analyze the graphical elements of virtual spaces based on traditional painting theories; industrial designers, on the other hand, tend to treat these spaces as industrial products, emphasizing concept of user-center and the control of the computer interface.The findings of this study seem to add new information to our understanding of virtual space. It would be interesting for future studies to investigate how this information influences architectural designers in their real-world practices in this digital age. In addition, to obtain a fuller picture of Internet space, further research is needed to study the same issue by examining more Internet participants who have no formal linguistics and graphical training.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id cc15
authors Ansaldi, Silvia, De Floriani, Leila and Falcidieno, Bianca
year 1985
title Geometric Modeling of Solid Objects by Using a Face Adjacency Graph Representation
source SIGGRAPH '85 Conference Proceedings. July, 1985. vol. 19 ; no. 3: pp. 131-139 : ill. includes bibliography
summary A relational graph structure based on a boundary representation of solid objects is described. In this structure, called Face Adjacency Graph, nodes represent object faces, whereas edges and vertices are encoded into arcs and hyperarcs. Based on the face adjacency graph, the authors define a set of primitive face-oriented Euler operators, and a set of macro operators for face manipulation, which allow a compact definition and an efficient updating of solid objects. The authors briefly describe a hierarchical graph structure based on the face adjacency graph, which provides a representation of an object at different levels of detail. Thus it is consistent with the stepwise refinement process through which the object description is produced
keywords geometric modeling, graphs, objects, representation, data structures,B-rep, solid modeling, Euler operators
series CADline
last changed 2003/06/02 10:24
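
A minimal sketch of the face adjacency idea described above, for a tetrahedron: faces become nodes, and an arc joins two faces wherever they share an edge. This is an illustrative reconstruction, not the authors' data structure (which additionally encodes vertices as hyperarcs).

    from collections import defaultdict

    # Each face of a tetrahedron as a cycle of vertex ids.
    faces = {"f0": (0, 1, 2), "f1": (0, 1, 3), "f2": (1, 2, 3), "f3": (0, 2, 3)}

    edge_to_faces = defaultdict(list)
    for name, cycle in faces.items():
        for i in range(len(cycle)):
            edge = frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
            edge_to_faces[edge].append(name)

    # Arcs of the face adjacency graph, labelled by the shared edge.
    for edge, (fa, fb) in edge_to_faces.items():
        print(f"{fa} -- {fb}  via edge {tuple(sorted(edge))}")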

_id 8ae8
authors Ayala, D., Brunet, P. and Juan, R. (et al)
year 1985
title Object Representation by Means of Nonminimal Division Quadtrees and Octrees
source ACM Transactions on Graphics. January, 1985. vol. 4: pp. 41-59 : ill. includes bibliography
summary Quadtree representation of two-dimensional objects is performed with a tree that describes the recursive subdivision of the more complex parts of a picture until the desired resolution is reached. At the end, all the leaves of the tree are square cells that lie completely inside or outside the object. There are two great disadvantages in the use of quadtrees as a representation scheme for objects in a geometric modeling system: the amount of memory required for polygonal objects is too great, and it is difficult to recompute the boundary representation of the object after some Boolean operations have been performed. In the present paper a new class of quadtrees, in which nodes may contain zero or one edge, is introduced. By using these quadtrees, storage requirements are reduced and it is possible to obtain the exact backward conversion to boundary representation. Algorithms for the generation of the quadtree, Boolean operations, and recomputation of the boundary representation are presented, and their complexities in time and space are discussed. Three-dimensional algorithms working on octrees are also presented. Their use in the geometric modeling of three-dimensional polyhedral objects is discussed
keywords geometric modeling, algorithms, octree, quadtree, curves, curved surfaces, boolean operations
series CADline
last changed 2003/06/02 13:58
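
A simplified sketch of the paper's central idea: keep subdividing a square region until each leaf is empty or crossed by at most one edge, so storage stays modest and the exact boundary can be recovered from the leaves. The recursion below is illustrative, and the crude bounding-box test stands in for a proper edge/region intersection.

    def bbox_intersects(edge, region):
        """Crude stand-in test: edge bounding box overlaps the square region."""
        (x1, y1), (x2, y2) = edge
        x, y, s = region
        return not (max(x1, x2) < x or min(x1, x2) > x + s or
                    max(y1, y2) < y or min(y1, y2) > y + s)

    def build(region, edges, depth=0, max_depth=8):
        """Subdivide region=(x, y, size) until <= 1 edge crosses each leaf."""
        crossing = [e for e in edges if bbox_intersects(e, region)]
        if len(crossing) <= 1 or depth == max_depth:
            return ("leaf", region, crossing)   # zero or one edge per leaf
        x, y, s = region
        h = s / 2.0
        quads = [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
        return ("node", [build(q, crossing, depth + 1, max_depth) for q in quads])

    tree = build((0.0, 0.0, 1.0), [((0.1, 0.1), (0.9, 0.4)), ((0.9, 0.4), (0.1, 0.8))])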

_id 2730
authors Balkovich, Edward, Lerman, Steven and Parmelee, Richard P.
year 1985
title Computing in Higher Education : The ATHENA Experience
source Communications of the ACM. November, 1985. vol. 28: pp. 1214-1224
summary In this article the use of computation in higher education is approached from the broad sense of its actual use in the curriculum. The authors try to identify areas where current educational methods have observable deficiencies that might be alleviated by the use of appropriate software/hardware combinations. Project ATHENA at MIT is the example the article is based on
keywords networks, software, hardware, UNIX, education
series CADline
last changed 2003/06/02 13:58

_id ddssar0206
id ddssar0206
authors Bax, M.F.Th. and Trum, H.M.G.J.
year 2002
title Faculties of Architecture
source Timmermans, Harry (Ed.), Sixth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings Avegoor, the Netherlands), 2002
summary In order to be inscribed in the European Architect’s register the study program leading to the diploma ‘Architect’ has to meet the criteria of the EC Architect’s Directive (1985). The criteria are enumerated in 11 principles of Article 3 of the Directive. The Advisory Committee, established by the European Council, got the task of examining such diplomas in case doubts are raised by other Member States. To carry out this task a matrix was designed as an independent interpreting framework that mediates between the principles of Article 3 and the actual study program of a faculty. Such a tool was needed because of inconsistencies in the list of principles, differences between linguistic versions of the Directive, and quantification problems with the time devoted to the principles in the study programs. The core of the matrix, its headings, is a categorisation of the principles on a higher level of abstraction in the form of a taxonomy of domains and corresponding concepts. Filling in the matrix means that each element of the study programs is analysed according to its content in terms of domains; the summation of study time devoted to the various domains results in a so-called ‘profile of a faculty’. Judgement of that profile takes place by a committee of peers. The domains of the taxonomy are intrinsically the same as the concepts and categories needed for the description of an architectural design object: the faculties of architecture. This correspondence relates the taxonomy to the field of design theory and philosophy. The taxonomy is an application of Domain theory. This theory, developed by the authors since 1977, takes the view that the architectural object can only be described fully as an integration of all types of domains. The theory supports the idea of a participatory and interdisciplinary approach to design, which proved to be rewarding both from a scientific and a social point of view. All types of domains have in common that they are measured in three dimensions: form, function and process, connecting the material aspects of the object with its social and procedural aspects. In the taxonomy the function dimension is emphasised. It will be argued in the paper that the taxonomy is a categorisation following the pragmatistic philosophy of Charles Sanders Peirce. It will be demonstrated as well that the taxonomy is easy to handle, by giving examples of its application in various countries in the last 5 years. The taxonomy proved to be an adequate tool for the judgement of study programs and their subsequent improvement, as constituted by the faculties of a Faculty of Architecture. The matrix is described as the result of theoretical reflection and practical application of a matrix already in use since 1995. The major improvement of the matrix is its direct connection with Peirce’s universal categories and the self-explanatory character of its structure. The connection with Peirce’s categories gave the matrix a more universal character, which enables application in other fields where the term ‘architecture’ is used as a metaphor for artefacts.
series DDSS
last changed 2003/11/21 15:16

_id ddss9408
id ddss9408
authors Bax, Thijs and Trum, Henk
year 1994
title A Taxonomy of Architecture: Core of a Theory of Design
source Second Design and Decision Support Systems in Architecture & Urban Planning (Vaals, the Netherlands), August 15-19, 1994
summary The authors developed a taxonomy of concepts in architectural design. It was accepted by the Advisory Committee for education in the field of architecture, a committee advising the European Commission and Member States, as a reference for their task of harmonizing architectural education in Europe. The taxonomy is based on Domain theory, a theory developed by the authors on the basis of General Systems Theory and the notion of structure according to French Structuralism, which takes a participatory viewpoint for the integration of knowledge and interests by parties in the architectural design process. The paper discusses recent developments of the taxonomy, firstly as a result of a confrontation with similar endeavours to structure the field of architectural design, secondly as a result of applications in education and architectural design practice, and thirdly as a result of the application of some views derived from the philosophical work of Charles Sanders Peirce. Developments concern the structural form of the taxonomy, comprising basic concepts and level-bound scale concepts, and the specification of the content of the fields which these concepts represent. The confrontation with similar endeavours concerns mainly the work of an ARCUK working party, chaired by Tom Marcus, based on the European Directive from 1985. The application concerns experiences with a taxonomy-based enquiry to represent the profile of educational programmes of schools and faculties of architecture in Europe in qualitative and quantitative terms. This enquiry was carried out in order to achieve a basis for comparison and judgement, and a basis for future guidelines including quantitative aspects. Views of Peirce, more specifically his views on triarchy as a way of ordering and structuring processes of thinking, provide keys for a re-definition of concepts as building stones of the taxonomy in terms of the form-function-process triad, which strengthens the coherence of the taxonomy, allowing for a more regular representation in the form of a hierarchically ordered matrix.
series DDSS
last changed 2003/08/07 16:36

_id a619
authors Bentley, Jon L. and McGeoch, Catherine C.
year 1985
title Amortized Analyses of Self-Organizing Sequential Search ; Heuristics Programming Techniques and Data Structures
source Communications of the ACM. April, 1985. vol. 28: pp. 404-411 : ill. includes bibliography.
summary Amortization is used to analyze the heuristics in a worst-case sense. The relative merit of the heuristics under this analysis differs from that found in the probabilistic analyses. Experiments show that the behavior of the heuristics on real data is more closely described by the amortized analyses than by the probabilistic analyses. (A sketch of the move-to-front heuristic follows this record.)
keywords economics, analysis, search, heuristics
series CADline
last changed 2003/06/02 13:58
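
The best known of the self-organizing rules analyzed in this line of work is move-to-front; below is a minimal sketch, assuming the usual formulation (search cost = position of the found item, and a found item is moved to the head of the list).

    def mtf_search(items, key):
        """Linear search; on a hit, move the found item to the front."""
        cost = 0
        for i, item in enumerate(items):
            cost += 1
            if item == key:
                items.insert(0, items.pop(i))   # self-organizing step
                return cost
        return cost                             # absent key costs a full scan

    lst = ["d", "c", "b", "a"]
    total = sum(mtf_search(lst, k) for k in ["a", "a", "a", "b"])
    print(lst, total)   # repeated keys migrate forward, so total cost drops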

_id 4316
authors Bentley, Jon L.
year 1985
title Associative Arrays -- Programming Pearls
source Communications of the ACM. June, 1985. vol. 28: pp. 570-576 : ill
summary Anthropological studies have shown that one's language has a profound effect on one's view of the world. This column is about a language feature outside the Algol heritage: associative arrays. The column examines the associative arrays provided by the AWK language
keywords techniques, programming, algorithms, data structures
series CADline
last changed 2003/06/02 13:58
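
The AWK feature the column examines maps directly onto dictionaries in modern languages; the classic associative-array idiom (count[word]++) looks like this in Python, offered as a rough modern analogue rather than the column's own AWK code.

    from collections import defaultdict

    count = defaultdict(int)            # plays the role of AWK's count[word]
    for word in "to be or not to be".split():
        count[word] += 1                # AWK: count[word]++

    for word in sorted(count):
        print(word, count[word])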

_id 4532
authors Bono, Peter R.
year 1985
title A Survey of Graphics Standards and Their Role in Information Interchange
source IEEE Computer. October, 1985. vol. 18: pp. 63-75 : ill. ; tables. includes bibliography
summary The survey describes each graphics standard and explains the interrelationships among the standards. The role and commercial impact of PCs serving as workstations in distributed, networked, multimedia environments is emphasized. It is shown that current graphics standardization activity is focused on three principal areas: the application interface, the device interface, and picture exchange. The operator interface and hardware interfaces are expected to be subjects for standardization in the future. In addition, picture exchange will be replaced by information exchange, where information includes text, image, and voice components merged with graphics to create an integrated whole
keywords computer graphics, standards, GKS, communication
series CADline
last changed 2003/06/02 13:58

_id ca88
authors Buzbee, B.L. and Sharp, D.H.
year 1985
title Perspectives on Supercomputing
source Science. February, 1985. vol. 227: pp. 591-597 : ill. includes bibliography
summary This article provides a brief look at the current status of supercomputers and supercomputing in the United States. It addresses a variety of applications of supercomputers and the characteristics of a large modern supercomputing facility, the radical changes in the design of supercomputers that are impending, and the conditions that are necessary for a conducive climate for the further development and application of supercomputers
keywords parallel processing, hardware, business
series CADline
last changed 2003/06/02 13:58

_id 0533
authors Clemons, Eric K. and Greenfield, Arnold J.
year 1985
title The SAGE System Architecture: A System for the Rapid Development of Graphics Interfaces for Decision Support
source IEEE Computer Graphics and Applications. November, 1985. vol. 5: pp. 38-50 : ill. includes bibliography
summary Graphics interfaces support the decision maker in sensitivity analysis - the exploration of proposed solutions and examination of alternatives. The authors present an architecture for the rapid preparation of graphics interfaces for large classes of management science, operations research, and expert systems models. The architecture is based on a detailed study of sensitivity analysis requests, which is also presented. The architecture was the basis of a prototype, now operational, which is illustrated through a case study of sensitivity analysis in a vehicle-routing system
keywords expert systems, user interface, operations research
series CADline
last changed 2003/06/02 10:24
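
Sensitivity analysis in the abstract's sense - watching how a proposed solution responds as a model parameter varies - reduces, at its simplest, to a parameter sweep. The toy vehicle-routing cost model below is invented for illustration and has nothing to do with SAGE's internals.

    def route_cost(distance_km, fuel_price):
        """Invented cost model: fixed cost plus fuel-dependent running cost."""
        return 50.0 + 0.3 * distance_km * fuel_price

    for fuel_price in (1.0, 1.2, 1.4, 1.6):     # sweep one parameter
        print(f"fuel {fuel_price:.1f}: cost {route_cost(120, fuel_price):.2f}")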

_id 29ff
authors Farouki, Rida T. and Hinds, John K.
year 1985
title A Hierarchy of Geometric Forms
source IEEE Computer Graphics and Applications. May, 1985. vol. 5: pp. 51-78 : ill. includes bibliography
summary This article describes a unified approach to geometric modeling based on the mathematics of parametric polynomial functions. Such a unified scheme for geometric representation and computation provides a natural base for a geometric modeler of considerable versatility and robustness
keywords geometric modeling, parametrization, representation, curves, curved surfaces, B-splines
series CADline
last changed 2003/06/02 13:58
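
The parametric polynomial functions underlying the hierarchy can be illustrated with a cubic Bezier curve evaluated by de Casteljau's algorithm (repeated linear interpolation), a standard and numerically robust construction; this sketch is a generic example, not the authors' scheme.

    def de_casteljau(points, t):
        """Evaluate a Bezier curve with the given control points at parameter t."""
        pts = list(points)
        while len(pts) > 1:             # one round of linear interpolation
            pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                   for p, q in zip(pts, pts[1:])]
        return pts[0]

    control = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
    print(de_casteljau(control, 0.5))   # midpoint of the cubic: (2.0, 1.5)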

_id c547
authors Fenves, Stephen J. and Rasdorf, William J.
year 1985
title Treatment of Engineering Design Constraints in a Relational Database
source Engineering with Computers. Springer-Verlag, Spring, 1985. vol. 1: pp. 27-37. includes bibliography
summary A major aspect of engineering design is the formulation, application, evaluation, and satisfaction of design constraints. The ability to represent and process a wide variety of such constraints is a necessary ingredient of an engineering design database. This is especially true in databases integrating several design processes, where the database management system must serve as an active design agent performing many of the consistency and integrity checks that are currently done manually. This paper presents a mechanism for representing and processing engineering design constraints. The mechanism can be used for checking that constraints are satisfied as well as for deriving attribute values that satisfy the applicable constraints. Furthermore, the mechanism provides flexibility in sequencing the enforcement of constraints by allowing new constraints to be applied to a preexisting state of the database as well as to all subsequent operations on the database. In both these respects, the mechanism proposed appears to have applications beyond engineering design. The mechanism presented handles a broad class of single-relation, single-tuple constraints typical in engineering design applications. Instead of relying on normalization where possible, to remove functional dependencies, the mechanism incorporates new attributes that represent the status (satisfied or violated) of each constraint, thereby increasing the functional dependence of the relation. Consequently, passive constraint checking can be readily extended to active assignment of attribute values that automatically satisfy constraints. A prototype system implementing many of the components presented has been programmed in Pascal. In addition, portions of the system were implemented using the Relational Information Management (RIM) system, a commercially available DBMS
keywords civil engineering, design, knowledge, relational database, CAE, constraints management
series CADline
last changed 2003/06/02 13:58
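
A minimal sketch of the status-attribute mechanism the paper describes: each single-tuple constraint contributes one status attribute recording whether the tuple satisfies it. The relation and constraints below are invented examples, and Python dicts stand in for tuples of the relational DBMS.

    constraints = {
        "c_span":  lambda row: row["span_m"] <= 20.0,
        "c_depth": lambda row: row["depth_m"] >= row["span_m"] / 24.0,
    }

    def check(row):
        """Passive checking: fill one status attribute per constraint."""
        for name, predicate in constraints.items():
            row[name] = "satisfied" if predicate(row) else "violated"
        return row

    beam = check({"id": "B1", "span_m": 18.0, "depth_m": 0.6})
    print(beam)   # c_span satisfied, c_depth violated (0.6 < 18/24)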

_id 78ca
authors Friedland, P. (Ed.)
year 1985
title Special Section on Architectures for Knowledge-Based Systems
source CACM (28), 9, September
summary A fundamental shift in the preferred approach to building applied artificial intelligence (AI) systems has taken place since the late 1960s. Previous work focused on the construction of general-purpose intelligent systems; the emphasis was on powerful inference methods that could function efficiently even when the available domain-specific knowledge was relatively meager. Today the emphasis is on the role of specific and detailed knowledge, rather than on reasoning methods. The first successful application of this method, which goes by the name of knowledge-based or expert-system research, was the DENDRAL program at Stanford, a long-term collaboration between chemists and computer scientists for automating the determination of molecular structure from empirical formulas and mass spectral data. The key idea is that knowledge is power, for experts, be they human or machine, are often those who know more facts and heuristics about a domain than lesser problem solvers. The task of building an expert system, therefore, is predominantly one of "teaching" a system enough of these facts and heuristics to enable it to perform competently in a particular problem-solving context. Such a collection of facts and heuristics is commonly called a knowledge base. Knowledge-based systems are still dependent on inference methods that perform reasoning on the knowledge base, but experience has shown that simple inference methods like generate-and-test, backward-chaining, and forward-chaining are very effective in a wide variety of problem domains when they are coupled with powerful knowledge bases. If this methodology remains preeminent, then the task of constructing knowledge bases becomes the rate-limiting factor in expert-system development. Indeed, a major portion of the applied AI research in the last decade has been directed at developing techniques and tools for knowledge representation. We are now in the third generation of such efforts. The first generation was marked by the development of enhanced AI languages like Interlisp and PROLOG. The second generation saw the development of knowledge representation tools at AI research institutions; Stanford, for instance, produced EMYCIN, The Unit System, and MRS. The third generation is now producing fully supported commercial tools like KEE and S.1. Each generation has seen a substantial decrease in the amount of time needed to build significant expert systems. Ten years ago prototype systems commonly took on the order of two years to show proof of concept; today such systems are routinely built in a few months. Three basic methodologies - frames, rules, and logic - have emerged to support the complex task of storing human knowledge in an expert system. Each of the articles in this Special Section describes and illustrates one of these methodologies. "The Role of Frame-Based Representation in Reasoning," by Richard Fikes and Tom Kehler, describes an object-centered view of knowledge representation, whereby all knowledge is partitioned into discrete structures (frames) having individual properties (slots). Frames can be used to represent broad concepts, classes of objects, or individual instances or components of objects. They are joined together in an inheritance hierarchy that provides for the transmission of common properties among the frames without multiple specification of those properties.
The authors use the KEE knowledge representation and manipulation tool to illustrate the characteristics of frame-based representation for a variety of domain examples. They also show how frame-based systems can be used to incorporate a range of inference methods common to both logic and rule-based systems. "Rule-Based Systems," by Frederick Hayes-Roth, chronicles the history and describes the implementation of production rules as a framework for knowledge representation. In essence, production rules use IF conditions THEN conclusions and IF conditions THEN actions structures to construct a knowledge base. The author catalogs a wide range of applications for which this methodology has proved natural and (at least partially) successful for replicating intelligent behavior. The article also surveys some already-available computational tools for facilitating the construction of rule-based knowledge bases and discusses the inference methods (particularly backward- and forward-chaining) that are provided as part of these tools. The article concludes with a consideration of the future improvement and expansion of such tools. The third article, "Logic Programming," by Michael Genesereth and Matthew Ginsberg, provides a tutorial introduction to the formal method of programming by description in the predicate calculus. Unlike traditional programming, which emphasizes how computations are to be performed, logic programming focuses on the what of objects and their behavior. The article illustrates the ease with which incremental additions can be made to a logic-oriented knowledge base, as well as the automatic facilities for inference (through theorem proving) and explanation that result from such formal descriptions. A practical example of diagnosis of digital device malfunctions is used to show how significant and complex problems can be represented in the formalism. A note to the reader who may infer that the AI community is being split into competing camps by these three methodologies: although each provides advantages in certain specific domains (logic where the domain can be readily axiomatized and where complete causal models are available, rules where most of the knowledge can be conveniently expressed as experiential heuristics, and frames where complex structural descriptions are necessary to adequately describe the domain), the current view is one of synthesis rather than exclusivity. Both logic and rule-based systems commonly incorporate frame-like structures to facilitate the representation of large amounts of factual information, and frame-based systems like KEE allow both production rules and predicate calculus statements to be stored within and activated from frames to do inference. The next generation of knowledge representation tools may even help users to select appropriate methodologies for each particular class of knowledge, and then automatically integrate the various methodologies so selected into a consistent framework for knowledge.
series journal paper
last changed 2003/04/23 15:14
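
A minimal forward-chaining sketch of the IF-conditions-THEN-conclusion rule format the section describes; the rules and facts are toy examples, not any of the surveyed systems.

    rules = [
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"suspect_measles"}, "order_blood_test"),
    ]

    def forward_chain(facts):
        """Fire rules until no new conclusion can be added (forward chaining)."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"has_fever", "has_rash"}))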

_id e191
authors Fuchs, Henry, Goldfeather, Jack and Hultquist, Jeff P.
year 1985
title Fast Spheres, Shadows, Textures, Transparencies, and Image Enhancements in Pixel-Planes
source SIGGRAPH '85 Conference Proceedings. July, 1985. 1985. vol. 19 ; no. 3: pp. 111-120 : ill. includes bibliography
summary Pixel-Planes is a logic-enhanced memory system for raster graphics and imaging. Although each pixel-memory is enhanced with a one-bit ALU, the system's real power comes from a tree of one-bit adders that can evaluate linear expressions Ax + By + C for every pixel (x,y) simultaneously, as fast as the ALUs and the memory circuits can accept the results. The development of a variety of algorithms that exploit this fast linear expression evaluation capability has started. The paper reports some of those results. Illustrated in this paper are a sample image from a small working prototype of the Pixel-Planes hardware and a variety of images from simulations of a full-scale system. Timing estimates indicate that 30,000 smooth-shaded triangles can be generated per second, or 21,000 smooth-shaded and shadowed triangles can be generated per second, or over 25,000 shaded spheres can be generated per second. Image enhancement by adaptive histogram equalization can be performed within 4 seconds on a 512 x 512 image
keywords shadowing, image processing, algorithms, polygons, clipping, computer graphics, technology, hardware
series CADline
last changed 2003/06/02 10:24
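
The machine's core trick - evaluating Ax + By + C at every pixel simultaneously - has a direct software analogue in vectorized evaluation over the whole pixel grid. The NumPy sketch below rasterizes one triangle as the intersection of three half-plane tests; it illustrates the arithmetic only, not the hardware.

    import numpy as np

    h, w = 16, 16
    ys, xs = np.mgrid[0:h, 0:w]          # coordinates of every pixel at once

    def edge(p, q):
        """a*x + b*y + c >= 0 for pixels left of the directed edge p -> q."""
        dx, dy = q[0] - p[0], q[1] - p[1]
        a, b, c = -dy, dx, dy * p[0] - dx * p[1]
        return a * xs + b * ys + c >= 0

    v0, v1, v2 = (2, 2), (13, 3), (7, 12)          # one triangle
    inside = edge(v0, v1) & edge(v1, v2) & edge(v2, v0)
    print(int(inside.sum()), "pixels covered")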

_id 027b
authors Griffiths, J.G.
year 1985
title Table-Driven Algorithms for Generating Space-Filling Curves
source Computer Aided Design. January/ February, 1985. vol. 17: pp. 37-41 : ill. includes bibliography
summary A simple general method for constructing space-filling curves is presented, based on the use of tables. It is shown how the use of Hilbert's curve can enhance the performance of Warnock's algorithm. A procedure is given which generates Hilbert curves or Sierpinski curves. A second procedure is given which generates Warnock's windows in Hilbert order
keywords computer graphics, rendering, algorithms, curves, representation, display
series CADline
last changed 2003/06/02 13:58
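
The table-driven flavour of the paper can be suggested with the common index-to-coordinate Hilbert construction below (the well-known d2xy formulation); Griffiths' actual tables and procedures differ, so treat this as a generic Hilbert generator rather than a reconstruction of the paper.

    def hilbert_d2xy(order, d):
        """Map index d along the curve to (x, y) on a 2**order grid."""
        x = y = 0
        t = d
        s = 1
        while s < (1 << order):
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                  # rotate the quadrant into place
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    print([hilbert_d2xy(2, d) for d in range(16)])   # 4x4 Hilbert traversal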

_id 76ce
authors Grimson, W.
year 1985
title Computational Experiments with a Feature Based Stereo Algorithm
source IEEE Trans. Pattern Anal. Machine Intell., Vol. PAMI-7, No. 1
summary Computational models of the human stereo system can provide insight into general information-processing constraints that apply to any stereo system, either artificial or biological. In 1977, Marr and Poggio proposed one such computational model, characterized as matching certain feature points in difference-of-Gaussian filtered images, and using the information obtained by matching coarser resolution representations to restrict the search space for matching finer resolution representations. An implementation of the algorithm and its testing on a range of images was reported in 1980. Since then a number of psychophysical experiments have suggested possible refinements to the model and modifications to the algorithm. As well, recent computational experiments applying the algorithm to a variety of natural images, especially aerial photographs, have led to a number of modifications. In this article, we present a version of the Marr-Poggio-Grimson algorithm that embodies these modifications and illustrate its performance on a series of natural images.
series journal paper
last changed 2003/04/23 15:14
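
The matching idea behind the model can be sketched in one dimension: filter both signals with a difference of Gaussians (DoG), then pick the disparity whose alignment best correlates the filtered features. This toy sketch (invented signals, single scale) omits the coarse-to-fine part, in which matches at a coarse filter width would restrict the search range at finer widths.

    import numpy as np

    def dog_filter(signal, s1=1.0, s2=1.6):
        """Convolve with a difference of two Gaussians (band-pass filter)."""
        xs = np.arange(-8, 9)
        g = lambda s: np.exp(-xs**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
        return np.convolve(signal, g(s1) - g(s2), mode="same")

    rng = np.random.default_rng(0)
    left = rng.standard_normal(200)
    right = np.roll(left, 5)             # ground-truth disparity: 5 samples

    fl, fr = dog_filter(left), dog_filter(right)
    scores = [np.dot(fl[20:-20], np.roll(fr, -d)[20:-20]) for d in range(-10, 11)]
    print("estimated disparity:", range(-10, 11)[np.argmax(scores)])   # -> 5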
