CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 67

_id avocaad_2001_16
id avocaad_2001_16
authors Yu-Ying Chang, Yu-Tung Liu, Chien-Hui Wong
year 2001
title Some Phenomena of Spatial Characteristics of Cyberspace
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary "Space," which has long been an important concept in architecture (Bloomer & Moore, 1977; Mitchell, 1995, 1999), has attracted interest of researchers from various academic disciplines in recent years (Agnew, 1993; Benko & Strohmayer, 1996; Chang, 1999; Foucault, 1982; Gould, 1998). Researchers from disciplines such as anthropology, geography, sociology, philosophy, and linguistics regard it as the basis of the discussion of various theories in social sciences and humanities (Chen, 1999). On the other hand, since the invention of Internet, Internet users have been experiencing a new and magic "world." According to the definitions in traditional architecture theories, "space" is generated whenever people define a finite void by some physical elements (Zevi, 1985). However, although Internet is a virtual, immense, invisible and intangible world, navigating in it, we can still sense the very presence of ourselves and others in a wonderland. This sense could be testified by our naming of Internet as Cyberspace -- an exotic kind of space. Therefore, as people nowadays rely more and more on the Internet in their daily life, and as more and more architectural scholars and designers begin to invest their efforts in the design of virtual places online (e.g., Maher, 1999; Li & Maher, 2000), we cannot help but ask whether there are indeed sensible spaces in Internet. And if yes, these spaces exist in terms of what forms and created by what ways?To join the current interdisciplinary discussion on the issue of space, and to obtain new definition as well as insightful understanding of "space", this study explores the spatial phenomena in Internet. We hope that our findings would ultimately be also useful for contemporary architectural designers and scholars in their designs in the real world.As a preliminary exploration, the main objective of this study is to discover the elements involved in the creation/construction of Internet spaces and to examine the relationship between human participants and Internet spaces. In addition, this study also attempts to investigate whether participants from different academic disciplines define or experience Internet spaces in different ways, and to find what spatial elements of Internet they emphasize the most.In order to achieve a more comprehensive understanding of the spatial phenomena in Internet and to overcome the subjectivity of the members of the research team, the research design of this study was divided into two stages. At the first stage, we conducted literature review to study existing theories of space (which are based on observations and investigations of the physical world). At the second stage of this study, we recruited 8 Internet regular users to approach this topic from different point of views, and to see whether people with different academic training would define and experience Internet spaces differently.The results of this study reveal that the relationship between human participants and Internet spaces is different from that between human participants and physical spaces. In the physical world, physical elements of space must be established first; it then begins to be regarded as a place after interaction between/among human participants or interaction between human participants and the physical environment. In contrast, in Internet, a sense of place is first created through human interactions (or activities), Internet participants then begin to sense the existence of a space. 
Therefore, it seems that, among the many spatial elements of Internet we found, "interaction/reciprocity" Ñ either between/among human participants or between human participants and the computer interface Ð seems to be the most crucial element.In addition, another interesting result of this study is that verbal (linguistic) elements could provoke a sense of space in a degree higher than 2D visual representation and no less than 3D visual simulations. Nevertheless, verbal and 3D visual elements seem to work in different ways in terms of cognitive behaviors: Verbal elements provoke visual imagery and other sensory perceptions by "imagining" and then excite personal experiences of space; visual elements, on the other hand, provoke and excite visual experiences of space directly by "mapping".Finally, it was found that participants with different academic training did experience and define space differently. For example, when experiencing and analyzing Internet spaces, architecture designers, the creators of the physical world, emphasize the design of circulation and orientation, while participants with linguistics training focus more on subtle language usage. Visual designers tend to analyze the graphical elements of virtual spaces based on traditional painting theories; industrial designers, on the other hand, tend to treat these spaces as industrial products, emphasizing concept of user-center and the control of the computer interface.The findings of this study seem to add new information to our understanding of virtual space. It would be interesting for future studies to investigate how this information influences architectural designers in their real-world practices in this digital age. In addition, to obtain a fuller picture of Internet space, further research is needed to study the same issue by examining more Internet participants who have no formal linguistics and graphical training.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 7ce5
authors Gal, Shahaf
year 1992
title Computers and Design Activities: Their Mediating Role in Engineering Education
source Sociomedia, ed. Edward Barrett. MIT Press
summary Sociomedia: With all the new words used to describe electronic communication (multimedia, hypertext, cyberspace, etc.), do we need another one? Edward Barrett thinks we do; hence, he coins the term "sociomedia." It is meant to displace a computing economy in which technicity is hypostasized over sociality. Sociomedia, a compilation of twenty-five articles on the theory, design and practice of educational multimedia and hypermedia, attempts to re-value the communicational face of computing. Value, of course, is "ultimately a social construct." As such, it has everything to do with knowledge, power, education and technology. The projects discussed in this book represent the leading edge of electronic knowledge production in academia (not to mention major funding) and are determining the future of educational media. For these reasons, Sociomedia warrants close inspection. Barrett's introduction sets the tone. For him, designing computer media involves hardwiring a mechanism for the social construction of knowledge (1). He links computing to a process of social and communicative interactivity for constructing and disseminating knowledge. Through a mechanistic mapping of the university as hypercontext (a huge network that includes classrooms as well as services and offices), Barrett models intellectual work in such a way as to avoid "limiting definitions of human nature or human development." Education, then, can remain "where it should be--in the human domain (public and private) of sharing ideas and information through the medium of language." By leaving education in a virtual realm (where we can continue to disagree about its meaning and execution), it remains viral, mutating and contaminating in an intellectually healthy way. He concludes that his mechanistic model, by means of its reductionist approach, preserves value (7). This "value" is the social construction of knowledge. While I support the social orientation of Barrett's argument, discussions of value are related to power. I am not referring to the traditional teacher-student power structure that is supposedly dismantled through cooperative and constructivist learning strategies. The power to be reckoned with in the educational arena is foundational, that which (pre)determines value and the circulation of knowledge. "Since each of you reading this paragraph has a different perspective on the meaning of 'education' or 'learning,' and on the processes involved in 'getting an education,' think of the hybris in trying to capture education in a programmable function, in a displayable object, in a 'teaching machine'" (7). Actually, we must think about that hybris because it is, precisely, what informs teaching machines. Moreover, the basic epistemological premises that give rise to such productions are too often assumed. In the case of instructional design, the episteme of the cognitive sciences is often taken for granted. It is ironic that many of the "postmodernists" who support electronic hypertextuality seem to have missed Jacques Derrida's and Michel Foucault's "deconstructions" of the epistemology underpinning the cognitive sciences (if not of epistemology itself). Perhaps it is the glitz of the technology that blinds some users (qua developers) to the belief systems operating beneath the surface. Barrett is not guilty of reactionary thinking or politics; he is, in fact, quite in line with much American deconstructive and postmodern thinking.
The problem arises in that he leaves open the definitions of "education," "learning" and "getting an education." One cannot engage in the production of new knowledge without orienting its design, production and dissemination, and without negotiating with others' orientations, especially where large-scale funding is involved. Notions of human nature and development are structural, even infrastructural, whatever the medium of the teaching machine. Although he addresses some dynamics of power, money and politics when he talks about the recession and its effects on the conference, they are readily visible dynamics of power (3-4). Where does the critical factor of value determination, of power, of who gets what and why, get mapped onto a mechanistic model of learning institutions? Perhaps a mapping of contributors' institutions, of the funding sources for the projects showcased and for participation in the conference, and of the disciplines receiving funding for these sorts of projects would help visualize the configurations of power operative in the rising field of educational multimedia. Questions of power and money notwithstanding, Barrett's introduction sets the social and textual thematics for the collection of essays. His stress on interactivity, on communal knowledge production, on the society of texts, and on media producers and users is carried forward through the other essays, two of which I will discuss. Section I of the book, "Perspectives...," highlights the foundations, uses and possible consequences of multimedia and hypertextuality. The second essay in this section, "Is There a Class in This Text?," plays on the robust exchange surrounding Stanley Fish's book, Is There a Text in This Class?, which presents an attack on authority in reading. The author, John Slatin, has introduced electronic hypertextuality and interaction into his courses. His article maps the transformations in "the content and nature of work, and the workplace itself"--which, in this case, is not industry but an English poetry class (25). Slatin discovered an increase of productive and cooperative learning in his electronically mediated classroom. For him, creating knowledge in the electronic classroom involves interaction between students, instructors and course materials through the medium of interactive written discourse. These interactions lead to a new and persistent understanding of the course materials and of the participants' relation to the materials and to one another. The work of the course is to build relationships that, in my view, constitute not only the meaning of individual poems, but poetry itself. The class carries out its work in the continual and usually interactive production of text (31). While I applaud his strategies, which dismantle traditional hierarchical structures in academia, the evidence does not convince me that the students know enough to ask important questions or to form a self-directing, learning community. Stanley Fish has not relinquished professing, though he, too, espouses the indeterminacy of the sign. By the fourth week of his course, Slatin's input is, by his own reckoning, reduced to 4% (39). In the transcript of the "controversial" Week 6 exchange on Gertrude Stein--the most disliked poet they were discussing at the time (40)--we see the blind leading the blind. One student parodies Stein for three lines and sums up his input with "I like it." Another finds Stein's poetry "almost completey [sic] lacking in emotion or any artistic merit" (emphasis added).
On what grounds has this student become an arbiter of "artistic merit"? Another student, after admitting being "lost" during the Wallace Stevens discussion, talks of having more "respect for Stevens' work than Stein's" and adds that Stein's poetry lacks "conceptual significance[, s]omething which people of varied opinion can intelligently discuss without feeling like total dimwits...." This student has progressed from admitted incomprehension of Stevens' work to imposing her (groundless) respect for his work over Stein's. Then, she exposes her real dislike for Stein's poetry: that she (the student) missed the "conceptual significance" and hence cannot, being a person "of varied opinion," intelligently discuss it "without feeling like [a] total dimwit." Slatin's comment is frightening: "...by this point in the semester students have come to feel increasingly free to challenge the instructor" (41). The students that I have cited are neither thinking critically nor having their preconceptions challenged by student-governed interaction. Thanks to the class format, one student feels self-righteous in her ignorance, and empowered to censure. I believe strongly in student empowerment in the classroom, but only once students have accrued enough knowledge to make informed judgments. Admittedly, Slatin's essay presents only partial data (there are six hundred pages of course transcripts!); still, I wonder how much valuable knowledge and metaknowledge was gained by the students. I also question the extent to which authority and professorial dictatorship were addressed in this course format. The power structures that make it possible for a college to require such a course, and the choice of texts and pedagogy, were not "on the table." The traditional professorial position may have been displaced, but what took its place?--the authority of consensus with its unidentifiable strong arm, and the faceless reign of software design? Despite Slatin's claim that the students learned about the learning process, there is no evidence (in the article) that the students considered where their attitudes came from, how consensus operates in the construction of knowledge, how power is established and what relationship they have to bureaucratic institutions. How do we, as teaching professionals, negotiate a balance between an enlightened despotism in education and student-created knowledge? Slatin, and other authors in this book, bring this fundamental question to the fore. There is no definitive answer because the factors involved are ultimately social, and hence always shifting and reconfiguring. Slatin ends his article with the caveat that computerization can bring about greater estrangement between students, faculty and administration through greater regimentation and control. Of course, it can also "distribute authority and power more widely" (50). Power or authority without a specific face, however, is not necessarily good or just. Shahaf Gal's "Computers and Design Activities: Their Mediating Role in Engineering Education" is found in the second half of the volume, and does not allow for a theory/praxis dichotomy. Gal recounts a brief history of engineering education up to the introduction of Growltiger (GT), a computer-assisted learning aid for design. He demonstrates GT's potential to impact the learning of engineering design by tracking its use by four students in a bridge-building contest.
What his text demonstrates clearly is that computers are "inscribing and imaging devices" that add another viewpoint to an on-going dialogue between student, teacher, earlier coursework, and other teaching/learning tools. The less proficient students made a serious error by relying too heavily on the technology, or treating it as a "blueprint provider." They "interacted with GT in a way that trusted the data to represent reality. They did not see their interaction with GT as a negotiation between two knowledge systems" (495). Students who were more thoroughly informed in engineering discourses knew to use the technology as one voice among others--they knew enough not simply to accept the input of the computer as authoritative. The less-advanced students learned a valuable lesson from the competition itself: the fact that their designs were not able to hold up under pressure (literally) brought the fact of their insufficient knowledge crashing down on them (and their bridges). They also had, post factum, several other designs to study, especially the winning one. Although competition and comparison are not good pedagogical strategies for everyone (in this case the competitors had volunteered), at some point what we think we know has to be challenged within the society of discourses to which it belongs. Students need critique in order to learn to push their learning into auto-critique. This is what is lacking in Slatin's discussion and in the writings of other avatars of constructivist, collaborative and computer-mediated pedagogies. Obviously there are differences between instrumental types of knowledge acquisition and discursive knowledge accumulation. Indeed, I do not promote the teaching of reading, thinking and writing as "skills" per se (then again, Gal's teaching of design is quite discursive, if not dialogic). Nevertheless, the "soft" sciences might benefit from "bridge-building" competitions or the re-institution of some forms of agonia. Not everything agonistic is inhuman agony--the joy of confronting or creating a sound argument supported by defensible evidence, for example. Students need to know that soundbites are not sound arguments despite predictions that electronic writing will be aphoristic rather than periodic. Just because writing and learning can be conceived of hypertextually does not mean that rigor goes the way of the dinosaur. Rigor and hypertextuality are not mutually incompatible. Nor are rigorous thinking and hard intellectual work unpleasurable, although American anti-intellectualism, especially in the mass media, would make them so. At a time when the spurious dogmatics of a Rush Limbaugh and Holocaust revisionist historians circulate "aphoristically" in cyberspace, and at a time when knowledge is becoming increasingly textualized, the role of critical thinking in education will ultimately determine the value(s) of socially constructed knowledge. This volume affords the reader an opportunity to reconsider knowledge, power, and new communications technologies with respect to social dynamics and power relationships.
series other
last changed 2003/04/23 15:14

_id sigradi2018_1482
id sigradi2018_1482
authors Goffinet de Almeida, Rafael; Lopes de Souza Santos, Fábio
year 2018
title Participation and contemporary spatialities: new technologies of social agency
source SIGraDi 2018 [Proceedings of the 22nd Conference of the Iberoamerican Society of Digital Graphics - ISSN: 2318-6968] Brazil, São Carlos 7 - 9 November 2018, pp. 1150-1158
summary Focusing on the Museu do Futebol and Google Campus – São Paulo, specifically their impacts on the spatial conventions of culture and labor, this article aims to investigate the main questions behind the contemporary phenomenon that erases previous boundaries between the two fields. Manuel Castells' concept of “informational economy” will be confronted with Michel Foucault's theoretical perspective on power devices, social agency and the fabrication of the neoliberal subject, to demonstrate how key terms such as participation, collaboration and interactivity – associated with informational technologies – are producing new spatialities that function as sophisticated forms of social behavior and experience control.
keywords Participation; Contemporary spatialities; Space and Power; Social agency
series SIGRADI
email
last changed 2021/03/28 19:58

_id cdrf2023_3
id cdrf2023_3
authors Sandra Manninger, Matias del Campo
year 2023
title Deep Mining Authorship
doi https://doi.org/10.1007/978-981-99-8405-3_1
source Proceedings of the 2023 DigitalFUTURES: The 5th International Conference on Computational Design and Robotic Fabrication (CDRF 2023)
summary Considering the emerging field of architecture and artificial intelligence, it might be necessary to contemplate the remodeling of the concept of authorship entirely. The invention of authorship is a complex historical process that can be traced back to the emergence of print culture in Europe in the 15th century. Prior to this period, most literary and artistic works were created anonymously or attributed to collective or anonymous sources, such as folklore or religious traditions. However, with the rise of printing, texts became more easily reproducible and marketable, and there emerged a need for individual authors to take credit for their works. The notion of authorship was closely tied to the idea of originality and ownership, as authors sought to assert their exclusive rights to their works and to distinguish themselves from other writers. This was supported by the development of copyright law, which granted legal protection to authors and their works, and helped to establish a market for literary and artistic works. The idea of the author as a singular, autonomous figure gained further prominence in the 18th and 19th centuries, with the emergence of romanticism and the cult of the individual. This period saw the rise of the idea of the artist as a genius, whose works were the product of their own unique creativity and imagination. This idea was further reinforced by the rise of literary criticism, which focused on the interpretation and analysis of individual works and their authors. However, as Michel Foucault and other scholars have argued, the notion of authorship is not a universal or timeless concept, but rather a historically contingent and culturally specific one. Different societies and cultures have different understandings of authorship, and these have shifted over time in response to changes in technology, culture, and social values. As it stands now, authorship in its traditional form can hardly be applied in a context where automated collaborations provide more than 50% of the generated material. This is already true for multiple art fields: Visual Arts (Mario Klingemann, Sofia Crespo, Memo Akten, Ooouch, etc.), Music (Dadabots, YACHT, Holly Herndon), Literature, etc. Very soon it will also be true for Architecture. The consequence is an entire rethinking of the concept of the sole genius. This notion, developed by German Romanticists in the early 19th century, is, in the current context of AI-assisted creativity, completely obsolete, as we are drawing from the genius of hundreds of thousands of artists and artworks in order to interrogate the latent space for unseen artistic opportunities. The process is more akin to an archeological dig leading to the discovery of a next-generation jet fighter plane.
series cdrf
email
last changed 2024/05/29 14:04

_id 05f0
authors Ball, A.A.
year 1977
title CONSURF Part 3 : How the Program Is Used
source Computer Aided Design. January, 1977. vol. 9: pp. 9-12 : ill. includes bibliography
summary This paper is the last of a series describing the surface lofting program CONSURF, and outlines how the program is used. The overall approach is geometrical and is modeled closely on manual lofting. The program user must have a practical understanding of shape and be able to visualize the surfaces he defines. He must also be numerate, but he does not need to understand the surface mathematics, which is confined to the software. In this paper, CONSURF is considered as a production program, and its contributions to the user are described.
keywords mechanical engineering, curved surfaces, lofting
series CADline
last changed 2003/06/02 13:58

_id ddssar0206
id ddssar0206
authors Bax, M.F.Th. and Trum, H.M.G.J.
year 2002
title Faculties of Architecture
source Timmermans, Harry (Ed.), Sixth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings Avegoor, the Netherlands), 2002
summary In order to be inscribed in the European Architect’s register, the study program leading to the diploma ‘Architect’ has to meet the criteria of the EC Architect’s Directive (1985). The criteria are enumerated in 11 principles of Article 3 of the Directive. The Advisory Committee, established by the European Council, was given the task of examining such diplomas in cases where doubts are raised by other Member States. To carry out this task a matrix was designed, as an independent interpreting framework that mediates between the principles of Article 3 and the actual study program of a faculty. Such a tool was needed because of inconsistencies in the list of principles, differences between linguistic versions of the Directive, and quantification problems with the time devoted to the principles in the study programs. The core of the matrix, its headings, is a categorisation of the principles on a higher level of abstraction in the form of a taxonomy of domains and corresponding concepts. Filling in the matrix means that each element of the study programs is analysed according to its content in terms of domains; the summation of study time devoted to the various domains results in a so-called ‘profile of a faculty’. Judgement of that profile takes place by a committee of peers. The domains of the taxonomy are intrinsically the same as the concepts and categories needed for the description of an architectural design object: the faculties of architecture. This correspondence relates the taxonomy to the field of design theory and philosophy. The taxonomy is an application of Domain theory. This theory, developed by the authors since 1977, takes the view that the architectural object can only be described fully as an integration of all types of domains. The theory supports the idea of a participatory and interdisciplinary approach to design, which proved to be rewarding from both a scientific and a social point of view. All types of domains have in common that they are measured in three dimensions: form, function and process, connecting the material aspects of the object with its social and procedural aspects. In the taxonomy the function dimension is emphasised. It will be argued in the paper that the taxonomy is a categorisation following the pragmatistic philosophy of Charles Sanders Peirce. It will be demonstrated as well that the taxonomy is easy to handle, by giving examples of its application in various countries over the last 5 years. The taxonomy proved to be an adequate tool for the judgement of study programs and their subsequent improvement, as constituted by the faculties of a Faculty of Architecture. The matrix is described as the result of theoretical reflection and practical application of a matrix already in use since 1995. The major improvement of the matrix is its direct connection with Peirce’s universal categories and the self-explanatory character of its structure. The connection with Peirce’s categories gave the matrix a more universal character, which enables application in other fields where the term ‘architecture’ is used as a metaphor for artefacts.
series DDSS
last changed 2003/11/21 15:16

_id ddssar0003
id ddssar0003
authors Bax, Th., Trum, H. and Nauta, D.jr.
year 2000
title Implications of the philosophy of Ch. S. Peirce for interdisciplinary design: developments in domain theory
source Timmermans, Harry (Ed.), Fifth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Nijkerk, the Netherlands)
summary The subject of this paper is the establishment of a connection between categorical pragmatism, developed by Charles Sanders Peirce (1839-1914) through phenomenological analysis, and Domain Theory, developed by Thijs Bax and Henk Trum since 1977. The first is a phenomenological branch of philosophy, the second a theory of interdisciplinary design. A connection seems possible because of their similarity in form (three-part divisions with an anarcho-hierarchical character), the non-absolute conception of functionality, and the interdisciplinary and procedural (participation-based action) character of both theories.
series DDSS
last changed 2003/11/21 15:16

_id ed51
authors Bergeron, Philippe
year 1986
title A General Version of Crow's Shadow Volumes
source IEEE Computer Graphics and Applications September, 1986. vol. 6: pp. 17-28 : col. ill. includes bibliography.
summary In 1977 Frank Crow introduced a new class of algorithms for the generation of shadows. His technique, based on the concept of shadow volumes, assumes a polygonal database and a constrained environment. For example, polyhedrons must be closed, and polygons must be planar. This article presents a new version of Crow's algorithm, developed at the Université de Montréal, which relaxes these constraints. The method allows the handling of both open and closed models and nonplanar polygons, with the viewpoint anywhere, including inside any shadow volume. It does not, however, sacrifice the essential features of Crow's original version: penetration between polygons is allowed, and any number of light sources can be defined anywhere in 3D space, including inside the view volume and any shadow volume. The method has been used successfully in the film Tony de Peltrie and is easily incorporated into an existing scan-line, hidden-surface algorithm.
keywords algorithms, shadowing, polygons, computer graphics
series CADline
last changed 1999/02/12 15:07

_id 4489
authors Blinn, J.F.
year 1977
title Models of light reflection for computer synthesised pictures
source Computer Graphics, 11 2, 192-198
summary Bui-Tuong Phong published his illumination model in 1975, in the paper titled "Illumination for Computer Generated Pictures". Phong's model is a local illumination model, which means only direct reflections are taken into account. Light that bounces off more than one surface before reaching the eye is not accounted for. While this may not be very realistic, it allows the lighting to be computed efficiently. To properly handle indirect lighting, a global illumination method such as radiosity is required, which is much more expensive. In addition to Phong's basic lighting equation, we will look at a variation invented by Jim Blinn. Blinn changed the way the specular term is calculated, making the computation slightly cheaper. Blinn published his approach in his paper "Models of Light Reflection for Computer Synthesised Pictures" in 1977.
series journal paper
last changed 2003/04/23 15:14
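
For readers comparing the two models named in this summary, the standard forms of the two specular terms are given below in conventional notation (a sketch, not drawn from either paper's text). Phong reflects the light direction L about the surface normal N and compares the result with the view direction V; Blinn instead compares N with a half-vector H, which avoids computing the reflection vector:

    \[ R = 2(N \cdot L)\,N - L, \qquad I_{Phong} = k_s\,(R \cdot V)^{n} \]
    \[ H = \frac{L + V}{\lVert L + V \rVert}, \qquad I_{Blinn} = k_s\,(N \cdot H)^{n'} \]

Here k_s is the specular coefficient and n, n' are shininess exponents. The two terms produce similar but not identical highlights, so the exponent must be adjusted to match their spreads.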

_id 2168
authors Bobrow, Daniel G. and Winograd, Terry
year 1977
title An Overview of KRL, a Knowledge Representation Language
source Cognitive Science. 1977. vol. 1: pp. 3-46. includes bibliography
summary This paper describes KRL, a Knowledge Representation Language designed for use in understander systems. It outlines both the general concepts which underlie the research and the details of KRL-0, an experimental implementation of some of these concepts. KRL is an attempt to integrate procedural knowledge with a broad base of declarative forms. These forms provide a variety of ways to express the logical structure of the knowledge, in order to give flexibility in associating procedures (for memory and reasoning) with specific pieces of knowledge, and to control the relative accessibility of different facts and descriptions. The formalism for declarative knowledge is based on structured conceptual objects with associated descriptions. These objects form a network of memory units with several different sorts of linkages, each having well-specified implications for the retrieval process. Procedures can be associated directly with the internal structure of a conceptual object. This procedural attachment allows the steps for a particular operation to be determined by characteristics of the specific entities involved. The control structure of KRL is based on the belief that the next generation of intelligent programs will integrate data-directed and goal-directed processing by using multiprocessing. It provides for a priority-ordered multiprocess agenda with explicit (user-provided) strategies for scheduling and resource allocation. It provides procedure directories which operate along with process frameworks to allow procedural parametrization of the fundamental system processes for building, comparing, and retrieving memory structures. Future development of KRL will include integrating procedure definition with the descriptive formalism.
keywords knowledge, representation, languages, AI
series CADline
last changed 2003/06/02 10:24

_id aef9
id aef9
authors Brown, A., Knight, M. and Berridge, P. (Eds.)
year 1999
title Architectural Computing from Turing to 2000 [Conference Proceedings]
doi https://doi.org/10.52842/conf.ecaade.1999
source eCAADe Conference Proceedings / ISBN 0-9523687-5-7 / Liverpool (UK) 15-17 September 1999, 773 p.
summary The core theme of this book is the idea of looking forward to where research and development in Computer Aided Architectural Design might be heading. The contention is that we can do so most effectively by using the developments that have taken place over the past three or four decades in Computing and Architectural Computing as our reference point; the past informing the future. The genesis of this theme is the fact that a new millennium is about to arrive. If we are ruthlessly objective, the year 2000 holds no more significance than any other year; perhaps we should, instead, be preparing for the year 2048 (2k). In fact, whatever the justification, it is now timely to review where we stand in terms of the development of Architectural Computing. This book aims to do that. It is salutary to look back at what writers and researchers have said in the past about where they thought that the developments in computing were taking us. One of the common themes picked up in the sections of this book is the developments that have been spawned by the global link-up that the World Wide Web offers us. In the past decade the scale and application of this new medium of communication have grown at a remarkable rate. There are few technological developments that have become so ubiquitous, so quickly. As a consequence there are particular sections in this book on Communication and the Virtual Design Studio, which reflect the prominence of this new area, but examples of its application are scattered throughout the book. In 'Computer-Aided Architectural Design' (1977), Bill Mitchell suggested that computing would shift from expensive centralised locations to affordable, commonplace, decentralised facilities accessible over networks. But most pundits have been taken by surprise by just how powerful the explosive cocktail of networks, email and hypertext has proven to be. Each of the ingredients is interesting in its own right, but together they have presented us with genuinely new ways of working. Perhaps, with foresight, we can see what the next new explosive cocktail might be.
series eCAADe
email
more http://www.ecaade.org
last changed 2022/06/07 07:49

_id 092b
authors Burton, Warren
year 1977
title Representation of Many-Sided Polygons and Polygonal Lines for Rapid Processing
source Communications of the ACM. March, 1977. vol. 20: pp. 166-171 : ill. includes bibliography
summary A representation for polygons and polygonal lines is described which allows sets of consecutive sides to be collectively examined. The sets of sides are arranged in a binary tree hierarchy by inclusion. A fast algorithm for testing the inclusion of a point in a many-sided polygon is given. The speed of the algorithm is discussed for both ideal and practical examples. It is shown that the points of intersection of two polygonal lines can be located by what is essentially a binary tree search. The algorithm and a practical example are discussed. The representation overcomes many of the disadvantages associated with the various fixed-grid methods for representing curves and regions.
keywords representation, GIS, mapping, computer graphics, algorithms, information, intersection, curves, polygons, B-rep
series CADline
last changed 1999/02/12 15:07
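
A minimal sketch of the general idea this summary describes -- consecutive sides grouped in a binary tree so that whole groups can be rejected at once during a point-in-polygon test. The node layout and names are illustrative assumptions, not Burton's actual representation:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    Point = Tuple[float, float]
    Edge = Tuple[Point, Point]

    @dataclass
    class Node:
        xmin: float
        ymin: float
        xmax: float
        ymax: float
        edge: Optional[Edge] = None       # set on leaves only
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def build(edges):
        # Arrange consecutive sides in a binary tree; every node carries
        # the bounding box of all the sides beneath it.
        if len(edges) == 1:
            (x1, y1), (x2, y2) = edges[0]
            return Node(min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2), edge=edges[0])
        mid = len(edges) // 2
        l, r = build(edges[:mid]), build(edges[mid:])
        return Node(min(l.xmin, r.xmin), min(l.ymin, r.ymin),
                    max(l.xmax, r.xmax), max(l.ymax, r.ymax), left=l, right=r)

    def crossings(node, px, py):
        # Count crossings of the rightward ray from (px, py), pruning any
        # subtree whose bounding box cannot intersect the ray.
        if node.ymin > py or node.ymax <= py or node.xmax < px:
            return 0
        if node.edge is not None:
            (x1, y1), (x2, y2) = node.edge
            if (y1 <= py) == (y2 <= py):  # half-open rule avoids double counting
                return 0
            x = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            return 1 if x > px else 0
        return crossings(node.left, px, py) + crossings(node.right, px, py)

    def inside(root, px, py):
        return crossings(root, px, py) % 2 == 1

    # Usage: a square, queried at an interior and an exterior point.
    pts = [(0, 0), (4, 0), (4, 4), (0, 4)]
    tree = build(list(zip(pts, pts[1:] + pts[:1])))
    print(inside(tree, 2, 2), inside(tree, 5, 2))   # True False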

_id 22ce
authors Cahn, Deborah U., Johnston, Nancy E. and Johnston, William E.
year 1979
title A Response to the 1977 GSPC Core Graphic System
source SIGGRAPH '79 Conference Proceedings. August, 1979. vol. 13 ; no. 2: pp. 57-62. includes bibliography
summary This paper responds to the 1977 Core Graphics System of SIGGRAPH's Graphics Standards Planning Committee (GSPC). The authors are interested in low-level device-independent graphics for applications doing data representation and annotation. The level structure and bias in the core system toward display list processor graphics are criticized. Specific issues discussed include display contexts, attributes, current position, 3-dimensional graphics, area filling, and graphics input
keywords computer graphics, standards
series CADline
last changed 2003/06/02 13:58

_id 490d
authors De Groot, D.J.
year 1977
title Designing Curved Surfaces with Analytical Functions
source Computer Aided Design. January, 1977. vol. 9: pp. 3-8 : ill
summary Shaping and computer-interactive design of the curved surfaces of industrial objects, where artistic freedom is allowed in the outward appearance, is a time-consuming job, particularly when supplying the computer program with the necessary geometrical input data. A design method is presented, together with practical results: designed surfaces composed of simple analytical functions. Human input of geometrical and artistic data has been minimized. Smoothness and fairness are created by the surface-composing functions.
keywords curved surfaces, representation, CAD, systems
series CADline
last changed 2003/06/02 13:58

_id sigradi2009_774
id sigradi2009_774
authors de Souza, Raphael Argento; André Soares Monat
year 2009
title Visualização da Informação em meio telejornalístico: Uma abordagem sob a ótica do design [Information Visualization in the news television: An approach under the design sight]
source SIGraDi 2009 - Proceedings of the 13th Congress of the Iberoamerican Society of Digital Graphics, Sao Paulo, Brazil, November 16-18, 2009
summary This article proposes a classification, from the information visualization point of view, of the infographics broadcast on Brazilian television news. To this end, these so-called motion graphics were analysed on the basis formed by three main authors: Tufte (1997), Bertin (1977) and Spence (2007), whose theories are compared here with the digital medium of motion graphics. With this theoretical foundation and the analysis of two hundred motion graphics broadcast on Brazilian television news, we arrived at a classification that covers every type of these motion graphics, which we hope will become a basis for the study of such projects.
keywords Design; information visualization; television infographics, motion graphics; information design
series SIGRADI
email
last changed 2016/03/10 09:50

_id sigradi2006_e028c
id sigradi2006_e028c
authors Griffith, Kenfield; Sass, Larry and Michaud, Dennis
year 2006
title A strategy for complex-curved building design: Design structure with Bi-lateral contouring as integrally connected ribs
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 465-469
summary Shapes in designs created by architects such as Gehry Partners (Shelden, 2002), Foster and Partners, and Kohn Pedersen Fox rely on computational processes for rationalizing complex geometry for building construction. Rationalization is the reduction of a complete geometric shape into discrete components. Unfortunately, for many architects rationalization is limited to reducing solid models to surfaces, or to data on spreadsheets for contractors to follow. Rationalized models produced by the firms listed above do not offer strategies for construction or digital fabrication. For the physical production of CAD descriptions, an alternative to the rationalized description is needed. This paper examines the coupling of digital rationalization and digital fabrication with physical mockups (Rich, 1989). Our aim is to explore complex relationships found in early and mid-stage design phases when digital fabrication is used to produce design outcomes. The results of our investigation will aid architects and engineers in addressing the complications found in the translation of design models embedded with precision to constructible geometries. We present an algorithmically based approach to design rationalization that supports physical production as well as surface production of desktop models. Our approach is an alternative to conventional rapid prototyping, which builds objects by assembling laterally sliced contours from a solid model. We explored an improved product description for rapid manufacture: bilateral contouring for structure and panelling for strength (Kolarevic, 2003). Typically found within the aerospace, automotive, and shipbuilding industries, bilateral contouring is an organized matrix of horizontal and vertical interlocking ribs evenly distributed along a surface. These structures are monocoque and semi-monocoque assemblies composed of structural ribs and skinning attached by rivets and adhesives. The bi-lateral contouring discussed here is an interlocking matrix of plywood strips with integral joinery for assembly. Unlike traditional methods of building representations through malleable materials for creating tangible objects (Friedman, 2002), this approach constructs with the implication of building life-size solutions. Three algorithms are presented as examples of rationalized design production with physical results. The first algorithm [Figure 1] deconstructs an initial 2D curved form into ribbed slices to be assembled through integral connections constructed as part of the rib solution. The second algorithm [Figure 2] deconstructs curved forms of greater complexity: the algorithm walks along the surface, extracting information along the horizontal and vertical axes, resulting in a ribbed structure of slight double curvature. The final algorithm [Figure 3] is expressed as plug-in software for Rhino that deconstructs a design into components for assembly as rib structures. The plug-in also translates geometries to a flattened position for 2D fabrication. The software demonstrates the full scope of the research exploration. Studies published by Dodgson argued that innovation technology (IvT) (Dodgson, Gann, Salter, 2004) helped in solving projects like the Guggenheim in Bilbao, the Leaning Tower of Pisa in Italy, and the Millennium Bridge in London. Similarly, the method discussed in this paper will aid in solving physical production problems with complex building forms. References Bentley, P.J. (Ed.).
Evolutionary Design by Computers. Morgan Kaufman Publishers Inc. San Francisco, CA, 1-73 Celani, G, (2004) “From simple to complex: using AutoCAD to build generative design systems” in: L. Caldas and J. Duarte (org.) Implementations issues in generative design systems. First Intl. Conference on Design Computing and Cognition, July 2004 Dodgson M, Gann D.M., Salter A, (2004), “Impact of Innovation Technology on Engineering Problem Solving: Lessons from High Profile Public Projects,” Industrial Dynamics, Innovation and Development, 2004 Dritsas, (2004) “Design Operators.” Thesis. Massachusetts Institute of Technology, Cambridge, MA, 2004 Friedman, M, (2002), Gehry Talks: Architecture + Practice, Universe Publishing, New York, NY, 2002 Kolarevic, B, (2003), Architecture in the Digital Age: Design and Manufacturing, Spon Press, London, UK, 2003 Opas J, Bochnick H, Tuomi J, (1994), “Manufacturability Analysis as a Part of CAD/CAM Integration”, Intelligent Systems in Design and Manufacturing, 261-292 Rudolph S, Alber R, (2002), “An Evolutionary Approach to the Inverse Problem in Rule-Based Design Representations”, Artificial Intelligence in Design ’02, 329-350 Rich M, (1989), Digital Mockup, American Institute of Aeronautics and Astronautics, Reston, VA, 1989 Schön, D., The Reflective Practitioner: How Professionals Think in Action. Basic Books. 1983 Shelden, D, (2003), “Digital Surface Representation and the Constructability of Gehry’s Architecture.” Diss. Massachusetts Institute of Technology, Cambridge, MA, 2003 Smithers T, Conkie A, Doheny J, Logan B, Millington K, (1989), “Design as Intelligent Behaviour: An AI in Design Thesis Programme”, Artificial Intelligence in Design, 293-334 Smithers T, (2002), “Synthesis in Designing”, Artificial Intelligence in Design ’02, 3-24 Stiny, G, (1977), “Ice-ray: a note on the generation of Chinese lattice designs” Environment and Planning B, volume 4, pp. 89-98
keywords Digital fabrication; bilateral contouring; integral connection; complex-curve
series SIGRADI
email
last changed 2016/03/10 09:52
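
As a loose illustration of the rib-extraction idea in the second algorithm above -- walking a doubly curved surface along its two parameter directions -- the sketch below samples iso-parameter curves of a parametric patch as rib centrelines. The surface function and all names are hypothetical, not the authors' Rhino plug-in:

    import numpy as np

    def rib_curves(surface, n_u=8, n_v=8, samples=50):
        # Walk the surface in both parameter directions, recording one
        # polyline per rib: the matrix of horizontal and vertical ribs
        # that bilateral contouring interlocks.
        t = np.linspace(0.0, 1.0, samples)
        u_ribs = [np.array([surface(u, s) for s in t]) for u in np.linspace(0, 1, n_u)]
        v_ribs = [np.array([surface(s, v) for s in t]) for v in np.linspace(0, 1, n_v)]
        return u_ribs, v_ribs

    # Example: a gently double-curved patch. At each u/v crossing a real
    # implementation would also cut slots for the integral joints.
    dome = lambda u, v: (u, v, 0.3 * np.sin(np.pi * u) * np.sin(np.pi * v))
    u_ribs, v_ribs = rib_curves(dome)
    print(len(u_ribs), u_ribs[0].shape)   # 8 (50, 3)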

_id 76ce
authors Grimson, W.
year 1985
title Computational Experiments with a Feature Based Stereo Algorithm
source IEEE Trans. Pattern Anal. Machine Intell., Vol. PAMI-7, No. 1
summary Computational models of the human stereo system can provide insight into general information-processing constraints that apply to any stereo system, either artificial or biological. In 1977, Marr and Poggio proposed one such computational model, characterized as matching certain feature points in difference-of-Gaussian filtered images, and using the information obtained by matching coarser resolution representations to restrict the search space for matching finer resolution representations. An implementation of the algorithm and its testing on a range of images was reported in 1980. Since then a number of psychophysical experiments have suggested possible refinements to the model and modifications to the algorithm. As well, recent computational experiments applying the algorithm to a variety of natural images, especially aerial photographs, have led to a number of modifications. In this article, we present a version of the Marr-Poggio-Grimson algorithm that embodies these modifications and illustrate its performance on a series of natural images.
series journal paper
last changed 2003/04/23 15:14
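
The difference-of-Gaussian filtering mentioned in this summary has a standard closed form, reproduced here for orientation (conventional notation and the usual ratio of widths, not taken from the paper itself):

    \[ DoG(x, y) = \frac{1}{2\pi\sigma_e^{2}}\, e^{-(x^{2}+y^{2})/2\sigma_e^{2}} - \frac{1}{2\pi\sigma_i^{2}}\, e^{-(x^{2}+y^{2})/2\sigma_i^{2}}, \qquad \sigma_i \approx 1.6\,\sigma_e \]

The image is convolved with this kernel at several widths; zero-crossings of the filtered image supply the feature points to be matched, and matches found at coarse widths restrict the disparity search range at finer ones.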

_id ecaade2009_177
id ecaade2009_177
authors Göttig, Roland; Braunes, Jörg
year 2009
title Building Survey in Combination with Building Information Modelling for the Architectural Planning Process
doi https://doi.org/10.52842/conf.ecaade.2009.069
source Computation: The New Realm of Architectural Design [27th eCAADe Conference Proceedings / ISBN 978-0-9541183-8-9] Istanbul (Turkey) 16-19 September 2009, pp. 69-74
summary The architectural planning process is influenced by social, cultural and technical aspects (Alexander, 1977). When focussing on computer-based planning for the retrofitting or modification of buildings, it becomes clear that many different data formats are used, depending on a great variety of planning methods. Moreover, if building information models are utilized, they still lack some essential criteria: it is rarely possible to attach individual data from survey systems. This paper will show both a way to add data from building survey systems, as an example of attaching specialised data to IFC files, and how to utilize content management systems for IFC files, derived plans, lists of building components, and other data necessary in a planning process.
wos WOS:000334282200007
keywords Planning process, building information modeling, IFC, building survey systems, content management systems
series eCAADe
email
last changed 2022/06/07 07:50

_id 20a5
authors Kieburtz, Richard B.
year 1977
title Structured Programming and Problem- Solving with PASCAL
source xiii, 348 p. : ill. Englewood cliffs, New Jersey: Prentice-Hall, Inc., 1977. includes index
summary An introduction emphasizing the problem-solving approach to computing, progressing from the development of a systematic and disciplined approach to the discovery of algorithms. Includes examples and exercises
keywords PASCAL, programming, languages, problem solving, education
series CADline
last changed 2003/06/02 13:58

_id e5a1
authors Korf, R.E.
year 1977
title A Shape Independent Theory of Space Allocation
source Environment and Planning B. 1977. vol. 4: pp. 37-50 : ill. includes bibliography
summary A theory of space allocation in architectural design is presented. The theory is completely independent of the shapes of the spaces. The problem is broken down into four hierarchical levels of abstraction. The top level is the number of spaces. The second level consists of the adjacencies between the spaces, represented as abstract graphs. The third level is concerned with the different planar embeddings or geometries of the adjacency graphs. The bottom level is represented by labelled bubble diagrams. At each level, the number of design alternatives is finite and it is shown how they can be systematically enumerated
keywords space allocation, synthesis, architecture, design, graphs, layout, algorithms
series CADline
last changed 2003/06/02 13:58
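
Since the central claim of the summary above is that the design alternatives at every level are finite and systematically enumerable, a small sketch of the second level -- enumerating candidate adjacency graphs for n spaces -- may help. The filtering criteria shown (connectedness, and planarity as a precondition for the third level's planar embeddings) are illustrative assumptions:

    import itertools
    import networkx as nx

    def adjacency_alternatives(n):
        # Level 2 of the hierarchy: enumerate adjacency structures on n
        # spaces, keeping those that are connected and planar, so that a
        # planar embedding (level 3) can exist.
        nodes = range(n)
        candidate_edges = list(itertools.combinations(nodes, 2))
        for k in range(n - 1, len(candidate_edges) + 1):
            for edges in itertools.combinations(candidate_edges, k):
                g = nx.Graph(list(edges))
                g.add_nodes_from(nodes)
                if nx.is_connected(g) and nx.check_planarity(g)[0]:
                    yield g

    # The number of alternatives is finite, as the theory requires.
    print(sum(1 for _ in adjacency_alternatives(4)))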
