CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.

Hits 1 to 20 of 66

_id c7e9
authors Maver, T.W.
year 2002
title Predicting the Past, Remembering the Future
source SIGraDi 2002 - [Proceedings of the 6th Iberoamerican Congress of Digital Graphics] Caracas (Venezuela) 27-29 november 2002, pp. 2-3
summary Charlas Magistrales 2 [Keynote Lectures 2]. There has never been such an exciting moment in the extraordinary 30-year history of our subject area as NOW, when the philosophical, theoretical and practical issues of virtuality are taking centre stage.
The Past. There have, of course, been other defining moments during these exciting 30 years:
• the first algorithms for generating building layouts (circa 1965);
• the first use of computer graphics for building appraisal (circa 1966);
• the first integrated package for building performance appraisal (circa 1972);
• the first computer-generated perspective drawings (circa 1973);
• the first robust drafting systems (circa 1975);
• the first dynamic energy models (circa 1982);
• the first photorealistic colour imaging (circa 1986);
• the first animations (circa 1988);
• the first multimedia systems (circa 1995); and
• the first convincing demonstrations of virtual reality (circa 1996).
Whereas the CAAD community has been hugely inventive in the development of ICT applications to building design, it has been woefully remiss in its attempts to evaluate the contribution of those developments to the quality of the built environment or to the efficiency of the design process. In the absence of any real evidence, one can only conjecture regarding the real benefits, which fall, it is suggested, under the following headings:
• Verisimilitude: the extraordinary quality of still and animated images of the formal qualities of the interiors and exteriors of individual buildings and of whole neighborhoods must surely give great comfort to practitioners and their clients that what is intended, formally, is what will be delivered, i.e. WYSIWYG - what you see is what you get.
• Sustainability: the power of «first-principle» models of the dynamic energetic behaviour of buildings in response to changing diurnal and seasonal conditions has the potential to save millions of dollars and dramatically to reduce the damaging environmental pollution created by badly designed and managed buildings.
• Productivity: CAD is now a multi-billion dollar business which offers design decision support systems that operate, effectively, across continents, time-zones, professions and companies.
• Communication: multimedia technology - cheap to deliver but high in value - is changing the way in which we can explain and understand the past and envisage and anticipate the future; virtual past and virtual future!
Macromyopia. The late John Lansdown offered the view, in his wonderfully prophetic way, that "...the future will be just like the past, only more so...". So what can we expect the extraordinary trajectory of our subject area to be? To have any chance of being accurate we have to have an understanding of the phenomenon of macromyopia: the phenomenon, exhibited by society, of greatly exaggerating the immediate short-term impact of new technologies (particularly the information technologies) but, more importantly, seriously underestimating their sustained long-term impacts - socially, economically and intellectually.
Examples of flawed predictions regarding the future application of information technologies include:
• The British Government in 1880 declined to support the idea of a national telephonic system, backed by the argument that there were sufficient small boys in the countryside to run with messages.
• Alexander Bell was modest enough to say: «I am not boasting or exaggerating but I believe, one day, there will be a telephone in every American city».
• Tom Watson, in 1943, said: «I think there is a world market for about 5 computers».
• In 1977, Ken Olsen of Digital said: «There is no reason for any individuals to have a computer in their home».
The Future. Just as the ascent of woman/man-kind can be attributed to her/his capacity to discover amplifiers of the modest human capability, so we shall discover how best to exploit our most important amplifier - that of the intellect. The more we know the more we can figure; the more we can figure the more we understand; the more we understand the more we can appraise; the more we can appraise the more we can decide; the more we can decide the more we can act; the more we can act the more we can shape; and the more we can shape, the better the chance that we can leave for future generations a truly sustainable built environment which is fit-for-purpose, cost-beneficial, environmentally friendly and culturally significant. Central to this aspiration will be our understanding of the relationship between real and virtual worlds and how to move effortlessly between them. We need to be able to design, from within the virtual world, environments which may be real or may remain virtual or, perhaps, be part real and part virtual. What is certain is that the next 30 years will be every bit as exciting and challenging as the first 30 years.
series SIGRADI
email
last changed 2016/03/10 09:55

_id 8c27
authors Kalay, Yehuda E.
year 1982
title Determining the Spatial Containment of a Point in General Polyhedra
source Computer graphics and Image Processing. 1982. vol. 19: pp. 303-334 : ill. includes bibliography. See also criticism and improvements in Orlowski, Marian
summary Determining the inclusion of a point in volume-enclosing polyhedra (shapes) in 3D space is, in principle, the extension of the well-known problem of determining the inclusion of a point in a polygon in 2D space. However, the extra degree of freedom makes 3D point-polyhedron containment analysis much more difficult to solve than the 2D point-polygon problem, mainly because of the nonsequential ordering of the shape elements, which requires global shape data to be applied for resolving special cases. Two general O(n) algorithms for solving the problem by reducing the 3D case into the solvable 2D case are presented. The first algorithm, denoted 'the projection method,' is applicable to any planar-faced polyhedron, reducing the dimensionality by employing parallel projection to generate planar images of the shape faces, together with an image of the point being tested for inclusion. The containment relationship of these images is used to increment a global parity-counter when appropriate, representing an abstraction for counting the intersections between the surface of the shape and a halfline extending from the point to infinity. An 'inside' relationship is established when the parity-count is odd. Special cases (coincidence of the halfline with edges or vertices of the shape) are resolved by eliminating the coincidental elements and re-projecting the merged faces. The second algorithm, denoted 'the intersection method,' is applicable to any well-formed shape, including curved-surfaced ones. It reduces the dimensionality by intersecting the polygonal trace of the shape surface at the plane of intersection, which is tested for containing the trace of the point in the plane, directly establishing the overall 3D containment relationship. A particular O(n) implementation of the 2D point-in-polygon inclusion algorithm, which is used for solving the problem once reduced in dimensionality, is also presented. The presentation is complemented by discussions of the problems associated with point-polyhedron relationship determination in general, and comparative analysis of the two particular algorithms presented
keywords geometric modeling, point inclusion, polygons, polyhedra, computational geometry, algorithms, search, B-rep
series CADline
email
last changed 2003/06/02 10:24
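
Both of the algorithms summarised above reduce the 3D containment question to the classic 2D point-in-polygon parity test. The following is a minimal sketch of that 2D sub-problem only (ray crossing with a parity counter), assuming a simple polygon given as an ordered vertex list; it is not Kalay's full O(n) polyhedron algorithms, and the special coincidence cases the paper resolves explicitly are not handled.

```python
def point_in_polygon(px, py, vertices):
    """Parity (crossing-number) test: cast a ray from (px, py) toward +x and
    count crossings with polygon edges; an odd count means 'inside'.
    `vertices` is an ordered list of (x, y) tuples. Degenerate cases
    (point exactly on an edge or vertex) are not handled here."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > py) != (y2 > py):                    # edge straddles the line y = py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:                          # crossing lies to the right of the point
                inside = not inside                   # flip the parity counter
    return inside

# Example: a unit square
print(point_in_polygon(0.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
print(point_in_polygon(1.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # False
```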

_id e5d0
authors Lowe, John P.
year 1994
title Computer-Aided-Design in the Studio Setting: A Paradigm Shift in Architectural Education
source The Virtual Studio [Proceedings of the 12th European Conference on Education in Computer Aided Architectural Design / ISBN 0-9523687-0-6] Glasgow (Scotland) 7-10 September 1994, p. 230
doi https://doi.org/10.52842/conf.ecaade.1994.x.g6j
summary The introduction of the personal computer in 1982 set forth a revolution that will continue to transform the profession of architecture. Most architectural practices in America have embraced this revolution, realizing the potential of the computer. However, education seems to have been slower in accepting the potential and challenges of computers. Computer technology will change the design studio setting and therefore the fundamental way architects are educated. The Department of Architecture at Kansas State University has made a commitment to move toward a computer-based design studio. In the fall of 1990, discussions began among the faculty to search for the placement of a computer studio within the five-year program. Curriculum, staffing, and funding were issues that had to be overcome to make this commitment work. The strategy that was adopted involved placing the computer studio at the fourth-year level in phase one. Phase two will progress as more staff are trained on the computer and course work is adapted to accommodate computer-based design studios at other year levels. Funding was a major obstacle. The decision was made to move from a position of being the primary supplier of computing technology to one of support for student-purchased computers. This strategy relieved the department of maintaining and upgrading the technology. There was great enthusiasm and support from the faculty as a whole for the use of computers in the studio setting. However, the pedagogical impacts of such a change are just beginning to be realized.
series eCAADe
last changed 2022/06/07 07:50

_id 2786
authors Woodwark, J.R.
year 1989
title Splitting Set-Theoretic Solid Models into Connected Components
source 10 p. : ill. Winchester: IBM UK Scientific Center, IBM United Kingdom Laboratories Limited, June, 1989. IBM UKSC 210. includes bibliography. In general, there is no way to tell how many pieces (connected components) a set-theoretic (CSG) solid model represents, except via conversion to a boundary model. Recent work on the elimination of redundant primitives has been linked with techniques for identifying connected components in quad-trees and oct-trees into a strategy to attack this problem. Some success has been achieved, and an experimental Prolog program, working in two dimensions, that finds connected components and determines the set-theoretic representation of each component, is reported, and further developments proposed. CSG / quadtree / octree / primitives / algorithms. See also: Woodwark, J.R. and Quinlan, K.M., 'Reducing the Effect of Complexity on Volume Model Evaluation', Computer Aided Design, April 1982, pp. 89-95 : ill. includes bibliography.
summary A major problem with volume modelling systems is that processing times may increase with model complexity in a worse than linear fashion. The authors have addressed this problem, for picture generation, by repeatedly dividing the space occupied by a model, and evaluating the sub-models created only when they meet a criterion of simplicity. Hidden surface elimination has been integrated with evaluation, in such a way that major portions of the model which are not visible are never evaluated. An example demonstrates a better than linear relationship between model complexity and computation time, and also shows the effect of picture complexity on the performance of the process
keywords CAD, computational geometry, solid modeling, geometric modeling, algorithms, hidden surfaces, CSG
series CADline
last changed 2003/06/02 13:58
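
The summary above describes repeatedly dividing the space occupied by a model and evaluating sub-models only once they are simple enough. Below is a minimal sketch of that divide-until-simple control flow, in 2D with discs standing in for CSG primitives; the quadtree split, the primitive-count simplicity criterion and the bounding-box pruning are assumptions chosen for illustration, and the hidden-surface pruning integrated in the authors' system is not modelled.

```python
def overlaps(region, disc):
    """Conservative test: does the disc's bounding box overlap the region?
    region = (xmin, ymin, xmax, ymax); disc = (cx, cy, r)."""
    xmin, ymin, xmax, ymax = region
    cx, cy, r = disc
    return not (cx + r < xmin or cx - r > xmax or cy + r < ymin or cy - r > ymax)

def evaluate(region, discs, max_primitives=2, min_size=0.1, depth=0):
    """Quadtree-style recursion: prune primitives that cannot affect the region,
    split until the remaining sub-model is simple (few primitives) or the
    region is tiny, then evaluate the simplified sub-model there."""
    local = [d for d in discs if overlaps(region, d)]
    xmin, ymin, xmax, ymax = region
    if len(local) <= max_primitives or (xmax - xmin) <= min_size:
        print("  " * depth + f"{region} -> evaluate against {len(local)} primitives")
        return
    xm, ym = (xmin + xmax) / 2, (ymin + ymax) / 2      # split into four quadrants
    for sub in [(xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
                (xmin, ym, xm, ymax), (xm, ym, xmax, ymax)]:
        evaluate(sub, local, max_primitives, min_size, depth + 1)

# A toy "model" of three overlapping discs, evaluated over the unit square.
evaluate((0.0, 0.0, 1.0, 1.0), [(0.3, 0.3, 0.2), (0.6, 0.6, 0.25), (0.8, 0.2, 0.1)])
```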

_id avocaad_2001_16
id avocaad_2001_16
authors Yu-Ying Chang, Yu-Tung Liu, Chien-Hui Wong
year 2001
title Some Phenomena of Spatial Characteristics of Cyberspace
source AVOCAAD - ADDED VALUE OF COMPUTER AIDED ARCHITECTURAL DESIGN, Nys Koenraad, Provoost Tom, Verbeke Johan, Verleye Johan (Eds.), (2001) Hogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel, ISBN 80-76101-05-1
summary "Space," which has long been an important concept in architecture (Bloomer & Moore, 1977; Mitchell, 1995, 1999), has attracted interest of researchers from various academic disciplines in recent years (Agnew, 1993; Benko & Strohmayer, 1996; Chang, 1999; Foucault, 1982; Gould, 1998). Researchers from disciplines such as anthropology, geography, sociology, philosophy, and linguistics regard it as the basis of the discussion of various theories in social sciences and humanities (Chen, 1999). On the other hand, since the invention of Internet, Internet users have been experiencing a new and magic "world." According to the definitions in traditional architecture theories, "space" is generated whenever people define a finite void by some physical elements (Zevi, 1985). However, although Internet is a virtual, immense, invisible and intangible world, navigating in it, we can still sense the very presence of ourselves and others in a wonderland. This sense could be testified by our naming of Internet as Cyberspace -- an exotic kind of space. Therefore, as people nowadays rely more and more on the Internet in their daily life, and as more and more architectural scholars and designers begin to invest their efforts in the design of virtual places online (e.g., Maher, 1999; Li & Maher, 2000), we cannot help but ask whether there are indeed sensible spaces in Internet. And if yes, these spaces exist in terms of what forms and created by what ways?To join the current interdisciplinary discussion on the issue of space, and to obtain new definition as well as insightful understanding of "space", this study explores the spatial phenomena in Internet. We hope that our findings would ultimately be also useful for contemporary architectural designers and scholars in their designs in the real world.As a preliminary exploration, the main objective of this study is to discover the elements involved in the creation/construction of Internet spaces and to examine the relationship between human participants and Internet spaces. In addition, this study also attempts to investigate whether participants from different academic disciplines define or experience Internet spaces in different ways, and to find what spatial elements of Internet they emphasize the most.In order to achieve a more comprehensive understanding of the spatial phenomena in Internet and to overcome the subjectivity of the members of the research team, the research design of this study was divided into two stages. At the first stage, we conducted literature review to study existing theories of space (which are based on observations and investigations of the physical world). At the second stage of this study, we recruited 8 Internet regular users to approach this topic from different point of views, and to see whether people with different academic training would define and experience Internet spaces differently.The results of this study reveal that the relationship between human participants and Internet spaces is different from that between human participants and physical spaces. In the physical world, physical elements of space must be established first; it then begins to be regarded as a place after interaction between/among human participants or interaction between human participants and the physical environment. In contrast, in Internet, a sense of place is first created through human interactions (or activities), Internet participants then begin to sense the existence of a space. 
Therefore, it seems that, among the many spatial elements of the Internet we found, "interaction/reciprocity" (either between/among human participants or between human participants and the computer interface) is the most crucial element. In addition, another interesting result of this study is that verbal (linguistic) elements could provoke a sense of space to a degree higher than 2D visual representation and no less than 3D visual simulations. Nevertheless, verbal and 3D visual elements seem to work in different ways in terms of cognitive behaviors: verbal elements provoke visual imagery and other sensory perceptions by "imagining" and then excite personal experiences of space; visual elements, on the other hand, provoke and excite visual experiences of space directly by "mapping". Finally, it was found that participants with different academic training did experience and define space differently. For example, when experiencing and analyzing Internet spaces, architecture designers, the creators of the physical world, emphasize the design of circulation and orientation, while participants with linguistics training focus more on subtle language usage. Visual designers tend to analyze the graphical elements of virtual spaces based on traditional painting theories; industrial designers, on the other hand, tend to treat these spaces as industrial products, emphasizing the concept of user-centeredness and the control of the computer interface. The findings of this study seem to add new information to our understanding of virtual space. It would be interesting for future studies to investigate how this information influences architectural designers in their real-world practices in this digital age. In addition, to obtain a fuller picture of Internet space, further research is needed to study the same issue by examining more Internet participants who have no formal linguistics or graphical training.
series AVOCAAD
email
last changed 2005/09/09 10:48

_id 611a
authors Newell, Allen
year 1982
title The Knowledge Level
source [2]. 46 p. : ill. Design Research Center, CMU, April, 1982. DRC-15-15-82. includes bibliography
summary As the first AAAI Presidential Address, this paper focuses on a basic substantive problem: the nature of knowledge and representation. There are ample indications that artificial intelligence is in need of substantial work in this area, e.g., a recent SIGART special issue on Knowledge Representation edited by Ron Brachman and Brian Smith. The paper proposes a theory of the nature of knowledge, namely, that there is another computer system level immediately above the symbol (or program) level. The nature of computer system levels is reviewed, the new level proposed, and its definition is treated in detail. Knowledge itself is the processing medium at this level and the principle of rationality plays a central role. Some consequences of the existence of the knowledge level and some relations to other fields are discussed
keywords knowledge, representation, AI
series CADline
last changed 1999/02/12 15:09

_id e1d1
authors Shafer, Steven A. and Kanade, Takeo
year 1982
title Using Shadows in Finding Surface Orientations
source 61 p. : ill. Pittsburgh, PA: Department of Computer Science, CMU, January, 1982. CMU-CS-82-100
summary Given a line drawing from an image with shadow regions identified, the shapes of the shadows can be used to generate constraints on the orientations of the surfaces involved. This paper describes the theory which governs those constraints under orthography. A 'Basic Shadow Problem' is first posed, in which there is a single light source, and a single surface casts a shadow on another (background) surface. There are six parameters to determine: the orientation (2 parameters) for each surface, and the direction of the vector (2 parameters) pointing at the light source. If some set of 3 of these are given in advance, the remaining 3 can then be determined geometrically. The solution method consists of identifying 'illumination surfaces' consisting of illumination vectors, assigning Huffman-Clowes line labels to ...
series CADline
last changed 2003/06/02 13:58

_id cf2003_m_040
id cf2003_m_040
authors BAY, Joo-Hwa
year 2003
title Making Rebuttals Available Digitally for Minimising Biases in Mental Judgements
source Digital Design - Research and Practice [Proceedings of the 10th International Conference on Computer Aided Architectural Design Futures / ISBN 1-4020-1210-1] Tainan (Taiwan) 13–15 October 2003, pp. 147-156
summary The problem of heuristic biases (illusions) discussed by Tversky and Kahneman (1982) can lead to errors in judgement when human designers use precedent knowledge presented graphically (Bay 2001). A cognitive framework of belief, goal, and decision, and a framework of representation of architectural knowledge by Tzonis, are used to map out the problem of heuristic biases in the human mind. These are used to discuss which aspects of knowledge can be presented explicitly and digitally to users to make rebuttal more available to human thinking at the cognitive level. The discussion is applicable to both inductive and analytic digital knowledge systems that use precedent knowledge, and is targeted directly at means of addressing bias in the human mind by digital means. The problem of human bias in machine learning and generalisation is discussed in a different paper, and the problems of intentional or non-intentional machine bias are not part of the discussion in this paper.
keywords analogy, bias, design thinking, environmental design, heuristics
series CAAD Futures
last changed 2003/11/22 07:26

_id 1b10
id 1b10
authors Bay, Joo-Hwa
year 2001
title Cognitive Biases - The case of tropical architecture
source Delft University of Technology
summary This dissertation investigates i) how cognitive biases (or illusions) may lead to errors in design thinking, and ii) why architects use architectural precedents as heuristics despite such possible errors, and iii) develops a design tool that can overcome this type of error through the introduction of a rebuttal mechanism. The mechanism controls biases and improves accuracy in architectural thinking. // The research method applied is interdisciplinary. It employs knowledge from cognitive science, environmental engineering, and architectural theory. The case study approach is also used. The investigation is made in the case of tropical architecture. The investigation of architectural biases draws from work by A. Tversky and D. Kahneman in 1982 on “Heuristics and biases”. According to Tversky and Kahneman, the use of heuristics of representativeness (based on similarity) and availability (based on ease of recall and imaginability) for judgement of probability can result in cognitive biases of illusions of validity and biases due to imaginability respectively. This theory can be used analogically to understand how errors arise in the judgement of environmental behaviour anticipated from various spatial configurations, leading to designs with dysfunctional performances when built. Incomplete information, limited time, and limited human mental resources make design problems in practice difficult, and often impossible, to solve exhaustively. It is not possible to analyse all possible alternative solutions, multiple contingencies, and multiple conflicting demands, as doing so will lead to combinatorial explosion. One of the ways to cope with the difficult design problem is to use precedents as heuristic devices, as shortcuts in design thinking, at the risk of errors. This is done with analogical, pre-parametric, and qualitative means of thinking, without quantitative calculations. Heuristics can be efficient and reasonably effective, but may not always be good enough or even correct, because they can have associated cognitive biases that lead to errors. Several debiasing strategies are discussed, and one possibility is to introduce a rebuttal mechanism to refocus the designer’s thinking on the negative and opposite outcomes in his judgements, in order to debias these illusions. The research is carried out within the framework of design theory developed by the Design Knowledge System Research Centre, TUDelft. This strategy is tested with an experiment. The results show that the introduction of a rebuttal mechanism can debias and improve design judgements substantially in environmental control. The tool developed has possible applications in design practice and education, and in particular, in the designing of sustainable environments.
keywords Design bias; Design knowledge; Design rebuttal; Design Precedent; Pre-parametric design; Tropical architecture; Sustainability
series thesis:PhD
type normal paper
email
last changed 2006/05/28 07:42

_id 6094
authors Blinn, J.I.
year 1982
title A Generalization of Algebraic Surface Drawing
source ACM Transaction on Graphics, vol. 1, no. 3, pp. 235-256, 1982
summary The technology of creating realistic and visually interesting images of three-dimensional shapes is advancing on many fronts. One such front is the development of algorithms for drawing curved surfaces directly from their mathematical definitions rather than by dividing them into large numbers of polygons. Two classes of surfaces which have received attention are the quadric and the bivariate parametric surfaces. Bivariate parametric surfaces are generated by three functions of two variables (most popularly polynomials), as the variables take on different values. Algorithms dealing with such surfaces are due to Catmull; Lane, Carpenter, Whitted and Blinn; and Clark.
series journal paper
last changed 2003/11/21 15:16
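
The abstract above contrasts drawing curved surfaces directly from their mathematical definitions with dividing them into large numbers of polygons, and notes that bivariate parametric surfaces are generated by three functions of two variables. A small sketch of that idea, using a torus as an arbitrary example surface; the torus and the coarse sampling grid are assumptions, and Blinn's own contribution (a generalisation of algebraic, i.e. implicit, surface drawing) is not reproduced here.

```python
from math import cos, sin, pi

def torus_point(u, v, R=2.0, r=0.5):
    """A bivariate parametric surface: three coordinate functions of (u, v).
    Here a torus with major radius R and minor radius r, u and v in [0, 2*pi)."""
    x = (R + r * cos(v)) * cos(u)
    y = (R + r * cos(v)) * sin(u)
    z = r * sin(v)
    return x, y, z

# Sampling the surface on a coarse parameter grid is the "divide into polygons"
# route that the abstract contrasts with drawing directly from the definition.
grid = [torus_point(2 * pi * i / 8, 2 * pi * j / 8) for i in range(8) for j in range(8)]
print(len(grid), "sample points, e.g.", grid[0])
```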

_id eabb
authors Boeykens, St. Geebelen, B. and Neuckermans, H.
year 2002
title Design phase transitions in object-oriented modeling of architecture
source Connecting the Real and the Virtual - design e-ducation [20th eCAADe Conference Proceedings / ISBN 0-9541183-0-8] Warsaw (Poland) 18-20 September 2002, pp. 310-313
doi https://doi.org/10.52842/conf.ecaade.2002.310
summary The project IDEA+ aims to develop an “Integrated Design Environment for Architecture”. Its goal is providing a tool for the designer-architect that can be of assistance in the early-design phases. It should provide the possibility to perform tests (like heat or cost calculations) and simple simulations in the different (early) design phases, without the need for a fully detailed design or remodeling in a different application. The test for daylighting is already in development (Geebelen, to be published). The conceptual foundation for this design environment has been laid out in a scheme in which different design phases and scales are defined, together with appropriate tests at the different levels (Neuckermans, 1992). It is a translation of the “designerly” way of thinking of the architect (Cross, 1982). This conceptual model has been translated into a “Core Object Model” (Hendricx, 2000), which defines a structured object model to describe the necessary building model. These developments form the theoretical basis for the implementation of IDEA+ (both the data structure & prototype software), which is currently in progress. The research project addresses some issues, which are at the forefront of the architect’s interest while designing with CAAD. These are treated from the point of view of a practicing architect.
series eCAADe
email
last changed 2022/06/07 07:52

_id 89e4
authors Cendes, Z.J., Shenton, D. and H. Shahnasser
year 1982
title Adaptive Finite Element Mesh Generation Using the Delaunay Algorithm
source 3 p. : ill. Pittsburgh: Design Research Center, CMU, December, 1982. includes bibliography
summary A two-dimensional generator is described which automatically creates optimal finite element meshes using the Delaunay triangulation algorithm. The mesh generator is adaptive in the sense that elements containing the largest normalized errors are automatically refined, providing meshes with a uniform error density. The system runs on a PERQ computer made by Three Rivers Computer Company. It is menu oriented and utilizes multiple command and display windows to create and edit the object description interactively. Mesh generation from the object data base is automatic, although it may be modified interactively by the user if desired. Application of the mesh generator to electric machine design and to magnetic bubble simulation shows it to be one of the most powerful and easy to use systems yet devised
keywords electrical engineering, triangulation, algorithms, OOPS, finite elements, analysis
series CADline
last changed 2003/06/02 13:58
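
The mesh generator described above triangulates with the Delaunay algorithm and then adaptively refines the elements containing the largest normalized errors. A minimal sketch of that refine-and-retriangulate loop using SciPy's Delaunay triangulation; the area-based "error" and the centroid insertion rule are crude stand-ins assumed for illustration, not the authors' error estimator or refinement rule.

```python
import numpy as np
from scipy.spatial import Delaunay

def refine(points, target_area=0.02, rounds=5):
    """Adaptive loop: triangulate, flag triangles whose 'error' (here simply
    their area versus target_area) is too large, insert a point at each
    flagged triangle's centroid, and re-triangulate."""
    pts = np.asarray(points, dtype=float)
    for _ in range(rounds):
        tri = Delaunay(pts)
        corners = pts[tri.simplices]                      # (n_triangles, 3, 2)
        a, b, c = corners[:, 0], corners[:, 1], corners[:, 2]
        areas = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                             - (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0]))
        bad = areas > target_area                         # crude stand-in for an error estimate
        if not bad.any():
            break
        centroids = corners[bad].mean(axis=1)             # one new point per flagged triangle
        pts = np.vstack([pts, centroids])
    return pts, Delaunay(pts)

pts, tri = refine([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
print(len(pts), "points,", len(tri.simplices), "triangles")
```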

_id sigradi2006_e183a
id sigradi2006_e183a
authors Costa Couceiro, Mauro
year 2006
title La Arquitectura como Extensión Fenotípica Humana - Un Acercamiento Basado en Análisis Computacionales [Architecture as human phenotypic extension – An approach based on computational explorations]
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 56-60
summary The study describes some of the aspects tackled within a current Ph.D. research project in which architectural applications of constructive, structural and organizational processes existing in biological systems are considered. The present information processing capacity of computers and specific software developments have allowed creating a bridge between two disciplines of a holistic nature: architecture and biology. The crossover between those disciplines entails a methodological paradigm change towards a new one based on the dynamical aspects of forms and compositions. Recent studies about artificial-natural intelligence (Hawkins, 2004) and developmental-evolutionary biology (Maturana, 2004) have added fundamental knowledge about the role of analogy in the creative process and the relationship between forms and functions. The dimensions and restrictions of the Evo-Devo concepts are analyzed, developed and tested by software that combines parametric geometries, L-systems (Lindenmayer, 1990), shape grammars (Stiny and Gips, 1971) and evolutionary algorithms (Holland, 1975) as a way of testing new architectural solutions within computable environments. The theoretical approaches to evolution of Lamarck (1744-1829) and Weismann (1834-1914), in which significantly opposing views can be found, are also considered. Lamarck's theory assumes that an individual's effort towards a specific evolutionary goal can cause change in its descendants. Weismann, on the other hand, defended that the germ cells are not affected by anything the body learns or any ability it acquires during its life, and cannot pass this information on to the next generation; this is called the Weismann barrier. Lamarck's widely rejected theory has recently found a new place in artificial and natural intelligence research as a valid explanation of some aspects of the evolution of human knowledge, that is, the deliberate change of paradigms in the intentional search for solutions. Just as the analogy between genetics and architecture (Estévez and Shu, 2000) is useful for understanding and programming emergent complexity phenomena (Hopfield, 1982) for architectural solutions, so the consideration of architecture as a product of a human extended phenotype can help us better understand its cultural dimension.
keywords evolutionary computation; genetic architectures; artificial/natural intelligence
series SIGRADI
email
last changed 2016/03/10 09:49
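
Of the techniques the abstract above combines (parametric geometries, L-systems, shape grammars and evolutionary algorithms), the L-system is the easiest to show compactly. A minimal sketch of deterministic L-system string rewriting, using Lindenmayer's classic algae example rather than anything from the research described.

```python
def lsystem(axiom, rules, generations):
    """Deterministic L-system: rewrite every symbol in parallel each generation,
    replacing it by rules[symbol] when a rule exists and keeping it otherwise."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae system: A -> AB, B -> A
print(lsystem("A", {"A": "AB", "B": "A"}, 5))   # ABAABABAABAAB
```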

_id cf2011_p027
id cf2011_p027
authors Herssens, Jasmien; Heylighen Ann
year 2011
title A Framework of Haptic Design Parameters for Architects: Sensory Paradox Between Content and Representation
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 685-700.
summary Architects—like other designers—tend to think, know and work in a visual way. In design research, this way of knowing and working is highly valued as paramount to design expertise (Cross 1982, 2006). In case of architecture, however, it is not only a particular strength, but may as well be regarded as a serious weakness. The absence of non-visual features in traditional architectural spatial representations indicates how these are disregarded as important elements in conceiving space (Dischinger 2006). This bias towards vision, and the suppression of other senses—in the way architecture is conceived, taught and critiqued—results in a disappearance of sensory qualities (Pallasmaa 2005). Nevertheless, if architects design with more attention to non visual senses, they are able to contribute to more inclusive environments. Indeed if an environment offers a range of sensory triggers, people with different sensory capacities are able to navigate and enjoy it. Rather than implementing as many sensory triggers as possible, the intention is to make buildings and spaces accessible and enjoyable for more people, in line with the objective of inclusive design (Clarkson et al. 2007), also called Design for All or Universal Design (Ostroff 2001). Within this overall objective, the aim of our study is to develop haptic design parameters that support architects during design in paying more attention to the role of haptics, i.e. the sense of touch, in the built environment by informing them about the haptic implications of their design decisions. In the context of our study, haptic design parameters are defined as variables that can be decided upon by designers throughout the design process, and the value of which determines the haptic characteristics of the resulting design. These characteristics are based on the expertise of people who are congenitally blind, as they are more attentive to non visual information, and of professional caregivers working with them. The parameters do not intend to be prescriptive, nor to impose a particular method. Instead they seek to facilitate a more inclusive design attitude by informing designers and helping them to think differently. As the insights from the empirical studies with people born blind and caregivers have been reported elsewhere (Authors 2010), this paper starts by outlining the haptic design parameters resulting from them. Following the classification of haptics into active, dynamic and passive touch, the built environment unfolds into surfaces that can act as “movement”, “guiding” and/or “rest” plane. Furthermore design techniques are suggested to check the haptic qualities during the design process. Subsequently, the paper reports on a focus group interview/workshop with professional architects to assess the usability of the haptic design parameters for design practice. The architects were then asked to try out the parameters in the context of a concrete design project. The reactions suggest that the participating architects immediately picked up the underlying idea of the parameters, and recognized their relevance in relation to the design project at stake, but that their representation confronts us with a sensory paradox: although the parameters question the impact of the visual in architectural design, they are meant to be used by designers, who are used to think, know and work in a visual way.
keywords blindness, design parameters, haptics, inclusive design, vision
series CAAD Futures
email
last changed 2012/02/11 19:21

_id ddssar9616
id ddssar9616
authors Hunt, John
year 1996
title Establishing design directions for complex architectural projects: a decision support strategy
source Timmermans, Harry (Ed.), Third Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Spa, Belgium), August 18-21, 1996
summary The paper seeks to identify characteristics of the design decision-making strategy implicit in the first-placed design submissions for three significant architectural competitions: the Sydney Opera House competition, and two recent design competitions for university buildings in New Zealand. Colin Rowe's (1982) characterisation of the design process is adopted as a basis for the analysis of these case studies. Rowe's fertile analogy between design and (criminal) detection is first outlined, then brought to bear on the case studies. By means of a comparison between the successful and selected unsuccessful design submissions in each case, aspects of Rowe's characterisation of the design process are confirmed. On the basis of this analysis several common features of the competition-winning submissions, and their implicit decision-making processes, are identified. The first of these features relates to establishing project or programmatic requirements and the prioritizing of these. The second concerns the role of design parameters or requirements that appear as conflicting or contradictory, in the development of a design direction and in innovative design outcomes. The third concerns the process of simultaneous consideration given by the designer to both project parameters or requirements, and to design solution possibilities - a process described by Rowe as "dialectical interanimation".
series DDSS
last changed 2003/08/07 16:36

_id e234
authors Kalay, Yehuda E. and Harfmann, Anton C.
year 1985
title An Integrative Approach to Computer-Aided Design Education in Architecture
source February, 1985. [17] p. : [8] p. of ill
summary With the advent of CAD, schools of architecture are now obliged to prepare their graduates for using the emerging new design tools and methods in architectural practices of the future. In addition to this educational obligation, schools of architecture (possibly in partnership with practicing firms) are also the most appropriate agents for pursuing research in CAD that will lead to the development of better CAD software for use by the profession as a whole. To meet these two rather different obligations, two kinds of CAD education curricula are required: one which prepares tool- users, and another that prepares tool-builders. The first educates students about the use of CAD tools for the design of buildings, whereas the second educates them about the design of CAD tools themselves. The School of Architecture and Planning in SUNY at Buffalo has recognized these two obligations, and in Fall 1982 began to meet them by planning and implementing an integrated CAD environment. This environment now consists of 3 components: a tool-building sequence of courses, an advanced research program, and a general tool-users architectural curriculum. Students in the tool-building course sequence learn the principles of CAD and may, upon graduation, become researchers and the managers of CAD systems in practicing offices. While in school they form a pool of research assistants who may be employed in the research component of the CAD environment, thereby facilitating the design and development of advanced CAD tools. The research component, through its various projects, develops and provides state of the art tools to be used by practitioners as well as by students in the school, in such courses as architectural studio, environmental controls, performance programming, and basic design courses. Students in these courses who use the tools developed by the research group constitute the tool-users component of the CAD environment. While they are being educated in the methods they will be using throughout their professional careers, they also act as a 'real-world' laboratory for testing the software and thereby provide feedback to the research component. The School of Architecture and Planning in SUNY at Buffalo has been the first school to incorporate such a comprehensive CAD environment in its curriculum, thereby successfully fulfilling its obligation to train students in the innovative methods of design that will be used in architectural practices of the future, and at the same time making a significant contribution to the profession of architecture as a whole. This paper describes the methodology and illustrates the history of the CAD environment's implementation in the School
keywords CAD, architecture, education
series CADline
email
last changed 2003/06/02 13:58

_id 807e
authors Maver, Thomas W. and Petric, Jelena (Eds.)
year 1994
title The Virtual Studio [Conference Proceedings]
source eCAADe Conference Proceedings / ISBN 0-9523687-0-6 / Glasgow (Scotland) 7-10 September 1994, 262 p.
doi https://doi.org/10.52842/conf.ecaade.1994
summary eCAADe was established in 1982 with the intention of facilitating, across Europe, the adoption of the information technologies - particularly Computer Aided Architectural Design (CAAD) - within the system of architectural education. The Association, in the 12 years of its existence, has grown in its membership (now close to 350) and in its importance. The annual conferences (Delft 82, Brussels 83, Helsinki 84, Rotterdam 85, Rome 86, Zurich 87, Aarhus 89, Budapest 90, Munich 91, Barcelona 92 and Eindhoven 93) now number 12, and this volume records the 70 or so contributions to the conference held in Glasgow over the period 7-10 September 1994. The proceedings are arranged according to a number of themes: Theories and Ideas, Teaching and Learning, Visualisation, Multi-Media, Virtual Reality, Virtual Design Studios, Functional Analysis, Design Support Systems and Surveys of Activity. The conference featured 'long presentations' and 'short presentations'; the length of these presentations is reflected in the two main sections of this text. To preserve the spirit of conference communication and ensure the rapid dissemination of ideas in a fast-growing community of polyglot Europeans, no changes have been imposed on the papers, which were submitted on Apple Mac and/or PC diskettes; you see them as they were submitted and as the authors intended.
series eCAADe
email
last changed 2022/06/07 07:49

_id 2415
authors Nievergelt, J. and Preparata, Franco P.
year 1982
title Plane-Sweep Algorithms for Intersecting Geometric Figures
source Communications of the ACM. October, 1982. vol. 25: pp. 739-747 : ill. includes bibliography
summary Algorithms in computational geometry are of increasing importance in computer-aided design, for example, in the layout of integrated circuits. The efficient computation of the intersection of several superimposed figures is a basic problem. Plane figures defined by points connected by straight line segments are considered, for example, polygons (not necessarily simple) and maps (embedded planar graphs). The regions into which the plane is partitioned by these intersecting figures are to be processed in various ways, such as listing the boundary of each region in cyclic order or sweeping the interior of each region. Let n be the total number of points of all the figures involved and s be the total number of intersections of all line segments. Two plane-sweep algorithms that solve the problems above are presented: in the general case (non-convexity) in time O((n+s) log n) and space O(n+s); when the regions of each given figure are convex, the same can be achieved in time O(n log n + s) and space O(n)
keywords computational geometry, algorithms, intersection, mapping, polygons, data structures, analysis
series CADline
last changed 2003/06/02 10:24
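
A much-simplified illustration of the plane-sweep idea referred to above: endpoints are processed in x order as events, a status set holds the segments whose x-extent currently spans the sweep line, and each segment entering the status set is tested against the segments already there. This sketch only reports intersecting pairs, uses a plain set rather than an ordered status structure, and therefore does not achieve the O((n+s) log n) bound of the paper; it degrades toward quadratic time when many segments overlap in x.

```python
def segments_intersect(p1, p2, p3, p4):
    """Do segments p1p2 and p3p4 intersect (including touching)? Orientation test."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    def on_seg(a, b, c):   # c is collinear with a, b: does it lie within their box?
        return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0])
                and min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    return any(d == 0 and on_seg(*args) for d, args in
               [(d1, (p3, p4, p1)), (d2, (p3, p4, p2)),
                (d3, (p1, p2, p3)), (d4, (p1, p2, p4))])

def sweep_intersections(segments):
    """Sweep endpoints left to right; a 'start' event adds a segment and tests it
    against the active set, an 'end' event removes it. segments: list of (p, q)."""
    events = []
    for i, (p, q) in enumerate(segments):
        lo, hi = (p, q) if p[0] <= q[0] else (q, p)
        events.append((lo[0], 0, i))   # 0 = start (sorts before an end at the same x)
        events.append((hi[0], 1, i))   # 1 = end
    active, found = set(), []
    for _, kind, i in sorted(events):
        if kind == 0:
            for j in active:
                if segments_intersect(*segments[i], *segments[j]):
                    found.append((min(i, j), max(i, j)))
            active.add(i)
        else:
            active.discard(i)
    return found

print(sweep_intersections([((0, 0), (2, 2)), ((0, 2), (2, 0)), ((3, 0), (4, 1))]))  # [(0, 1)]
```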

_id 2243
authors O'Rourke, J., Chien, C.-B. and Olson, Th. (et al)
year 1982
title A New Linear Algorithm for Intersecting Convex Polygons
source Computer Graphics and Image Processing. 1982. vol. 19: pp. 384-391 : ill. includes a short bibliography
summary An algorithm is presented that computes the intersection of two convex polygons in linear time. The algorithm is fundamentally different from the only known linear algorithms for this problem, due to Shamos and to Hoey. These algorithms depend on a division of the plane into either angular sectors (Shamos) or parallel slabs (Hoey), and are mildly complex. The authors' algorithm searches for the intersection points of the polygons by advancing a single pointer around each polygon, and is very easy to program
keywords algorithms, boolean operations, polygons, intersection, search
series CADline
last changed 2003/06/02 14:42
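
For contrast with the linear-time advancing-pointer algorithm summarised above, here is a much simpler (and asymptotically slower, O(nm)) way to intersect two convex polygons: successively clip one polygon against each edge of the other, Sutherland-Hodgman style. This is a different technique, shown only because it is compact; it assumes both polygons are given with vertices in counter-clockwise order.

```python
def clip_convex(subject, clipper):
    """Intersection of two convex polygons (CCW vertex lists) by clipping
    `subject` against each directed edge of `clipper` (Sutherland-Hodgman)."""
    def cross(a, b, p):                       # > 0 when p lies to the left of edge a->b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    def line_intersection(p, q, a, b):        # intersection point of line pq with line ab
        (x1, y1), (x2, y2) = p, q
        (x3, y3), (x4, y4) = a, b
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    output = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        polygon, output = output, []
        for j in range(len(polygon)):
            p, q = polygon[j], polygon[(j + 1) % len(polygon)]
            p_in, q_in = cross(a, b, p) >= 0, cross(a, b, q) >= 0
            if p_in:
                output.append(p)              # keep vertices on the inside of the edge
            if p_in != q_in:
                output.append(line_intersection(p, q, a, b))   # add the crossing point
        if not output:                        # the polygons do not overlap at all
            break
    return output

# Two overlapping unit squares, the second shifted by (0.5, 0.5).
print(clip_convex([(0, 0), (1, 0), (1, 1), (0, 1)],
                  [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]))
```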

_id 4bae
authors Rasdorf, William J. and Kutay, Ali R.
year 1982
title Maintenance of Integrity During Concurrent Access in a Building Design Database
source Computer Aided Design. Butterworth Scientific Ltd., July, 1982. vol. 14: pp. 201-207. includes bibliography
summary This paper proposes a building design database model that insures database integrity in a highly flexible relational structure while supporting disciplinary and interdisciplinary concurrent use. The model strongly supports designer-database interaction by providing extremely versatile data access mechanisms and an associated concurrency control mechanism. Building design components are represented in terms of their location, their attribute values, and combinations of the two. Both the logical and physical database models are illustrated. The relational model is vital for achieving the greatest flexibility in representing and accessing building design data. Its standard relations are ideal for information representation. In addition, the operators provided by the model enable the engineer to readily restructure the database to support building design needs. This paper introduces a database structuring mechanism referred to as catalogs. Catalogs provide a highly versatile mechanism for accessing database information by grouping building components into data units called modules. The modules provide convenient access to multiple design entities. Also included is a protection relation that provides a concurrency control environment for the catalog relations. The module concept is particularly important in design because it enables the ad hoc groupings of data which are so often necessary to support the design process. The module is recommended as the level to which a locking concurrency control mechanism be applied. It is a small enough data unit to support concurrency for interdisciplinary design activities, yet not so small as to require extensive overhead in the concurrency control implementation. Two different modes of locking are recommended for the catalog relations of a building design database to achieve maximum concurrency and efficiency of access by designers
keywords database, concurrency, access, constraints management
series CADline
last changed 2003/06/02 13:58
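
The paper summarised above recommends applying locking concurrency control at the level of modules (catalog groupings of building components) and mentions two modes of locking. A minimal sketch of that idea with Python threading primitives: one lock object per module, offering a shared (read) mode and an exclusive (write) mode. The module name and the simple wait policy are assumptions for illustration, not the protocol of the paper.

```python
import threading
from collections import defaultdict

class ModuleLock:
    """Two-mode lock for one module of a design database: any number of
    concurrent shared (read) holders, or exactly one exclusive (write) holder."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:                    # wait while a writer holds the module
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:   # wait for all holders to leave
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

# One lock per module, e.g. per catalog grouping of building components (name is illustrative).
module_locks = defaultdict(ModuleLock)

module_locks["hvac-zone-3"].acquire_shared()      # several designers may read concurrently
module_locks["hvac-zone-3"].release_shared()
module_locks["hvac-zone-3"].acquire_exclusive()   # an update blocks other access to this module
module_locks["hvac-zone-3"].release_exclusive()
```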
