CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 5419

_id sigradi2006_e028c
id sigradi2006_e028c
authors Griffith, Kenfield; Sass, Larry and Michaud, Dennis
year 2006
title A strategy for complex-curved building design: Design structure with bi-lateral contouring as integrally connected ribs
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 465-469
summary Shapes in designs created by architects such as Gehry Partners (Shelden, 2002), Foster and Partners, and Kohn Pedersen Fox rely on computational processes for rationalizing complex geometry for building construction. Rationalization is the reduction of a complete geometric shape into discrete components. Unfortunately, for many architects rationalization is limited to reducing solid models to surfaces or to data on spreadsheets for contractors to follow. Rationalized models produced by the firms listed above do not offer strategies for construction or digital fabrication. For the physical production of CAD descriptions, an alternative to the rationalized description is needed. This paper examines the coupling of digital rationalization and digital fabrication with physical mockups (Rich, 1989). Our aim is to explore complex relationships found in early- and mid-stage design phases when digital fabrication is used to produce design outcomes. Results of our investigation will aid architects and engineers in addressing the complications found in translating design models embedded with precision into constructible geometries. We present an algorithmically based approach to design rationalization that supports physical production as well as surface production of desktop models. Our approach is an alternative to conventional rapid prototyping, which builds objects by assembling laterally sliced contours from a solid model. We explored an improved product description for rapid manufacture: bilateral contouring for structure and panelling for strength (Kolarevic, 2003). Typically found as infrastructure within the aerospace, automotive, and shipbuilding industries, bilateral contouring is an organized matrix of horizontal and vertical interlocking ribs evenly distributed along a surface. These structures are monocoque and semi-monocoque assemblies composed of structural ribs and skinning attached by rivets and adhesives.
The bi-lateral contouring discussed here is, by contrast, an interlocking matrix of plywood strips with integral joinery for assembly. Unlike traditional methods of building representation through malleable materials for creating tangible objects (Friedman, 2002), this approach constructs with an eye toward life-size solutions. Three algorithms are presented as examples of rationalized design production with physical results. The first algorithm [Figure 1] deconstructs an initial 2D curved form into ribbed slices to be assembled through integral connections constructed as part of the rib solution. The second algorithm [Figure 2] deconstructs curved forms of greater complexity. The algorithm walks along the surface, extracting surface information along horizontal and vertical axes and saving it, resulting in a ribbed structure of slight double curvature. The final algorithm [Figure 3] is expressed as plug-in software for Rhino that deconstructs a design into components for assembly as rib structures. The plug-in also translates geometries to a flattened position for 2D fabrication. The software demonstrates the full scope of the research exploration. Studies published by Dodgson argued that innovation technology (IvT) (Dodgson, Gann, Salter, 2004) helped in solving projects like the Guggenheim in Bilbao, the Leaning Tower of Pisa in Italy, and the Millennium Bridge in London. Similarly, the method discussed in this paper will aid in solving physical production problems with complex building forms.
References:
Bentley, P.J. (Ed.), Evolutionary Design by Computers, Morgan Kaufmann Publishers Inc., San Francisco, CA, 1-73
Celani, G., (2004), "From simple to complex: using AutoCAD to build generative design systems", in: L. Caldas and J. Duarte (orgs.), Implementation Issues in Generative Design Systems, First International Conference on Design Computing and Cognition, July 2004
Dodgson, M., Gann, D.M., Salter, A., (2004), "Impact of Innovation Technology on Engineering Problem Solving: Lessons from High Profile Public Projects", Industrial Dynamics, Innovation and Development
Dritsas, (2004), "Design Operators", Thesis, Massachusetts Institute of Technology, Cambridge, MA
Friedman, M., (2002), Gehry Talks: Architecture + Practice, Universe Publishing, New York, NY
Kolarevic, B., (2003), Architecture in the Digital Age: Design and Manufacturing, Spon Press, London, UK
Opas, J., Bochnick, H., Tuomi, J., (1994), "Manufacturability Analysis as a Part of CAD/CAM Integration", Intelligent Systems in Design and Manufacturing, 261-292
Rudolph, S., Alber, R., (2002), "An Evolutionary Approach to the Inverse Problem in Rule-Based Design Representations", Artificial Intelligence in Design '02, 329-350
Rich, M., (1989), Digital Mockup, American Institute of Aeronautics and Astronautics, Reston, VA
Schön, D., (1983), The Reflective Practitioner: How Professionals Think in Action, Basic Books
Shelden, D., (2003), "Digital Surface Representation and the Constructability of Gehry's Architecture", Dissertation, Massachusetts Institute of Technology, Cambridge, MA
Smithers, T., Conkie, A., Doheny, J., Logan, B., Millington, K., (1989), "Design as Intelligent Behaviour: An AI in Design Thesis Programme", Artificial Intelligence in Design, 293-334
Smithers, T., (2002), "Synthesis in Designing", Artificial Intelligence in Design '02, 3-24
Stiny, G., (1977), "Ice-ray: a note on the generation of Chinese lattice designs", Environment and Planning B, volume 4, pp. 89-98
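The second algorithm's surface walk can be sketched in a few lines: sample two interlocking families of rib profiles from a heightfield at regular spacing. This is only a guess at the flavor of the approach; the test surface, function names, and parameters are invented, and a real implementation would also cut interlocking notches where the ribs cross.

```python
import math

def surface(x, y):
    # Hypothetical doubly curved test surface; the paper's case-study
    # geometry is not published, so this stands in for it.
    return 0.5 * math.sin(x) * math.cos(y)

def bilateral_contours(f, size=math.pi, n_ribs=5, samples=20):
    """Sample two interlocking families of rib profiles from a heightfield.

    Returns (ribs_x, ribs_y): ribs_x[k] is the profile of the k-th rib of
    constant y, ribs_y[k] the k-th rib of constant x. Each profile is a
    polyline of (position, height) pairs that a later step could offset
    into cuttable plywood outlines with joints at the intersections."""
    positions = [k * size / (n_ribs - 1) for k in range(n_ribs)]
    ts = [i * size / (samples - 1) for i in range(samples)]
    ribs_x = [[(t, f(t, c)) for t in ts] for c in positions]
    ribs_y = [[(t, f(c, t)) for t in ts] for c in positions]
    return ribs_x, ribs_y

ribs_x, ribs_y = bilateral_contours(surface)
```

Because both rib families are sampled from the same heightfield, their heights automatically agree wherever a horizontal rib crosses a vertical one, which is what makes the matrix interlock.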
keywords Digital fabrication; bilateral contouring; integral connection; complex-curve
series SIGRADI
email
last changed 2016/03/10 09:52

_id b190
authors Goldberg, Adele and Robson, David
year 1983
title Smalltalk-80: The language and its implementation
source New York, NY: Addison Wesley Co
summary Smalltalk-80 is the classic standard Smalltalk language as described in Smalltalk-80: The Language and Its Implementation by Goldberg and Robson. This book is commonly called "the Blue Book". Squeak implements the dialect of Smalltalk described in this book, but has a different implementation. Overview of the Smalltalk language: Smalltalk is a general-purpose, high-level programming language. It was the first original "pure" object-oriented language, but not the first to use the object-oriented concept, which is credited to Simula 67. The explosive growth of Object Oriented Programming (OOP) technologies began in the early 1980s, with Smalltalk's introduction. Behind it was the idea that the individual human user should be the most important component of any computing system, and that programming should be a natural extension of thinking, as well as a dynamic and evolutionary process consistent with the model of human learning activity. In Smalltalk, these ideas are embodied in a framework for human-computer communication. In a sense, Smalltalk is yet another language like C and Pascal, and programs can be written in Smalltalk that have the look and feel of such conventional languages. The difference lies in the amount of code that can be reduced, in less cryptic syntax, and in code that is easier to handle for application maintenance and enhancement. But Smalltalk's most powerful feature is easy code reuse. Smalltalk makes reuse of programs, routines, and subroutines (methods) far easier. Though procedural languages allow reuse too, it is harder to do, and much easier to cheat. It is no surprise that Smalltalk is relatively easy to learn, mainly due to its simple syntax and semantics, as well as its few concepts. Objects, classes, messages, and methods form the basis of programming in Smalltalk. The notion of the human-computer interface also results in Smalltalk promoting the development of safer systems.
Errors in Smalltalk may be viewed as objects telling users that confusion exists as to how to perform a desired function.
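The message-sending model described above can be illustrated with a toy sketch (written in Python rather than Smalltalk, with invented class names): every operation is a message sent to an object, and even an unrecognized message is answered by an object rather than crashing the program, in the spirit of Smalltalk's doesNotUnderstand:.

```python
class SmalltalkishObject:
    """Toy Python illustration of Smalltalk-style message sending.

    In Smalltalk every computation is a message sent to an object; an
    unknown message triggers doesNotUnderstand:, itself just another
    message. This sketch mimics that with __getattr__."""

    def does_not_understand(self, selector):
        # Smalltalk would open a debugger here; we return a descriptive
        # object (a string) so the error itself is something to inspect.
        return f"{self.__class__.__name__} does not understand #{selector}"

    def __getattr__(self, selector):
        # Called only when normal lookup fails -> doesNotUnderstand: analog.
        return lambda *args: self.does_not_understand(selector)

class Point(SmalltalkishObject):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def plus(self, other):  # analogous to sending the message + to a Point
        return Point(self.x + other.x, self.y + other.y)

p = Point(1, 2).plus(Point(3, 4))     # a message with an argument
msg = Point(0, 0).reverse()           # unknown message, handled not crashed
```

The point of the sketch is the last line: the "error" comes back as an ordinary object describing the confusion, echoing the sentence above about errors in Smalltalk.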
series other
last changed 2003/04/23 15:14

_id cf2011_p075
id cf2011_p075
authors Janssen, Patrick; Chen Kian Wee
year 2011
title Visual Dataflow Modelling: A Comparison of Three Systems
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 801-816.
summary Visual programming languages enable users to create computer programs by manipulating graphical elements rather than by entering text. The difference between textual and visual languages is that most textual languages use a procedural programming model, while most visual languages use a dataflow programming model. When visual programming is applied to design, it results in a new modelling approach that we refer to as 'visual dataflow modelling' (VDM). Recently, VDM has become increasingly popular within the design community, as it can accelerate the iterative design process, thereby allowing larger numbers of design possibilities to be explored. Furthermore, it is now also becoming an important tool in performance-based design approaches, since it may potentially enable the closing of the loop between design development and design evaluation. A number of CAD systems now provide VDM interfaces, allowing designers to define form-generating procedures without having to resort to scripting or programming. However, these environments have certain weaknesses that limit their usability. This paper analyses these weaknesses by comparing and contrasting three VDM environments: McNeel Grasshopper, Bentley Generative Components, and SideFX Houdini. The paper focuses on five key areas: * Conditional logic allows rules to be applied to geometric entities that control how they behave. Such rules will typically be defined as if-then-else conditions, where an action is executed if a particular condition is true. A more advanced version is the while loop, where the action within the loop is repeatedly executed while a certain condition remains true. * Local coordinate systems allow geometric entities to be manipulated relative to some convenient local point of reference. These systems may be either two-dimensional or three-dimensional, using Cartesian, cylindrical, or spherical coordinates. Techniques for mapping geometric entities from one coordinate system to another also need to be considered. * Duplication includes three types: simple, endogenous, and exogenous. Simple duplication consists of copying some geometric entity a certain number of times, producing identical copies of the original. Endogenous duplication consists of copying some geometric entity by applying a set of transformations that are defined as part of the duplication process. Lastly, exogenous duplication consists of copying some geometric entity by applying a set of transformations that are defined by some other, external geometry. * Part-whole relationships allow geometric entities to be grouped in various ways, based on the fundamental set-theoretic concept that entities can be members of sets, and sets can be members of other sets. Ways of aggregating data into both hierarchical and non-hierarchical structures, and ways of filtering data based on these structures, need to be considered. * Spatial queries include relationships between geometric entities such as touching, crossing, overlapping, or containing. More advanced spatial queries include various distance-based queries, sorting queries (e.g. sorting all entities based on position) and filtering queries (e.g. finding all entities within a certain distance of a point). For each of these five areas, a simple benchmarking test case has been developed. For example, for conditional logic, the test case consists of a simple room with a single window and one condition: the window should always be in the longest north-facing wall. If the room is rotated or its dimensions changed, then the window must re-evaluate itself and possibly change position to a different wall.
For each benchmarking test case, visual programs are implemented in each of the three VDM environments. The visual programs are then compared and contrasted, focusing on two areas.
First, the types of constructs used in each of these environments are compared and contrasted. Second, the cognitive complexity of the visual programming task in each environment is compared and contrasted.
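The conditional-logic benchmark described above (the window that must stay in the longest north-facing wall) can be sketched in plain code; this is only an illustration of the rule itself, not of any of the three VDM environments, and the function names and 45-degree "north-facing" tolerance are my own assumptions.

```python
import math

def wall_normals_and_lengths(corners):
    """For a counter-clockwise room polygon, yield (index, outward
    normal angle, length) for each wall."""
    n = len(corners)
    for i in range(n):
        (x1, y1), (x2, y2) = corners[i], corners[(i + 1) % n]
        dx, dy = x2 - x1, y2 - y1
        # Outward normal of a CCW polygon edge points to its right: (dy, -dx).
        yield i, math.atan2(-dx, dy), math.hypot(dx, dy)

def pick_window_wall(corners, north=math.pi / 2, tol=math.radians(45)):
    """Index of the longest wall whose outward normal faces (near) north.

    Re-running this after any change to the room polygon is the
    'window re-evaluates itself' behaviour of the benchmark."""
    candidates = []
    for i, normal, length in wall_normals_and_lengths(corners):
        d = normal - north
        if abs(math.atan2(math.sin(d), math.cos(d))) <= tol:  # wrapped diff
            candidates.append((length, i))
    return max(candidates)[1] if candidates else None
```

For a 4 x 3 room listed counter-clockwise, the rule picks the long top wall; rotate the room 90 degrees and re-running the same function moves the window to what is now the (shorter) north-facing wall.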
keywords visual, dataflow, programming, parametric, modelling
series CAAD Futures
email
last changed 2012/02/11 19:21

_id ddssar0216
id ddssar0216
authors Jones, Dennis B.
year 2002
title The Quantum Matrix: A Three-Dimensional Data Integration and Collaboration Tool for Virtual Environments
source Timmermans, Harry (Ed.), Sixth Design and Decision Support Systems in Architecture and Urban Planning - Part one: Architecture Proceedings (Avegoor, the Netherlands), 2002
summary If a picture is worth a thousand words, what if it could walk and talk? How would you like to bring a whole new dimension to your ideas: to use visualization to convey a sense of time and motion, to use imagery to give your ideas vividness, to use sound to give them voice, and to view them three-dimensionally? The Matrix allows you to do all of this and much more. The Matrix resembles Rubik's cube, but its purpose is to store, manage and access data of all types and to view them in three dimensions in virtual environments such as the CAVE and on your desktop. The current version can store, access and view almost anything that is in digital form, including: text files, pictures, video clips, sound files, spreadsheets, URLs, HTML pages, databases, CAD drawings, Gantt charts, business graphics, VRML models, executable programs and OLE (Object Linking and Embedding) objects. The Matrix is a three-dimensional multimedia and document management tool. The Matrix anticipates the convergence of electronic media into one consistent environment for analysis and representation. The Matrix uses VRML and OpenGL technologies to allow users to be immersed in their data, as with Cinerama, IMAX and virtual reality environments. The Matrix allows users to exercise their creativity by interactively placing and organizing their data three-dimensionally, and by navigating through and viewing data and documents in 3D (monocular and binocular stereo). The Matrix user interface is simple to use, employing the now familiar "drag and drop" method to manage data and documents. Items can be placed into the matrix grid at a user-selected matrix cube location. Upon dropping a document on a cube, it appears as an image mapped onto the surface. Navigating through the 3D Matrix space is fun; all navigation uses real-time animation, giving instant feedback as to where you are. Data drilling is as simple as a mouse click on a Matrix cube. Double-clicking on an object in the matrix activates that object.
Data Dreams was an image that preexisted the program by several years. The dream was to create a new way of organizing and exploring data. The Qube image was created using MicroStation by Bentley Systems, Inc. The figure was modeled using Poser by MetaCreations and composited using Adobe Photoshop.
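The core data model described above (a 3D grid of cells, each holding an arbitrary document, with drag-and-drop placement and click-to-drill retrieval) can be sketched minimally; all names here are invented for illustration and are not the Quantum Matrix's actual API.

```python
class Matrix3D:
    """Minimal sketch of the Quantum Matrix idea: a sparse 3D grid of
    cells, each holding an arbitrary document or media object."""

    def __init__(self, size=3):
        self.size = size
        self.cells = {}  # (x, y, z) -> stored object

    def drop(self, x, y, z, item):
        # 'Drag and drop': place an item at a user-selected cube location.
        if not all(0 <= c < self.size for c in (x, y, z)):
            raise IndexError("cell outside the matrix")
        self.cells[(x, y, z)] = item

    def drill(self, x, y, z):
        # 'Data drilling': a click on a cube retrieves what it holds.
        return self.cells.get((x, y, z))

m = Matrix3D()
m.drop(0, 1, 2, {"type": "image", "name": "qube.png"})
```

A sparse dictionary keyed by cell coordinates keeps empty cubes free, which matters when most of a large grid holds nothing.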
series DDSS
last changed 2003/08/07 16:36

_id aea2
authors Laurel, B. (ed.)
year 1990
title The Art of Human-Computer Interface Design
source New York: Addison-Wesley.
summary Human-computer interface design is a new discipline. So new in fact, that Alan Kay of Apple Computer quipped that people "are not sure whether they should order it by the yard or the ton"! Irrespective of the measure, interface design is gradually emerging as a much-needed and timely approach to reducing the awkwardness and inconveniences of human-computer interaction. "Increased cognitive load", "bewildered and tired users" - these are the byproducts of the "plethora of options and the interface conventions" faced by computer users. Originally, computers were "designed by engineers, for engineers". Little or no attention was, or needed to be, paid to the interface. However, the pervasive use of the personal computer and the increasing number and variety of applications and programs has given rise to a need to focus on the "cognitive locus of human-computer interaction", i.e. the interface. What is the interface? Laurel defines the interface as a "contact surface" that "reflects the physical properties of the interactors, the functions to be performed, and the balance of power and control." (p.xiii) Incorporated into her definition are the "cognitive and emotional aspects of the user's experience". In a very basic sense, the interface is "the place where contact between two entities occurs." (p.xii) Doorknobs, steering wheels, spacesuits - these are all interfaces. The greater the difference between the two entities, the greater the need for a well-designed interface. In this case, the two very different entities are computers and humans. Human-computer interface design looks at how we can lessen the effects of these differences. This means, for Laurel, empowering users by providing them with ease of use. "How can we think about it so that the interfaces we design will empower users?" "What does the user want to do?" These are the questions Laurel believes must be asked by designers.
These are the questions addressed directly and indirectly by the approximately 50 contributors to The Art of Human-Computer Interface Design. In spite of the large number of contributors to the book and the wide range of fields with which they are associated, there is a broad consensus on how interfaces can be designed for empowerment and ease of use. User testing, user contexts, user tasks, user needs, user control: these terms appear throughout the book and suggest ways in which design might focus less on the technology and more on the user. With this perspective in mind, contributor D. Norman argues that computer interfaces should be designed so that the user interacts more with the task and less with the machine. Such interfaces "blend with the task" and "make tools invisible" so that "the technology is subservient to that goal". Sellen and Nicol insist on the need for interfaces that are 'simple', 'self-explanatory', 'adaptive' and 'supportive'. Contributors Vertelney and Grudin are interested in interfaces that support the contexts in which many users work. They consider ways in which group-oriented tasks and collaborative efforts can be supported and aided by the particular design of the interface. Mountford equates ease of use with understating the interface: "The art and science of interface design depends largely on making the transaction with the computer as transparent as possible in order to minimize the burden on the user." (p.248) Mountford also believes in "making computers more powerful extensions of our natural capabilities and goals" by offering the user a "richer sensory environment". One way this can be achieved, according to Saloman, is through creative use of colour. Saloman notes that colour can not only impart information but can also serve as a useful mnemonic device to create associations.
A richer sensory environment can also be achieved through use of sound, natural speech recognition, graphics, gesture input devices, animation, video, optical media and through what Blake refers to as "hybrid systems". These systems include additional interface features to control components such as optical disks, videotape, speech digitizers and a range of devices that support "whole user tasks". Rich sensory environments are often characteristic of game interfaces which rely heavily on sound and graphics. Crawford believes we have a lot to learn from the design of games and that they incorporate "sound concepts of user interface design". He argues that "games operate in a more demanding user-interface universe than other applications" since they must be both "fun" and "functional".
series other
last changed 2003/04/23 15:14

_id ecaade2017_305
id ecaade2017_305
authors Luther, Mark B.
year 2017
title The Application of Daylighting Software for Case-study Design in Buildings
doi https://doi.org/10.52842/conf.ecaade.2017.1.629
source Fioravanti, A, Cursi, S, Elahmar, S, Gargaro, S, Loffreda, G, Novembri, G, Trento, A (eds.), ShoCK! - Sharing Computational Knowledge! - Proceedings of the 35th eCAADe Conference - Volume 1, Sapienza University of Rome, Rome, Italy, 20-22 September 2017, pp. 629-638
summary The application of different software packages, whether simple or complex, can each play a significant role in design and decision-making on daylighting for a building. This paper discusses the tasks to be accomplished in real case studies, and how various lighting software programs are used to obtain the desired information. The message iterated throughout the paper is one that respects, and even suggests, the use of even the simplest software that can guide and inform design decisions in daylighting. Daylighting can be complex, since the position of the sun varies throughout the day and year, as do the sky conditions for a particular location. Just because we now have the computing capacity to model every single minute of every day throughout a year doesn't justify doing so for every task. Several projects (an architecture studio, a university office building, a school library and a gymnasium) present different tasks to be achieved. The daylighting problems, the objectives, the software applied and their outcomes are presented in this paper. Over a decade of projects has led to reflection upon the importance of computing in daylighting, its staged approach and the results that it can achieve if properly applied.
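As an example of the "simplest software" end of the spectrum argued for above, an average daylight factor can be estimated by hand before any radiosity or ray-tracing run. The sketch below uses the widely cited BRE approximation DF = T * Aw * theta / (A * (1 - R^2)); the choice of formula and all input numbers are illustrative assumptions of mine, not taken from the paper's case studies.

```python
def average_daylight_factor(glazing_area, total_surface_area,
                            transmittance=0.7, sky_angle_deg=65,
                            mean_reflectance=0.5):
    """Average daylight factor (%) via the common BRE approximation.

    glazing_area:       net glass area Aw (m^2)
    total_surface_area: total area A of all room surfaces (m^2)
    transmittance:      diffuse visible transmittance T of the glazing
    sky_angle_deg:      visible sky angle theta seen from the window (deg)
    mean_reflectance:   area-weighted mean surface reflectance R
    """
    return (transmittance * glazing_area * sky_angle_deg /
            (total_surface_area * (1.0 - mean_reflectance ** 2)))

# Illustrative room: 4 m^2 of glazing, 80 m^2 of total room surface.
df = average_daylight_factor(glazing_area=4.0, total_surface_area=80.0)
```

A result of around 3% would conventionally be read as adequately daylit for much of the year; a full simulation is then reserved for the cases where such a screening number is ambiguous, which is exactly the staged approach the paper advocates.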
keywords Daylighting Design; Daylighting Analysis; Radiosity; Ray-tracing
series eCAADe
email
last changed 2022/06/07 07:51

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen. 1. The history of Repligator and Gliftic 1.1 Repligator In 1996 I had the idea of creating an "image idea generator". I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was "out of nothing". What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an "easy to use graphical effects program", but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) Ease of use 2) Ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6, but, apart from adding many new effects and a few new features, is basically the same program as version 4. Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree.
3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful. 1.2 Getting to Gliftic Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but, just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simply closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coast line. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e.
coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs with no distinct family characteristics. Or rather, maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons: Figure 1 Mandala bred with array of regular polygons. I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation). 1.3 Gliftic today Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic. Figure 2 Mandala interpreted with arabesques. Figure 3 Trellis interpreted with "graphic ivy". Figure 4 Regular dots interpreted as "sparks". 1.4 Forms in Gliftic V1 Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions.
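The "collections of graphics primitives" idea can be sketched in a few lines. This is only my guess at the flavor of such a form generator (the function names and the nesting/rotation scheme are invented; Gliftic's actual generators are not published): a mandala built as nested, progressively rotated regular polygons.

```python
import math

def regular_polygon(n_sides, radius, rotation=0.0, center=(0.0, 0.0)):
    """One graphics primitive: a closed regular polygon as a vertex list."""
    cx, cy = center
    return [(cx + radius * math.cos(rotation + 2 * math.pi * k / n_sides),
             cy + radius * math.sin(rotation + 2 * math.pi * k / n_sides))
            for k in range(n_sides)]

def mandala(n_rings=4, n_sides=6):
    """Sketch of a Gliftic-style 'form': a list of primitives that a
    separate 'interpretation' step would then render in some style."""
    return [regular_polygon(n_sides, radius=r + 1,
                            rotation=r * math.pi / n_rings)
            for r in range(n_rings)]

form = mandala()
```

Keeping the form as bare vertex lists is what makes the three components separable: the same list can later be handed to an "arabesque" or "graphic ivy" interpretation without regenerating the geometry.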
Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons. 1.5 Color Schemes in Gliftic V1 When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings; a smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image. 1.6 Interpretations in Gliftic V1 Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree 2) a long thin arabesque 3) a sequence of disks 4) a chain 5) a row of diamonds. A pure geometric ellipse could be interpreted as 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag. 1.7 Applications of Gliftic Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images.
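The HSV color scheme type described above is easy to sketch: pick n colors scattered around a chosen hue/saturation/value, with a single variation parameter controlling how far they may stray. The parameter names and ranges here are my own, not Gliftic's.

```python
import colorsys
import random

def hsv_scheme(main_hue, saturation, value, variation, n=8, seed=1):
    """Sketch of a Gliftic-style HSV color scheme: n RGB colors
    scattered around the chosen HSV settings. Hue wraps around the
    color wheel; saturation and value are clamped to [0, 1]."""
    rng = random.Random(seed)  # seeded so the palette is reproducible
    colors = []
    for _ in range(n):
        h = (main_hue + rng.uniform(-variation, variation)) % 1.0
        s = min(1.0, max(0.0, saturation + rng.uniform(-variation, variation)))
        v = min(1.0, max(0.0, value + rng.uniform(-variation, variation)))
        colors.append(colorsys.hsv_to_rgb(h, s, v))
    return colors

# Narrow variation: nearly a single color. Wide variation: a varied palette.
tight = hsv_scheme(0.0, 0.9, 0.8, variation=0.02)
loose = hsv_scheme(0.0, 0.9, 0.8, variation=0.4)
```

Comparing `tight` and `loose` shows the behaviour described in the text: a small variation yields an almost single-color scheme, a wide one lets colors depart a long way from the HSV settings.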
There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later. 2. The future of Gliftic: 3 possibilities Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in the future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them. 2.1 Continue the current development "linearly" Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations. 2.2 Allow the artist to program Gliftic It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines.
Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function that creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive. 2.3 Add an artificial consciousness to Gliftic This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise its users with the images it makes, but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [1] consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. 
It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence, but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts", the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric." 3. References 1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art. 2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999. 3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25

_id ecaade2018_361
id ecaade2018_361
authors Schneider, Sven, Kuliga, Saskia, Weiser, René, Kammler, Olaf and Fuchkina, Ekaterina
year 2018
title VREVAL - A BIM-based Framework for User-centered Evaluation of Complex Buildings in Virtual Environments
doi https://doi.org/10.52842/conf.ecaade.2018.2.833
source Kepczynska-Walczak, A, Bialkowski, S (eds.), Computing for a better tomorrow - Proceedings of the 36th eCAADe Conference - Volume 2, Lodz University of Technology, Lodz, Poland, 19-21 September 2018, pp. 833-842
summary The design of buildings requires architects to anticipate how their future users will experience and behave in them. In order to do this objectively and systematically, user studies in Virtual Environments (VEs) are a valuable method. In this paper, we present a framework for setting up, conducting and analysing user studies in VEs. The framework is integrated in the architectural design process by using BIM as a common modeling and visualisation platform. In order to keep the definition of user studies simple and flexible for individual purposes, we followed a modular concept. Modules thereby refer to different kinds of user study methods. So far we have developed three modules (Wayfinding, Spatial Experience and Qualitative Annotations), each having individual requirements regarding setup, interaction method and visualisation of results. In the course of an architectural design studio, students applied this framework to evaluate their building designs from a user perspective.
keywords Pre-Occupancy Evaluation; Virtual Reality; User-centered Design; Building Information Modeling; Architectural Education
series eCAADe
email
last changed 2022/06/07 07:57

_id 2006_532
id 2006_532
authors Abdelhameed, Wael
year 2006
title How Does the Digital Environment Change What Architects Do in the Initial Phases of the Design Process?
doi https://doi.org/10.52842/conf.ecaade.2006.532
source Communicating Space(s) [24th eCAADe Conference Proceedings / ISBN 0-9541183-5-9] Volos (Greece) 6-9 September 2006, pp. 532-539
summary Some researchers have tried to answer the question: do we need to think differently while designing in terms of the digital environment? This methodological question leads to another question: what is the range of this difference, if there is one? This research investigates the range of changes in how architects conduct and develop the initial design within the digital environment. The role the digital environment plays in visual design thinking during conceptual designing, through the shaping of concepts, forms, and design methods, is identified and explored.
keywords Conceptual designing; architects; digital environment; design process; visual design thinking
series eCAADe
email
last changed 2022/06/07 07:54

_id 2006_786
id 2006_786
authors Burry, Jane and Mark Burry
year 2006
title Sharing hidden power - Communicating latency in digital models
doi https://doi.org/10.52842/conf.ecaade.2006.786
source Communicating Space(s) [24th eCAADe Conference Proceedings / ISBN 0-9541183-5-9] Volos (Greece) 6-9 September 2006, pp. 786-793
summary As digital spatial models take on the complex relationships inherent in a lattice of dependencies and variables, how easy is it to fully comprehend and communicate the underlying structure and logical subtext of the architectural model: the metadesign? The design of a building, with its relationships between a host of different attributes and performances, was ever a complex system. Now the models, the representations, are in the early stages of taking on more of that complexity and reflexivity. How do we share and communicate these modelling environments or work on them together? This paper explores the issue through examples from one particular associative geometry model constructed as research to underpin the collaborative design development of the narthex of the Passion Façade on the west transept of Gaudi’s Sagrada Família church, part of the building which is now in the early stages of construction.
keywords Design communication; CAD CAM; mathematical models
series eCAADe
email
last changed 2022/06/07 07:54

_id acadia06_148
id acadia06_148
authors Cabrinha, Mark
year 2006
title Synthetic Pedagogy
doi https://doi.org/10.52842/conf.acadia.2006.148
source Synthetic Landscapes [Proceedings of the 25th Annual Conference of the Association for Computer-Aided Design in Architecture] pp. 148-149
summary As tools, techniques, and technologies expand design practice, there is likewise an innovation in design teaching shifting technology from a means of production and representation to a means of discovery and development. This has implications for studio culture and design pedagogy. Expanding the skills-based notion of digital design from know-how, or know-how-to-do, toward know-for, or knowledge-for-action, forms a synthetic relationship between the skills necessary for action and the developing motivations of a young designer. This shifts digital design pedagogy to a medium of active inquiry through play and precision. As digital tools and infrastructure are now ubiquitous in most schools, including the increasing digital material exchange enabled through laser cutters, CNC routers, and rapid prototyping, this topic node presents research papers that engage technology not simply as tools to be taught, but as cognitive technologies which motivate and structure a design student's knowledge, both tacit and explicit, in developing a digital and material, ecological and social synthetic environment. Digital fabrication, the Building Information Model, and parametric modeling have currency in architectural education today; yet, beyond the instrumentality of teaching the tool, it is seldom questioned what deeper motivations these technologies suggest. Each of these tools in its own way forms a synthesis between representational artifacts and the technological impact on process, weaving a wider web of materials, collaboration among peers and consultants, and engagement of the environment that the products of design are situated in. If it is true that this synthetic environment enabled by tools, techniques, and technologies moves from a representational model to a process model of design, the engagement of these tools in the design process is of critical importance in design education. 
What is the relationship between representation, simulation, and physical material in a digitally mediated design education? At the core of synthetic pedagogies is an underlying principle to form relationships of teaching architecture through digital tools, rather than simply teaching the tools themselves. What principles are taught through teaching with these tools, and furthermore, what new principles might these tools develop?
series ACADIA
email
last changed 2022/06/07 07:54

_id sigradi2006_e070c
id sigradi2006_e070c
authors Cardoso, Daniel
year 2006
title Controlled Unpredictability: Constraining Stochastic Search as a Form-Finding Method for Architectural Design
source SIGraDi 2006 - [Proceedings of the 10th Iberoamerican Congress of Digital Graphics] Santiago de Chile - Chile 21-23 November 2006, pp. 263-267
summary Provided with a strict set of rules a computer program can perform the role of a simple designer. Taking advantage of a computer’s processing power, it can also provide an unlimited number of variations in the form while following a given set of constraints. This paper delineates a model for interrelating a rule-based system based on purely architectural considerations with non-deterministic computational procedures in order to provide controlled variations and constrained unpredictability. The experimental model consists of a verisimilar architectural problem, the design of a residential tower with a strict program of 200 units of different types in a given site. Following the interpretation of the program, a set of rules is defined by considering architectural concerns such as lighting, dimensions, circulations, etc. These rules are then encoded in a program that generates form in an unsupervised manner by means of a stochastic search algorithm. Once the program generates a design it is evaluated, and the parameters on the constraints are adjusted in order to produce a new design. This paper presents a description of the architectural problem and of the rule building process, images and descriptions of three different towers produced, and the code for the stochastic-search algorithm used for generating the form. The successful evolution of the experiments shows how in a computation-oriented design process the interpretation of the problem and the rule setting process play a major role in the production of meaningful form, outlining the shifting role of human designers from form-makers to rule-builders in a computation-oriented design endeavour.
keywords Architectural Design; Stochastic; Random; Rule-based systems; Form-generation
series SIGRADI
email
last changed 2016/03/10 09:48

_id caadria2006_565
id caadria2006_565
authors CHEN CHIEN TUNG
year 2006
title DESIGN ON SITE: Portable, Measurable, Adjustable Design Media
doi https://doi.org/10.52842/conf.caadria.2006.x.b7f
source CAADRIA 2006 [Proceedings of the 11th International Conference on Computer Aided Architectural Design Research in Asia] Kumamoto (Japan) March 30th - April 2nd 2006, 565-567
summary Space designers usually look for information on site before proceeding with design. They imagine possible designs while they are on site. Restricted to traditional design media, if they want to develop their ideas further, they have to go back to their desks. This kind of design process can capture only part of the information of the site. Why not do some development directly while designers are on the site? That is the starting point of this paper. The whole situation of a site is very complicated, so it is very difficult to discuss all the possibilities. In order to understand how to design on site, reducing the variations is needed. Tsai and Chang (2005) proposed a prototype for design on site which focuses on land forming. So I chose the interior as the site to reduce the variation and have more controllable factors. Still there are many factors affecting design on site; scale is a very unique and very important one of them. Beginners find it difficult to really feel how long something is on a plan drawing, and even the most advanced VR equipment still can’t fully present the rich information on the site. To experience the site through the body, the main idea is to propose a portable device that can support space designers in designing on site directly, with intuitive body movement and precise scale, and get feedback immediately.
series CAADRIA
email
last changed 2022/06/07 07:49

_id acadia06_392
id acadia06_392
authors Dorta, T., Perez, E.
year 2006
title Hybrid modeling revaluing manual action for 3D modeling
doi https://doi.org/10.52842/conf.acadia.2006.392
source Synthetic Landscapes [Proceedings of the 25th Annual Conference of the Association for Computer-Aided Design in Architecture] pp. 392-402
summary 3D modeling software uses conventional interface devices like mouse, keyboard and display, allowing the designer to model 3D shapes. Due to the complexity of 3D shape data structures, these programs work through a geometrical system and a graphical user interface to input and output data. However, these elements interfere with the conceptual stage of the design process because the software is always asking to be fed with accurate geometries—something hard to do at the beginning of the process. Furthermore, the interface does not recognize all the advantages and skills of the designer’s bare hands as a powerful modeling tool. This paper presents the evaluation of a hybrid modeling technique for conceptual design. The hybrid modeling approach proposes to use both computer and manual tools for 3D modeling at the beginning of the design process. Using 3D scanning and rapid prototyping techniques, the designer is able to go back and forth between digital and manual mode, thus taking advantage of each one. Starting from physical models, the design is then digitized in order to be treated with special modeling software. Then, the rapid prototyping physical model becomes a matrix or physical 3D template used to explore design intentions with the hands, allowing the proposal of complex shapes, which is difficult to achieve by 3D modeling software alone.
series ACADIA
email
last changed 2022/06/07 07:55

_id 2006_832
id 2006_832
authors El-Khoury, Nada; De Paoli Giovanni and Dorta Tomás
year 2006
title Digital Reconstruction as a means of understanding a building’s history - Case studies of a multilayer prototype
doi https://doi.org/10.52842/conf.ecaade.2006.832
source Communicating Space(s) [24th eCAADe Conference Proceedings / ISBN 0-9541183-5-9] Volos (Greece) 6-9 September 2006, pp. 832-839
summary The experiments presented in this paper are situated at the crossroads of two fields: the understanding and communication of history to students and the field of Information and Communication Technologies (ICT). More specifically, we aim to propose to students, ways of transferring information about lifestyles and techniques linked to the construction methods used in the past and which are present in ancient sites. It is not merely a question of proposing experiments for managing an inventory of knowledge such as that summarized in historical texts, but rather a means for understanding it: How do we communicate the invisible? How do we make visible what we cannot see but that we can imagine lies beneath the ruins of ancient sites? Lastly, how do we propose new approaches in the transferring of these historic skills and lifestyles? Such are the questions that the students’ experiments will attempt to answer while using computers as cognitive tools. In this case, these cognitive tools are designated as “multilayer prototypes” which aim to develop a dynamic virtual history space through augmented reality.
keywords ICT; Byblos; multilayer prototype; augmented reality; education research
series eCAADe
email
last changed 2022/06/07 07:55

_id acadia06_068
id acadia06_068
authors Elys, John
year 2006
title Digital Ornament
doi https://doi.org/10.52842/conf.acadia.2006.068
source Synthetic Landscapes [Proceedings of the 25th Annual Conference of the Association for Computer-Aided Design in Architecture] pp. 68-78
summary Gaming software has a history of fostering development of economical and creative methods to deal with hardware limitations. Traditionally the visual representation of gaming software has been a poor offspring of high-end visualization. In a twist of irony, this paper proposes that game production software leads the way into a new era of physical digital ornament. The toolbox of the rendering engine evolved rapidly between 1974 and 1985, and it is still today, 20 years later, the main component of all visualization programs. The development of the bump map is of particular interest; its evolution into a physical displacement map provides untold opportunities for the appropriation of the 2D image to a physical 3D object. To expose the creative potential of the displacement map, a wide scope of existing displacement usage has been identified: Top2maya is a scientific appropriation, Caruso St John Architects an architectural precedent, and Tord Boonje’s use of 2D digital pattern provides us with an artistic production precedent. Current gaming technologies give us an indication of how the resolution of displacement is set to enter an unprecedented level of geometric detail. As modernity was inspired by the machine age, we should be led by current technological advancement and appropriate its usage. It is about a move away from the simplification of structure and form to one that deals with the real possibilities of expanding the dialogue of surface topology. Digital Ornament is a kinetic process rather than a static one; its intentions lie in returning the choice of bespoke materials back to the Architect, Designer and Artist.
series ACADIA
email
last changed 2022/06/07 07:55

_id 2006_670
id 2006_670
authors Fricker, Pia and Alexandre Kapellos
year 2006
title Digital Interaction in Urban Structure - Reflection : Six years and still scanning
doi https://doi.org/10.52842/conf.ecaade.2006.670
source Communicating Space(s) [24th eCAADe Conference Proceedings / ISBN 0-9541183-5-9] Volos (Greece) 6-9 September 2006, pp. 670-673
summary The focus in our elective course for Master Students of Architecture is the following: in parallel to a more traditional way of analysing urban structures, how can the application of multimedia technology, networking and the integration of interactive computer applications lead to a different approach? The objective of our teaching and research project is to find out in what ways urban structure and specific features of a city can be represented by interactive interfaces and the use of CNC technology. Our attitude is based on small-scale approach: the sum of these microanalyses gives us the broader picture, the system or mechanisms of the city. We do not dive into the city but emerge from it. This reflection leads to a new understanding in the organisation of complex urban structures, highlighting and revealing different connections and relationships, thus giving a different final image.
keywords Abstract Types of Spatial Representation; Interaction – Interfaces; Innovative Integration of Multimedia Technology; Digital Design Education
series eCAADe
email
last changed 2022/06/07 07:50

_id cf2011_p027
id cf2011_p027
authors Herssens, Jasmien; Heylighen Ann
year 2011
title A Framework of Haptic Design Parameters for Architects: Sensory Paradox Between Content and Representation
source Computer Aided Architectural Design Futures 2011 [Proceedings of the 14th International Conference on Computer Aided Architectural Design Futures / ISBN 9782874561429] Liege (Belgium) 4-8 July 2011, pp. 685-700.
summary Architects—like other designers—tend to think, know and work in a visual way. In design research, this way of knowing and working is highly valued as paramount to design expertise (Cross 1982, 2006). In the case of architecture, however, it is not only a particular strength, but may as well be regarded as a serious weakness. The absence of non-visual features in traditional architectural spatial representations indicates how these are disregarded as important elements in conceiving space (Dischinger 2006). This bias towards vision, and the suppression of other senses—in the way architecture is conceived, taught and critiqued—results in a disappearance of sensory qualities (Pallasmaa 2005). Nevertheless, if architects design with more attention to non-visual senses, they are able to contribute to more inclusive environments. Indeed if an environment offers a range of sensory triggers, people with different sensory capacities are able to navigate and enjoy it. Rather than implementing as many sensory triggers as possible, the intention is to make buildings and spaces accessible and enjoyable for more people, in line with the objective of inclusive design (Clarkson et al. 2007), also called Design for All or Universal Design (Ostroff 2001). Within this overall objective, the aim of our study is to develop haptic design parameters that support architects during design in paying more attention to the role of haptics, i.e. the sense of touch, in the built environment by informing them about the haptic implications of their design decisions. In the context of our study, haptic design parameters are defined as variables that can be decided upon by designers throughout the design process, and the value of which determines the haptic characteristics of the resulting design. These characteristics are based on the expertise of people who are congenitally blind, as they are more attentive to non-visual information, and of professional caregivers working with them. 
The parameters do not intend to be prescriptive, nor to impose a particular method. Instead they seek to facilitate a more inclusive design attitude by informing designers and helping them to think differently. As the insights from the empirical studies with people born blind and caregivers have been reported elsewhere (Authors 2010), this paper starts by outlining the haptic design parameters resulting from them. Following the classification of haptics into active, dynamic and passive touch, the built environment unfolds into surfaces that can act as “movement”, “guiding” and/or “rest” plane. Furthermore design techniques are suggested to check the haptic qualities during the design process. Subsequently, the paper reports on a focus group interview/workshop with professional architects to assess the usability of the haptic design parameters for design practice. The architects were then asked to try out the parameters in the context of a concrete design project. The reactions suggest that the participating architects immediately picked up the underlying idea of the parameters, and recognized their relevance in relation to the design project at stake, but that their representation confronts us with a sensory paradox: although the parameters question the impact of the visual in architectural design, they are meant to be used by designers, who are used to think, know and work in a visual way.
keywords blindness, design parameters, haptics, inclusive design, vision
series CAAD Futures
email
last changed 2012/02/11 19:21

_id 2006_262
id 2006_262
authors Ibrahim, Magdy
year 2006
title To BIM or not to BIM, This is NOT the Question - How to Implement BIM Solutions in Large Design Firm Environments
doi https://doi.org/10.52842/conf.ecaade.2006.262
source Communicating Space(s) [24th eCAADe Conference Proceedings / ISBN 0-9541183-5-9] Volos (Greece) 6-9 September 2006, pp. 262-267
summary Building information modeling is the technology that is transforming the workplace in design firms. The initial resistance to applying the concept has faded for many reasons. Professional architects now see the feasibility and benefits of using the new technology. CAD managers in design firms are working toward the implementation of BIM packages in order to eventually replace the conventional CAD platforms that are still widely used. However, there are still internal obstacles that slow down the process of the implementation. The change in project management and the proper training required for the conversion are the two major internal obstacles. The current well-organized workflow tailored around the conventional CAD platforms has to be changed in a way suitable for the new technology. The training firms provide for their employees should also be re-structured in a more vertical organization in order to guarantee that everyone understands the new concept and the new workflow. Architectural education usually reflects the needs of the work market. It is very important to understand the needs and identify the directions where architectural education should go. What do we expect from newly graduated architects? How should we shift the focus toward BIM-based CAD in design schools? And, what does it mean to teach modeling versus teaching drafting?
keywords Computer Aided Drafting; Building Information Modeling; Architectural Education
series eCAADe
email
last changed 2022/06/07 07:50

_id ddss2006-pb-343
id DDSS2006-PB-343
authors Jumphon Lertlakkhanakul, Sangrae Do, and Jinwon Choi
year 2006
title Developing a Spatial Context-Aware Building Model and System to Construct a Virtual Place
source Van Leeuwen, J.P. and H.J.P. Timmermans (eds.) 2006, Progress in Design & Decision Support Systems in Architecture and Urban Planning, Eindhoven: Eindhoven University of Technology, ISBN-10: 90-386-1756-9, ISBN-13: 978-90-386-1756-5, p. 343-358
summary The current notion of space seems to be inappropriate to deal with contemporary and future CAAD applications because it lacks user and social values. Instead of using a general term called 'space', our approach is to consider the common unit in the architectural design process as a place composed of space, user and activity information. Our research focuses on developing a novel intelligent building data model carrying the essence of place. Through our research, the needs of using virtual architectural models among various architectural applications are investigated as a first step. Second, key characteristics of spatial information are summarized and systematically classified. The third step is to construct a semantically-rich building data model based on the structured floor plan and semantic location modeling. Then intermediate functions are created, providing an interface between the model and future applications. Finally, a prototype system, PlaceMaker, is developed to demonstrate how to apply our building data model to construct virtual architectural models embodying the essence of place.
keywords Spatial context-aware building model, Spatial reasoning, Virtual place, Location modeling, Design constraint
series DDSS
last changed 2006/08/29 12:55
