CumInCAD is a cumulative index of publications in Computer Aided Architectural Design, supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 9 of 9

_id cdrf2022_478
id cdrf2022_478
authors Andrea Macruz, Mirko Daneluzzo, and Hind Tawaku
year 2022
title Performative Ornament: Enhancing Humidity and Light Levels for Plants in Multispecies Design
source Proceedings of the 2022 DigitalFUTURES: The 4th International Conference on Computational Design and Robotic Fabrication (CDRF 2022)
doi https://doi.org/10.1007/978-981-19-8637-6_41
summary The paper shifts the design conversation from a human-centered design methodology to a posthuman design, considering human and nonhuman actors. It asks how designers can incorporate a multispecies approach to creating greater intelligence and performance projects. To illustrate this, we describe a project of “ornaments” for plants, culminating from a course in an academic setting. The project methodology starts with “Thing Ethnography” analyzing the movement of a water bottle inside a house and its interaction with different objects. The relationship between water and plant was chosen to be further developed, considering water as a material to increase environmental humidity for the plant and brightness through light reflectance and refraction. 3D printed biomimetic structures as supports for water droplets were designed according to their performance and placed in different arrangements around the plant itself. Humidity levels and illuminance of the structures were measured. Ultimately, this created a new approach for working with plants and mass customization. The paper discusses the resultant evidence-based design and environmental values.
series cdrf
email
last changed 2024/05/29 14:03

_id acadia18_36
id acadia18_36
authors Austin, Matthew; Matthews, Linda
year 2018
title Drawing Imprecision. The digital drawing as bits and pixels
source ACADIA // 2018: Recalibration. On imprecision and infidelity. [Proceedings of the 38th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA) ISBN 978-0-692-17729-7] Mexico City, Mexico 18-20 October, 2018, pp. 36-45
doi https://doi.org/10.52842/conf.acadia.2018.036
summary This paper explores the consequences of digitizing the architectural drawing. It argues that the fundamental unit of drawing has shifted from “the line” to an interactive partnership between bits and pixels. It also reveals how the developmental focus of imaging technology has been to synthesize and imitate the line using bits and pixels, rather than to explore their innate productive value and aesthetic potential.

Referring to variations of the architectural drawing from a domestic typology, the paper uses high-precision digital tools tailored to quantitative image analysis and digital tools that sit outside the remit of architectural production, such as word processing, to present a new range of drawing techniques. By applying a series of traditional analytical procedures to the image, it reveals how these maneuvers can interrogate and dislocate any predetermined formal normalization.

The paper reveals that the interdisciplinary repurposing of precise digital toolsets therefore has explicit disciplinary consequences. These arise as a direct result of the recalibration of scale, the liberation of the bit’s representational capacity, and the pixel’s properties of color and brightness. It concludes by proposing that deliberate instances of translational imprecision are highly productive, because by liberating the fundamental qualitative properties of the fundamental digital units, these techniques shift the disciplinary agency of the architectural drawing.

keywords full paper, imprecision, representation, recalibration, theory, glitch aesthetics, algorithmic design, process
series ACADIA
type paper
email
last changed 2022/06/07 07:54

_id 2a99
authors Keul, A. and Martens, B.
year 1996
title SIMULATION - HOW DOES IT SHAPE THE MESSAGE?
source The Future of Endoscopy [Proceedings of the 2nd European Architectural Endoscopy Association Conference / ISBN 3-85437-114-4], pp. 47-54
summary Architectural simulation techniques - CAD, video montage, endoscopy, full-scale or smaller models, stereoscopy, holography etc. - are common visualizations in planning. A subjective theory of planners says "experts are able to distinguish between 'pure design' in their heads and visualized design details and contexts like color, texture, material, brightness, eye level or perspective." If this is right, simulation details should be compensated mentally by trained people, but act as distractors to the lay mind.

Environmental psychologists specializing in architectural psychology offer "user needs assessments" and "post-occupancy evaluations" to facilitate communication between users and experts. To compare the efficiency of building descriptions, building walkthroughs, regular plans, simulation, and direct long-term exposure, the evaluation methods themselves have to be evaluated.

Computer visualizations and virtual realities grow more important, but studies on the effects of simulation techniques upon experts and users are rare. As a contribution to the field of architectural simulation, an expert-user comparison of CAD versus endoscopy/model simulations of a Vienna city project was realized in 1995. The Department for Spatial Simulation at the Vienna University of Technology provided slides of the planned city development at Aspern showing a) CAD renderings and b) endoscopy photos of small-scale polystyrene models. In an experimental design, they were presented uncommented as images of "PROJECT A" versus "PROJECT B" to student groups of architects and non-architects at Vienna and Salzburg (n = 95) and assessed by semantic differentials. Two contradictory hypotheses were tested: 1. The "selective framing hypothesis" (SFH), the subjective theory of planners, postulating different judgement effects (measured by item means of the semantic differential) through selective attention of the planners versus material- and context-bound perception of the untrained users. 2. The "general framing hypothesis" (GFH), postulating typical framing and distraction effects of all simulation techniques, affecting experts as well as non-experts.

The experiment showed that, contrary to expert opinion, framing and distraction were prominent both for experts and lay people (supporting the GFH). A position effect (assessment interaction of CAD and endoscopy) was present with experts and non-experts, too. With empirical evidence for "the medium is the message", a more cautious attitude has to be adopted towards simulation products as powerful framing (i.e. perception- and opinion-shaping) devices.

keywords Architectural Endoscopy, Real Environments
series EAEA
type normal paper
email
more http://info.tuwien.ac.at/eaea/
last changed 2005/09/09 10:43

_id sigradi2016_360
id sigradi2016_360
authors Leonard, Francisca Rodríguez
year 2016
title Evaluación de las condiciones de orientación temporal en programas de modelación lumínica [Evaluation of temporal orientation conditions in lighting simulation software]
source SIGraDi 2016 [Proceedings of the 20th Conference of the Iberoamerican Society of Digital Graphics - ISBN: 978-956-7051-86-1] Argentina, Buenos Aires 9 - 11 November 2016, pp.446-452
summary The study analyzes three basic visual aspects of light (Spatial distribution of brightness, shadows and color of light) in their ability to communicate temporal information by modeling two specific scenarios using different lighting simulation software (DIALux and Relux). The results confirm the potentiality of natural light to assess temporal disorientation in indoor environments. At the same time, the study proposes new opportunities for improving natural light representation in the simulation field.
series SIGRADI
email
last changed 2021/03/28 19:58

_id sigradi2022_125
id sigradi2022_125
authors Mechler, Cintia; Paraizo, Rodrigo
year 2022
title Visualization of architecture design collection using image subsets: the case of FAU-UFRJ media library
source Herrera, PC, Dreifuss-Serrano, C, Gómez, P, Arris-Calderon, LF, Critical Appropriations - Proceedings of the XXVI Conference of the Iberoamerican Society of Digital Graphics (SIGraDi 2022), Universidad Peruana de Ciencias Aplicadas, Lima, 7-11 November 2022, pp. 65–76
summary This paper presents part of a research carried out as a graduation project which investigated new approaches for viewing the digital collection of graduation projects of the School of Architecture and Urbanism at the Federal University of Rio de Janeiro - the “Portal Midiateca”. In addition to visualization, the objective is also to survey open source tools and document the process, enabling other researchers to have access to instruments for analysis and visualization of cultural collections. The visualizations and analysis used as data the images (hue, saturation, brightness, similarity) and metadata (themes and year of publication) of the graduation projects sent by the students. They were made using VIKUS Viewer to examine the collection in a dynamic website with timeline and similarity visualization tools; and ivpy in a notebook environment to produce static mosaics from different groups of images according to their color measurements.
keywords Data analytics, Information visualization, Visual rhetoric, Cultural analytics, ETL
series SIGraDi
email
last changed 2023/05/16 16:55
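The per-image colour measurements the project feeds into ivpy and VIKUS Viewer (hue, saturation, brightness) can be illustrated with a toy sketch. This is not the authors' pipeline; `mean_hsv` is a hypothetical helper, and the "images" here are just lists of RGB tuples:

```python
import colorsys

def mean_hsv(pixels):
    """Average hue, saturation, brightness of an RGB image.

    `pixels` is a flat list of (r, g, b) tuples with components in 0..255.
    Returns (h, s, v), each in 0..1. Naive hue averaging is fine for these
    near-monochrome toy images; real collections need a circular mean.
    """
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]
    n = len(hsv)
    return tuple(sum(c[i] for c in hsv) / n for i in range(3))

# Sort a tiny "collection" of solid-colour images by brightness,
# the kind of ordering used to lay out a colour mosaic.
images = {
    "dark_red":    [(60, 0, 0)] * 4,
    "mid_green":   [(0, 128, 0)] * 4,
    "bright_blue": [(0, 0, 250)] * 4,
}
by_brightness = sorted(images, key=lambda name: mean_hsv(images[name])[2])
```

Sorting by the saturation or hue component instead gives the other groupings the paper's mosaics use.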

_id ga0026
id ga0026
authors Ransen, Owen F.
year 2000
title Possible Futures in Computer Art Generation
source International Conference on Generative Art
summary Years of trying to create an "Image Idea Generator" program have convinced me that the perfect solution would be to have an artificial artistic person, a design slave. This paper describes how I came to that conclusion, realistic alternatives, and briefly, how it could possibly happen.

1. The history of Repligator and Gliftic

1.1 Repligator

In 1996 I had the idea of creating an “image idea generator”. I wanted something which would create images out of nothing, but guided by the user. The biggest conceptual problem I had was “out of nothing”. What does that mean? So I put aside that problem and forced the user to give the program a starting image. This program eventually turned into Repligator, commercially described as an “easy to use graphical effects program”, but actually, to my mind, an Image Idea Generator. The first release came out in October 1997. In December 1998 I described Repligator V4 [1] and how I thought it could be developed away from simply being an effects program. In July 1999 Repligator V4 won the Shareware Industry Awards Foundation prize for "Best Graphics Program of 1999". Prize winners are never told why they won, but I am sure that it was because of two things: 1) ease of use, 2) ease of experimentation. "Ease of experimentation" means that Repligator does in fact come up with new graphics ideas. Once you have input your original image you can generate new versions of that image simply by pushing a single key. Repligator is currently at version 6 but, apart from adding many new effects and a few new features, is basically the same program as version 4.

Following on from the ideas in [1] I started to develop Gliftic, which is closer to my original thoughts of an image idea generator which "starts from nothing". The Gliftic model of images was that they are composed of three components: 1. Layout or form, for example the outline of a mandala is a form. 2. Color scheme, for example colors selected from autumn leaves from an oak tree. 3. Interpretation, for example Van Gogh would paint a mandala with oak tree colors in a different way to Andy Warhol. There is a Van Gogh interpretation and an Andy Warhol interpretation. Further, I wanted to be able to genetically breed images, for example crossing two layouts to produce a child layout, and the same with interpretations and color schemes. If I could achieve this then the program would be very powerful.

1.2 Getting to Gliftic

Programming has an amazing way of crystallising ideas. If you want to put an idea into practice via a computer program you really have to understand the idea not only globally, but just as importantly, in detail. You have to make hard design decisions, there can be no vagueness, and so implementing what I had described above turned out to be a considerable challenge. I soon found out that the hardest thing to do would be the breeding of forms. What are the "genes" of a form? What are the genes of a circle, say, and how do they compare to the genes of the outline of the UK? I wanted the genotype representation (inside the computer program's data) to be directly linked to the phenotype representation (on the computer screen). This seemed to be the best way of making sure that bred forms would bear some visual relationship to their parents. I also wanted symmetry to be preserved. For example, if two symmetrical objects were bred then their children should be symmetrical. I decided to represent shapes as simple closed polygonal shapes, and the "genes" of these shapes were simply the list of points defining the polygon. Thus a circle would have to be represented by a regular polygon of, say, 100 sides. The outline of the UK could easily be represented as a list of points every 10 kilometers along the coastline. Now for the important question: what do you get when you cross a circle with the outline of the UK? I tried various ways of combining the "genes" (i.e. coordinates) of the shapes, but none of them really ended up producing interesting shapes. And of the methods I used, many of them, applied over several "generations", simply resulted in amorphous blobs with no distinct family characteristics. Or rather, maybe I should say that no single method of breeding shapes gave decent results for all types of images. Figure 1 shows an example of breeding a mandala with 6 regular polygons.

[Figure 1: Mandala bred with an array of regular polygons]

I did not try out all my ideas, and maybe in the future I will return to the problem, but it was clear to me that it is a non-trivial problem. And if the breeding of shapes is a non-trivial problem, then what about the breeding of interpretations? I abandoned the genetic (breeding) model of generating designs but retained the idea of the three components (form, color scheme, interpretation).

1.3 Gliftic today

Gliftic Version 1.0 was released in May 2000. It allows the user to change a form, a color scheme and an interpretation. The user can experiment with combining different components together and can thus home in on a personally pleasing image. Just as in Repligator, pushing the F7 key makes the program choose all the options. Unlike Repligator, however, the user can also easily experiment with the form (only) by pushing F4, the color scheme (only) by pushing F5 and the interpretation (only) by pushing F6. Figures 2, 3 and 4 show some example images created by Gliftic.

[Figure 2: Mandala interpreted with arabesques]
[Figure 3: Trellis interpreted with "graphic ivy"]
[Figure 4: Regular dots interpreted as "sparks"]

1.4 Forms in Gliftic V1

Forms are simply collections of graphics primitives (points, lines, ellipses and polygons). The program generates these collections according to the user's instructions. Currently the forms are: Mandala, Regular Polygon, Random Dots, Random Sticks, Random Shapes, Grid Of Polygons, Trellis, Flying Leap, Sticks And Waves, Spoked Wheel, Biological Growth, Chequer Squares, Regular Dots, Single Line, Paisley, Random Circles, Chevrons.

1.5 Color Schemes in Gliftic V1

When combining a form with an interpretation (described later) the program needs to know what colors it can use. The range of colors is called a color scheme. Gliftic has three color scheme types: 1. Random colors: colors for the various parts of the image are chosen purely at random. 2. Hue Saturation Value (HSV) colors: the user can choose the main hue (e.g. red or yellow), the saturation (purity) of the color scheme and the value (brightness/darkness). The user also has to choose how much variation is allowed in the color scheme. A wide variation allows the various colors of the final image to depart a long way from the HSV settings; a smaller variation results in the final image using almost a single color. 3. Colors chosen from an image: the user can choose an image (for example a JPG file of a famous painting, or a digital photograph he took while on holiday in Greece) and Gliftic will select colors from that image. Only colors from the selected image will appear in the output image.

1.6 Interpretations in Gliftic V1

Interpretation in Gliftic is best described with a few examples. A pure geometric line could be interpreted as: 1) the branch of a tree, 2) a long thin arabesque, 3) a sequence of disks, 4) a chain, 5) a row of diamonds. A pure geometric ellipse could be interpreted as: 1) a lake, 2) a planet, 3) an eye. Gliftic V1 has the following interpretations: Standard, Circles, Flying Leap, Graphic Ivy, Diamond Bar, Sparkz, Ess Disk, Ribbons, George Haite, Arabesque, ZigZag.

1.7 Applications of Gliftic

Currently Gliftic is mostly used for creating WEB graphics, often backgrounds, as it has an option to enable "tiling" of the generated images. There is also a possibility that it will be used in the custom textile business sometime within the next year or two. The real application of Gliftic is that of generating new graphics ideas, and I suspect that, like Repligator, many users will only understand this later.

2. The future of Gliftic: 3 possibilities

Completing Gliftic V1 gave me the experience to understand what problems and opportunities there will be in future development of the program. Here I divide my many ideas into three oversimplified possibilities, and the real result may be a mix of two or all three of them.

2.1 Continue the current development "linearly"

Gliftic could grow simply by the addition of more forms and interpretations. In fact I am sure that initially it will grow like this. However, this limits the possibilities to what is inside the program itself. These limits can be mitigated by allowing the user to add forms (as vector files). The user can already add color schemes (as images). The biggest problem with leaving the program in its current state is that there is no easy way to add interpretations.

2.2 Allow the artist to program Gliftic

It would be interesting to add a language to Gliftic which allows the user to program his own form generators and interpreters. In this way Gliftic becomes a "platform" for the development of dynamic graphics styles by the artist. The advantage of not having to deal with the complexities of Windows programming could attract the more adventurous artists and designers. The choice of programming language of course needs to take into account the fact that the "programmer" is probably not an expert computer scientist. I have seen how LISP (a not exactly easy artificial intelligence language) has become very popular among non-programming users of AutoCAD. If, to complete a job which you do manually and repeatedly, you can write a LISP macro of only 5 lines, then you may be tempted to learn enough LISP to write those 5 lines. Imagine also the ability to publish (and/or sell) "style generators". An artist could develop a particular interpretation function which creates images of a given character which others find appealing. The interpretation (which runs inside Gliftic as a routine) could be offered to interior designers (for example) to unify carpets, wallpaper and furniture coverings for single projects. As Adrian Ward [3] says on his WEB site: "Programming is no less an artform than painting is a technical process." Learning a computer language to create a single image is overkill and impractical. Learning a computer language to create your own artistic style which generates an infinite series of images in that style may well be attractive.

2.3 Add an artificial consciousness to Gliftic

This is a wild science fiction idea which comes into my head regularly. Gliftic manages to surprise the users with the images it makes but, currently, is limited by what gets programmed into it or by pure chance. How about adding a real artificial consciousness to the program? Creating an intelligent artificial designer? According to Igor Aleksander [2], consciousness is required for programs (computers) to really become usefully intelligent. Aleksander thinks that "the line has been drawn under the philosophical discussion of consciousness, and the way is open to sound scientific investigation". Without going into the details, and with great over-simplification, there are roughly two sorts of artificial intelligence: 1) Programmed intelligence, where, to all intents and purposes, the programmer is the "intelligence". The program may perform well (but often, in practice, doesn't) and any learning which is done is simply statistical and pre-programmed. There is no way that this type of program could become conscious. 2) Neural network intelligence, where the programs are based roughly on a simple model of the brain, and the network learns how to do specific tasks. It is this sort of program which, according to Aleksander, could, in the future, become conscious, and thus usefully intelligent. What could the advantages of an artificial artist be? 1) There would be no need for programming. Presumably the human artist would dialog with the artificial artist, directing its development. 2) The artificial artist could be used as an apprentice, doing the "drudge" work of art, which needs intelligence but is, anyway, monotonous for the human artist. 3) The human artist imagines "concepts"; the artificial artist makes them concrete. 4) A conscious artificial artist may come up with ideas of its own. Is this science fiction? Arthur C. Clarke's 1st Law: "If a famous scientist says that something can be done, then he is in all probability correct. If a famous scientist says that something cannot be done, then he is in all probability wrong." Arthur C. Clarke's 2nd Law: "Only by trying to go beyond the current limits can you find out what the real limits are." One of Bertrand Russell's 10 commandments: "Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric."

3. References

1. "From Ramon Llull to Image Idea Generation". Ransen, Owen. Proceedings of the 1998 Milan First International Conference on Generative Art.
2. "How To Build A Mind". Aleksander, Igor. Weidenfeld and Nicolson, 1999.
3. "How I Drew One of My Pictures: or, The Authorship of Generative Art". Ward, Adrian and Cox, Geof. Proceedings of the 1999 Milan 2nd International Conference on Generative Art.
series other
email
more http://www.generativeart.com/
last changed 2003/08/07 17:25
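The HSV colour-scheme type described in section 1.5 of the abstract (a base hue/saturation/value plus a user-chosen amount of variation) might work roughly as follows. This is a sketch of the stated mechanism, not Ransen's actual implementation; `hsv_scheme` and its parameters are hypothetical:

```python
import colorsys
import random

def hsv_scheme(hue, saturation, value, variation, n, seed=0):
    """Generate n RGB colours scattered around a base HSV colour.

    `variation` (0..1) controls how far each colour may drift from the
    base settings: a wide variation lets colours depart a long way from
    the base, while variation 0 collapses the scheme to a single colour.
    Hue wraps around the colour circle; saturation and value are clamped.
    """
    rng = random.Random(seed)
    colours = []
    for _ in range(n):
        h = (hue + rng.uniform(-variation, variation)) % 1.0
        s = min(1.0, max(0.0, saturation + rng.uniform(-variation, variation)))
        v = min(1.0, max(0.0, value + rng.uniform(-variation, variation)))
        colours.append(colorsys.hsv_to_rgb(h, s, v))
    return colours

# Zero variation: every colour in the scheme is the base colour (pure red).
scheme = hsv_scheme(hue=0.0, saturation=1.0, value=1.0, variation=0.0, n=3)
```

The "colors chosen from an image" scheme type would replace the random drift with sampling pixels from a user-supplied image.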

_id 5840
authors Sato, I., Sato, Y. and Ikeuchi, K.
year 1999
title Illumination distribution from brightness in shadows: adaptive estimation of illumination distribution with unknown reflectance properties in shadow regions
source Proceedings IEEE Conference on Computer Vision and Pattern Recognition 99, pp. 875-882, September 1999
summary This paper describes a new method for estimating the illumination distribution of a real scene from a radiance distribution inside shadows cast by an object in the scene. First, the illumination distribution of the scene is approximated by discrete sampling of an extended light source. Then the illumination distribution of the scene is estimated from a radiance distribution inside shadows cast by an object of known shape onto another object in the scene. Instead of assuming any particular reflectance properties of the surface inside the shadows, both the illumination distribution of the scene and the reflectance properties of the surface are estimated simultaneously, based on an iterative optimization framework. In addition, this paper introduces an adaptive sampling of the illumination distribution of a scene. Rather than using a uniform discretization of the overall illumination distribution, we adaptively increase sampling directions of the illumination distribution based on the estimation at the previous iteration. Using the adaptive sampling framework, we are able to estimate the overall illumination more efficiently by using fewer sampling directions. The proposed method is effective for estimating an illumination distribution even under a complex illumination environment.
series other
last changed 2003/04/23 15:50
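The linear core of this estimation can be illustrated in a toy form. The paper estimates reflectance and illumination jointly and refines the sampling adaptively; the sketch below is only the innermost piece under simplifying assumptions: reflectance is folded into fixed per-pixel visibility weights, and a single least-squares solve recovers the discrete source intensities from noiseless shadow radiance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 discrete light-source directions, 50 shadow pixels.
# visibility[i, j] = 1 if source j is visible from pixel i (i.e. not
# blocked by the occluding object), 0 otherwise.
n_sources, n_pixels = 4, 50
true_intensity = np.array([2.0, 0.5, 1.0, 0.0])  # unknown source intensities
visibility = rng.integers(0, 2, size=(n_pixels, n_sources)).astype(float)

# With reflectance fixed, observed shadow radiance is linear in the
# source intensities.
radiance = visibility @ true_intensity

# Recover the intensities by least squares.
estimate, *_ = np.linalg.lstsq(visibility, radiance, rcond=None)
```

The paper's method wraps a solve like this in an iterative loop that alternates with reflectance estimation and adds sampling directions where the previous iterate indicates the illumination needs finer resolution.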

_id 0b96
authors Spencer, G. (et al.)
year 1995
title Physically-Based Glare Effects for Digital Images
source SIGGRAPH'95. Conference Proc., pp. 325-334
summary The physical mechanisms and physiological causes of glare in human vision are reviewed. These mechanisms are scattering in the cornea, lens, and retina, and diffraction in the coherent cell structures on the outer radial areas of the lens. This scattering and diffraction are responsible for the "bloom" and "flare lines" seen around very bright objects. The diffraction effects cause the "lenticular halo". The quantitative models of these glare effects are reviewed, and an algorithm for using these models to add glare effects to digital images is presented. The resulting digital point-spread function is thus psychophysically based and can substantially increase the "perceived" dynamic range of computer simulations containing light sources. Finally, a perceptual test is presented that indicates these added glare effects increase the apparent brightness of light sources in digital images.
series other
last changed 2003/04/23 15:50
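The basic idea of adding glare by spreading bright pixels with a point-spread function can be sketched as below. The paper's PSF is psychophysically derived and includes the flare lines and lenticular halo; this toy substitutes a plain Gaussian PSF on a small grayscale image, and `add_bloom` is a hypothetical name:

```python
import numpy as np

def add_bloom(image, threshold=0.9, sigma=2.0, strength=0.5):
    """Add a simple bloom around bright pixels of a grayscale image in 0..1.

    Pixels above `threshold` are spread with a Gaussian point-spread
    function and added back, mimicking the scattering component of glare.
    """
    bright = np.where(image > threshold, image, 0.0)
    # Build a small normalized Gaussian PSF kernel of radius 3*sigma.
    r = int(3 * sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    psf /= psf.sum()
    # Direct 2-D convolution via shifted copies (fine for small images).
    glow = np.zeros_like(image)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            glow += psf[dy + r, dx + r] * np.roll(np.roll(bright, dy, 0), dx, 1)
    return np.clip(image + strength * glow, 0.0, 1.0)

# A single bright "light source" pixel gains a halo of nonzero neighbours,
# which is what makes it read as brighter than the display can show.
img = np.zeros((32, 32))
img[16, 16] = 1.0
out = add_bloom(img)
```

Replacing the Gaussian with the paper's measured PSF, applied per colour channel, is what turns this generic bloom into the physically based glare effect.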

_id cd68
authors Szalapaj, Peter J. and Tang, Songlan
year 1994
title Giving Colour to Contextual Hypermedia
source The Virtual Studio [Proceedings of the 12th European Conference on Education in Computer Aided Architectural Design / ISBN 0-9523687-0-6] Glasgow (Scotland) 7-10 September 1994, pp. 191-200
doi https://doi.org/10.52842/conf.ecaade.1994.191
summary Design development evolves within design contexts that require expression as much as the design itself, and these contexts often constrain any presentation in ways that are not usually explicitly thought of. The context of a design object will therefore influence the conceptual ways of thinking about and presenting this object. Support in hypermedia applications for the expression of the colour context, therefore, should be based upon sound theoretical principles to ensure the effective communication of design ideas. Johannes Itten has postulated seven ways to communicate visual information by means of colour contrast effects, each of which is unique in character, artistic value, and symbolic effect. Of these seven contrasting effects, three are in terms of the nature of colour itself: hue, brightness, and saturation. Although conventional computer graphics applications support the application of these colour properties to discrete shapes, they give no analysis of contrasting colour relationships between shapes. The proposed system attempts to overcome this deficiency. The remaining four contrast effects concern human psychology and psychophysics, and are not supported at all in computer graphics applications. These include the cold-warm contrast, simultaneous contrast, complementary contrast, and the contrast of extension. Although contrast effects are divided into the above seven aspects, they are also related to one another. Thus, when the hue contrast works, the light-dark contrast and cold-warm contrast must work at the same time. Computational support for these colour effects form the focus of this paper.
series eCAADe
last changed 2022/06/07 07:56

No more hits.
