CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 43

_id caadria2020_161
id caadria2020_161
authors Kido, Daiki, Fukuda, Tomohiro and Yabuki, Nobuyoshi
year 2020
title Mobile Mixed Reality for Environmental Design Using Real-Time Semantic Segmentation and Video Communication - Dynamic Occlusion Handling and Green View Index Estimation
doi https://doi.org/10.52842/conf.caadria.2020.1.681
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 1, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 681-690
summary Mixed reality (MR), which blends the real and virtual worlds, has attracted attention for consensus-building among stakeholders in environmental design by visualizing planned landscapes onsite. One of the technical challenges in MR is the occlusion problem, which occurs when virtual objects hide physical objects that should be rendered in front of them; this can produce misleading simulations. In addition, visual environmental assessment of present and proposed landscapes with MR can support evidence-based design, such as urban greenery planning. This study therefore aims to develop an MR-based environmental assessment system with dynamic occlusion handling and green view index estimation using semantic segmentation based on deep learning. The system was designed for use on a mobile device, with video communication over the Internet to offload real-time semantic segmentation, whose computational cost is high. The applicability of the developed system is shown through case studies.
keywords Mixed Reality (MR); Environmental Design; Dynamic Occlusion Handling; Semantic Segmentation; Green View Index
series CAADRIA
email
last changed 2022/06/07 07:52

_id acadia20_668
id acadia20_668
authors Pasquero, Claudia; Poletto, Marco
year 2020
title Deep Green
doi https://doi.org/10.52842/conf.acadia.2020.1.668
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 668-677.
summary Ubiquitous computing enables us to decipher the biosphere’s anthropogenic dimension, what we call the Urbansphere (Pasquero and Poletto 2020). This machinic perspective unveils a new postanthropocentric reality, where the impact of artificial systems on the natural biosphere is indeed global, but their agency is no longer entirely human. This paper explores a protocol to design the Urbansphere, or what we may call the urbanization of the nonhuman, titled DeepGreen. With the development of DeepGreen, we are testing the potential to bring the interdependence of digital and biological intelligence to the core of architectural and urban design research. This is achieved by developing a new biocomputational design workflow that enables the pairing of what is algorithmically drawn with what is biologically grown (Pasquero and Poletto 2016). In other words, and in more detail, the paper illustrates how generative adversarial network (GAN) algorithms (Radford, Metz, and Soumith 2015) can be trained to “behave” like Physarum polycephalum, a unicellular organism endowed with surprising computational abilities and self-organizing behaviors that have made it popular among scientists and engineers alike (Adamatzky 2010) (Fig. 1). The trained GAN_Physarum is deployed as an urban design technique to test the potential of polycephalum intelligence in solving problems of urban remetabolization and in computing scenarios of urban morphogenesis within a nonhuman conceptual framework.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id caadria2020_028
id caadria2020_028
authors Xia, Yixi, Yabuki, Nobuyoshi and Fukuda, Tomohiro
year 2020
title Development of an Urban Greenery Evaluation System Based on Deep Learning and Google Street View
doi https://doi.org/10.52842/conf.caadria.2020.1.783
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 1, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 783-792
summary Street greenery has long played a vital role in the quality of urban landscapes and is closely related to people's physical and mental health. In current research on the urban environment, researchers use various methods to simulate and measure urban greenery, and with the development of computer technology the ways to obtain data have become more diverse. Urban green coverage, for example, can be assessed from remote sensing imagery captured by aerial or satellite sensors; however, this method is not suitable for evaluating street greenery. Unlike most remote sensing images, urban street images taken from a pedestrian perspective are the most common view of green plants. The imagery provided by Google Street View is similar to that captured from a pedestrian's perspective and is thus more suitable for studying urban street greening. With the development of artificial intelligence, deep learning allows us to abandon heavy manual statistical work and obtain more accurate semantic information from street images. Furthermore, we can measure green landscapes over larger areas of the city and extract more details from street view images for urban research.
keywords Green View Index; Deep Learning; Google Street View; Segmentation
series CAADRIA
email
last changed 2022/06/07 07:57

_id acadia20_228
id acadia20_228
authors Alawadhi, Mohammad; Yan, Wei
year 2020
title BIM Hyperreality
doi https://doi.org/10.52842/conf.acadia.2020.1.228
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 228-236.
summary Deep learning is expected to offer new opportunities and a new paradigm for the field of architecture. One such opportunity is teaching neural networks to visually understand architectural elements from the built environment. However, the availability of large training datasets is one of the biggest limitations of neural networks. Also, the vast majority of training data for visual recognition tasks is annotated by humans. In order to resolve this bottleneck, we present a concept of a hybrid system—using both building information modeling (BIM) and hyperrealistic (photorealistic) rendering—to synthesize datasets for training a neural network for building object recognition in photos. For generating our training dataset, BIMrAI, we used an existing BIM model and a corresponding photorealistically rendered model of the same building. We created methods for using renderings to train a deep learning model, trained a generative adversarial network (GAN) model using these methods, and tested the output model on real-world photos. For the specific case study presented in this paper, our results show that a neural network trained with synthetic data (i.e., photorealistic renderings and BIM-based semantic labels) can be used to identify building objects from photos without using photos in the training data. Future work can enhance the presented methods using available BIM models and renderings for more generalized mapping and description of photographed built environments.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id cdrf2019_199
id cdrf2019_199
authors Ana Herruzo and Nikita Pashenkov
year 2020
title Collection to Creation: Playfully Interpreting the Classics with Contemporary Tools
doi https://doi.org/10.1007/978-981-33-4400-6_19
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary This paper details an experimental project developed in an academic and pedagogical environment, aiming to bring together visual arts and computer science coursework in the creation of an interactive installation for a live event at The J. Paul Getty Museum. The result incorporates interactive visuals based on the user’s movements and facial expressions, accompanied by synthetic texts generated using machine learning algorithms trained on the museum’s art collection. Special focus is paid to how advances in computing such as Deep Learning and Natural Language Processing can contribute to deeper engagement with users and add new layers of interactivity.
series cdrf
email
last changed 2022/09/29 07:51

_id ecaade2020_499
id ecaade2020_499
authors Ashour, Ziad and Yan, Wei
year 2020
title BIM-Powered Augmented Reality for Advancing Human-Building Interaction
doi https://doi.org/10.52842/conf.ecaade.2020.1.169
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 169-178
summary The shift from computer-aided design (CAD) to building information modeling (BIM) has made the adoption of augmented reality (AR) promising in the field of architecture, engineering and construction. Despite the potential of AR in this field, the industry and professionals have still not fully adopted it due to registration and tracking limitations and visual occlusions in dynamic environments. We propose our first prototype (BIMxAR), which utilizes existing buildings' semantically rich BIM models and contextually aligns geometrical and non-geometrical information with the physical buildings. The proposed prototype aims to solve registration and tracking issues in dynamic environments by utilizing tracking and motion sensors already available in many mobile phones and tablets. The experiment results indicate that the system can support BIM and physical building registration in outdoor and some indoor environments, but cannot maintain accurate alignment indoors when relying only on a device's motion sensors. Therefore, additional computer vision and AI (deep learning) functions need to be integrated into the system to enhance AR model registration in the future.
keywords Augmented Reality; BIM; BIM-enabled AR; GPS; Human-Building Interactions; Education
series eCAADe
email
last changed 2022/06/07 07:54

_id ecaade2020_017
id ecaade2020_017
authors Chan, Yick Hin Edwin and Spaeth, A. Benjamin
year 2020
title Architectural Visualisation with Conditional Generative Adversarial Networks (cGAN) - What machines read in architectural sketches
doi https://doi.org/10.52842/conf.ecaade.2020.2.299
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 299-308
summary As a form of visual reasoning, sketching is a human cognitive activity instrumental to architectural design. In the process of sketching, abstract sketches invoke new mental imageries and subsequently lead to new sketches. This iterative transformation is repeated until the final design emerges. Artificial Intelligence and Deep Neural Networks have been developed to imitate human cognitive processes. Amongst these networks, the Conditional Generative Adversarial Network (cGAN) has been developed for image-to-image translation and is able to generate realistic images from abstract sketches. To mimic the cyclic process of abstracting and imaging in architectural concept design, a Cyclic-cGAN that consists of two cGANs is proposed in this paper. The first cGAN transforms sketches to images, while the second transforms images to sketches. The training of the Cyclic-cGAN is presented and its performance illustrated using two sketches from well-known architects and two from architecture students. The results show that the proposed Cyclic-cGAN can emulate architects' mode of visual reasoning through sketching. This novel approach of utilising deep neural networks may open the door for further development of Artificial Intelligence in assisting architects in conceptual design.
keywords visual cognition; design computation; machine learning; artificial intelligence
series eCAADe
email
last changed 2022/06/07 07:55

_id caadria2020_446
id caadria2020_446
authors Cho, Dahngyu, Kim, Jinsung, Shin, Eunseo, Choi, Jungsik and Lee, Jin-Kook
year 2020
title Recognizing Architectural Objects in Floor-plan Drawings Using Deep-learning Style-transfer Algorithms
doi https://doi.org/10.52842/conf.caadria.2020.2.717
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 717-725
summary This paper describes an approach to recognizing floor plans by sorting out the essential objects of the plan using deep-learning-based style transfer algorithms. Previously, recognition of floor plans in the design and remodeling phases was labor-intensive, requiring expert-dependent, manual interpretation. For a computer to take in the imaged architectural plan information, the symbols in the plan must be understood; however, the computer has difficulty extracting information directly from preexisting plans because of their varying conditions. The goal is to convert preexisting plans into an integrated format that improves readability by transferring their style into a comprehensible form using Conditional Generative Adversarial Networks (cGAN). About 100 floor plans from a dataset previously constructed by the Ministry of Land, Infrastructure, and Transport of Korea were used. The proposed approach has two steps: (1) define the important objects contained in the floor plan that need to be extracted, and (2) use the defined objects as training input data for the cGAN style transfer model. In this paper, wall, door, and window objects were selected as targets for extraction. The preexisting floor plans are segmented into parts and altered into a consistent format, which then contributes to automatically extracting information for further utilization.
keywords Architectural objects; floor plan recognition; deep-learning; style-transfer
series CAADRIA
email
last changed 2022/06/07 07:56

_id acadia20_272
id acadia20_272
authors del Campo, Matias; Carlson, Alexandra; Manninger, Sandra
year 2020
title How Machines Learn to Plan
doi https://doi.org/10.52842/conf.acadia.2020.1.272
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 272-281.
summary This paper strives to interrogate the abilities of machine vision techniques based on a family of deep neural networks, called generative adversarial neural networks (GANs), to devise alternative planning solutions. The basis for these processes is a large database of existing planning solutions. For the experimental setup of this paper, these plans were divided into two separate learning classes: Modern and Baroque. The proposed algorithmic technique leverages the large amount of structural and symbolic information that is inherent to the design of planning solutions throughout history to generate novel unseen plans. In this area of inquiry, aspects of culture such as creativity, agency, and authorship are discussed, as neural networks can conceive solutions currently alien to designers. These can range from alien morphologies to advanced programmatic solutions. This paper is primarily interested in interrogating the second existing but uncharted territory.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id caadria2020_402
id caadria2020_402
authors Ezzat, Mohammed
year 2020
title A Framework for a Comprehensive Conceptualization of Urban Constructs - SpatialNet and SpatialFeaturesNet for computer-aided creative urban design
doi https://doi.org/10.52842/conf.caadria.2020.2.111
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 111-120
summary Analogy is thought to be foundational for designing and for design creativity. Nonetheless, practicing analogical reasoning needs a knowledge-base. The paper proposes a framework for constructing a knowledge-base of urban constructs that builds on an ontology of urbanism. The framework is composed of two modules that are responsible for representing either the concepts or the features of any urban construct's materialization. The concepts are represented as a knowledge graph (KG) named SpatialNet, while the physical features are represented by a deep neural network (DNN) called SpatialFeaturesNet. For structuring SpatialNet as a KG that comprehensively conceptualizes spatial qualities, deep learning applied to natural language processing (NLP) is employed. The comprehensive concepts of SpatialNet are first discovered using semantic analyses of nine English lingual corpora and then structured using the urban ontology. The goal of the framework is to map the spatial features to the plethora of their matching concepts. The granularity and coherence of the proposed framework are expected to sustain or substitute for other known analogical, knowledge-based, inspirational design approaches such as case-based reasoning (CBR) and its analogical application to architectural design (CBD).
keywords Domain-specific knowledge graph of urban qualities; Deep neural network for structuring KG; Natural language processing and comprehensive understanding of urban constructs; Urban cognition and design creativity; Case-based reasoning (CBR) and case-based design (CBD)
series CAADRIA
email
last changed 2022/06/07 07:55

_id caadria2020_342
id caadria2020_342
authors Han, Yoojin and Lee, Hyunsoo
year 2020
title A Deep Learning Approach for Brand Store Image and Positioning - Auto-generation of Brand Positioning Maps Using Image Classification
doi https://doi.org/10.52842/conf.caadria.2020.2.689
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 689-696
summary This paper presents a deep learning approach to measuring brand store image and generating positioning maps. The rise of signature brand stores can be explained in terms of brand identity. Store design and architecture have been highlighted as effective communicators of brand identity and position but, in terms of spatial environment, have been studied solely using qualitative approaches. This study adopted a deep learning-based image classification model as an alternative methodology for measuring brand image and positioning, which are conventionally considered highly subjective. The results demonstrate that a consistent, coherent, and strong brand identity can be trained and recognized using deep learning technology. A brand positioning map can also be created based on predicted scores derived by deep learning. This paper also suggests wider uses for this approach to branding and architectural design.
keywords Deep Learning; Image Classification; Brand Identity; Brand Positioning Map; Brand Store Design
series CAADRIA
email
last changed 2022/06/07 07:50

_id acadia20_658
id acadia20_658
authors Ho, Brian
year 2020
title Making a New City Image
doi https://doi.org/10.52842/conf.acadia.2020.1.658
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 658-667.
summary This paper explores the application of computer vision and machine learning to street-level imagery of cities, reevaluating past theory linking urban form to human perception. This paper further proposes a new method for design based on the resulting model, where a designer can identify areas of a city tied to certain perceptual qualities and generate speculative street scenes optimized for their predicted saliency on labels of human experience. This work extends Kevin Lynch’s Image of the City with deep learning: training an image classification model to recognize Lynch’s five elements of the city image, using Lynch’s original photographs and diagrams of Boston to construct labeled training data alongside new imagery of the same locations. This new city image revitalizes past attempts to quantify the human perception of urban form and improve urban design. A designer can search and map the data set to understand spatial opportunities and predict the quality of imagined designs through a dynamic process of collage, model inference, and adaptation. Within a larger practice of design, this work suggests that the curation of archival records, computer science techniques, and theoretical principles of urbanism might be integrated into a single craft. With a new city image, designers might “see” at the scale of the city, as well as focus on the texture, color, and details of urban life.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id acadia20_382
id acadia20_382
authors Hosmer, Tyson; Tigas, Panagiotis; Reeves, David; He, Ziming
year 2020
title Spatial Assembly with Self-Play Reinforcement Learning
doi https://doi.org/10.52842/conf.acadia.2020.1.382
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 382-393.
summary We present a framework to generate intelligent spatial assemblies from sets of digitally encoded spatial parts designed by the architect with embedded principles of prefabrication, assembly awareness, and reconfigurability. The methodology includes a bespoke constraint-solving algorithm for autonomously assembling 3D geometries into larger spatial compositions for the built environment. A series of graph-based analysis methods are applied to each assembly to extract performance metrics related to architectural space-making goals, including structural stability, material density, spatial segmentation, connectivity, and spatial distribution. Together with the constraint-based assembly algorithm and analysis methods, we have integrated a novel application of deep reinforcement learning (RL) for training the models to improve at matching the multiperformance goals established by the user through self-play. RL is applied to improve the selection and sequencing of parts while considering local and global objectives. The user’s design intent is embedded through the design of partial units of 3D space with embedded fabrication principles, their relational constraints over how they connect to each other, and the quantifiable goals that drive the distribution of effective features. The methodology has been developed over three years through three case study projects called ArchiGo (2017–2018), NoMAS (2018–2019), and IRSILA (2019–2020). Each demonstrates the potential for buildings with reconfigurable and adaptive life cycles.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id ecaade2020_222
id ecaade2020_222
authors Ikeno, Kazunosuke, Fukuda, Tomohiro and Yabuki, Nobuyoshi
year 2020
title Automatic Generation of Horizontal Building Mask Images by Using a 3D Model with Aerial Photographs for Deep Learning
doi https://doi.org/10.52842/conf.ecaade.2020.2.271
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 271-278
summary Information extracted from aerial photographs is widely used in urban planning and design. An effective method for detecting buildings in aerial photographs is to use deep learning to understand the current state of a target region. However, the building mask images used to train the deep learning model are in many cases generated manually. To address this challenge, a method has been proposed for automatically generating mask images for deep learning by using virtual reality 3D models. Because normal virtual models do not have the realism of a photograph, it is difficult to obtain highly accurate detection results in the real world even if such images are used for deep learning training. Therefore, the objective of this research is to propose a method for automatically generating building mask images for deep learning by using 3D models textured with aerial photographs. The model trained on datasets generated by the proposed method could detect buildings in aerial photographs with an accuracy of IoU = 0.622. Future work includes changing the size and type of mask images, training the model, and evaluating the accuracy of the trained model.
keywords Urban planning and design; Deep learning; Semantic segmentation; Mask image; Training data; Automatic design
series eCAADe
email
last changed 2022/06/07 07:50

_id cdrf2019_93
id cdrf2019_93
authors Jiaxin Zhang, Tomohiro Fukuda, and Nobuyoshi Yabuki
year 2020
title A Large-Scale Measurement and Quantitative Analysis Method of Façade Color in the Urban Street Using Deep Learning
doi https://doi.org/10.1007/978-981-33-4400-6_9
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary Color planning has become a significant issue in urban development, and an overall cognition of urban color identities will help to design a better urban environment. However, previous measurement and analysis methods for façade color in the urban street were limited to manual collection, which is challenging to carry out on a city scale. Recently emerging street view image datasets and deep learning have revealed the possibility of overcoming these limits, bringing forward a research paradigm shift. In the experimental part, we disassemble the goal into three steps: first, capturing street view images with coordinate information through the API provided by the street view service; then, extracting façade images and cleaning up invalid data by using the deep-learning segmentation method; finally, calculating the dominant color based on the Munsell Color System. Results can show whether the color status satisfies the requirements of the urban plan for façade color in the street. This method can help realize refined measurement of façade color using open-source data and has good universality in practice.
series cdrf
email
last changed 2022/09/29 07:51

_id caadria2020_088
id caadria2020_088
authors Kado, Keita, Furusho, Genki, Nakamura, Yusuke and Hirasawa, Gakuhito
year 2020
title Process Path Derivation Method for Multi-Tool Processing Machines Using Deep-Learning-Based Three-Dimensional Shape Recognition
doi https://doi.org/10.52842/conf.caadria.2020.2.609
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 609-618
summary When multi-axis processing machines are employed for high-mix, low-volume production, they are operated using a dedicated computer-aided design/computer-aided manufacturing (CAD/CAM) process that derives an operating path concurrently with detailed modeling. This type of work requires dedicated software that occasionally results in complicated front-loading and data management issues. We propose a three-dimensional (3D) shape recognition method based on deep learning that creates an operational path from 3D part geometry entered by a CAM application, deriving a path for processing machinery such as a circular saw, drill, or end mill. The methodology was tested using 11 joint types and five processing patterns. The results show that the proposed method has several practical applications: it addresses wooden object creation and may also have other applications.
keywords Three-dimensional Shape Recognition; Deep Learning; Digital Fabrication; Multi-axis Processing Machine
series CAADRIA
email
last changed 2022/06/07 07:52

_id caadria2020_375
id caadria2020_375
authors Kalo, Ammar, Tracy, Kenneth and Tam, Mark
year 2020
title Robotic Sand Carving - Machining Techniques Derived from a Traditional Balinese Craft
doi https://doi.org/10.52842/conf.caadria.2020.2.443
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 443-452
summary This paper presents research aimed at translating Ukiran Pasir Melela, traditional Balinese sand carving, into a new robot-enabled framework for rapidly carving stiff but uncured cement sand blocks to create free-form, architecturally scalable, unique volumetric elements. The research aims to reconsider vernacular materials and craft through their integration with robotic manufacturing processes, and to show how this activity can provide localized, low-energy manufacturing solutions for building in the Anthropocene. Balinese sand carving shows potential advantages over current, rather environmentally damaging machining processes, primarily by using a soft material state to make deep, smooth cuts into material with little torque. Transferring this manual, low-impact craft to robot-enabled fabrication leverages heuristic knowledge developed over decades and opens possibilities for expanding and transforming these capabilities to increase the variability of potential future applications.
keywords Robotic Fabrication; Computational Design; Traditional Craft
series CAADRIA
email
last changed 2022/06/07 07:52

_id ijac202018101
id ijac202018101
authors Karakiewicz, Justyna
year 2020
title Design is real, complex, inclusive, emergent and evil
source International Journal of Architectural Computing vol. 18 - no. 1, 5-19
summary Can computers make our designs more intelligent and better informed? This is the implication of the theme of the special issue. Architectural design is often thought of as the design of the object, and design models of architecture seek to explicate this process. As an architect, however, I cannot subscribe to that view. In this article, I explore how computational approaches have illuminated and expanded my work to enable the interaction of these themes across scores of projects. Underpinning the projects are foundational concepts: design is real, complex, inclusive, emergent and evil. Design is grounded in reality and facts: we can derive design outcomes from a deep and unblemished understanding of the world around us. It is not a stylistic escape. Reality is complex. Architectural design has sought to simplify. This was inescapable when projects are so large yet need to be communicated succinctly. ‘Less is more’ justified this approach. In town planning, this is evident in the tool of zoning: parse the problem and then address each piece. What we do is part of a larger effort. The field of architecture seeks distinction. Design theories want to distinguish and elevate architecture. But if design is complex and it is real, then it is tied to messy realism. Designing has to become accessible to other realms of knowledge. Designing is the seeking of opportunity. For many, design is simply finding the answer — think of Herbert Simon’s statement that design is problem solving. Design reveals opportunities, and these emergent conditions are to be grasped. As designers, our decisions have implications. We know now that what we build has future implications in ways that are profound. When we define design as problem solving, we ignore the truth that design is problem making.
keywords Design, panarchy, CAS, complexity, Digital Project, Galapagos
series journal
email
last changed 2020/11/02 13:34

_id caadria2020_163
id caadria2020_163
authors Koh, Immanuel
year 2020
title The Augmented Museum - A Machinic Experience with Deep Learning
doi https://doi.org/10.52842/conf.caadria.2020.2.639
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 639-648
summary Today we witness a shift in the role that the museum used to play -- from one that was simply a spatial container filled with physical artworks on display, to one that is now layered with the digital/online versions of the artworks themselves. Deep learning algorithms have become an important means of processing such large datasets of digital artworks, providing an alternative curatorial practice (biased/unbiased) and, consequently, augmenting the navigation of the museum's physical spaces. In collaboration with a selection of museums, a series of web/mobile applications has been developed to investigate the potential of such machinic inference, as well as its interference with the physical experience.
keywords Machine Learning; Deep Learning; Experience Design; Artificial Intelligence
series CAADRIA
email
last changed 2022/06/07 07:51

_id acadia20_170
id acadia20_170
authors Li, Peiwen; Zhu, Wenbo
year 2020
title Clustering and Morphological Analysis of Campus Context
doi https://doi.org/10.52842/conf.acadia.2020.2.170
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 170-177.
summary “Figure-ground” is an indispensable and significant part of urban design and urban morphological research, especially for the study of the university, which exists as a unique product of city development and also develops with the city. In the past few decades, the methods scholars have adopted for analyzing the figure-ground relationship of university campuses have gradually turned from qualitative to quantitative. With the widespread application of AI technology in various disciplines, emerging research tools such as machine learning/deep learning have also been used in the study of urban morphology. On this basis, this paper reports on a potential application of deep clustering and big-data methods for campus morphological analysis. It documents a new framework for compressing customized diagrammatic images containing a campus and its surrounding city context into integrated feature vectors via a convolutional autoencoder model, and for using the compressed feature vectors for clustering and quantitative analysis of campus morphology.
series ACADIA
type paper
email
last changed 2023/10/22 12:06
