CumInCAD is a cumulative index of publications in Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures


Hits 1 to 20 of 653

_id caadria2020_161
id caadria2020_161
authors Kido, Daiki, Fukuda, Tomohiro and Yabuki, Nobuyoshi
year 2020
title Mobile Mixed Reality for Environmental Design Using Real-Time Semantic Segmentation and Video Communication - Dynamic Occlusion Handling and Green View Index Estimation
doi https://doi.org/10.52842/conf.caadria.2020.1.681
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 1, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 681-690
summary Mixed reality (MR), which blends the real and virtual worlds, has attracted attention for consensus-building among stakeholders in environmental design because it can visualize a planned landscape onsite. One of the technical challenges in MR is the occlusion problem, which occurs when virtual objects hide physical objects that should be rendered in front of them; this problem can cause inappropriate simulation. In addition, visual environmental assessment of the present and proposed landscapes with MR can be effective for evidence-based design, such as urban greenery. Thus, this study aims to develop an MR-based environmental assessment system with dynamic occlusion handling and green view index estimation using semantic segmentation based on deep learning. The system was designed for use on a mobile device with video communication over the Internet in order to implement real-time semantic segmentation, whose computational cost is high. The applicability of the developed system is shown through case studies.
keywords Mixed Reality (MR); Environmental Design; Dynamic Occlusion Handling; Semantic Segmentation; Green View Index
series CAADRIA
email
last changed 2022/06/07 07:52

_id ecaade2020_167
id ecaade2020_167
authors Newton, David, Piatkowski, Dan, Marshall, Wesley and Tendle, Atharva
year 2020
title Deep Learning Methods for Urban Analysis and Health Estimation of Obesity
doi https://doi.org/10.52842/conf.ecaade.2020.1.297
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 297-304
summary In the 20th and 21st centuries, urban populations have increased dramatically, with a host of impacts on human health that remain largely unknown. Research has shown significant correlations between design features in the built environment and human health, but this research has remained limited. A better understanding of this relationship could allow urban planners and architects to design healthier cities and buildings for an increasingly urbanized population. This research addresses the problem by using discriminative deep learning in combination with satellite imagery of census tracts to estimate rates of obesity. Data from the California Health Interview Survey are used to train a convolutional neural network that uses satellite imagery of selected census tracts to estimate rates of obesity. This research contributes knowledge on methods for applying deep learning to urban health estimation, as well as methods for identifying correlations between urban morphology and human health.
keywords Deep Learning; Artificial Intelligence; Urban Planning; Health; Remote Sensing
series eCAADe
email
last changed 2022/06/07 07:58

_id acadia20_228
id acadia20_228
authors Alawadhi, Mohammad; Yan, Wei
year 2020
title BIM Hyperreality
doi https://doi.org/10.52842/conf.acadia.2020.1.228
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 228-236.
summary Deep learning is expected to offer new opportunities and a new paradigm for the field of architecture. One such opportunity is teaching neural networks to visually understand architectural elements from the built environment. However, the availability of large training datasets is one of the biggest limitations of neural networks. Also, the vast majority of training data for visual recognition tasks is annotated by humans. In order to resolve this bottleneck, we present a concept of a hybrid system—using both building information modeling (BIM) and hyperrealistic (photorealistic) rendering—to synthesize datasets for training a neural network for building object recognition in photos. For generating our training dataset, BIMrAI, we used an existing BIM model and a corresponding photorealistically rendered model of the same building. We created methods for using renderings to train a deep learning model, trained a generative adversarial network (GAN) model using these methods, and tested the output model on real-world photos. For the specific case study presented in this paper, our results show that a neural network trained with synthetic data (i.e., photorealistic renderings and BIM-based semantic labels) can be used to identify building objects from photos without using photos in the training data. Future work can enhance the presented methods using available BIM models and renderings for more generalized mapping and description of photographed built environments.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id cdrf2019_199
id cdrf2019_199
authors Ana Herruzo and Nikita Pashenkov
year 2020
title Collection to Creation: Playfully Interpreting the Classics with Contemporary Tools
doi https://doi.org/10.1007/978-981-33-4400-6_19
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary This paper details an experimental project developed in an academic and pedagogical environment, aiming to bring together visual arts and computer science coursework in the creation of an interactive installation for a live event at The J. Paul Getty Museum. The result incorporates interactive visuals based on the user’s movements and facial expressions, accompanied by synthetic texts generated using machine learning algorithms trained on the museum’s art collection. Special focus is paid to how advances in computing such as Deep Learning and Natural Language Processing can contribute to deeper engagement with users and add new layers of interactivity.
series cdrf
email
last changed 2022/09/29 07:51

_id ecaade2020_017
id ecaade2020_017
authors Chan, Yick Hin Edwin and Spaeth, A. Benjamin
year 2020
title Architectural Visualisation with Conditional Generative Adversarial Networks (cGAN) - What machines read in architectural sketches
doi https://doi.org/10.52842/conf.ecaade.2020.2.299
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 299-308
summary As a form of visual reasoning, sketching is a human cognitive activity instrumental to architectural design. In the process of sketching, abstract sketches invoke new mental imageries and subsequently lead to new sketches. This iterative transformation is repeated until the final design emerges. Artificial intelligence and deep neural networks have been developed to imitate human cognitive processes. Among these networks, the Conditional Generative Adversarial Network (cGAN) has been developed for image-to-image translation and is able to generate realistic images from abstract sketches. To mimic the cyclic process of abstracting and imaging in architectural concept design, a Cyclic-cGAN consisting of two cGANs is proposed in this paper: the first cGAN transforms sketches to images, while the second transforms images back to sketches. The training of the Cyclic-cGAN is presented and its performance illustrated using two sketches from well-known architects and two from architecture students. The results show that the proposed Cyclic-cGAN can emulate architects' mode of visual reasoning through sketching. This novel approach of utilising deep neural networks may open the door for further development of artificial intelligence in assisting architects in conceptual design.
keywords visual cognition; design computation; machine learning; artificial intelligence
series eCAADe
email
last changed 2022/06/07 07:55

_id caadria2020_446
id caadria2020_446
authors Cho, Dahngyu, Kim, Jinsung, Shin, Eunseo, Choi, Jungsik and Lee, Jin-Kook
year 2020
title Recognizing Architectural Objects in Floor-plan Drawings Using Deep-learning Style-transfer Algorithms
doi https://doi.org/10.52842/conf.caadria.2020.2.717
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 717-725
summary This paper describes an approach to recognizing floor plans by sorting out the essential objects of the plan using deep-learning-based style-transfer algorithms. Previously, the recognition of floor plans in the design and remodeling phases was labor-intensive, requiring expert-dependent and manual interpretation. For a computer to take in the imaged architectural plan information, the symbols in the plan must be understood; however, the computer has difficulty extracting information directly from preexisting plans because of their differing conditions. The goal is to convert preexisting plans into an integrated format, improving their readability by transferring their style into a comprehensible form using Conditional Generative Adversarial Networks (cGAN). About 100 floor plans from a dataset previously constructed by the Ministry of Land, Infrastructure, and Transport of Korea were used. The proposed approach has two steps: (1) define the important objects contained in the floor plan that need to be extracted, and (2) use the defined objects as training input data for the cGAN style-transfer model. In this paper, wall, door, and window objects were selected as the targets for extraction. The preexisting floor plans are segmented into parts and altered into a consistent format, which then contributes to automatically extracting information for further utilization.
keywords Architectural objects; floor plan recognition; deep-learning; style-transfer
series CAADRIA
email
last changed 2022/06/07 07:56

_id caadria2020_402
id caadria2020_402
authors Ezzat, Mohammed
year 2020
title A Framework for a Comprehensive Conceptualization of Urban Constructs - SpatialNet and SpatialFeaturesNet for computer-aided creative urban design
doi https://doi.org/10.52842/conf.caadria.2020.2.111
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 111-120
summary Analogy is thought to be foundational for designing and for design creativity. Nonetheless, practicing analogical reasoning needs a knowledge base. The paper proposes a framework for constructing a knowledge base of urban constructs that builds on an ontology of urbanism. The framework is composed of two modules that represent either the concepts or the features of any urban construct's materialization. The concepts are represented as a knowledge graph (KG) named SpatialNet, while the physical features are represented by a deep neural network (DNN) called SpatialFeaturesNet. To structure SpatialNet as a KG that comprehensively conceptualizes spatial qualities, deep learning applied to natural language processing (NLP) is employed. The comprehensive concepts of SpatialNet are first discovered through semantic analyses of nine English-language corpora and then structured using the urban ontology. The goal of the framework is to map the spatial features to the plethora of their matching concepts. The granularity and coherence of the proposed framework are expected to sustain or substitute for other known analogical, knowledge-based, inspirational design approaches, such as case-based reasoning (CBR) and its analogical application to architectural design (CBD).
keywords Domain-specific knowledge graph of urban qualities; Deep neural network for structuring KG; Natural language processing and comprehensive understanding of urban constructs; Urban cognition and design creativity; Case-based reasoning (CBR) and case-based design (CBD)
series CAADRIA
email
last changed 2022/06/07 07:55

_id caadria2020_342
id caadria2020_342
authors Han, Yoojin and Lee, Hyunsoo
year 2020
title A Deep Learning Approach for Brand Store Image and Positioning - Auto-generation of Brand Positioning Maps Using Image Classification
doi https://doi.org/10.52842/conf.caadria.2020.2.689
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 689-696
summary This paper presents a deep learning approach to measuring brand store image and generating positioning maps. The rise of signature brand stores can be explained in terms of brand identity. Store design and architecture have been highlighted as effective communicators of brand identity and position but, in terms of spatial environment, have been studied solely using qualitative approaches. This study adopted a deep learning-based image classification model as an alternative methodology for measuring brand image and positioning, which are conventionally considered highly subjective. The results demonstrate that a consistent, coherent, and strong brand identity can be trained and recognized using deep learning technology. A brand positioning map can also be created based on predicted scores derived by deep learning. This paper also suggests wider uses for this approach to branding and architectural design.
keywords Deep Learning; Image Classification; Brand Identity; Brand Positioning Map; Brand Store Design
series CAADRIA
email
last changed 2022/06/07 07:50

_id acadia20_658
id acadia20_658
authors Ho, Brian
year 2020
title Making a New City Image
doi https://doi.org/10.52842/conf.acadia.2020.1.658
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 658-667.
summary This paper explores the application of computer vision and machine learning to street-level imagery of cities, reevaluating past theory linking urban form to human perception. It further proposes a new method for design based on the resulting model, in which a designer can identify areas of a city tied to certain perceptual qualities and generate speculative street scenes optimized for their predicted saliency on labels of human experience. This work extends Kevin Lynch’s Image of the City with deep learning: training an image classification model to recognize Lynch’s five elements of the city image, using Lynch’s original photographs and diagrams of Boston to construct labeled training data alongside new imagery of the same locations. This new city image revitalizes past attempts to quantify the human perception of urban form and improve urban design. A designer can search and map the data set to understand spatial opportunities and predict the quality of imagined designs through a dynamic process of collage, model inference, and adaptation. Within a larger practice of design, this work suggests that the curation of archival records, computer science techniques, and theoretical principles of urbanism might be integrated into a single craft. With a new city image, designers might “see” at the scale of the city, as well as focus on the texture, color, and details of urban life.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id ecaade2020_222
id ecaade2020_222
authors Ikeno, Kazunosuke, Fukuda, Tomohiro and Yabuki, Nobuyoshi
year 2020
title Automatic Generation of Horizontal Building Mask Images by Using a 3D Model with Aerial Photographs for Deep Learning
doi https://doi.org/10.52842/conf.ecaade.2020.2.271
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 271-278
summary Information extracted from aerial photographs is widely used in urban planning and design. An effective method for detecting buildings in aerial photographs is to use deep learning for understanding the current state of a target region. However, the building mask images used to train the deep learning model are manually generated in many cases. To solve this challenge, a method has been proposed for automatically generating mask images by using virtual reality 3D models for deep learning. Because normal virtual models do not have the realism of a photograph, it is difficult to obtain highly accurate detection results in the real world even if the images are used for deep learning training. Therefore, the objective of this research is to propose a method for automatically generating building mask images by using 3D models with textured aerial photographs for deep learning. The model trained on datasets generated by the proposed method could detect buildings in aerial photographs with an accuracy of IoU = 0.622. Work left for the future includes changing the size and type of mask images, training the model, and evaluating the accuracy of the trained model.
keywords Urban planning and design; Deep learning; Semantic segmentation; Mask image; Training data; Automatic design
series eCAADe
email
last changed 2022/06/07 07:50
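
The ecaade2020_222 record above reports building-detection accuracy as IoU = 0.622. IoU (Intersection over Union) is the standard overlap score between a predicted segmentation mask and its ground truth; a minimal sketch of that standard definition (illustrative only, not the authors' code):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)
```

A score of 0.622 therefore means that the overlap between predicted and actual building pixels covers about 62% of their combined area.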

_id cdrf2019_93
id cdrf2019_93
authors Jiaxin Zhang, Tomohiro Fukuda, and Nobuyoshi Yabuki
year 2020
title A Large-Scale Measurement and Quantitative Analysis Method of Façade Color in the Urban Street Using Deep Learning
doi https://doi.org/10.1007/978-981-33-4400-6_9
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary Color planning has become a significant issue in urban development, and an overall cognition of urban color identities will help in designing a better urban environment. However, previous measurement and analysis methods for facade color in urban streets were limited to manual collection, which is challenging to carry out at a city scale. Recently emerging street view image datasets and deep learning have revealed the possibility of overcoming these limits, thus bringing forward a research paradigm shift. In the experimental part, we disassemble the goal into three steps: first, capturing street view images with coordinate information through the API provided by the street view service; then extracting facade images and cleaning up invalid data using a deep-learning segmentation method; and finally, calculating the dominant color based on the Munsell Color System. The results can show whether the color status satisfies the requirements of the urban plan for façade color in the street. This method helps realize refined measurement of façade color using open source data and has good universality in practice.
series cdrf
email
last changed 2022/09/29 07:51

_id caadria2020_088
id caadria2020_088
authors Kado, Keita, Furusho, Genki, Nakamura, Yusuke and Hirasawa, Gakuhito
year 2020
title Process Path Derivation Method for Multi-Tool Processing Machines Using Deep-Learning-Based Three-Dimensional Shape Recognition
doi https://doi.org/10.52842/conf.caadria.2020.2.609
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 609-618
summary When multi-axis processing machines are employed for high-mix, low-volume production, they are operated using a dedicated computer-aided design/computer-aided manufacturing (CAD/CAM) process that derives an operating path concurrently with detailed modeling. This type of work requires dedicated software that occasionally results in complicated front-loading and data management issues. We propose a three-dimensional (3D) shape recognition method based on deep learning that creates an operating path from the 3D part geometry entered into a CAM application, deriving a path for processing machinery such as a circular saw, drill, or end mill. The methodology was tested using 11 joint types and five processing patterns. The results show that the proposed method has several practical applications: it addresses wooden object creation and may also have other uses.
keywords Three-dimensional Shape Recognition; Deep Learning; Digital Fabrication; Multi-axis Processing Machine
series CAADRIA
email
last changed 2022/06/07 07:52

_id acadia20_170
id acadia20_170
authors Li, Peiwen; Zhu, Wenbo
year 2020
title Clustering and Morphological Analysis of Campus Context
doi https://doi.org/10.52842/conf.acadia.2020.2.170
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 170-177.
summary “Figure-ground” is an indispensable and significant part of urban design and urban morphological research, especially for the study of the university campus, which exists as a unique product of city development and also develops with the city. In the past few decades, the methods scholars use to analyze the figure-ground relationship of university campuses have gradually turned from qualitative to quantitative, and with the widespread application of AI technology in various disciplines, emerging research tools such as machine learning and deep learning have also been used in the study of urban morphology. On this basis, this paper reports on a potential application of deep clustering and big-data methods for campus morphological analysis. It documents a new framework for compressing customized diagrammatic images containing a campus and its surrounding city context into integrated feature vectors via a convolutional autoencoder model, and for using the compressed feature vectors for clustering and quantitative analysis of campus morphology.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id ecaade2020_113
id ecaade2020_113
authors Li, Yunqin, Yabuki, Nobuyoshi, Fukuda, Tomohiro and Zhang, Jiaxin
year 2020
title A big data evaluation of urban street walkability using deep learning and environmental sensors - a case study around Osaka University Suita campus
doi https://doi.org/10.52842/conf.ecaade.2020.2.319
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 319-328
summary Although it is widely known that the walkability of urban streets plays a vital role in promoting street quality and public health, there is still no consensus on how to measure it quantitatively and comprehensively. Recently emerging deep learning and sensor networks have revealed the possibility of overcoming previous limits, thus bringing forward a research paradigm shift. Taking advantage of this, this study explores a new approach to urban street walkability measurement. In the experimental study, we capture street view pictures, traffic flow data, and environmental sensor data covering streets within Osaka University and conduct both physical and perceived walkability evaluations. The results indicate that the street walkability of the campus is significantly higher than that of municipal streets, and that streets close to large service facilities have better walkability while others receive lower scores. The difference between physical and perceived walkability indicates both the feasibility and the limitations of the auto-calculation method.
keywords walkability; WalkScore; deep learning; Street view picture; environmental sensor
series eCAADe
email
last changed 2022/06/07 07:51

_id acadia20_178
id acadia20_178
authors Meeran, Ahmed; Conrad Joyce, Sam
year 2020
title Machine Learning for Comparative Urban Planning at Scale: An Aviation Case Study
doi https://doi.org/10.52842/conf.acadia.2020.1.178
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 178-187.
summary Aviation is in flux, having experienced 5.4% yearly growth over the last two decades. However, with COVID-19, aviation was hit hard. This, along with its contribution to global warming, has led to louder calls to limit its use. The situation emphasizes how urban planners and technologists could contribute to understanding and responding to this change. This paper explores a novel workflow that performs image-based machine learning (ML) on satellite images of over 1,000 world airports, algorithmically collated using the European Space Agency Sentinel2 API. From these, the top 350 United States airports were analyzed, with land-use parameters extracted around each airport using computer vision and mapped against passenger footfall numbers. The results demonstrate a scalable approach to identifying how easy and beneficial it would be for certain airports to expand or contract, and how this would impact the surrounding urban environment in terms of pollution and congestion. The generic nature of this workflow makes it possible to extend the method to any large infrastructure, comparing and analyzing specific features across a large number of images while tracking the same feature through time. This is critical for answering key typology-based urban design challenges at a higher level, without needing to perform on-the-ground studies, which can be expensive and time-consuming.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id caadria2020_259
id caadria2020_259
authors Rhee, Jinmo, Veloso, Pedro and Krishnamurti, Ramesh
year 2020
title Integrating building footprint prediction and building massing - an experiment in Pittsburgh
doi https://doi.org/10.52842/conf.caadria.2020.2.669
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 669-678
summary We present a novel method for generating building geometry using deep learning techniques based on contextual geometry in the urban context, and explore its potential to support building massing. For contextual geometry, we opted to investigate the building footprint, a main interface between urban and architectural forms. For training, we collected GIS data of building footprints and parcel geometries from Pittsburgh and created a large Diagrammatic Image Dataset (DID). We employed a modified version of a VGG neural network to model the relationship between (c) a diagrammatic image of a building parcel and context without the footprint, and (q) a quadrilateral representing the original footprint. Opting for simple geometrical output enables direct integration with custom design workflows because it obviates image processing and increases training speed. After training the neural network with a curated dataset, we explore a generative workflow for building massing that integrates contextual and programmatic data. As the trained model can suggest a contextual boundary for a new site, we used Massigner (Rhee and Chung 2019) to recommend massing alternatives based on the subtraction of voids inside the contextual boundary that satisfy design constraints and programmatic requirements. This new method suggests that learning-based methods can be an alternative to rule-based design methods for grasping the complex relationships between design elements.
keywords Deep Learning; Prediction; Building Footprint; Massing; Generative Design
series CAADRIA
email
last changed 2022/06/07 07:56

_id ijac202018104
id ijac202018104
authors Tarabishy, Sherif; Psarras, Stamatios; Kosicki, Marcin and Tsigkari, Martha
year 2020
title Deep learning surrogate models for spatial and visual connectivity
source International Journal of Architectural Computing vol. 18 - no. 1, 53-66
summary Spatial and visual connectivity are important metrics when developing workplace layouts. Calculating those metrics in real time can be difficult, depending on the size of the floor plan being analysed and the resolution of the analyses. This article investigates the possibility of considerably speeding up the outcomes of such computationally intensive simulations by using machine learning to create models capable of identifying the spatial and visual connectivity potential of a space. To that end, we present the entire process of investigating different machine learning models and a pipeline for training them on such a task: from the incorporation of a bespoke spatial and visual connectivity analysis engine through a distributed computation pipeline, to the process of synthesizing training data and evaluating the performance of different neural networks.
keywords Algorithmic and evolutionary techniques, performance and simulation, machine learning
series journal
email
last changed 2020/11/02 13:34

_id ecaade2020_093
id ecaade2020_093
authors Veloso, Pedro and Krishnamurti, Ramesh
year 2020
title An Academy of Spatial Agents - Generating spatial configurations with deep reinforcement learning
doi https://doi.org/10.52842/conf.ecaade.2020.2.191
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 191-200
summary Agent-based models rely on decentralized decision making instantiated in the interactions between agents and the environment. In the context of generative design, agent-based models can enable decentralized geometric modelling, provide partial information about the generative process, and enable fine-grained interaction. However, existing agent-based models originate from non-architectural problems, and it is not straightforward to adapt them for spatial design. To address this, we introduce a method to create custom spatial agents that can satisfy architectural requirements and support fine-grained interaction using multi-agent deep reinforcement learning (MADRL). We focus on a proof of concept where agents control spatial partitions and interact in an environment (represented as a grid) to satisfy custom goals (shape, area, adjacency, etc.). This approach uses a double deep Q-network (DDQN) combined with a dynamic convolutional neural network (DCNN). We report an experiment in which trained agents generalize their knowledge to different settings, consistently explore good spatial configurations, and quickly recover from perturbations in the action selection.
keywords space planning; agent-based model; interactive generative systems; artificial intelligence; multi-agent deep reinforcement learning
series eCAADe
email
last changed 2022/06/07 07:58
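The learning loop behind such grid-based agents can be illustrated, in heavily simplified form, with tabular Q-learning as a stand-in for the paper's DDQN/DCNN setup. Everything below — the 1-D corridor environment, the reward values, and the hyperparameters — is an illustrative assumption, not the authors' method; it only shows the update rule that deep variants approximate with a network.

```python
import random

def train_q(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    # Tabular Q-learning on a 1-D corridor: the agent must walk
    # right from cell 0 to the goal cell n_states - 1.
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda a: q[s][a])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else -0.01  # small step cost favours short paths
            # Q-learning update: bootstrap on the best action in the next state
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q()
# Greedy policy per non-terminal state: should always move right (action 1).
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(4)]
```

A DDQN replaces the table `q` with a network and decouples action selection from value estimation; the reward shaping (shape, area, adjacency goals in the paper) is where the architectural requirements enter.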

_id caadria2020_028
id caadria2020_028
authors Xia, Yixi, Yabuki, Nobuyoshi and Fukuda, Tomohiro
year 2020
title Development of an Urban Greenery Evaluation System Based on Deep Learning and Google Street View
doi https://doi.org/10.52842/conf.caadria.2020.1.783
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 1, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 783-792
summary Street greenery has long played a vital role in the quality of urban landscapes and is closely related to people's physical and mental health. In current research on the urban environment, researchers use various methods to simulate and measure urban greenery, and with the development of computer technology the ways of obtaining data have become more diverse. Urban green coverage can be assessed, for example, from remote sensing images captured from above by aerial or space-borne sensors; however, this method is not suitable for the evaluation of street greenery. Unlike most remote sensing images, urban street images show green plants from the pedestrian perspective, the view in which people most commonly encounter them. The imagery provided by Google Street View is similar to that captured from a pedestrian's perspective, and is thus better suited to studying urban street greening. With the development of artificial intelligence, deep learning allows us to abandon heavy manual statistical work and obtain more accurate semantic information from street images. Furthermore, we can measure green landscapes over larger areas of the city, as well as extract more details from street view images for urban research.
keywords Green View Index; Deep Learning; Google Street View; Segmentation
series CAADRIA
email
last changed 2022/06/07 07:57
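Once a segmentation network has labelled each pixel, the green view index reduces to a pixel ratio: vegetation pixels divided by all pixels in the view. A minimal sketch follows; the class names and the string-labelled map are assumptions for readability (real pipelines work on integer class IDs emitted by the segmentation model).

```python
def green_view_index(labels, green_classes=frozenset({"tree", "grass", "plant"})):
    # Fraction of pixels labelled as vegetation in a per-pixel
    # semantic label map (list of rows of class labels).
    total = sum(len(row) for row in labels)
    green = sum(lab in green_classes for row in labels for lab in row)
    return green / total

scene = [
    ["sky", "tree", "tree"],
    ["road", "grass", "building"],
]
gvi = green_view_index(scene)  # 3 vegetation pixels out of 6 -> 0.5
```

Averaging this ratio over the street-view panoramas sampled along a street segment gives a per-segment greenery score that can be mapped city-wide.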

_id cdrf2019_134
id cdrf2019_134
authors Zhen Han, Wei Yan, and Gang Liu
year 2020
title A Performance-Based Urban Block Generative Design Using Deep Reinforcement Learning and Computer Vision
doi https://doi.org/10.1007/978-981-33-4400-6_13
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary In recent years, generative design methods have been widely used to guide urban and architectural design. Some performance-based generative design methods also combine simulation and optimization algorithms to obtain optimal solutions. In this paper, a performance-based automatic generative design method is proposed that incorporates deep reinforcement learning (DRL) and computer vision for urban planning, demonstrated through a case study generating an urban block based on its direct sunlight hours, solar heat gains, and the aesthetics of the layout. The method was tested on the redesign of an old industrial district located in Shenyang, Liaoning Province, China. A DRL agent - a deep deterministic policy gradient (DDPG) agent - was trained to guide the generation of the schemes. In each training episode, the agent places one building at a time on the site according to its observation. Rhino/Grasshopper and a computer vision algorithm, the Hough transform, were used to evaluate the performance and aesthetics, respectively. After about 150 h of training, the proposed method generated 2179 satisfactory design solutions. Episode 1936, which had the highest reward, was chosen as the final solution after manual adjustment. The results show that the method is a potentially effective way to assist urban design.
series cdrf
email
last changed 2022/09/29 07:51
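The Hough transform used here to score layout aesthetics detects how strongly edge pixels align along shared lines by voting in (theta, rho) parameter space. A minimal pure-Python version is sketched below; practical implementations would use OpenCV's `cv2.HoughLines`, and the function name and discretisation are illustrative assumptions.

```python
import math

def hough_peak(points, n_theta=180, rho_res=1.0):
    # Vote in (theta, rho) space: each edge pixel (x, y) votes for every
    # line rho = x*cos(theta) + y*sin(theta) passing through it.
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    # The strongest bin is the best-aligned line in the point set.
    (t, rho), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t / n_theta, rho * rho_res, votes

# Edge pixels along the vertical line x = 3 should all vote for the
# same bin at theta = 0, rho = 3.
pts = [(3, y) for y in range(0, 100, 10)]
theta, rho, votes = hough_peak(pts)
```

A layout whose building edges concentrate votes into a few strong bins reads as well-aligned; a scattered accumulator signals visual disorder, which is the kind of signal an aesthetics reward can be built on.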
