CumInCAD is a cumulative index of publications in Computer-Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD, and CAAD Futures


Hits 1 to 20 of 525

_id acadia20_228
id acadia20_228
authors Alawadhi, Mohammad; Yan, Wei
year 2020
title BIM Hyperreality
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 228-236.
doi https://doi.org/10.52842/conf.acadia.2020.1.228
summary Deep learning is expected to offer new opportunities and a new paradigm for the field of architecture. One such opportunity is teaching neural networks to visually understand architectural elements from the built environment. However, the availability of large training datasets is one of the biggest limitations of neural networks. Also, the vast majority of training data for visual recognition tasks is annotated by humans. In order to resolve this bottleneck, we present a concept of a hybrid system—using both building information modeling (BIM) and hyperrealistic (photorealistic) rendering—to synthesize datasets for training a neural network for building object recognition in photos. For generating our training dataset, BIMrAI, we used an existing BIM model and a corresponding photorealistically rendered model of the same building. We created methods for using renderings to train a deep learning model, trained a generative adversarial network (GAN) model using these methods, and tested the output model on real-world photos. For the specific case study presented in this paper, our results show that a neural network trained with synthetic data (i.e., photorealistic renderings and BIM-based semantic labels) can be used to identify building objects from photos without using photos in the training data. Future work can enhance the presented methods using available BIM models and renderings for more generalized mapping and description of photographed built environments.
series ACADIA
type paper
email
last changed 2023/10/22 12:06
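
The record above describes training a recognition model entirely on synthetic data: photorealistic renderings paired with BIM-derived semantic labels. The paper itself trains a GAN; the sketch below substitutes a generic off-the-shelf segmentation network purely to illustrate how such synthetic image/label pairs could be consumed. The class list, tensor shapes, and training loop are assumptions, not the authors' code.

```python
# A minimal sketch (assumed, not the authors' implementation): training a
# semantic-segmentation network where the inputs are photorealistic renderings
# and the per-pixel labels come from BIM object classes.
import torch
from torch import nn
from torch.utils.data import DataLoader
import torchvision

NUM_CLASSES = 5  # e.g. background, wall, window, door, roof (assumed classes)

model = torchvision.models.segmentation.fcn_resnet50(num_classes=NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader: DataLoader) -> None:
    model.train()
    for renders, labels in loader:        # renders: (B,3,H,W) float, labels: (B,H,W) int64
        optimizer.zero_grad()
        logits = model(renders)["out"]    # (B, NUM_CLASSES, H, W)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
```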

_id acadia20_218
id acadia20_218
authors Rossi, Gabriella; Nicholas, Paul
year 2020
title Encoded Images
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 218-227.
doi https://doi.org/10.52842/conf.acadia.2020.1.218
summary In this paper, we explore conditional generative adversarial networks (cGANs) as a new way of bridging the gap between design and analysis in contemporary architectural practice. By substituting analytical finite element analysis (FEA) modeling with cGAN predictions during the iterative design phase, we develop novel workflows that support iterative computational design and digital fabrication processes in new ways. This paper reports two case studies of increasing complexity that utilize cGANs for structural analysis. Central to both experiments is the representation of information within the dataset the cGAN is trained on. We contribute a prototypical representational technique to encode multiple layers of geometric and performative description into false-color images, which we then use to train a Pix2Pix neural network architecture on entirely digitally generated datasets as a proxy for the performance of physically fabricated elements. The paper describes the representational workflow and reports the process and results of training and their integration into the design experiments. Lastly, we identify the potentials and limits of this approach within the design process.
series ACADIA
type paper
email
last changed 2023/10/22 12:06
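
The abstract above hinges on encoding several layers of geometric and performative description into false-color images before training Pix2Pix. A minimal sketch of that encoding idea follows, assuming three per-pixel scalar fields packed into the RGB channels; the field names are illustrative placeholders, not the authors' actual channels.

```python
# A minimal sketch of a false-colour encoding (not the authors' pipeline):
# three scalar fields sampled on the same grid are normalised and packed into
# the R, G and B channels of one image for an image-to-image network.
import numpy as np
from PIL import Image

def normalise(field: np.ndarray) -> np.ndarray:
    lo, hi = field.min(), field.max()
    return (field - lo) / (hi - lo + 1e-9)

def encode_false_colour(thickness, curvature, displacement) -> Image.Image:
    """Each input is an (H, W) float array; field names are assumptions."""
    rgb = np.stack([normalise(thickness),
                    normalise(curvature),
                    normalise(displacement)], axis=-1)
    return Image.fromarray((rgb * 255).astype(np.uint8))

# Example with random placeholder fields:
img = encode_false_colour(*np.random.rand(3, 256, 256))
img.save("encoded_sample.png")
```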

_id cdrf2022_209
id cdrf2022_209
authors Yecheng Zhang, Qimin Zhang, Yuxuan Zhao, Yunjie Deng, Feiyang Liu, Hao Zheng
year 2022
title Artificial Intelligence Prediction of Urban Spatial Risk Factors from an Epidemic Perspective
source Proceedings of the 2022 DigitalFUTURES The 4th International Conference on Computational Design and Robotic Fabrication (CDRF 2022)
doi https://doi.org/10.1007/978-981-19-8637-6_18
summary From an epidemiological perspective, previous research on COVID-19 has generally relied on classical statistical analysis, so spatial information is often not used effectively. This paper uses image-based neural networks to explore the relationship between urban spatial risk, the distribution of infected populations, and the design of urban facilities. We take the spatio-temporal data of people infected with COVID-19 in Wuhan before February 28, 2020 as the research object. We use kriging spatial interpolation and kernel density estimation to establish the epidemic heat distribution on fine grid units. We further examine the distribution of nine main spatial risk factors, including agencies, hospitals, park squares, sports fields, banks, and hotels, which are tested for a significant positive correlation with the heat distribution of the epidemic. The weights of the spatial risk factors are used to train Generative Adversarial Network models, which predict the heat distribution of the outbreak in a given area. Based on the trained model, optimizing the relevant environmental design in urban areas to control these risk factors can help prevent and manage the spread of the epidemic. The input image of the machine learning model is a city plan encoding public infrastructure, and the output image is a map of urban spatial risk factors in the given area.
series cdrf
email
last changed 2024/05/29 14:02
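
One step the abstract above describes is establishing the epidemic heat distribution on fine grid units. The sketch below shows kernel density estimation over a regular grid with SciPy, using assumed placeholder coordinates; the kriging interpolation the paper also uses is not shown, and nothing here is the authors' code.

```python
# A minimal sketch: estimating an "epidemic heat" surface on a grid from
# infected-case point locations with kernel density estimation.
import numpy as np
from scipy.stats import gaussian_kde

cases = np.random.rand(500, 2) * [100.0, 100.0]   # (N, 2) case locations, placeholder units

kde = gaussian_kde(cases.T)                        # gaussian_kde expects shape (dims, N)
xs, ys = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
grid = np.vstack([xs.ravel(), ys.ravel()])         # (2, 200*200) grid-cell centres
heat = kde(grid).reshape(xs.shape)                 # per-cell density ("heat") values
```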

_id caadria2020_234
id caadria2020_234
authors Zhang, Hang and Blasetti, Ezio
year 2020
title 3D Architectural Form Style Transfer through Machine Learning
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 659-668
doi https://doi.org/10.52842/conf.caadria.2020.2.659
summary In recent years, tremendous progress has been made in the field of machine learning, but it is still very hard to apply 3D machine learning directly to architectural design because of practical constraints on model resolution and training time. Building on the development of GANs (Generative Adversarial Networks) over the past several years, together with a method of spatial sequence rules, the authors introduce 3D architectural form style transfer at two levels of scale (overall and detailed) through multiple machine learning algorithms trained on two types of 2D training datasets (serial stack and multi-view) at a relatively decent resolution. By exploring how styles interact with and influence the original content in neural networks at the 2D level, designers can manually control the expected 2D output images, resulting in new-style 3D architectural models with a clear design approach.
keywords 3D; Form Finding; Style Transfer; Machine Learning; Architectural Design
series CAADRIA
email
last changed 2022/06/07 07:57

_id ecaade2020_017
id ecaade2020_017
authors Chan, Yick Hin Edwin and Spaeth, A. Benjamin
year 2020
title Architectural Visualisation with Conditional Generative Adversarial Networks (cGAN) - What machines read in architectural sketches
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 299-308
doi https://doi.org/10.52842/conf.ecaade.2020.2.299
summary As a form of visual reasoning, sketching is a human cognitive activity instrumental to architectural design. In the process of sketching, abstract sketches invoke new mental imageries and subsequently lead to new sketches. This iterative transformation is repeated until the final design emerges. Artificial Intelligence and Deep Neural Networks have been developed to imitate human cognitive processes. Amongst these networks, the Conditional Generative Adversarial Network (cGAN) has been developed for image-to-image translation and is able to generate realistic images from abstract sketches. To mimic the cyclic process of abstracting and imaging in architectural concept design, a Cyclic-cGAN that consists of two cGANs is proposed in this paper. The first cGAN transforms sketches to images, while the second from images to sketches. The training of the Cyclic-cGAN is presented and its performance illustrated by using two sketches from well-known architects, and two from architecture students. The results show that the proposed Cyclic-cGAN can emulate architects' mode of visual reasoning through sketching. This novel approach of utilising deep neural networks may open the door for further development of Artificial Intelligence in assisting architects in conceptual design.
keywords visual cognition; design computation; machine learning; artificial intelligence
series eCAADe
email
last changed 2022/06/07 07:55
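
The Cyclic-cGAN above couples two image-to-image generators, sketch-to-image and image-to-sketch. The sketch below illustrates only the cycle-consistency part of such a setup with tiny placeholder generators; the discriminators and the full cGAN losses are omitted, and nothing here is the authors' implementation.

```python
# A minimal sketch of a cyclic pairing of two generators: G_si (sketch -> image)
# and G_is (image -> sketch), trained so that mapping forward and back
# reproduces the input. Adversarial terms are omitted for brevity.
import torch
from torch import nn

def tiny_generator() -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
    )

G_si, G_is = tiny_generator(), tiny_generator()
opt = torch.optim.Adam(list(G_si.parameters()) + list(G_is.parameters()), lr=2e-4)
l1 = nn.L1Loss()

def cycle_step(sketches: torch.Tensor, images: torch.Tensor) -> float:
    """sketches, images: (B, 3, H, W) tensors in [-1, 1]."""
    opt.zero_grad()
    loss = (l1(G_is(G_si(sketches)), sketches) +   # sketch -> image -> sketch
            l1(G_si(G_is(images)), images))        # image -> sketch -> image
    loss.backward()
    opt.step()
    return loss.item()
```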

_id caadria2020_446
id caadria2020_446
authors Cho, Dahngyu, Kim, Jinsung, Shin, Eunseo, Choi, Jungsik and Lee, Jin-Kook
year 2020
title Recognizing Architectural Objects in Floor-plan Drawings Using Deep-learning Style-transfer Algorithms
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 717-725
doi https://doi.org/10.52842/conf.caadria.2020.2.717
summary This paper describes an approach to recognizing floor plans by sorting the essential objects of the plan using deep-learning-based style transfer algorithms. Previously, the recognition of floor plans in the design and remodeling phase was labor-intensive, requiring expert-dependent and manual interpretation. For a computer to take in imaged architectural plan information, the symbols in the plan must be understood. However, the computer has difficulty extracting information directly from preexisting plans because of their varying conditions. The goal is to convert preexisting plans into an integrated format that improves readability by transferring their style into a comprehensible form using Conditional Generative Adversarial Networks (cGAN). About 100 floor plans, previously collected by the Ministry of Land, Infrastructure, and Transport of Korea, were used for the dataset. The proposed approach has two steps: (1) define the important objects contained in the floor plan that need to be extracted, and (2) use the defined objects as training input data for the cGAN style transfer model. In this paper, wall, door, and window objects were selected as the targets for extraction. The preexisting floor plans are segmented into parts and altered into a consistent format, which then supports automatically extracting information for further use.
keywords Architectural objects; floor plan recognition; deep-learning; style-transfer
series CAADRIA
email
last changed 2022/06/07 07:56

_id cdrf2019_103
id cdrf2019_103
authors Runjia Tian
year 2020
title Suggestive Site Planning with Conditional GAN and Urban GIS Data
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
doi https://doi.org/10.1007/978-981-33-4400-6_10
summary In architecture, landscape architecture, and urban design, site planning refers to the organizational process of site layout. A fundamental step of site planning is the design of the building layout across the site. This process is hard to automate because of its multi-modal nature: it takes in multiple constraints such as street block shape, orientation, program, density, and plantation. The paper proposes a prototypical and extensive framework to generate building footprints as masterplan references for architects, landscape architects, and urban designers by learning from the existing built environment with artificial neural networks. A Pix2PixHD conditional generative adversarial network is used to learn the mapping from a site boundary geometry, represented as a pixelized image, to an image containing building footprints color-coded by program. A dataset containing the necessary information is collected from open-source GIS (Geographic Information System) portals of the city of Boston, wrangled with geospatial analysis libraries in Python, and trained with the TensorFlow framework. The result is visualized in Rhinoceros and Grasshopper for generating site plans interactively.
series cdrf
email
last changed 2022/09/29 07:51
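
The workflow above rasterizes GIS geometry into paired images for Pix2PixHD training. Below is a minimal, assumed data-preparation sketch with GeoPandas and Matplotlib; the file names, the 'program' attribute, and the image sizes are placeholders, not the paper's actual dataset.

```python
# A minimal sketch: render a parcel boundary (input) and its program-coded
# building footprints (target) as one paired training image.
import geopandas as gpd
import matplotlib.pyplot as plt

footprints = gpd.read_file("boston_footprints.geojson")   # assumed file
parcels = gpd.read_file("boston_parcels.geojson")          # assumed file

parcel = parcels.iloc[0]
block = footprints[footprints.intersects(parcel.geometry)]

fig, (ax_in, ax_out) = plt.subplots(1, 2, figsize=(8, 4))
gpd.GeoSeries([parcel.geometry]).plot(ax=ax_in, facecolor="white", edgecolor="black")
block.plot(ax=ax_out, column="program", cmap="tab10")      # colour-code by assumed 'program' column
for ax in (ax_in, ax_out):
    ax.set_axis_off()
fig.savefig("pair_0000.png", dpi=64)
```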

_id ecaade2020_018
id ecaade2020_018
authors Sato, Gen, Ishizawa, Tsukasa, Iseda, Hajime and Kitahara, Hideo
year 2020
title Automatic Generation of the Schematic Mechanical System Drawing by Generative Adversarial Network
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 403-410
doi https://doi.org/10.52842/conf.ecaade.2020.1.403
summary In the front-loaded project workflow, mechanical, electrical, and plumbing (MEP) design requires precision from the beginning of the design phase. Leveraging insights from as-built drawings during the early design stage can be beneficial to design enhancement. This study proposes a GAN (Generative Adversarial Network)-based system that populates the fire extinguishing (FE) system onto an architectural drawing image given as its input. An algorithm called Pix2Pix with an improved loss function enabled such generation. The algorithm was trained on a dataset that includes pairs of as-built building plans with and without FE equipment. A novel index termed Piping Coverage Rate was also proposed to evaluate the obtained results. The system produces the output within 45 seconds, drastically faster than the conventional manual workflow. The system enables rapid engineering studies informed by past as-built information, which contributes to further data-driven decision making.
keywords Generative Adversarial Network; MEP; as-built drawing; automated design; data-driven design
series eCAADe
email
last changed 2022/06/07 07:57

_id caadria2020_054
id caadria2020_054
authors Shen, Jiaqi, Liu, Chuan, Ren, Yue and Zheng, Hao
year 2020
title Machine Learning Assisted Urban Filling
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 679-688
doi https://doi.org/10.52842/conf.caadria.2020.2.679
summary When drawing urban-scale plans, designers must always define the position and shape of each building. This process usually takes much time in the early design stage, when the conditions of a city have not been finally determined. Thus designers spend a lot of time working forward and backward, drawing sketches for different characteristics of cities. Meanwhile, machine learning, as a decision-making tool, has been widely used in many fields. The Generative Adversarial Network (GAN) is a model framework in machine learning, specially designed to learn and generate image data. Therefore, this research aims to apply GANs to creating urban design plans, helping designers automatically generate the predicted details of building configurations for a given city condition. Through machine learning on image pairs, the result shows the relationship between the site conditions (roads, green lands, and rivers) and the configuration of buildings. This automatic design tool can help relieve the heavy load on urban designers in the early design stage, quickly providing a preview of design solutions for urban design tasks. The analysis of different machine learning models trained on data from different cities inspires urban designers with design strategies and features for distinct conditions.
keywords Artificial Intelligence; Urban Design; Generative Adversarial Networks; Machine Learning
series CAADRIA
email
last changed 2022/06/07 07:56

_id artificial_intellicence2019_117
id artificial_intellicence2019_117
authors Stanislas Chaillou
year 2020
title ArchiGAN: Artificial Intelligence x Architecture
source Architectural Intelligence: Selected Papers from the 1st International Conference on Computational Design and Robotic Fabrication (CDRF 2019)
doi https://doi.org/10.1007/978-981-15-6568-7_8
summary AI will soon massively empower architects in their day-to-day practice. This article provides a proof of concept. The framework used here offers a springboard for discussion, inviting architects to start engaging with AI, and data scientists to consider Architecture as a field of investigation. In this article, we summarize part of our thesis, submitted at Harvard in May 2019, in which Generative Adversarial Networks (GANs) are leveraged to design floor plans and entire buildings.
series Architectural Intelligence
email
last changed 2022/09/29 07:28

_id caadria2020_015
id caadria2020_015
authors Zheng, Hao, An, Keyao, Wei, Jingxuan and Ren, Yue
year 2020
title Apartment Floor Plans Generation via Generative Adversarial Networks
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 599-608
doi https://doi.org/10.52842/conf.caadria.2020.2.599
summary When drawing architectural plans, designers must always define every detail, so the images contain enough information to support design. This process usually takes much time in the early design stage, when the design boundary has not been finally determined. Thus designers spend a lot of time working forward and backward, drawing sketches for different site conditions. Meanwhile, machine learning, as a decision-making tool, has been widely used in many fields. The Generative Adversarial Network (GAN) is a model framework in machine learning, specially designed to learn and generate image data. Therefore, this research aims to apply GANs to creating architectural plan drawings, helping designers automatically generate the predicted details of apartment floor plans with given boundaries. Through machine learning on image pairs that show the boundary and the details of plan drawings, the learning program builds a model of the connections between the two given images, and the evaluation program then generates architectural drawings according to the input boundary images. This automatic design tool can help relieve the heavy load on architects in the early design stage, quickly providing a preview of design solutions for architectural plans.
keywords Machine Learning; Artificial Intelligence; Architectural Design; Interior Design
series CAADRIA
email
last changed 2022/06/07 07:57

_id ijac202018402
id ijac202018402
authors Mette Ramsgaard Thomsen, Paul Nicholas, Martin Tamke, Sebastian Gatz, Yuliya Sinke and Gabriella Rossi
year 2020
title Towards machine learning for architectural fabrication in the age of industry 4.0
source International Journal of Architectural Computing vol. 18 - no. 4, 335–352
summary Machine Learning (ML) is opening new perspectives for architectural fabrication, as it holds the potential for the profession to shortcut the currently tedious and costly setup of digitally integrated design-to-fabrication workflows and make these more adaptable. Establishing and altering these workflows rapidly becomes a main concern with the advent of Industry 4.0 in the building industry. In this article we present two projects which show how ML can lead to radical changes in the generation of fabrication data and link these directly to design intent. We investigate two different moments of implementation: linking performance to the generation of fabrication data (KnitCone) and integrating the ability to adapt fabrication data in real time in response to fabrication processes (Neural-Network Steered Robotic Fabrication). Together they examine how models can employ design information as training data and be trained to bypass processes within the digital chain. We detail the advantages and limitations of each experiment, and we reflect on core questions and perspectives of ML for architectural fabrication: the nature of the data to be used, the capacity of these algorithms to encode complexity and generalize results, their task-specificity versus their adaptability, and the trade-offs of using them with respect to conventional explicit analytical modelling.
keywords Machine learning, architectural design, industry 4.0, digital fabrication, robotic fabrication, CNC knit, neural networks
series journal
email
last changed 2021/06/03 23:29

_id caadria2020_259
id caadria2020_259
authors Rhee, Jinmo, Veloso, Pedro and Krishnamurti, Ramesh
year 2020
title Integrating building footprint prediction and building massing - an experiment in Pittsburgh
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 669-678
doi https://doi.org/10.52842/conf.caadria.2020.2.669
summary We present a novel method for generating building geometry using deep learning techniques based on contextual geometry in the urban context and explore its potential to support building massing. For contextual geometry, we opted to investigate the building footprint, a main interface between urban and architectural forms. For training, we collected GIS data of building footprints and parcel geometries from Pittsburgh and created a large Diagrammatic Image Dataset (DID). We employed a modified version of a VGG neural network to model the relationship between (c) a diagrammatic image of a building parcel and its context without the footprint, and (q) a quadrilateral representing the original footprint. The choice of a simple geometric output enables direct integration with custom design workflows because it obviates image processing and increases training speed. After training the neural network with a curated dataset, we explore a generative workflow for building massing that integrates contextual and programmatic data. As the trained model can suggest a contextual boundary for a new site, we used Massigner (Rhee and Chung 2019) to recommend massing alternatives based on the subtraction of voids inside the contextual boundary that satisfy design constraints and programmatic requirements. This new method suggests the potential for learning-based methods to be an alternative to rule-based design methods for grasping the complex relationships between design elements.
keywords Deep Learning; Prediction; Building Footprint; Massing; Generative Design
series CAADRIA
email
last changed 2022/06/07 07:56
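
The abstract above maps a diagrammatic context image (c) to a quadrilateral footprint (q) with a modified VGG network. A minimal sketch of that regression setup follows; the head size (eight coordinates), input resolution, and loss are assumptions, not the authors' exact architecture.

```python
# A minimal sketch: a VGG backbone regressing the eight coordinates
# (4 corners x (x, y)) of a quadrilateral footprint from a context image.
import torch
from torch import nn
import torchvision

model = torchvision.models.vgg16(weights=None, num_classes=8)  # 8-value regression head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

def train_step(context_images: torch.Tensor, quads: torch.Tensor) -> float:
    """context_images: (B, 3, 224, 224); quads: (B, 8) corner coords normalised to [0, 1]."""
    optimizer.zero_grad()
    loss = criterion(model(context_images), quads)
    loss.backward()
    optimizer.step()
    return loss.item()
```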

_id ecaade2020_283
id ecaade2020_283
authors Sebestyen, Adam and Tyc, Jakub
year 2020
title Machine Learning Methods in Energy Simulations for Architects and Designers - The implementation of supervised machine learning in the context of the computational design process
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 613-622
doi https://doi.org/10.52842/conf.ecaade.2020.1.613
summary The application of Machine Learning (ML) in the field of architecture is a worthwhile topic to discuss in the context of digital architecture. The authors propose to extend this discussion by presenting an integrated ML pipeline built with state-of-the-art data science tools. To investigate the affordances of such pipelines, an ML model able to predict the environmental metrics of a generalized facade system is created. This approach is valid for arbitrary facades, as long as the proposed design can be discretized in a form analogous to the data generated for the ML model training. The presented experiment evaluates the precision of the sunlight-hours and radiation-value predictions, aiming at application in the early design phases. The investigation builds on the knowledge embedded in the Grasshopper and Ladybug toolsets. The potential application of Convolutional Neural Networks and categorical datasets for classification tasks to increase the precision of the ML models has been identified. The possibility of extending the approach beyond the workspace of Rhino and Grasshopper is suggested. A further research outlook, investigating data pattern recognition capabilities in relation to three-dimensional forms discretized as multidimensional arrays, is stated.
keywords Machine Learning; Environmental Analysis; Parametric Design; Supervised Learning
series eCAADe
email
last changed 2022/06/07 08:00
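
The pipeline above trains a supervised model to stand in for environmental simulation of a facade system. The sketch below illustrates the surrogate idea with a scikit-learn regressor on placeholder features and targets; the authors' actual features come from Grasshopper/Ladybug and are not reproduced here.

```python
# A minimal sketch of a simulation surrogate: a regressor trained on facade
# parameters to predict environmental metrics. Features and targets are
# synthetic placeholders for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 4))   # e.g. panel depth, rotation, orientation, height (assumed features)
y = rng.random((2000, 2))   # e.g. sunlight hours, radiation (placeholder targets)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200).fit(X_tr, y_tr)
print("R^2 on held-out panels:", surrogate.score(X_te, y_te))
```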

_id ecaade2020_007
id ecaade2020_007
authors Yu, De
year 2020
title Reprogramming Urban Block by Machine Creativity - How to use neural networks as generative tools to design space
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 249-258
doi https://doi.org/10.52842/conf.ecaade.2020.1.249
summary The democratization of design requires balancing all sorts of factors in space design. However, the traditional way of organizing spatial relationships cannot deal with such complex design objectives. Can one find another form of creativity other than the human brain to design space? As Margaret Boden noted, "computers and creativity make interesting partners with respect to two different projects." This paper addresses whether machine creativity in the form of neural networks could be considered a powerful generative tool to reprogram the urban block in order to meet multiple users' needs. It tested this theory on a specific block model called Agri-tecture, a new architectural form combining farming with the urban built environment. Specifically, a machine empowered by a Generative Adversarial Network designed spatial layouts by learning from existing cases. Nevertheless, since the machine can hardly avoid errors, architects need to intervene and verify the machine's work. Thus, a synergy between human creativity and machine creativity is called for.
keywords machine creativity; Generative Adversarial Network; spatial layout; creativity combination; Agri-tecture
series eCAADe
email
last changed 2022/06/07 07:57

_id cdrf2019_169
id cdrf2019_169
authors Yubo Liu, Yihua Luo, Qiaoming Deng, and Xuanxing Zhou
year 2020
title Exploration of Campus Layout Based on Generative Adversarial Network Discussing the Significance of Small Amount Sample Learning for Architecture
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
doi https://doi.org/10.1007/978-981-33-4400-6_16
summary This paper aims to explore the idea and method of using deep learning with a small number of samples to realize campus layout generation. From the perspective of the architect, we construct two small-sample campus layout datasets through manual screening according to the preferences of specific architects. These datasets are used to train the ability of a Pix2Pix model to automatically generate a campus layout under the condition of a given campus boundary and surrounding roads. Through analysis of the experimental results, this paper finds that, provided the collected samples are screened effectively, even deep learning with a small-sample dataset can achieve a good result.
series cdrf
email
last changed 2022/09/29 07:51

_id caadria2020_384
id caadria2020_384
authors Patt, Trevor Ryan
year 2020
title Spectral Clustering for Urban Networks
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 91-100
doi https://doi.org/10.52842/conf.caadria.2020.2.091
summary As planetary urbanization accelerates, the significance of developing better methods for analyzing and making sense of complex urban networks also increases. The complexity and heterogeneity of contemporary urban space pose a challenge to conventional descriptive tools. In recent years, the emergence of urban network analysis and the widespread availability of GIS data have brought network analysis methods into the discussion of urban form. This paper describes a method for computationally identifying clusters within urban and other spatial networks using spectral analysis techniques. While spectral clustering has been employed in some limited urban studies on large spatialized datasets (particularly in identifying land use from orthoimages), it has not yet been thoroughly studied in relation to the space of the urban network itself. We present the construction of a weighted graph Laplacian matrix representation of the network and the processing of the network by eigendecomposition and subsequent clustering of eigenvalues in 4D space. In this implementation, the algorithm computes a cross-comparison for different numbers of clusters and recommends the best option based on either the 'elbow method' or the 'eigen gap' criterion. The results of the clustering operation are immediately visualized on the original map and can also be validated numerically according to a selection of cluster metrics. Cohesion and separation values are calculated simultaneously for all nodes. After presenting these, the paper also expands on the 'silhouette' value, a composite measure that seems especially suited to urban network clustering. This research is undertaken with the aim of informing the design process, so the visualization of results within the active 3D model is essential. Within the paper, we illustrate the process as applied to formal grids as well as historic, vernacular urban fabric; first on small extracted urban fragments and then over an entire city network to indicate scalability.
keywords Urban morphology; network analysis; spectral clustering; computation
series CAADRIA
email
last changed 2022/06/07 07:59
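
The method above builds a weighted graph Laplacian, eigendecomposes it, and clusters the spectral embedding, choosing the number of clusters by an elbow or eigen-gap criterion. A minimal NumPy/scikit-learn sketch of that generic pipeline follows; it uses an unnormalised Laplacian and a simple eigen-gap rule as assumptions, not the paper's exact formulation.

```python
# A minimal sketch of spectral clustering on a weighted network given as an
# adjacency matrix W: form the Laplacian, take the smallest eigenvectors,
# pick k with an eigen-gap heuristic, and cluster nodes with k-means.
import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(W: np.ndarray, max_k: int = 10) -> np.ndarray:
    D = np.diag(W.sum(axis=1))
    L = D - W                              # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    gaps = np.diff(vals[: max_k + 1])
    k = int(np.argmax(gaps)) + 1           # eigen-gap heuristic for the cluster count
    embedding = vecs[:, :k]                # spectral embedding of the nodes
    return KMeans(n_clusters=max(k, 2), n_init=10).fit_predict(embedding)

# Example on a random symmetric weight matrix:
W = np.random.rand(30, 30); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
labels = spectral_clusters(W)
```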

_id acadia20_658
id acadia20_658
authors Ho, Brian
year 2020
title Making a New City Image
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 658-667.
doi https://doi.org/10.52842/conf.acadia.2020.1.658
summary This paper explores the application of computer vision and machine learning to street-level imagery of cities, reevaluating past theory linking urban form to human perception. This paper further proposes a new method for design based on the resulting model, where a designer can identify areas of a city tied to certain perceptual qualities and generate speculative street scenes optimized for their predicted saliency on labels of human experience. This work extends Kevin Lynch’s Image of the City with deep learning: training an image classification model to recognize Lynch’s five elements of the city image, using Lynch’s original photographs and diagrams of Boston to construct labeled training data alongside new imagery of the same locations. This new city image revitalizes past attempts to quantify the human perception of urban form and improve urban design. A designer can search and map the data set to understand spatial opportunities and predict the quality of imagined designs through a dynamic process of collage, model inference, and adaptation. Within a larger practice of design, this work suggests that the curation of archival records, computer science techniques, and theoretical principles of urbanism might be integrated into a single craft. With a new city image, designers might “see” at the scale of the city, as well as focus on the texture, color, and details of urban life.
series ACADIA
type paper
email
last changed 2023/10/22 12:06
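
The paper above trains an image classifier to recognize Lynch's five elements of the city image from labeled street scenes. The sketch below shows a generic transfer-learning setup for such a five-class task; the backbone choice, input size, and training details are assumptions, not the authors' model.

```python
# A minimal sketch: fine-tuning a pretrained backbone to classify street
# imagery into Lynch's five elements (paths, edges, districts, nodes, landmarks).
import torch
from torch import nn
import torchvision

ELEMENTS = ["path", "edge", "district", "node", "landmark"]

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")   # downloads ImageNet weights
model.fc = nn.Linear(model.fc.in_features, len(ELEMENTS))      # replace the classifier head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (B, 3, 224, 224) normalised; labels: (B,) indices into ELEMENTS."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```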

_id ecaade2020_222
id ecaade2020_222
authors Ikeno, Kazunosuke, Fukuda, Tomohiro and Yabuki, Nobuyoshi
year 2020
title Automatic Generation of Horizontal Building Mask Images by Using a 3D Model with Aerial Photographs for Deep Learning
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 271-278
doi https://doi.org/10.52842/conf.ecaade.2020.2.271
summary Information extracted from aerial photographs is widely used in urban planning and design. An effective method for detecting buildings in aerial photographs is to use deep learning for understanding the current state of a target region. However, the building mask images used to train the deep learning model are manually generated in many cases. To solve this challenge, a method has been proposed for automatically generating mask images by using virtual reality 3D models for deep learning. Because normal virtual models do not have the realism of a photograph, it is difficult to obtain highly accurate detection results in the real world even if the images are used for deep learning training. Therefore, the objective of this research is to propose a method for automatically generating building mask images by using 3D models with textured aerial photographs for deep learning. The model trained on datasets generated by the proposed method could detect buildings in aerial photographs with an accuracy of IoU = 0.622. Work left for the future includes changing the size and type of mask images, training the model, and evaluating the accuracy of the trained model.
keywords Urban planning and design; Deep learning; Semantic segmentation; Mask image; Training data; Automatic design
series eCAADe
email
last changed 2022/06/07 07:50
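
The accuracy quoted above is reported as IoU = 0.622. For reference, the sketch below computes intersection over union between a predicted building mask and a ground-truth mask on boolean arrays; it is a generic metric implementation, not the authors' evaluation code.

```python
# A minimal sketch of the IoU metric for binary building masks.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred, truth: (H, W) boolean masks; returns intersection over union."""
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / float(union) if union else 1.0

# Example with random masks:
pred = np.random.rand(256, 256) > 0.5
truth = np.random.rand(256, 256) > 0.5
print("IoU:", iou(pred, truth))
```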

_id sigradi2023_234
id sigradi2023_234
authors Santos, Ítalo, Andrade, Max, Zanchettin, Cleber and Rolim, Adriana
year 2023
title Machine learning applied in the evaluation of airport projects in Brazil based on BIM models
source García Amen, F, Goni Fitipaldo, A L and Armagno Gentile, Á (eds.), Accelerated Landscapes - Proceedings of the XXVII International Conference of the Ibero-American Society of Digital Graphics (SIGraDi 2023), Punta del Este, Maldonado, Uruguay, 29 November - 1 December 2023, pp. 875–887
summary In a country with continental dimensions like Brazil, air transport plays a strategic role in the development of the country. In recent years, initiatives have been promoted to boost the development of air transport, among which the BIM BR strategy stands out, instituted by decree n-9.983 (2019), decree n-10.306 (2020) and more recently, the publication of the airport design manual (SAC, 2021). In this context, this work presents partial results of a doctoral research based on the Design Science Research (DSR) method for the application of Machine Learning (ML) techniques in the Artificial Intelligence (AI) subarea, aiming to support SAC airport project analysts in the phase of project evaluation. Based on a set of training and test data corresponding to airport projects, two ML algorithms were trained. Preliminary results indicate that the use of ML algorithms enables a new scenario to be explored by teams of airport design analysts in Brazil.
keywords Airports, Artificial intelligence, BIM, Evaluation, Machine learning.
series SIGraDi
email
last changed 2024/03/08 14:07
