CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures

Hits 1 to 20 of 86

_id acadia20_228
id acadia20_228
authors Alawadhi, Mohammad; Yan, Wei
year 2020
title BIM Hyperreality
doi https://doi.org/10.52842/conf.acadia.2020.1.228
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 228-236.
summary Deep learning is expected to offer new opportunities and a new paradigm for the field of architecture. One such opportunity is teaching neural networks to visually understand architectural elements from the built environment. However, the availability of large training datasets is one of the biggest limitations of neural networks. Also, the vast majority of training data for visual recognition tasks is annotated by humans. In order to resolve this bottleneck, we present a concept of a hybrid system—using both building information modeling (BIM) and hyperrealistic (photorealistic) rendering—to synthesize datasets for training a neural network for building object recognition in photos. For generating our training dataset, BIMrAI, we used an existing BIM model and a corresponding photorealistically rendered model of the same building. We created methods for using renderings to train a deep learning model, trained a generative adversarial network (GAN) model using these methods, and tested the output model on real-world photos. For the specific case study presented in this paper, our results show that a neural network trained with synthetic data (i.e., photorealistic renderings and BIM-based semantic labels) can be used to identify building objects from photos without using photos in the training data. Future work can enhance the presented methods using available BIM models and renderings for more generalized mapping and description of photographed built environments.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id sigradi2020_60
id sigradi2020_60
authors Asmar, Karen El; Sareen, Harpreet
year 2020
title Machinic Interpolations: A GAN Pipeline for Integrating Lateral Thinking in Computational Tools of Architecture
source SIGraDi 2020 [Proceedings of the 24th Conference of the Iberoamerican Society of Digital Graphics - ISSN: 2318-6968] Online Conference 18 - 20 November 2020, pp. 60-66
summary In this paper, we discuss a new tool pipeline that aims to re-integrate lateral thinking strategies in computational tools of architecture. We present a 4-step AI-driven pipeline, based on Generative Adversarial Networks (GANs), that draws from the ability to access the latent space of a machine and use this space as a digital design environment. We demonstrate examples of navigating in this space using vector arithmetic and interpolations as a method to generate a series of images that are then translated to 3D voxel structures. Through a gallery of forms, we show how this series of techniques could result in unexpected spaces and outputs beyond what could be produced by human capability alone.
keywords Latent space, GANs, Lateral thinking, Computational tools, Artificial intelligence
series SIGraDi
email
last changed 2021/07/16 11:48
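
The latent-space navigation this abstract describes can be sketched in a few lines. The snippet below is an assumed illustration only, not the authors' pipeline: the latent dimensionality (512) is a placeholder, and the pretrained GAN generator G that would decode each code into an image is referenced only in a comment.

    import numpy as np

    def interpolate(z_a, z_b, steps=8):
        """Return evenly spaced latent codes on the line between z_a and z_b."""
        return [(1.0 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

    d = 512                                   # latent dimensionality (assumed)
    rng = np.random.default_rng(0)
    z_a, z_b = rng.standard_normal(d), rng.standard_normal(d)

    # Vector arithmetic: nudge a code along an (illustrative) attribute direction.
    direction = rng.standard_normal(d)
    z_shifted = z_a + 0.5 * direction

    # Interpolated codes; each would be decoded by a pretrained generator G(z)
    # into an image, and the resulting image series translated to 3D voxel structures.
    codes = interpolate(z_a, z_b, steps=8)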

_id ecaade2020_017
id ecaade2020_017
authors Chan, Yick Hin Edwin and Spaeth, A. Benjamin
year 2020
title Architectural Visualisation with Conditional Generative Adversarial Networks (cGAN) - What machines read in architectural sketches
doi https://doi.org/10.52842/conf.ecaade.2020.2.299
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 299-308
summary As a form of visual reasoning, sketching is a human cognitive activity instrumental to architectural design. In the process of sketching, abstract sketches invoke new mental imageries and subsequently lead to new sketches. This iterative transformation is repeated until the final design emerges. Artificial Intelligence and Deep Neural Networks have been developed to imitate human cognitive processes. Amongst these networks, the Conditional Generative Adversarial Network (cGAN) has been developed for image-to-image translation and is able to generate realistic images from abstract sketches. To mimic the cyclic process of abstracting and imaging in architectural concept design, a Cyclic-cGAN that consists of two cGANs is proposed in this paper. The first cGAN transforms sketches to images, while the second from images to sketches. The training of the Cyclic-cGAN is presented and its performance illustrated by using two sketches from well-known architects, and two from architecture students. The results show that the proposed Cyclic-cGAN can emulate architects' mode of visual reasoning through sketching. This novel approach of utilising deep neural networks may open the door for further development of Artificial Intelligence in assisting architects in conceptual design.
keywords visual cognition; design computation; machine learning; artificial intelligence
series eCAADe
email
last changed 2022/06/07 07:55
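
The cyclic scheme this abstract describes (two conditional translation networks chained so that a sketch-to-image-to-sketch round trip reproduces its input) can be hinted at with a toy sketch. The snippet below is only an assumed, simplified illustration in PyTorch: the adversarial terms and the paper's actual cGAN architectures are omitted, the two convolutional mappings are stand-ins, and all tensors are random placeholders.

    import torch
    import torch.nn as nn

    # Toy stand-ins for the two translation networks (sketch -> image, image -> sketch).
    G_s2i = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
    G_i2s = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())

    l1 = nn.L1Loss()
    opt = torch.optim.Adam(list(G_s2i.parameters()) + list(G_i2s.parameters()), lr=2e-4)

    sketch = torch.rand(1, 1, 128, 128)       # placeholder abstract sketch
    photo = torch.rand(1, 3, 128, 128)        # placeholder rendered/photographic image

    # Cycle-consistency: a round trip through both networks should return the input
    # (adversarial losses, which the paper's cGANs also use, are left out here).
    cycle_loss = l1(G_i2s(G_s2i(sketch)), sketch) + l1(G_s2i(G_i2s(photo)), photo)
    opt.zero_grad(); cycle_loss.backward(); opt.step()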

_id caadria2020_446
id caadria2020_446
authors Cho, Dahngyu, Kim, Jinsung, Shin, Eunseo, Choi, Jungsik and Lee, Jin-Kook
year 2020
title Recognizing Architectural Objects in Floor-plan Drawings Using Deep-learning Style-transfer Algorithms
doi https://doi.org/10.52842/conf.caadria.2020.2.717
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 717-725
summary This paper describes an approach to recognizing floor plans by sorting the essential objects of the plan using deep-learning-based style-transfer algorithms. Previously, the recognition of floor plans in the design and remodeling phase was labor-intensive, requiring expert-dependent, manual interpretation. For a computer to take in the imaged architectural plan information, the symbols in the plan must be understood. However, the computer has difficulty extracting information directly from preexisting plans because of their differing conditions. The goal is to convert the preexisting plans into an integrated format that improves readability by transferring their style into a comprehensible form using Conditional Generative Adversarial Networks (cGAN). About 100 floor plans were used for the dataset, which was previously constructed by the Ministry of Land, Infrastructure, and Transport of Korea. The proposed approach has two steps: (1) define the important objects contained in the floor plan that need to be extracted, and (2) use the defined objects as training input data for the cGAN style-transfer model. In this paper, wall, door, and window objects were selected as the targets for extraction. The preexisting floor plans are segmented into parts and altered into a consistent format, which then contributes to automatically extracting information for further utilization.
keywords Architectural objects; floor plan recognition; deep-learning; style-transfer
series CAADRIA
email
last changed 2022/06/07 07:56

_id acadia20_272
id acadia20_272
authors del Campo, Matias; Carlson, Alexandra; Manninger, Sandra
year 2020
title How Machines Learn to Plan
doi https://doi.org/10.52842/conf.acadia.2020.1.272
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 272-281.
summary This paper strives to interrogate the abilities of machine vision techniques based on a family of deep neural networks, called generative adversarial neural networks (GANs), to devise alternative planning solutions. The basis for these processes is a large database of existing planning solutions. For the experimental setup of this paper, these plans were divided into two separate learning classes: Modern and Baroque. The proposed algorithmic technique leverages the large amount of structural and symbolic information that is inherent to the design of planning solutions throughout history to generate novel unseen plans. In this area of inquiry, aspects of culture such as creativity, agency, and authorship are discussed, as neural networks can conceive solutions currently alien to designers. These can range from alien morphologies to advanced programmatic solutions. This paper is primarily interested in interrogating the second existing but uncharted territory.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id acadia20_218
id acadia20_218
authors Rossi, Gabriella; Nicholas, Paul
year 2020
title Encoded Images
doi https://doi.org/10.52842/conf.acadia.2020.1.218
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 218-227.
summary In this paper, we explore conditional generative adversarial networks (cGANs) as a new way of bridging the gap between design and analysis in contemporary architectural practice. By substituting analytical finite element analysis (FEA) modeling with cGAN predictions during the iterative design phase, we develop novel workflows that support iterative computational design and digital fabrication processes in new ways. This paper reports two case studies of increasing complexity that utilize cGANs for structural analysis. Central to both experiments is the representation of information within the data set the cGAN is trained on. We contribute a prototypical representational technique to encode multiple layers of geometric and performative description into false color images, which we then use to train a Pix2Pix neural network architecture on entirely digitally generated data sets as a proxy for the performance of physically fabricated elements. The paper describes the representational workflow and reports the process and results of training and their integration into the design experiments. Lastly, we identify potentials and limits of this approach within the design processes.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id cdrf2019_103
id cdrf2019_103
authors Runjia Tian
year 2020
title Suggestive Site Planning with Conditional GAN and Urban GIS Data
doi https://doi.org/10.1007/978-981-33-4400-6_10
source Proceedings of the 2020 DigitalFUTURES - The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary In architecture, landscape architecture, and urban design, site planning refers to the organizational process of site layout. A fundamental step in site planning is the design of the building layout across the site. This process is hard to automate due to its multi-modal nature: it involves multiple constraints such as street block shape, orientation, program, density, and plantation. The paper proposes a prototypical and extensive framework to generate building footprints as masterplan references for architects, landscape architects, and urban designers by learning from the existing built environment with Artificial Neural Networks. A Pix2PixHD Conditional Generative Adversarial Neural Network is used to learn the mapping from a site boundary geometry, represented as a pixelized image, to an image containing building footprints color-coded by program. A dataset containing the necessary information is collected from open-source GIS (Geographic Information System) portals of the city of Boston, wrangled with geospatial analysis libraries in Python, and trained with the TensorFlow framework. The results are visualized in Rhinoceros and Grasshopper for generating site plans interactively.
series cdrf
email
last changed 2022/09/29 07:51

_id ecaade2020_018
id ecaade2020_018
authors Sato, Gen, Ishizawa, Tsukasa, Iseda, Hajime and Kitahara, Hideo
year 2020
title Automatic Generation of the Schematic Mechanical System Drawing by Generative Adversarial Network
doi https://doi.org/10.52842/conf.ecaade.2020.1.403
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 403-410
summary In the front-loaded project workflow, mechanical, electrical, and plumbing (MEP) design requires precision from the beginning of the design phase. Leveraging insights from as-built drawings during the early design stage can be beneficial to design enhancement. This study proposes a GAN (Generative Adversarial Network)-based system that populates the fire extinguishing (FE) system onto an architectural drawing image given as its input. The Pix2Pix algorithm with an improved loss function enables this generation. The algorithm was trained on a dataset of pairs of as-built building plans with and without FE equipment. A novel index termed the Piping Coverage Rate was also proposed to evaluate the obtained results. The system produces its output within 45 seconds, which is drastically faster than the conventional manual workflow. The system enables prompt engineering studies informed by past as-built information, which contributes to further data-driven decision making.
keywords Generative Adversarial Network; MEP; as-built drawing; automated design; data-driven design
series eCAADe
email
last changed 2022/06/07 07:57

_id caadria2020_054
id caadria2020_054
authors Shen, Jiaqi, Liu, Chuan, Ren, Yue and Zheng, Hao
year 2020
title Machine Learning Assisted Urban Filling
doi https://doi.org/10.52842/conf.caadria.2020.2.679
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 679-688
summary When drawing urban-scale plans, designers must define the position and shape of each building. This usually takes much time in the early design stage, when the conditions of a city have not yet been finally determined, so designers spend a lot of time working back and forth, drawing sketches for cities with different characteristics. Meanwhile, machine learning, as a decision-making tool, has been widely used in many fields. The Generative Adversarial Network (GAN) is a model framework in machine learning specially designed to learn and generate image data. Therefore, this research aims to apply GANs to creating urban design plans, helping designers automatically generate the predicted details of building configurations for given city conditions. Through machine learning on image pairs, the results show the relationship between the site conditions (roads, green lands, and rivers) and the configuration of buildings. This automatic design tool can help relieve the heavy workload of urban designers in the early design stage, quickly providing a preview of design solutions for urban design tasks. The analysis of different machine learning models trained on data from different cities inspires urban designers with design strategies and features under distinct conditions.
keywords Artificial Intelligence; Urban Design; Generative Adversarial Networks; Machine Learning
series CAADRIA
email
last changed 2022/06/07 07:56
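
The image-pair learning this abstract describes follows the general shape of a Pix2Pix-style conditional GAN. The sketch below is an assumed, minimal PyTorch version, not the authors' code: the tiny networks stand in for a U-Net generator and PatchGAN discriminator, and random tensors stand in for the paired site-condition and building-layout images; it only shows the standard adversarial-plus-L1 training step such pipelines use.

    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):           # stand-in for a U-Net generator
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
        def forward(self, x):
            return self.net(x)

    class TinyDiscriminator(nn.Module):       # stand-in for a PatchGAN discriminator
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 1, 4, stride=2, padding=1))
        def forward(self, condition, image):
            return self.net(torch.cat([condition, image], dim=1))

    G, D = TinyGenerator(), TinyDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    site = torch.rand(1, 3, 256, 256)         # placeholder roads/green/river image
    layout = torch.rand(1, 3, 256, 256)       # placeholder ground-truth building layout

    # Discriminator step: score real pairs high, generated pairs low.
    fake = G(site).detach()
    pred_real, pred_fake = D(site, layout), D(site, fake)
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator and stay close to the target (L1 term).
    fake = G(site)
    pred = D(site, fake)
    g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, layout)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()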

_id artificial_intellicence2019_117
id artificial_intellicence2019_117
authors Stanislas Chaillou
year 2020
title ArchiGAN: Artificial Intelligence x Architecture
doi https://doi.org/10.1007/978-981-15-6568-7_8
source Architectural Intelligence: Selected Papers from the 1st International Conference on Computational Design and Robotic Fabrication (CDRF 2019)
summary AI will soon massively empower architects in their day-to-day practice. This article provides a proof of concept. The framework used here offers a springboard for discussion, inviting architects to start engaging with AI, and data scientists to consider Architecture as a field of investigation. In this article, we summarize a part of our thesis, submitted at Harvard in May 2019, where Generative Adversarial Neural Networks (or GANs) are leveraged to design floor plans and entire buildings.
series Architectural Intelligence
email
last changed 2022/09/29 07:28

_id cdrf2022_209
id cdrf2022_209
authors Yecheng Zhang, Qimin Zhang, Yuxuan Zhao, Yunjie Deng, Feiyang Liu, Hao Zheng
year 2022
title Artificial Intelligence Prediction of Urban Spatial Risk Factors from an Epidemic Perspective
doi https://doi.org/10.1007/978-981-19-8637-6_18
source Proceedings of the 2022 DigitalFUTURES - The 4th International Conference on Computational Design and Robotic Fabrication (CDRF 2022)
summary From the epidemiological perspective, previous research methods for COVID-19 are generally based on classical statistical analysis; as a result, spatial information is often not used effectively. This paper uses image-based neural networks to explore the relationships among urban spatial risk, the distribution of infected populations, and the design of urban facilities. We take the spatio-temporal data of people infected with novel coronavirus pneumonia (COVID-19) in Wuhan before February 28, 2020, as the research object. We use kriging spatial interpolation and kernel density estimation to establish the epidemic heat distribution on fine grid units. We further examine the distribution of nine main spatial risk factors, including agencies, hospitals, park squares, sports fields, banks, hotels, etc., which are tested for significant positive correlation with the heat distribution of the epidemic. The weights of the spatial risk factors are used to train Generative Adversarial Network models, which predict the heat distribution of the outbreak in a given area. According to the trained model, optimizing the relevant environmental design in urban areas to control risk factors can effectively prevent and manage the spread of the epidemic. The input image of the machine learning model is a city plan converted from public infrastructure data, and the output image is a map of urban spatial risk factors in the given area.
series cdrf
email
last changed 2024/05/29 14:02
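
The abstract mentions establishing an epidemic heat distribution on fine grid units with spatial interpolation and kernel density estimation. As a rough, assumed illustration of the kernel-density part only (not the authors' pipeline), the snippet below evaluates a 2D Gaussian KDE over synthetic stand-in case coordinates on a regular grid; the resulting raster could then be normalized and used as a training target image.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    cases_xy = rng.normal(loc=[114.3, 30.6], scale=0.05, size=(500, 2))  # synthetic lon/lat points

    kde = gaussian_kde(cases_xy.T)            # 2D kernel density estimate of case locations

    # Evaluate the density on a fine grid covering the study area.
    xs = np.linspace(cases_xy[:, 0].min(), cases_xy[:, 0].max(), 200)
    ys = np.linspace(cases_xy[:, 1].min(), cases_xy[:, 1].max(), 200)
    gx, gy = np.meshgrid(xs, ys)
    heat = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    # `heat` is the gridded "epidemic heat" surface; written out as an image,
    # it can serve as the target of an image-to-image model.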

_id ecaade2020_007
id ecaade2020_007
authors Yu, De
year 2020
title Reprogramming Urban Block by Machine Creativity - How to use neural networks as generative tools to design space
doi https://doi.org/10.52842/conf.ecaade.2020.1.249
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 249-258
summary The democratization of design requires balancing all sorts of factors in space design. However, the traditional way to organize spatial relationships cannot deal with such complex design objectives. Can one find another form of creativity, other than the human brain, to design space? As Margaret Boden mentioned, "computers and creativity make interesting partners with respect to two different projects." This paper addresses whether machine creativity in the form of neural networks could be considered a powerful generative tool to reprogram the urban block in order to meet multiple users' needs. It tested this theory in a specific block model called Agri-tecture, a new architectural form combining farming with the urban built environment. Specifically, the machine, empowered by a Generative Adversarial Network, designed spatial layouts by learning from existing cases. Nevertheless, since the machine can hardly avoid errors, architects need to intervene and verify the machine's work. Thus, a synergy between human creativity and machine creativity is called for.
keywords machine creativity; Generative Adversarial Network; spatial layout; creativity combination; Agri-tecture
series eCAADe
email
last changed 2022/06/07 07:57

_id caadria2020_234
id caadria2020_234
authors Zhang, Hang and Blasetti, Ezio
year 2020
title 3D Architectural Form Style Transfer through Machine Learning
doi https://doi.org/10.52842/conf.caadria.2020.2.659
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 659-668
summary In recent years, tremendous progress has been made in the field of machine learning, but it is still very hard to apply 3D machine learning directly to architectural design because of practical constraints on model resolution and training time. Building on the past several years' development of GANs (Generative Adversarial Networks) and on the method of spatial sequence rules, the authors introduce 3D architectural form style transfer on two levels of scale (overall and detailed) through multiple machine learning algorithms trained with two types of 2D training data sets (serial stack and multi-view) at a relatively decent resolution. By exploring how styles interact with and influence the original content in neural networks at the 2D level, designers can manually control the expected output of the 2D images, resulting in new-style 3D architectural models created with a clear design approach.
keywords 3D; Form Finding; Style Transfer; Machine Learning; Architectural Design
series CAADRIA
email
last changed 2022/06/07 07:57

_id caadria2020_015
id caadria2020_015
authors Zheng, Hao, An, Keyao, Wei, Jingxuan and Ren, Yue
year 2020
title Apartment Floor Plans Generation via Generative Adversarial Networks
doi https://doi.org/10.52842/conf.caadria.2020.2.599
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 599-608
summary When drawing architectural plans, designers must define every detail so that the images contain enough information to support design. This usually takes much time in the early design stage, when the design boundary has not yet been finally determined, so designers spend a lot of time working back and forth, drawing sketches for different site conditions. Meanwhile, machine learning, as a decision-making tool, has been widely used in many fields. The Generative Adversarial Network (GAN) is a model framework in machine learning specially designed to learn and generate image data. Therefore, this research aims to apply GANs to creating architectural plan drawings, helping designers automatically generate the predicted details of apartment floor plans with given boundaries. Through machine learning on image pairs that show the boundary and the details of plan drawings, the learning program builds a model of the connections between the two given images, and the evaluation program then generates architectural drawings from the input boundary images. This automatic design tool can help relieve the heavy workload of architects in the early design stage, quickly providing a preview of design solutions for architectural plans.
keywords Machine Learning; Artificial Intelligence; Architectural Design; Interior Design
series CAADRIA
email
last changed 2022/06/07 07:57

_id caadria2020_118
id caadria2020_118
authors Chow, Ka Lok and van Ameijde, Jeroen
year 2020
title Generative Housing Communities - Design of Participatory Spaces in Public Housing Using Network Configurational Theories
doi https://doi.org/10.52842/conf.caadria.2020.2.283
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 283-292
summary This research-by-design project explores how public housing estates can accommodate social diversity and the appropriation of shared spaces, using qualitative and quantitative analysis of circulation networks. A case study housing estate in Hong Kong was analysed through field observations of movements and activities and as a site for the speculative re-design of shared spaces. Generative design processes were developed based on several parameters, including shortest paths, visibility integration and connectivity integration (Hillier & Hanson, 1984). Additional tools were developed to combine these techniques with optimisation of sunlight access, maximisation of views for residential towers and the provision of permeability of ground level building volumes. The project demonstrates how flexibility of use and social engagement can constitute a platform for self-organisation, similar to Jane Jacobs' notion of vibrant streets leading to active and progressive communities. It shows how computational design and configurational theories can promote a bottom-up approach for generating new types of residential environments that support participatory and diverse communities, rather than a conventional top-down approach that is perceived to embody mechanisms of social regimentation.
keywords Urban Planning and Design; Network Configuration; Community Space and Social Interaction; Hong Kong Public Housing
series CAADRIA
email
last changed 2022/06/07 07:56

_id acadia20_668
id acadia20_668
authors Pasquero, Claudia; Poletto, Marco
year 2020
title Deep Green
doi https://doi.org/10.52842/conf.acadia.2020.1.668
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 668-677.
summary Ubiquitous computing enables us to decipher the biosphere’s anthropogenic dimension, what we call the Urbansphere (Pasquero and Poletto 2020). This machinic perspective unveils a new postanthropocentric reality, where the impact of artificial systems on the natural biosphere is indeed global, but their agency is no longer entirely human. This paper explores a protocol to design the Urbansphere, or what we may call the urbanization of the nonhuman, titled DeepGreen. With the development of DeepGreen, we are testing the potential to bring the interdependence of digital and biological intelligence to the core of architectural and urban design research. This is achieved by developing a new biocomputational design workflow that enables the pairing of what is algorithmically drawn with what is biologically grown (Pasquero and Poletto 2016). In other words, and more in detail, the paper will illustrate how generative adversarial network (GAN) algorithms (Radford, Metz, and Soumith 2015) can be trained to “behave” like a Physarum polycephalum, a unicellular organism endowed with surprising computational abilities and self-organizing behaviors that have made it popular among scientists and engineers alike (Adamatzky 2010) (Fig. 1). The trained GAN_Physarum is deployed as an urban design technique to test the potential of polycephalum intelligence in solving problems of urban remetabolization and in computing scenarios of urban morphogenesis within a nonhuman conceptual framework.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id cdrf2019_169
id cdrf2019_169
authors Yubo Liu, Yihua Luo, Qiaoming Deng, and Xuanxing Zhou
year 2020
title Exploration of Campus Layout Based on Generative Adversarial Network - Discussing the Significance of Small Amount Sample Learning for Architecture
doi https://doi.org/10.1007/978-981-33-4400-6_16
source Proceedings of the 2020 DigitalFUTURES - The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary This paper aims to explore the idea and method of using deep learning with a small amount sample to realize campus layout generation. From the perspective of the architect, we construct two small amount sample campus layout data sets through artificial screening with the preference of the specific architects. These data sets are used to train the ability of Pix2Pix model to automatically generate the campus layout under the condition of the given campus boundary and surrounding roads. Through the analysis of the experimental results, this paper finds that under the premise of effective screening of the collected samples, even using a small amount sample data set for deep learning can achieve a good result.
series cdrf
email
last changed 2022/09/29 07:51

_id acadia20_238
id acadia20_238
authors Zhang, Hang
year 2020
title Text-to-Form
doi https://doi.org/10.52842/conf.acadia.2020.1.238
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 238-247.
summary Traditionally, architects express their thoughts on the design of 3D architectural forms via perspective renderings and standardized 2D drawings. However, as architectural design is always multidimensional and intricate, it is difficult to make others understand the design intention, concrete form, and even spatial layout through simple language descriptions. Benefiting from the fast development of machine learning, especially natural language processing and convolutional neural networks, this paper proposes a Linguistics-based Architectural Form Generative Model (LAFGM) that could be trained to make 3D architectural form predictions based simply on language input. Several related works exist that focus on learning text-to-image generation, while others have taken a further step by generating simple shapes from the descriptions. However, the text parsing and output of these works still remain either at the 2D stage or confined to a single geometry. On the basis of these works, this paper used both Stanford Scene Graph Parser (Sebastian et al. 2015) and graph convolutional networks (Kipf and Welling 2016) to compile the analytic semantic structure for the input texts, then generated the 3D architectural form expressed by the language descriptions, which is also aided by several optimization algorithms. To a certain extent, the training results approached the 3D form intended in the textual description, not only indicating the tremendous potential of LAFGM from linguistic input to 3D architectural form, but also innovating design expression and communication regarding 3D spatial information.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id caadria2020_045
id caadria2020_045
authors Zheng, Hao and Ren, Yue
year 2020
title Machine Learning Neural Networks Construction and Analysis in Vectorized Design Drawings
doi https://doi.org/10.52842/conf.caadria.2020.2.707
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 707-716
summary Machine learning, a recently prevalent research domain in data prediction and analysis, has been widely used in a variety of fields. In the design field, especially architectural design, machine learning methods that learn and generate design data as pixelized images have been developed in previous research. However, processing pixelized image data causes precision loss and computational waste, since geometric architectural design data is efficiently stored and presented as vectorized CAD files. Thus, in this article, the authors developed a specific machine learning neural network to learn and predict design drawings as vectorized data, speeding up the learning and prediction process while improving accuracy. First, two necessary geometric tests were successfully completed, which illustrate the central concept of the neural network construction. Then, a design-rule prediction model was built to demonstrate methods for optimizing the neural network and data structure. Lastly, a generation model based on human-made design data was constructed, which can be used to predict and generate bedroom furniture positions from the boundary data of the room, door, and window.
keywords Machine Learning; Artificial Intelligence; Generative Design; Geometric Design
series CAADRIA
email
last changed 2022/06/07 07:57
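
To make the vectorized setup concrete, the sketch below maps a flat list of room/door/window corner coordinates to predicted furniture anchor coordinates with a small fully connected regressor. It is an assumption for illustration only, not the paper's network or data format: the dimensions N_IN and N_OUT and the random training pairs are placeholders.

    import torch
    import torch.nn as nn

    N_IN = 16     # e.g. 8 (x, y) points describing room, door, and window (assumed)
    N_OUT = 6     # e.g. 3 (x, y) furniture anchor points (assumed)

    model = nn.Sequential(
        nn.Linear(N_IN, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_OUT))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Placeholder training pairs: normalized boundary coordinates -> furniture coordinates.
    boundaries = torch.rand(128, N_IN)
    furniture = torch.rand(128, N_OUT)

    for epoch in range(200):
        pred = model(boundaries)
        loss = loss_fn(pred, furniture)
        optimizer.zero_grad(); loss.backward(); optimizer.step()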

_id acadia20_208
id acadia20_208
authors Zheng, Hao; Wang, Xinyu; Qi, Zehua; Sun, Shixuan; Akbarzadeh, Masoud
year 2020
title Generating and Optimizing a Funicular Arch Floor Structure
doi https://doi.org/10.52842/conf.acadia.2020.2.208
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 208-217.
summary In this paper, we propose a geometry-based generative design method to generate and optimize a floor structure with funicular building members. This method challenges the antiquated column system, which has been used for more than a century. By inputting the floor plan with the positions of columns, designers can generate a variety of funicular supporting structures, expanding the choice of floor structure designs beyond simply columns and beams and encouraging the creation of architectural spaces with more diverse design elements. We further apply machine learning techniques (artificial neural networks) to evaluate and optimize the structural performance and constructability of the funicular structure, thus finding the optimal solutions within the almost infinite solution space. To achieve this, a machine learning model is trained and used as a fast evaluator to help the evolutionary algorithm find the optimal designs. This interdisciplinary method combines computer science and structural design, providing flexible design choices for generating floor structures.
series ACADIA
type paper
email
last changed 2023/10/22 12:06
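
The abstract describes training a neural network as a fast evaluator inside an evolutionary search. The snippet below is a generic, hedged sketch of that pattern, not the authors' structural model: a small regressor is fitted on precomputed parameter-score pairs and then used to rank mutated candidates in a simple select-the-best loop; the design parameters, score function, and population sizes are all placeholders.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(42)

    # Placeholder data set: design parameter vectors and their (slow) analysis scores.
    X_train = rng.random((500, 10))                       # 10 design parameters per candidate
    y_train = np.sin(X_train).sum(axis=1)                 # stand-in for a structural score

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    surrogate.fit(X_train, y_train)                       # fast evaluator replacing full analysis

    # Evolutionary search: mutate, score with the surrogate, keep the best candidates.
    population = rng.random((40, 10))
    for generation in range(50):
        children = np.clip(population + rng.normal(0.0, 0.05, population.shape), 0.0, 1.0)
        pool = np.vstack([population, children])
        scores = surrogate.predict(pool)
        population = pool[np.argsort(scores)[-40:]]       # select the top-scoring designs

    best = population[-1]                                 # highest surrogate-scored design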
