CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD futures.

Hits 1 to 20 of 652

_id caadria2020_015
id caadria2020_015
authors Zheng, Hao, An, Keyao, Wei, Jingxuan and Ren, Yue
year 2020
title Apartment Floor Plans Generation via Generative Adversarial Networks
doi https://doi.org/10.52842/conf.caadria.2020.2.599
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 599-608
summary When drawing architectural plans, designers must define every detail so that the images contain enough information to support the design. This process usually takes much time in the early design stage, when the design boundary has not yet been finalized, so designers spend a lot of time working back and forth on sketches for different site conditions. Meanwhile, machine learning has been widely used as a decision-making tool in many fields. The Generative Adversarial Network (GAN) is a machine learning model framework specially designed to learn and generate image data. This research therefore applies GANs to creating architectural plan drawings, helping designers automatically generate predicted details of apartment floor plans from given boundaries. Through machine learning on image pairs showing the boundary and the details of plan drawings, a learning program builds a model of the connections between the two images, and an evaluation program then generates architectural drawings from input boundary images. This automatic design tool can help relieve architects' heavy workload in the early design stage, quickly providing a preview of design solutions for architectural plans.
keywords Machine Learning; Artificial Intelligence; Architectural Design; Interior Design
series CAADRIA
email
last changed 2022/06/07 07:57
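
The boundary-to-plan mapping summarized above is an image-to-image translation task. The following is a minimal, hypothetical pix2pix-style training step in PyTorch, included only to illustrate the general paired-image GAN setup; the network sizes, loss weights, and dummy tensors are assumptions, not the authors' implementation.

```python
# Hypothetical pix2pix-style training step: boundary image -> detailed plan image.
import torch
import torch.nn as nn

class Down(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(cin, cout, 4, stride=2, padding=1),
            nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))
    def forward(self, x): return self.block(x)

class Up(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.block = nn.Sequential(
            nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
            nn.BatchNorm2d(cout), nn.ReLU())
    def forward(self, x): return self.block(x)

class Generator(nn.Module):
    """Maps a 3-channel boundary image to a 3-channel plan image."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(Down(3, 64), Down(64, 128), Down(128, 256))
        self.dec = nn.Sequential(Up(256, 128), Up(128, 64), Up(64, 32))
        self.out = nn.Conv2d(32, 3, 3, padding=1)
    def forward(self, x): return torch.tanh(self.out(self.dec(self.enc(x))))

class Discriminator(nn.Module):
    """PatchGAN-style critic over the (boundary, plan) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(Down(6, 64), Down(64, 128),
                                 nn.Conv2d(128, 1, 3, padding=1))
    def forward(self, boundary, plan):
        return self.net(torch.cat([boundary, plan], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Dummy batch standing in for (boundary image, detailed plan image) pairs.
boundary = torch.rand(4, 3, 256, 256)
plan_gt = torch.rand(4, 3, 256, 256)

# Discriminator step: real pairs vs. generated pairs.
fake = G(boundary).detach()
real_pred, fake_pred = D(boundary, plan_gt), D(boundary, fake)
d_loss = bce(real_pred, torch.ones_like(real_pred)) + \
         bce(fake_pred, torch.zeros_like(fake_pred))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator plus an L1 reconstruction term.
fake = G(boundary)
pred = D(boundary, fake)
g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, plan_gt)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Because the discriminator sees the boundary and the plan together, the generator is pushed to produce plan details consistent with the given boundary, while the L1 term keeps outputs close to the ground-truth drawings.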

_id acadia20_228
id acadia20_228
authors Alawadhi, Mohammad; Yan, Wei
year 2020
title BIM Hyperreality
doi https://doi.org/10.52842/conf.acadia.2020.1.228
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 228-236.
summary Deep learning is expected to offer new opportunities and a new paradigm for the field of architecture. One such opportunity is teaching neural networks to visually understand architectural elements from the built environment. However, the availability of large training datasets is one of the biggest limitations of neural networks. Also, the vast majority of training data for visual recognition tasks is annotated by humans. In order to resolve this bottleneck, we present a concept of a hybrid system—using both building information modeling (BIM) and hyperrealistic (photorealistic) rendering—to synthesize datasets for training a neural network for building object recognition in photos. For generating our training dataset, BIMrAI, we used an existing BIM model and a corresponding photorealistically rendered model of the same building. We created methods for using renderings to train a deep learning model, trained a generative adversarial network (GAN) model using these methods, and tested the output model on real-world photos. For the specific case study presented in this paper, our results show that a neural network trained with synthetic data (i.e., photorealistic renderings and BIM-based semantic labels) can be used to identify building objects from photos without using photos in the training data. Future work can enhance the presented methods using available BIM models and renderings for more generalized mapping and description of photographed built environments.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id ecaade2020_017
id ecaade2020_017
authors Chan, Yick Hin Edwin and Spaeth, A. Benjamin
year 2020
title Architectural Visualisation with Conditional Generative Adversarial Networks (cGAN) - What machines read in architectural sketches
doi https://doi.org/10.52842/conf.ecaade.2020.2.299
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 299-308
summary As a form of visual reasoning, sketching is a human cognitive activity instrumental to architectural design. In the process of sketching, abstract sketches invoke new mental imageries and subsequently lead to new sketches. This iterative transformation is repeated until the final design emerges. Artificial Intelligence and Deep Neural Networks have been developed to imitate human cognitive processes. Amongst these networks, the Conditional Generative Adversarial Network (cGAN) has been developed for image-to-image translation and is able to generate realistic images from abstract sketches. To mimic the cyclic process of abstracting and imaging in architectural concept design, a Cyclic-cGAN that consists of two cGANs is proposed in this paper. The first cGAN transforms sketches to images, while the second from images to sketches. The training of the Cyclic-cGAN is presented and its performance illustrated by using two sketches from well-known architects, and two from architecture students. The results show that the proposed Cyclic-cGAN can emulate architects' mode of visual reasoning through sketching. This novel approach of utilising deep neural networks may open the door for further development of Artificial Intelligence in assisting architects in conceptual design.
keywords visual cognition; design computation; machine learning; artificial intelligence
series eCAADe
email
last changed 2022/06/07 07:55

_id caadria2020_446
id caadria2020_446
authors Cho, Dahngyu, Kim, Jinsung, Shin, Eunseo, Choi, Jungsik and Lee, Jin-Kook
year 2020
title Recognizing Architectural Objects in Floor-plan Drawings Using Deep-learning Style-transfer Algorithms
doi https://doi.org/10.52842/conf.caadria.2020.2.717
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 717-725
summary This paper describes an approach to recognizing floor plans by sorting out the essential objects of the plan using deep-learning-based style-transfer algorithms. Previously, recognition of floor plans in the design and remodeling phases was labor-intensive, requiring expert-dependent and manual interpretation. For a computer to take in the imaged architectural plan information, the symbols in the plan must be understood. However, the computer has difficulty extracting information directly from preexisting plans because of their differing conditions. The goal is to convert preexisting plans into an integrated format that improves readability by transferring their style into a comprehensible one using Conditional Generative Adversarial Networks (cGAN). About 100 floor plans, previously compiled by the Ministry of Land, Infrastructure, and Transport of Korea, were used as the dataset. The proposed approach has two steps: (1) define the important objects contained in the floor plan that need to be extracted, and (2) use the defined objects as training input data for the cGAN style-transfer model. In this paper, wall, door, and window objects were selected as the targets for extraction. Preexisting floor plans are segmented into parts and altered into a consistent format, which then contributes to automatically extracting information for further use.
keywords Architectural objects; floor plan recognition; deep-learning; style-transfer
series CAADRIA
email
last changed 2022/06/07 07:56

_id ecaade2020_018
id ecaade2020_018
authors Sato, Gen, Ishizawa, Tsukasa, Iseda, Hajime and Kitahara, Hideo
year 2020
title Automatic Generation of the Schematic Mechanical System Drawing by Generative Adversarial Network
doi https://doi.org/10.52842/conf.ecaade.2020.1.403
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 403-410
summary In a front-loaded project workflow, mechanical, electrical, and plumbing (MEP) design requires precision from the beginning of the design phase. Leveraging insights from as-built drawings during the early design stage can benefit design enhancement. This study proposes a GAN (Generative Adversarial Network)-based system that populates the fire extinguishing (FE) system onto an architectural drawing image given as its input. An algorithm called Pix2Pix with an improved loss function enabled this generation. The algorithm was trained on a dataset of pairs of as-built building plans with and without FE equipment. A novel index termed the Piping Coverage Rate was also proposed to evaluate the obtained results. The system produces its output within 45 seconds, drastically faster than the conventional manual workflow, and enables rapid engineering studies informed by past as-built information, contributing further to data-driven decision making.
keywords Generative Adversarial Network; MEP; as-built drawing; automated design; data-driven design
series eCAADe
email
last changed 2022/06/07 07:57

_id caadria2020_234
id caadria2020_234
authors Zhang, Hang and Blasetti, Ezio
year 2020
title 3D Architectural Form Style Transfer through Machine Learning
doi https://doi.org/10.52842/conf.caadria.2020.2.659
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 659-668
summary In recent years, tremendous progress has been made in the field of machine learning, but it is still very hard to apply 3D machine learning directly to architectural design due to practical constraints on model resolution and training time. Building on the past several years' development of GANs (Generative Adversarial Networks), as well as the method of spatial sequence rules, the authors introduce 3D architectural form style transfer on two levels of scale (overall and detailed) through multiple machine learning algorithms trained on two types of 2D training data sets (serial stack and multi-view) at a relatively decent resolution. By exploring how styles interact with and influence the original content in neural networks at the 2D level, designers can manually control the expected 2D output, resulting in new-style 3D architectural models with a clear design approach.
keywords 3D; Form Finding; Style Transfer; Machine Learning; Architectural Design
series CAADRIA
email
last changed 2022/06/07 07:57
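
The "serial stack" training data mentioned above implies rebuilding a 3D form from an ordered sequence of generated 2D sections. The snippet below is an assumed sketch of that reassembly step only (the stacking, not the style transfer itself); the function name and threshold are illustrative.

```python
# Assumed sketch: reassemble a 3D voxel form from an ordered stack of 2D sections.
import numpy as np

def stack_sections(section_images, threshold=0.5):
    """section_images: list of (H, W) arrays in [0, 1], ordered bottom to top."""
    slices = [(img > threshold).astype(np.uint8) for img in section_images]
    return np.stack(slices, axis=0)   # (num_sections, H, W) voxel grid

# Dummy sections standing in for generated slices of a form.
sections = [np.random.rand(64, 64) for _ in range(32)]
voxels = stack_sections(sections)
print(voxels.shape, int(voxels.sum()))  # grid shape and occupied voxel count
```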

_id acadia20_238
id acadia20_238
authors Zhang, Hang
year 2020
title Text-to-Form
doi https://doi.org/10.52842/conf.acadia.2020.1.238
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 238-247.
summary Traditionally, architects express their thoughts on the design of 3D architectural forms via perspective renderings and standardized 2D drawings. However, as architectural design is always multidimensional and intricate, it is difficult to make others understand the design intention, concrete form, and even spatial layout through simple language descriptions. Benefiting from the fast development of machine learning, especially natural language processing and convolutional neural networks, this paper proposes a Linguistics-based Architectural Form Generative Model (LAFGM) that could be trained to make 3D architectural form predictions based simply on language input. Several related works exist that focus on learning text-to-image generation, while others have taken a further step by generating simple shapes from the descriptions. However, the text parsing and output of these works still remain either at the 2D stage or confined to a single geometry. On the basis of these works, this paper used both Stanford Scene Graph Parser (Sebastian et al. 2015) and graph convolutional networks (Kipf and Welling 2016) to compile the analytic semantic structure for the input texts, then generated the 3D architectural form expressed by the language descriptions, which is also aided by several optimization algorithms. To a certain extent, the training results approached the 3D form intended in the textual description, not only indicating the tremendous potential of LAFGM from linguistic input to 3D architectural form, but also innovating design expression and communication regarding 3D spatial information.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id ecaade2020_047
id ecaade2020_047
authors Brown, Lachlan, Yip, Michael, Gardner, Nicole, Haeusler, M. Hank, Khean, Nariddh, Zavoleas, Yannis and Ramos, Cristina
year 2020
title Drawing Recognition - Integrating Machine Learning Systems into Architectural Design Workflows
doi https://doi.org/10.52842/conf.ecaade.2020.2.289
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 289-298
summary Machine Learning (ML) has valuable applications that have yet to proliferate in the AEC industry, even though ML arguably offers significant new ways to produce and assist design. ML tools are too often out of the reach of designers, severely limiting opportunities to improve the methods by which designers design. To address this and to optimise designers' practices, this research aims to create an ML tool that can be integrated into architectural design workflows. It investigates how ML can be used to universally move BIM data across various design platforms through the development of a convolutional neural network (CNN) for recognizing and labelling rooms within floor-plan images of multi-residential apartments. This computational and conceptual shift will have meaningful impacts on future practices across all major aspects of our built environment, from design to construction to management.
keywords machine learning; convolutional neural networks; labelling and classification; design recognition
series eCAADe
email
last changed 2022/06/07 07:54

_id caadria2020_146
id caadria2020_146
authors Lertsithichai, Surapong
year 2020
title Fantastic Facades and How to Build Them
doi https://doi.org/10.52842/conf.caadria.2020.1.355
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 1, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 355-364
summary As part of an ongoing investigation in augmented architecture, the exploration of the architectural facade as a crucial element of architecture is a challenging design experiment. We believe that new architectural facades, when seamlessly integrated with augmented architecture and enhanced with multiple functionalities, interactivity and performative qualities, can extend a building's use beyond its typical function and limited lifespan. Augmented facades, or "Fantastic Facades", can be seen as an entity separate from the internal spaces of the building, but at the same time as an integral part of the building as a whole that connects users, spaces, functions and interactivity between inside and outside. An option design studio for 4th-year architecture students was offered to conduct this investigation over one semester. During the process of form generation, students experimented with various 2D and 3D techniques including biomimicry and generative design, biomechanics and animal movement patterns, leaf stomata patterns, porous bubble patterns, and origami fold patterns. Eventually, five facade designs were carried through to the final step of incorporating performative interactions and contextual programs into the facade requirements of an existing building or structure in Bangkok.
keywords Facade Design; Augmented Architecture; Form Generation; Surface System; Performative Interactions
series CAADRIA
email
last changed 2022/06/07 07:52

_id acadia20_218
id acadia20_218
authors Rossi, Gabriella; Nicholas, Paul
year 2020
title Encoded Images
doi https://doi.org/10.52842/conf.acadia.2020.1.218
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 218-227.
summary In this paper, we explore conditional generative adversarial networks (cGANs) as a new way of bridging the gap between design and analysis in contemporary architectural practice. By substituting analytical finite element analysis (FEA) modeling with cGAN predictions during the iterative design phase, we develop novel workflows that support iterative computational design and digital fabrication processes in new ways. This paper reports two case studies of increasing complexity that utilize cGANs for structural analysis. Central to both experiments is the representation of information within the data set the cGAN is trained on. We contribute a prototypical representational technique to encode multiple layers of geometric and performative description into false-color images, which we then use to train a Pix2Pix neural network architecture on entirely digitally generated data sets as a proxy for the performance of physically fabricated elements. The paper describes the representational workflow and reports the process and results of training and their integration into the design experiments. Last, we identify potentials and limits of this approach within the design process.
series ACADIA
type paper
email
last changed 2023/10/22 12:06
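
The false-color encoding described above packs several descriptive fields of a design into one training image. The sketch below illustrates that general idea with assumed channel assignments and value ranges; it is not the authors' encoding scheme.

```python
# Hypothetical false-color encoding: pack geometry and performance fields into
# separate image channels so a Pix2Pix-style network can learn from them.
import numpy as np

def encode_false_color(occupancy, thickness_mm, load_n):
    """Stack three scalar fields into an 8-bit RGB 'encoded image'."""
    def to_byte(field, lo, hi):
        return np.clip((field - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
    r = to_byte(occupancy, 0.0, 1.0)        # where material exists
    g = to_byte(thickness_mm, 0.0, 50.0)    # section thickness in mm (assumed range)
    b = to_byte(load_n, 0.0, 1000.0)        # applied load in newtons (assumed range)
    return np.dstack([r, g, b])             # (H, W, 3) training input

def decode_displacement(rgb, max_disp_mm=10.0):
    """Read a predicted displacement field back out of the red channel."""
    return rgb[..., 0].astype(np.float32) / 255.0 * max_disp_mm

# Dummy fields standing in for a rasterized design element.
occ = np.zeros((256, 256)); occ[64:192, 96:160] = 1.0
encoded = encode_false_color(occ, occ * 12.0, occ * 400.0)
print(encoded.shape, encoded.dtype)  # (256, 256, 3) uint8
```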

_id artificial_intellicence2019_117
id artificial_intellicence2019_117
authors Stanislas Chaillou
year 2020
title ArchiGAN: Artificial Intelligence x Architecture
doi https://doi.org/10.1007/978-981-15-6568-7_8
source Architectural Intelligence: Selected Papers from the 1st International Conference on Computational Design and Robotic Fabrication (CDRF 2019)
summary AI will soon massively empower architects in their day-to-day practice. This article provides a proof of concept. The framework used here offers a springboard for discussion, inviting architects to start engaging with AI and data scientists to consider architecture as a field of investigation. In this article, we summarize part of our thesis, submitted at Harvard in May 2019, in which Generative Adversarial Neural Networks (GANs) are leveraged to design floor plans and entire buildings.
series Architectural Intelligence
email
last changed 2022/09/29 07:28

_id ecaade2020_007
id ecaade2020_007
authors Yu, De
year 2020
title Reprogramming Urban Block by Machine Creativity - How to use neural networks as generative tools to design space
doi https://doi.org/10.52842/conf.ecaade.2020.1.249
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 249-258
summary The democratization of design requires balancing all sorts of factors in space design. However, the traditional way of organizing spatial relationships cannot deal with such complex design objectives. Can one find another form of creativity, other than the human brain, to design space? As Margaret Boden noted, "computers and creativity make interesting partners with respect to two different projects." This paper addresses whether machine creativity, in the form of neural networks, can be considered a powerful generative tool to reprogram the urban block in order to meet multiple users' needs. It tests this idea in a specific block model called Agri-tecture, a new architectural form combining farming with the urban built environment. Specifically, a machine empowered by a Generative Adversarial Network designed spatial layouts by learning from existing cases. Nevertheless, since the machine can hardly avoid errors, architects need to intervene and verify the machine's work. Thus, a synergy between human creativity and machine creativity is called for.
keywords machine creativity; Generative Adversarial Network; spatial layout; creativity combination; Agri-tecture
series eCAADe
email
last changed 2022/06/07 07:57

_id cdrf2019_134
id cdrf2019_134
authors Zhen Han, Wei Yan, and Gang Liu
year 2020
title A Performance-Based Urban Block Generative Design Using Deep Reinforcement Learning and Computer Vision
doi https://doi.org/10.1007/978-981-33-4400-6_13
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary In recent years, generative design methods have been widely used to guide urban and architectural design. Some performance-based generative design methods also combine simulation and optimization algorithms to obtain optimal solutions. In this paper, a performance-based automatic generative design method is proposed that incorporates deep reinforcement learning (DRL) and computer vision for urban planning, demonstrated through a case study generating an urban block based on its direct sunlight hours, solar heat gains, and the aesthetics of the layout. The method was tested on the redesign of an old industrial district located in Shenyang, Liaoning Province, China. A DRL agent - a deep deterministic policy gradient (DDPG) agent - was trained to guide the generation of the schemes. In each training episode, the agent places one building on the site at a time according to its observation. Rhino/Grasshopper and a computer vision algorithm, the Hough Transform, were used to evaluate performance and aesthetics, respectively. After about 150 h of training, the proposed method generated 2179 satisfactory design solutions. Episode 1936, which had the highest reward, was chosen as the final solution after manual adjustment. The test results show that the method is a potentially effective way of assisting urban design.
series cdrf
email
last changed 2022/09/29 07:51
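
The aesthetics term in the reward described above is evaluated with the Hough Transform. The snippet below is a hedged illustration of how a line-based alignment score could be computed with OpenCV; the thresholds and the scoring rule are assumptions, not the paper's method.

```python
# Assumed sketch: score how well building footprints align to straight axes,
# using the Hough Transform as a proxy for layout "aesthetics".
import numpy as np
import cv2

def alignment_score(footprint_mask):
    """Return the fraction of detected lines that are roughly grid-aligned."""
    edges = cv2.Canny(footprint_mask, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 60)
    if lines is None:
        return 0.0
    angles = lines[:, 0, 1]                       # theta of each detected line
    # Distance of each angle from the nearest multiple of 90 degrees.
    off_grid = np.abs(angles % (np.pi / 2))
    off_grid = np.minimum(off_grid, np.pi / 2 - off_grid)
    return float(np.mean(off_grid < np.radians(5)))

# Dummy raster: two axis-aligned rectangular footprints on an empty site.
site = np.zeros((256, 256), dtype=np.uint8)
cv2.rectangle(site, (40, 40), (110, 120), 255, -1)
cv2.rectangle(site, (140, 60), (220, 140), 255, -1)
print(alignment_score(site))  # closer to 1.0 means a more grid-aligned layout
```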

_id caadria2020_045
id caadria2020_045
authors Zheng, Hao and Ren, Yue
year 2020
title Machine Learning Neural Networks Construction and Analysis in Vectorized Design Drawings
doi https://doi.org/10.52842/conf.caadria.2020.2.707
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 707-716
summary Machine learning, a recently prevalent research domain in data prediction and analysis, has been widely used in a variety of fields. In the design field, and especially in architectural design, machine learning methods that learn and generate design data as pixelized images have been developed in previous research. However, processing pixelized image data causes precision loss and wasted computation, since geometric architectural design data is efficiently stored and presented as vectorized CAD files. Thus, in this article, the authors develop a specific machine learning neural network to learn and predict design drawings as vectorized data, speeding up the learning and prediction process while improving accuracy. First, two necessary geometric tests were successfully completed, illustrating the central concept of the neural network construction. Then, a design-rule prediction model was built to demonstrate methods for optimizing the neural network and data structure. Lastly, a generation model based on human-made design data was constructed, which can predict and generate bedroom furniture positions from the boundary data of the room, door, and window.
keywords Machine Learning; Artificial Intelligence; Generative Design; Geometric Design
series CAADRIA
email
last changed 2022/06/07 07:57

_id sigradi2020_60
id sigradi2020_60
authors Asmar, Karen El; Sareen, Harpreet
year 2020
title Machinic Interpolations: A GAN Pipeline for Integrating Lateral Thinking in Computational Tools of Architecture
source SIGraDi 2020 [Proceedings of the 24th Conference of the Iberoamerican Society of Digital Graphics - ISSN: 2318-6968] Online Conference 18 - 20 November 2020, pp. 60-66
summary In this paper, we discuss a new tool pipeline that aims to re-integrate lateral thinking strategies in computational tools of architecture. We present a 4-step AI-driven pipeline, based on Generative Adversarial Networks (GANs), that draws from the ability to access the latent space of a machine and use this space as a digital design environment. We demonstrate examples of navigating in this space using vector arithmetic and interpolations as a method to generate a series of images that are then translated to 3D voxel structures. Through a gallery of forms, we show how this series of techniques could result in unexpected spaces and outputs beyond what could be produced by human capability alone.
keywords Latent space, GANs, Lateral thinking, Computational tools, Artificial intelligence
series SIGraDi
email
last changed 2021/07/16 11:48
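
Latent-space navigation of the kind described above reduces to arithmetic on latent vectors fed to a trained generator. The following sketch uses a placeholder generator to show interpolation and vector arithmetic; it assumes nothing about the authors' actual GAN or pipeline.

```python
# Hedged illustration of latent-space interpolation and vector arithmetic.
import torch
import torch.nn as nn

latent_dim = 128
generator = nn.Sequential(            # placeholder for a trained GAN generator
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, 64 * 64), nn.Tanh())

z_a = torch.randn(latent_dim)         # latent code of design A
z_b = torch.randn(latent_dim)         # latent code of design B

# Interpolation: sample intermediate designs between A and B.
frames = []
for t in torch.linspace(0.0, 1.0, 8):
    z = (1 - t) * z_a + t * z_b
    frames.append(generator(z).reshape(64, 64))

# Vector arithmetic: transfer the "difference" between two references onto A.
z_c, z_d = torch.randn(latent_dim), torch.randn(latent_dim)
hybrid = generator(z_a + (z_c - z_d)).reshape(64, 64)
print(len(frames), hybrid.shape)      # 8 interpolated frames, one hybrid image
```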

_id caadria2020_054
id caadria2020_054
authors Shen, Jiaqi, Liu, Chuan, Ren, Yue and Zheng, Hao
year 2020
title Machine Learning Assisted Urban Filling
doi https://doi.org/10.52842/conf.caadria.2020.2.679
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 679-688
summary When drawing urban-scale plans, designers must define the position and shape of each building. This process usually takes much time in the early design stage, when the condition of a city has not yet been finalized, so designers spend a lot of time working back and forth on sketches for cities with different characteristics. Meanwhile, machine learning has been widely used as a decision-making tool in many fields. The Generative Adversarial Network (GAN) is a machine learning model framework specially designed to learn and generate image data. This research therefore applies GANs to creating urban design plans, helping designers automatically generate predicted building configurations for a given city condition. Through machine learning on image pairs, the results show the relationship between site conditions (roads, green lands, and rivers) and the configuration of buildings. This automatic design tool can help relieve the heavy workload of urban designers in the early design stage, quickly providing a preview of design solutions for urban design tasks. The analysis of machine learning models trained on data from different cities inspires urban designers with design strategies and features for distinct conditions.
keywords Artificial Intelligence; Urban Design; Generative Adversarial Networks; Machine Learning
series CAADRIA
email
last changed 2022/06/07 07:56

_id cdrf2019_169
id cdrf2019_169
authors Yubo Liu, Yihua Luo, Qiaoming Deng, and Xuanxing Zhou
year 2020
title Exploration of Campus Layout Based on Generative Adversarial Network: Discussing the Significance of Small Amount Sample Learning for Architecture
doi https://doi.org/10.1007/978-981-33-4400-6_16
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary This paper explores the idea and method of using deep learning with a small sample set to realize campus layout generation. From the perspective of the architect, we construct two small-sample campus layout data sets through artificial screening according to the preferences of specific architects. These data sets are used to train the Pix2Pix model to automatically generate a campus layout given the campus boundary and surrounding roads. Through analysis of the experimental results, this paper finds that, provided the collected samples are effectively screened, even deep learning with a small sample data set can achieve good results.
series cdrf
email
last changed 2022/09/29 07:51

_id ecaade2022_16
id ecaade2022_16
authors Bailey, Grayson, Kammler, Olaf, Weiser, Rene, Fuchkina, Ekaterina and Schneider, Sven
year 2022
title Performing Immersive Virtual Environment User Studies with VREVAL
doi https://doi.org/10.52842/conf.ecaade.2022.2.437
source Pak, B, Wurzer, G and Stouffs, R (eds.), Co-creating the Future: Inclusion in and through Design - Proceedings of the 40th Conference on Education and Research in Computer Aided Architectural Design in Europe (eCAADe 2022) - Volume 2, Ghent, 13-16 September 2022, pp. 437–446
summary The new construction that is projected to take place between 2020 and 2040 plays a critical role in embodied carbon emissions. The scope for changing material selection is inversely proportional to the budget as the project progresses. Given that early-stage design processes often do not include environmental performance metrics, there is an opportunity to investigate a toolset that lets early-stage design processes integrate this type of analysis into the preferred workflow of concept designers. The value here is that early-stage environmental feedback can inform the crucial decisions made at the beginning, giving a greater chance of a building with better environmental performance over its life cycle. This paper presents the development of a tool called LearnCarbon, a plugin for Rhino3d, used to educate architects and engineers in the early stages about the environmental impact of their design. It comprises two neural networks trained with the Embodied Carbon Benchmark Study by the Carbon Leadership Forum, which learn the relationship between building geometry, typology, and construction type and the Global Warming Potential (GWP) in tons of CO2 equivalent (tCO2e). The first, a regression model, predicts the GWP based on the massing model of a building, along with information about typology and location. The second, a classification model, predicts the construction type given a massing model and a target GWP. LearnCarbon can help improve a building's life-cycle impact significantly through early predictions of the structure's material and can be used as a tool for facilitating sustainability discussions between the architect and the client.
keywords Pre-Occupancy Evaluation, Immersive Virtual Environment, Wayfinding, User Centered Design, Architectural Study Design
series eCAADe
email
last changed 2024/04/22 07:10

_id caadria2020_066
id caadria2020_066
authors Gaudilliere, Nadja
year 2020
title Computational Tools in Architecture and Their Genesis: The Development of Agent-based Models in Spatial Design
doi https://doi.org/10.52842/conf.caadria.2020.2.497
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 497-506
summary Based on the assumption that socio-technical networks of computation in architecture exist and must be analyzed more deeply in order to understand the impact of algorithmic tools on the design process, this paper offers a foray into them, drawing on methodologies from science studies. The research explores how multi-agent systems (MAS) are representative both of the existence of these socio-technical networks and of how their development influences the tension between tacit and explicit knowledge at play in procedural design processes, as well as the strategies architectural designers develop to resolve this tension. A methodology for analyzing these phenomena is provided, along with the results of applying it to MAS, leading to a better understanding of their development and impact in CAAD over the past two decades. Tactics of resolution shaped by early MAS users enable, through a double appropriation, a skillful implementation in architectural practice. Furthermore, their approach partially circumvents the establishment of technical biases tied to this algorithmic typology, at the cost of a less widespread democratization of the algorithmic tools developed in relation to it.
keywords Computational tools; multi-agent system; architectural practice; tacit knowledge; digital heritage
series CAADRIA
email
last changed 2022/06/07 07:51

_id cdrf2019_159
id cdrf2019_159
authors Hang Zhang and Ye Huang
year 2020
title Machine Learning Aided 2D-3D Architectural Form Finding at High Resolution
doi https://doi.org/10.1007/978-981-33-4400-6_15
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary In the past few years, more architects and engineers have started thinking about applying machine learning algorithms in the architectural design field, for example for building facade generation or floor plan generation. However, due to the relatively slow development of 3D machine learning algorithms, exploring 3D architectural form through machine learning is still difficult for architects, and as a result most of these applications remain confined to 2D. Based on a state-of-the-art 2D image generation algorithm, as well as the method of spatial sequence rules, this article proposes a brand-new strategy of encoding, decoding, and form generation between 2D drawings and 3D models, which we name the 2D-3D Form Encoding Workflow. This method can provide innovative design possibilities by generating latent 3D forms between several different architectural styles. Benefiting from the advantages of 2D networks and an image amplification network nested outside the benchmark network, we have significantly expanded the resolution of training results compared with existing form-finding algorithms and related achievements in recent years.
series cdrf
email
last changed 2022/09/29 07:51
