id |
caadria2022_231 |
authors |
Kim, Frederick Chando and Huang, Jeffrey |
year |
2022 |
title |
Deep Architectural Archiving (DAA), Towards a Machine Understanding of Architectural Form |
source |
Jeroen van Ameijde, Nicole Gardner, Kyung Hoon Hyun, Dan Luo, Urvi Sheth (eds.), POST-CARBON - Proceedings of the 27th CAADRIA Conference, Sydney, 9-15 April 2022, pp. 727-736 |
doi |
https://doi.org/10.52842/conf.caadria.2022.1.727 |
summary |
With the 'digital turn', machines now have the intrinsic capacity to learn from big data in order to understand the intricacies of architectural form. This paper explores the research question: how can architectural form become machine computable? The research objective is to develop "Deep Architectural Archiving" (DAA), a new method devised to address this question. DAA combines four distinct steps: (1) data mining, (2) 3D point cloud extraction, (3) deep form learning, and (4) form mapping and clustering. The paper discusses the DAA method using an extensive dataset of architecture competitions in Switzerland (over 360 architectural projects) as a case study resource. Machines learn the particularities of forms using 'architectural' point clouds as an opportune machine-learnable format. The result of this procedure is a multidimensional, spatialized, and machine-enabled clustering of forms that allows for the visualization of comparative relationships among form-correlated datasets, exceeding what the human eye can generally perceive. Such work is necessary to create a dedicated digital archive for enhancing the formal knowledge of architecture and enabling a better understanding of innovation, both of which provide architects a basis for developing effective architectural form in a post-carbon world. |
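As a rough illustration (not taken from the paper) of how steps (2)-(4) might be wired together, the sketch below assumes a PointNet-style point-cloud encoder in PyTorch and scikit-learn's t-SNE for the 2D form mapping; the network shape, variable names, and placeholder data are assumptions made for this example only.

# Hypothetical sketch of DAA steps (3)-(4): embed architectural point clouds
# with a learned encoder, then project the latent codes with t-SNE so that
# formally similar projects cluster together. Illustrative only.
import torch
import torch.nn as nn
from sklearn.manifold import TSNE

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP followed by max pooling."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, n_points, 3) -> latent codes: (batch, latent_dim)
        per_point = self.mlp(points)
        return per_point.max(dim=1).values

# Placeholder stand-in for one sampled point cloud per competition entry.
clouds = torch.rand(360, 2048, 3)
encoder = PointCloudEncoder()   # in practice this would be trained, e.g. inside an autoencoder
with torch.no_grad():
    codes = encoder(clouds).numpy()

# Step (4): non-linear projection of the latent codes to 2D for form mapping.
embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(codes)
print(embedding.shape)          # (360, 2), one map coordinate per project

Each row of `embedding` gives a 2D coordinate for one project, which could then be plotted and clustered to compare forms across the competition dataset.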
keywords |
artificial intelligence, deep learning, architectural form, architectural competitions, architectural archive, 3D dataset, SDG 11 |
series |
CAADRIA |
email |
|
full text |
file.pdf (9,385,787 bytes) |
references |
|
Chupin, J.-P., Cucuzzella, C. & Helal, B. (2015). Architecture Competitions and the Production of Culture, Quality and Knowledge: An International Inquiry. Potential Architecture Books Inc.
|
Maaten, L. van der & Hinton, G. (2008). Visualizing Data using t-SNE. Journal of Machine Learning Research, 9(86), 2579-2605.
|
Moussavi, F. & Lopez, D. (2009). The Function of Form. Actar.
|
Newton, D. (2019). Generative Deep Learning in Architectural Design. Technology | Architecture + Design, 3(2), 176-189. https://doi.org/10.1080/24751448.2019.1640536
|
Rodríguez, J. de M., Villafane, M. E., Piškorec, L. & Caparrini, F. S. (2020). Generation of geometric interpolations of building types with deep variational autoencoders. Design Science, 6. https://doi.org/10.1017/dsj.2020.31
|
Steinfeld, K., Park, K. S., Menges, A. & Walker, S. (2019). Fresh Eyes: A Framework for the Application of Machine Learning to Generative Architectural Design, and a Report of Activities. Smartgeometry 2018. https://doi.org/10.1007/978-981-13-8410-3_3
|
Stoter, J. E., Arroyo Ohori, G. A. K., Dukai, B., Labetski, A., Kavisha, K., Vitalis, S. & Ledoux, H. (2020). State of the Art in 3D City Modelling: Six Challenges Facing 3D Data as a Platform. GIM International: The Worldwide Magazine for Geomatics, 34.
|
Wattenberg, M., Viégas, F. & Johnson, I. (2016). How to Use t-SNE Effectively. Distill, 1(10), e2. https://doi.org/10.23915/distill.00002
|
Zamorski, M., Zięba, M., Klukowski, P., Nowak, R., Kurach, K., Stokowiec, W. & Trzciński, T. (2020). Adversarial autoencoders for compact representations of 3D point clouds. Computer Vision and Image Understanding, 193, 102921. https://doi.org/10.1016/j.cviu.2020.102921
|
Zhang, H. & Huang, Y. (2021). Machine Learning Aided 2D-3D Architectural Form Finding at High Resolution. In P. F. Yuan, J. Yao, C. Yan, X. Wang & N. Leach (Eds.), Proceedings of the 2020 DigitalFUTURES (pp. 159-168). Springer. https://doi.org/10.1007/978-981-33-4400-6_15
|
last changed |
2022/07/22 07:34 |
|