CumInCAD is a Cumulative Index of publications about Computer Aided Architectural Design
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures


Hits 1 to 20 of 653

_id sigradi2020_60
id sigradi2020_60
authors Asmar, Karen El; Sareen, Harpreet
year 2020
title Machinic Interpolations: A GAN Pipeline for Integrating Lateral Thinking in Computational Tools of Architecture
source SIGraDi 2020 [Proceedings of the 24th Conference of the Iberoamerican Society of Digital Graphics - ISSN: 2318-6968] Online Conference 18 - 20 November 2020, pp. 60-66
summary In this paper, we discuss a new tool pipeline that aims to re-integrate lateral thinking strategies in computational tools of architecture. We present a 4-step AI-driven pipeline, based on Generative Adversarial Networks (GANs), that draws from the ability to access the latent space of a machine and use this space as a digital design environment. We demonstrate examples of navigating in this space using vector arithmetic and interpolations as a method to generate a series of images that are then translated to 3D voxel structures. Through a gallery of forms, we show how this series of techniques could result in unexpected spaces and outputs beyond what could be produced by human capability alone.
keywords Latent space, GANs, Lateral thinking, Computational tools, Artificial intelligence
series SIGraDi
email
last changed 2021/07/16 11:48
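The latent-space navigation this abstract describes, vector arithmetic and interpolation between latent codes, can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' pipeline: the function names, the 512-dimensional codes, and the step counts are assumptions, and a trained GAN decoder would be needed to turn each interpolated code into an image.

```python
import numpy as np

def lerp(z_a, z_b, steps):
    """Linear interpolation between two latent codes, endpoints included."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * z_a + t * z_b

def slerp(z_a, z_b, steps):
    """Spherical interpolation, often preferred for Gaussian latent spaces.
    Assumes z_a and z_b are not (anti)parallel."""
    omega = np.arccos(np.clip(
        np.dot(z_a / np.linalg.norm(z_a), z_b / np.linalg.norm(z_b)), -1.0, 1.0))
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (np.sin((1.0 - t) * omega) * z_a + np.sin(t * omega) * z_b) / np.sin(omega)

# Two random latent codes; each row of `path` would be decoded into one image
# of the interpolation series.
rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)
path = lerp(z_a, z_b, 8)
```

Vector arithmetic in the same space amounts to adding a semantic direction to a code, e.g. `z_a + direction`, before decoding.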

_id caadria2020_146
id caadria2020_146
authors Lertsithichai, Surapong
year 2020
title Fantastic Facades and How to Build Them
doi https://doi.org/10.52842/conf.caadria.2020.1.355
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 1, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 355-364
summary As part of an ongoing investigation in augmented architecture, the exploration of the architectural facade as a crucial element of architecture is a challenging design experiment. We believe that new architectural facades, when seamlessly integrated with augmented architecture and enhanced with multiple functionalities, interactivity, and performative qualities, can extend a building's use beyond its typical function and limited lifespan. Augmented facades, or "Fantastic Facades," can be seen as entities separate from the internal spaces of the building, but at the same time as an integral part of the building as a whole that connects users, spaces, functions, and interactivity between inside and outside. An option design studio for 4th-year architecture students was offered to conduct this investigation over one semester. During the process of form generation, students experimented with various 2D and 3D techniques, including biomimicry and generative design, biomechanics and animal movement patterns, leaf stomata patterns, porous bubble patterns, and origami fold patterns. Eventually, five facade designs were carried forward to the final step of incorporating performative interactions and contextual programs into the facade requirements of an existing building or structure in Bangkok.
keywords Facade Design; Augmented Architecture; Form Generation; Surface System; Performative Interactions
series CAADRIA
email
last changed 2022/06/07 07:52

_id acadia20_688
id acadia20_688
authors del Campo, Matias; Carlson, Alexandra; Manninger, Sandra
year 2020
title 3D Graph Convolutional Neural Networks in Architecture Design
doi https://doi.org/10.52842/conf.acadia.2020.1.688
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 688-696.
summary The nature of the architectural design process can be described along the lines of the following representational devices: the plan and the model. Plans can be considered one of the oldest methods of representing spatial and aesthetic information in an abstract 2D space. However, when used in the design of 3D architectural solutions, these representations are inherently limited by the loss of rich information that occurs when compressing the three-dimensional world into a two-dimensional representation. During the first Digital Turn (Carpo 2013), the sheer amount and availability of models increased dramatically, as it became viable to create vast numbers of model variations to explore project alternatives across a much larger range of physical and creative dimensions. 3D models show how the design object appears in real life and can include a wider array of object information that is more easily understandable by nonexperts, as exemplified by techniques such as building information modeling and parametric modeling. The ground condition of this paper is therefore that the inherent nature of architectural design and sensibility lies in the negotiation of 3D space, coupled with the organization of voids and spatial components into spatial sequences based on programmatic relationships, resulting in an assemblage (DeLanda 2016). These conditions constitute objects representing a material culture (the built environment) embedded in a symbolic and aesthetic culture (DeLanda 2016) that is created by the designer and captures their sensibilities.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id caadria2020_446
id caadria2020_446
authors Cho, Dahngyu, Kim, Jinsung, Shin, Eunseo, Choi, Jungsik and Lee, Jin-Kook
year 2020
title Recognizing Architectural Objects in Floor-plan Drawings Using Deep-learning Style-transfer Algorithms
doi https://doi.org/10.52842/conf.caadria.2020.2.717
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 717-725
summary This paper describes an approach to recognizing floor plans by sorting out the essential objects of the plan using deep-learning-based style-transfer algorithms. Previously, the recognition of floor plans in the design and remodeling phase was labor-intensive, requiring expert-dependent, manual interpretation. For a computer to take in the imaged architectural plan information, the symbols in the plan must be understood. However, the computer has difficulty extracting information directly from preexisting plans due to their differing conditions. The goal is to convert preexisting plans to an integrated format that improves readability by transferring their style into a comprehensible form using Conditional Generative Adversarial Networks (cGAN). About 100 floor plans, previously compiled by the Ministry of Land, Infrastructure, and Transport of Korea, were used as the dataset. The proposed approach has two steps: (1) define the important objects contained in the floor plan that need to be extracted, and (2) use the defined objects as training input data for the cGAN style-transfer model. In this paper, wall, door, and window objects were selected as the targets for extraction. The preexisting floor plans are segmented into parts and altered into a consistent format, which then contributes to automatically extracting information for further utilization.
keywords Architectural objects; floor plan recognition; deep-learning; style-transfer
series CAADRIA
email
last changed 2022/06/07 07:56
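Pix2pix-style cGAN image translation, as used in several of the papers indexed here, trains against an adversarial loss plus an L1 reconstruction term. A minimal numpy sketch of that loss arithmetic (not any paper's implementation; function names and toy values are invented for illustration):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy on discriminator probabilities."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

def cgan_losses(d_real, d_fake, g_out, target_img, l1_weight=100.0):
    """Pix2pix-style objective: the discriminator separates real from generated
    pairs; the generator tries to fool it while staying close to the target in L1."""
    d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
    g_loss = (bce(d_fake, np.ones_like(d_fake))
              + l1_weight * np.mean(np.abs(g_out - target_img)))
    return d_loss, g_loss

# Toy values: a discriminator fairly confident about both samples, and a
# generator output that matches the target exactly (zero L1 term).
d_loss, g_loss = cgan_losses(np.array([0.9]), np.array([0.2]),
                             g_out=np.zeros((4, 4)), target_img=np.zeros((4, 4)))
```

In a real pipeline the probabilities come from a discriminator network over (input, output) image pairs, and both losses drive gradient updates.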

_id acadia20_272
id acadia20_272
authors del Campo, Matias; Carlson, Alexandra; Manninger, Sandra
year 2020
title How Machines Learn to Plan
doi https://doi.org/10.52842/conf.acadia.2020.1.272
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 272-281.
summary This paper strives to interrogate the abilities of machine vision techniques based on a family of deep neural networks, called generative adversarial neural networks (GANs), to devise alternative planning solutions. The basis for these processes is a large database of existing planning solutions. For the experimental setup of this paper, these plans were divided into two separate learning classes: Modern and Baroque. The proposed algorithmic technique leverages the large amount of structural and symbolic information that is inherent to the design of planning solutions throughout history to generate novel unseen plans. In this area of inquiry, aspects of culture such as creativity, agency, and authorship are discussed, as neural networks can conceive solutions currently alien to designers. These can range from alien morphologies to advanced programmatic solutions. This paper is primarily interested in interrogating the second existing but uncharted territory.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id ijac202018103
id ijac202018103
authors Kimm, Geoff
year 2020
title Actual and experiential shadow origin tagging: A 2.5D algorithm for efficient precinct-scale modelling
source International Journal of Architectural Computing vol. 18 - no. 1, 41-52
summary This article describes a novel algorithm for built environment 2.5D digital model shadow generation that allows identities of shadowing sources to be efficiently precalculated. For any point on the ground, all sources of shadowing can be identified and are classified as actual or experiential obstructions to sunlight. The article justifies a 2.5D raster approach in the context of modelling of architectural and urban environments that has in recent times shifted from 2D to 3D, and describes in detail the algorithm which builds on precedents for 2.5D raster calculation of shadows. The algorithm is efficient and is applicable at even precinct scale in low-end computing environments. The simplicity of this new technique, and its independence of GPU coding, facilitates its easy use in research, prototyping and civic engagement contexts. Two research software applications are presented with technical details to demonstrate the algorithm’s use for participatory built environment simulation and generative modelling applications. The algorithm and its shadow origin tagging can be applied to many digital workflows in architectural and urban design, including those using big data, artificial intelligence or community participative processes.
keywords 2.5D raster, actual and experiential shadow origins, generative techniques, participatory built environment simulation, reactive scripting for design
series journal
email
last changed 2020/11/02 13:34
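The shadow-origin tagging this abstract describes, marching from each ground cell toward the sun over a 2.5D height raster and recording the identity of the first obstruction, can be illustrated with a naive numpy version. This is a hypothetical re-implementation from the abstract's description, not the article's algorithm; the function name and parameters are assumptions, and a production version would vectorize the march.

```python
import numpy as np

def shadow_sources(height, ids, sun_dir, sun_alt, max_steps=64):
    """For each cell of a 2.5D height raster, return the id of the first
    obstruction blocking the sun, or 0 if the cell is lit.
    sun_dir: (dx, dy) unit step toward the sun in grid units;
    sun_alt: solar altitude in radians."""
    rows, cols = height.shape
    out = np.zeros_like(ids)
    dz = np.tan(sun_alt)                      # sun ray rise per horizontal step
    for r in range(rows):
        for c in range(cols):
            for s in range(1, max_steps):
                xi = int(round(c + sun_dir[0] * s))
                yi = int(round(r + sun_dir[1] * s))
                if not (0 <= yi < rows and 0 <= xi < cols):
                    break                     # ray left the raster: cell is lit
                if height[yi, xi] > height[r, c] + dz * s:
                    out[r, c] = ids[yi, xi]   # tag the shadow's origin
                    break
    return out

# A flat 5x5 site with one tall obstruction (id 7) at its eastern edge,
# sun low in the east: cells west of the tower fall in its shadow.
height = np.zeros((5, 5))
height[2, 4] = 10.0
ids = np.zeros((5, 5), dtype=int)
ids[2, 4] = 7
tags = shadow_sources(height, ids, sun_dir=(1, 0), sun_alt=np.pi / 6)
```

Because each shadowed cell stores which object cast the shadow, the tags can drive the actual-versus-experiential classification the article discusses.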

_id cdrf2019_103
id cdrf2019_103
authors Runjia Tian
year 2020
title Suggestive Site Planning with Conditional GAN and Urban GIS Data
doi https://doi.org/10.1007/978-981-33-4400-6_10
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary In architecture, landscape architecture, and urban design, site planning refers to the organizational process of site layout. A fundamental step in site planning is the design of the building layout across the site. This process is hard to automate because of its multi-modal nature: it involves multiple constraints such as street block shape, orientation, program, density, and plantation. The paper proposes a prototypical, extensible framework for generating building footprints as masterplan references for architects, landscape architects, and urban designers by learning from the existing built environment with artificial neural networks. A Pix2PixHD Conditional Generative Adversarial Network is used to learn the mapping from a site boundary geometry, represented as a pixelized image, to an image containing building footprints color-coded by program. A dataset containing the necessary information was collected from open-source GIS (Geographic Information System) portals of the city of Boston, wrangled with geospatial analysis libraries in Python, and trained with the TensorFlow framework. The result is visualized in Rhinoceros and Grasshopper for generating site plans interactively.
series cdrf
email
last changed 2022/09/29 07:51

_id ecaade2020_018
id ecaade2020_018
authors Sato, Gen, Ishizawa, Tsukasa, Iseda, Hajime and Kitahara, Hideo
year 2020
title Automatic Generation of the Schematic Mechanical System Drawing by Generative Adversarial Network
doi https://doi.org/10.52842/conf.ecaade.2020.1.403
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 403-410
summary In a front-loaded project workflow, mechanical, electrical, and plumbing (MEP) design requires precision from the beginning of the design phase. Leveraging insights from as-built drawings during the early design stage can benefit design development. This study proposes a GAN (Generative Adversarial Network)-based system that populates the fire extinguishing (FE) system onto an architectural drawing image given as input. An algorithm called Pix2Pix, with an improved loss function, enables this generation. The algorithm was trained on a dataset of paired as-built building plans with and without FE equipment. A novel index termed the Piping Coverage Rate is also proposed to evaluate the obtained results. The system produces its output within 45 seconds, drastically faster than the conventional manual workflow, and enables prompt engineering studies that learn from past as-built information, contributing to data-driven decision making.
keywords Generative Adversarial Network; MEP; as-built drawing; automated design; data-driven design
series eCAADe
email
last changed 2022/06/07 07:57

_id cdrf2022_209
id cdrf2022_209
authors Yecheng Zhang, Qimin Zhang, Yuxuan Zhao, Yunjie Deng, Feiyang Liu, Hao Zheng
year 2022
title Artificial Intelligence Prediction of Urban Spatial Risk Factors from an Epidemic Perspective
doi https://doi.org/10.1007/978-981-19-8637-6_18
source Proceedings of the 2022 DigitalFUTURES The 4th International Conference on Computational Design and Robotic Fabrication (CDRF 2022)
summary From an epidemiological perspective, previous research methods for COVID-19 are generally based on classical statistical analysis; as a result, spatial information is often not used effectively. This paper uses image-based neural networks to explore the relationship between urban spatial risk, the distribution of infected populations, and the design of urban facilities. We take the spatio-temporal data of people infected with COVID-19 in Wuhan before February 28, 2020 as the research object. We use kriging spatial interpolation and kernel density estimation to establish the epidemic heat distribution on fine grid units. We further examine the distribution of nine main spatial risk factors, including agencies, hospitals, park squares, sports fields, banks, hotels, etc., which are tested for significant positive correlation with the heat distribution of the epidemic. The weights of the spatial risk factors are used to train Generative Adversarial Network models, which predict the heat distribution of the outbreak in a given area. According to the trained model, optimizing the relevant environmental design in urban areas to control risk factors can effectively help prevent and manage the spread of the epidemic. The input image of the machine learning model is a city plan converted from public infrastructure data, and the output image is a map of urban spatial risk factors in the given area.
series cdrf
email
last changed 2024/05/29 14:02

_id caadria2020_234
id caadria2020_234
authors Zhang, Hang and Blasetti, Ezio
year 2020
title 3D Architectural Form Style Transfer through Machine Learning
doi https://doi.org/10.52842/conf.caadria.2020.2.659
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 659-668
summary In recent years, tremendous progress has been made in the field of machine learning, but it is still very hard to apply 3D machine learning directly to architectural design due to practical constraints on model resolution and training time. Building on the past several years' development of GANs (Generative Adversarial Networks), and on the method of spatial sequence rules, the authors introduce 3D architectural form style transfer at two levels of scale (overall and detailed) through multiple machine learning algorithms trained on two types of 2D training datasets (serial stack and multi-view) at a relatively decent resolution. By exploring how styles interact with and influence the original content in neural networks at the 2D level, designers can manually control the expected output of 2D images, resulting in new-style 3D architectural models with a clear design approach.
keywords 3D; Form Finding; Style Transfer; Machine Learning; Architectural Design
series CAADRIA
email
last changed 2022/06/07 07:57

_id acadia20_238
id acadia20_238
authors Zhang, Hang
year 2020
title Text-to-Form
doi https://doi.org/10.52842/conf.acadia.2020.1.238
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 238-247.
summary Traditionally, architects express their thoughts on the design of 3D architectural forms via perspective renderings and standardized 2D drawings. However, as architectural design is always multidimensional and intricate, it is difficult to make others understand the design intention, concrete form, and even spatial layout through simple language descriptions. Benefiting from the fast development of machine learning, especially natural language processing and convolutional neural networks, this paper proposes a Linguistics-based Architectural Form Generative Model (LAFGM) that could be trained to make 3D architectural form predictions based simply on language input. Several related works exist that focus on learning text-to-image generation, while others have taken a further step by generating simple shapes from the descriptions. However, the text parsing and output of these works still remain either at the 2D stage or confined to a single geometry. On the basis of these works, this paper used both Stanford Scene Graph Parser (Sebastian et al. 2015) and graph convolutional networks (Kipf and Welling 2016) to compile the analytic semantic structure for the input texts, then generated the 3D architectural form expressed by the language descriptions, which is also aided by several optimization algorithms. To a certain extent, the training results approached the 3D form intended in the textual description, not only indicating the tremendous potential of LAFGM from linguistic input to 3D architectural form, but also innovating design expression and communication regarding 3D spatial information.
series ACADIA
type paper
email
last changed 2023/10/22 12:06
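The graph convolutional networks this abstract cites (Kipf and Welling 2016) propagate node features through a symmetrically normalized adjacency matrix with self-loops. A minimal one-layer numpy sketch, purely illustrative and far simpler than the paper's LAFGM:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step in the Kipf & Welling formulation:
    add self-loops, normalize the adjacency symmetrically by degree,
    then apply a linear map followed by ReLU."""
    a_hat = adj + np.eye(adj.shape[0])              # self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt          # D^-1/2 (A+I) D^-1/2
    return np.maximum(norm @ feats @ weight, 0.0)   # ReLU activation

# A triangle graph with one-hot node features and an identity weight matrix:
# every node ends up averaging its neighborhood.
adj = np.array([[0., 1., 1.],
                [1., 0., 1.],
                [1., 1., 0.]])
out = gcn_layer(adj, np.eye(3), np.eye(3))
```

In a scene-graph-to-form setting, `feats` would carry parsed semantic attributes per node and `weight` would be learned; stacking such layers mixes information across the graph.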

_id caadria2020_015
id caadria2020_015
authors Zheng, Hao, An, Keyao, Wei, Jingxuan and Ren, Yue
year 2020
title Apartment Floor Plans Generation via Generative Adversarial Networks
doi https://doi.org/10.52842/conf.caadria.2020.2.599
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 599-608
summary When drawing architectural plans, designers must define every detail so that the images contain enough information to support the design. This process usually takes considerable time in the early design stage, when the design boundary has not yet been finalized, so designers spend a lot of time working back and forth, drawing sketches for different site conditions. Meanwhile, machine learning has been widely used as a decision-making tool in many fields. Generative Adversarial Networks (GANs) are a machine learning model framework specially designed to learn and generate image data. This research therefore applies GANs to creating architectural plan drawings, helping designers automatically generate the predicted details of apartment floor plans within given boundaries. Through machine learning on image pairs that show the boundary and the details of plan drawings, the learning program builds a model of the connections between the two images, and the evaluation program then generates architectural drawings from input boundary images. This automatic design tool can help relieve the heavy load on architects in the early design stage by quickly providing a preview of design solutions for architectural plans.
keywords Machine Learning; Artificial Intelligence; Architectural Design; Interior Design
series CAADRIA
email
last changed 2022/06/07 07:57

_id caadria2020_384
id caadria2020_384
authors Patt, Trevor Ryan
year 2020
title Spectral Clustering for Urban Networks
doi https://doi.org/10.52842/conf.caadria.2020.2.091
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 91-100
summary As planetary urbanization accelerates, so does the significance of developing better methods for analyzing and making sense of complex urban networks. The complexity and heterogeneity of contemporary urban space pose a challenge to conventional descriptive tools. In recent years, the emergence of urban network analysis and the widespread availability of GIS data have brought network analysis methods into the discussion of urban form. This paper describes a method for computationally identifying clusters within urban and other spatial networks using spectral analysis techniques. While spectral clustering has been employed in some limited urban studies on large spatialized datasets (particularly in identifying land use from orthoimages), it has not yet been thoroughly studied in relation to the space of the urban network itself. We present the construction of a weighted graph Laplacian matrix representation of the network and the processing of the network by eigendecomposition, with subsequent clustering of eigenvalues in 4D space. In this implementation, the algorithm computes a cross-comparison for different numbers of clusters and recommends the best option based on either the 'elbow method' or 'eigen gap' criteria. The results of the clustering operation are immediately visualized on the original map and can also be validated numerically according to a selection of cluster metrics; cohesion and separation values are calculated simultaneously for all nodes. The paper also expands on the 'silhouette' value, a composite measure that seems especially suited to urban network clustering. This research aims to inform the design process, so the visualization of results within the active 3D model is essential.
Within the paper, we illustrate the process as applied to formal grids as well as historic, vernacular urban fabric; first on small extracted urban fragments, and then over entire city networks to indicate scalability.
keywords Urban morphology; network analysis; spectral clustering; computation
series CAADRIA
email
last changed 2022/06/07 07:59
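The spectral machinery this abstract describes, a weighted graph Laplacian followed by eigendecomposition and clustering, reduces in the simplest two-cluster case to splitting nodes by the sign of the Fiedler vector. A toy numpy sketch (not the paper's implementation; the graph and function are invented for illustration):

```python
import numpy as np

def fiedler_clusters(adj):
    """Bisect a weighted graph by the sign of the Fiedler vector, i.e. the
    eigenvector of the second-smallest eigenvalue of the unnormalized
    Laplacian L = D - W. This is the simplest spectral-clustering case."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    vals, vecs = np.linalg.eigh(lap)    # eigh returns ascending eigenvalues
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# Two triangles joined by one weak edge: spectral bisection should split
# the graph along the weak bridge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                 # weak bridge between the triangles
labels = fiedler_clusters(W)
```

For k clusters one would instead embed nodes in the first k eigenvectors and run k-means on the rows, choosing k by the eigen-gap or elbow criteria the paper mentions.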

_id acadia20_228
id acadia20_228
authors Alawadhi, Mohammad; Yan, Wei
year 2020
title BIM Hyperreality
doi https://doi.org/10.52842/conf.acadia.2020.1.228
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 228-236.
summary Deep learning is expected to offer new opportunities and a new paradigm for the field of architecture. One such opportunity is teaching neural networks to visually understand architectural elements from the built environment. However, the availability of large training datasets is one of the biggest limitations of neural networks. Also, the vast majority of training data for visual recognition tasks is annotated by humans. In order to resolve this bottleneck, we present a concept of a hybrid system—using both building information modeling (BIM) and hyperrealistic (photorealistic) rendering—to synthesize datasets for training a neural network for building object recognition in photos. For generating our training dataset, BIMrAI, we used an existing BIM model and a corresponding photorealistically rendered model of the same building. We created methods for using renderings to train a deep learning model, trained a generative adversarial network (GAN) model using these methods, and tested the output model on real-world photos. For the specific case study presented in this paper, our results show that a neural network trained with synthetic data (i.e., photorealistic renderings and BIM-based semantic labels) can be used to identify building objects from photos without using photos in the training data. Future work can enhance the presented methods using available BIM models and renderings for more generalized mapping and description of photographed built environments.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id ecaade2020_017
id ecaade2020_017
authors Chan, Yick Hin Edwin and Spaeth, A. Benjamin
year 2020
title Architectural Visualisation with Conditional Generative Adversarial Networks (cGAN) - What machines read in architectural sketches
doi https://doi.org/10.52842/conf.ecaade.2020.2.299
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 299-308
summary As a form of visual reasoning, sketching is a human cognitive activity instrumental to architectural design. In the process of sketching, abstract sketches invoke new mental imageries and subsequently lead to new sketches. This iterative transformation is repeated until the final design emerges. Artificial Intelligence and deep neural networks have been developed to imitate human cognitive processes. Among these networks, the Conditional Generative Adversarial Network (cGAN) has been developed for image-to-image translation and is able to generate realistic images from abstract sketches. To mimic the cyclic process of abstracting and imaging in architectural concept design, this paper proposes a Cyclic-cGAN consisting of two cGANs: the first transforms sketches to images, the second transforms images to sketches. The training of the Cyclic-cGAN is presented, and its performance is illustrated using two sketches from well-known architects and two from architecture students. The results show that the proposed Cyclic-cGAN can emulate architects' mode of visual reasoning through sketching. This novel approach of utilising deep neural networks may open the door for further development of Artificial Intelligence in assisting architects in conceptual design.
keywords visual cognition; design computation; machine learning; artificial intelligence
series eCAADe
email
last changed 2022/06/07 07:55

_id caadria2020_118
id caadria2020_118
authors Chow, Ka Lok and van Ameijde, Jeroen
year 2020
title Generative Housing Communities - Design of Participatory Spaces in Public Housing Using Network Configurational Theories
doi https://doi.org/10.52842/conf.caadria.2020.2.283
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 283-292
summary This research-by-design project explores how public housing estates can accommodate social diversity and the appropriation of shared spaces, using qualitative and quantitative analysis of circulation networks. A case study housing estate in Hong Kong was analysed through field observations of movements and activities and as a site for the speculative re-design of shared spaces. Generative design processes were developed based on several parameters, including shortest paths, visibility integration and connectivity integration (Hillier & Hanson, 1984). Additional tools were developed to combine these techniques with optimisation of sunlight access, maximisation of views for residential towers and the provision of permeability of ground level building volumes. The project demonstrates how flexibility of use and social engagement can constitute a platform for self-organisation, similar to Jane Jacobs' notion of vibrant streets leading to active and progressive communities. It shows how computational design and configurational theories can promote a bottom-up approach for generating new types of residential environments that support participatory and diverse communities, rather than a conventional top-down approach that is perceived to embody mechanisms of social regimentation.
keywords Urban Planning and Design; Network Configuration; Community Space and Social Interaction; Hong Kong Public Housing
series CAADRIA
email
last changed 2022/06/07 07:56

_id acadia20_668
id acadia20_668
authors Pasquero, Claudia; Poletto, Marco
year 2020
title Deep Green
doi https://doi.org/10.52842/conf.acadia.2020.1.668
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 668-677.
summary Ubiquitous computing enables us to decipher the biosphere's anthropogenic dimension, what we call the Urbansphere (Pasquero and Poletto 2020). This machinic perspective unveils a new postanthropocentric reality in which the impact of artificial systems on the natural biosphere is indeed global, but their agency is no longer entirely human. This paper explores a protocol to design the Urbansphere, or what we may call the urbanization of the nonhuman, titled DeepGreen. With the development of DeepGreen, we are testing the potential of bringing the interdependence of digital and biological intelligence to the core of architectural and urban design research. This is achieved by developing a new biocomputational design workflow that enables the pairing of what is algorithmically drawn with what is biologically grown (Pasquero and Poletto 2016). In more detail, the paper illustrates how generative adversarial network (GAN) algorithms (Radford, Metz, and Soumith 2015) can be trained to "behave" like Physarum polycephalum, a unicellular organism endowed with surprising computational abilities and self-organizing behaviors that have made it popular among scientists and engineers alike (Adamatzky 2010) (Fig. 1). The trained GAN_Physarum is deployed as an urban design technique to test the potential of polycephalum intelligence in solving problems of urban remetabolization and in computing scenarios of urban morphogenesis within a nonhuman conceptual framework.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

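The abstract above describes training a GAN to "behave" like Physarum polycephalum. The self-organizing behavior such a network would be trained to imitate can be illustrated with a minimal agent-based Physarum simulation of the kind popularized by Adamatzky and others; this is a hedged sketch, not the paper's pipeline, and the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def physarum_step(pos, heading, trail, *, sensor_dist=3.0, sensor_angle=0.4,
                  turn=0.3, speed=1.0, deposit=1.0, decay=0.95):
    """One update of a simple Physarum-style agent simulation (illustrative).

    pos: (N, 2) float agent positions; heading: (N,) angles in radians;
    trail: (H, W) chemoattractant grid on a toroidal domain.
    """
    h, w = trail.shape

    def sense(offset):
        # Sample the trail grid at a sensor offset ahead of each agent.
        ang = heading + offset
        sx = (pos[:, 0] + sensor_dist * np.cos(ang)).astype(int) % w
        sy = (pos[:, 1] + sensor_dist * np.sin(ang)).astype(int) % h
        return trail[sy, sx]

    left, front, right = sense(sensor_angle), sense(0.0), sense(-sensor_angle)
    # Steer toward the strongest trail concentration.
    heading = np.where(left > front, heading + turn, heading)
    heading = np.where(right > front, heading - turn, heading)
    # Move forward (wrapping at the edges) and deposit trail.
    pos[:, 0] = (pos[:, 0] + speed * np.cos(heading)) % w
    pos[:, 1] = (pos[:, 1] + speed * np.sin(heading)) % h
    trail[pos[:, 1].astype(int), pos[:, 0].astype(int)] += deposit
    return pos, heading, trail * decay  # uniform decay stands in for diffusion
```

Iterating this step produces the branching trail networks the organism is known for; a GAN trained on snapshots of such trail maps is one plausible reading of the "algorithmically drawn / biologically grown" pairing the abstract names.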
_id cdrf2019_68
id cdrf2019_68
authors Cutellic, Pierre
year 2020
title Growing Shapes with a Generalised Model from Neural Correlates of Visual Discrimination
doi https://doi.org/10.1007/978-981-33-4400-6_7
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary This paper focuses on the application of visual Event-Related Potentials (ERP) in better generalisations for design and architectural modelling. It makes use of previously built techniques and trained models on EEG signals of a single individual and observes the robustness of advanced classification models to initiate the development of presentation and classification techniques for enriched visual environments by developing an iterative and generative design process of growing shapes. The pursued interest is to observe whether visual ERP, as correlates of visual discrimination, can hold in structurally similar but semantically different experiments and support the discrimination of meaningful design solutions. In Bayesian terms, we coin this endeavour a Design Belief and elaborate a method to explore and exploit such features decoded from human visual cognition.
series cdrf
email
last changed 2022/09/29 07:51

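The classifiers the abstract builds on discriminate single-trial EEG epochs (e.g., responses to "meaningful" vs. other stimuli). A deliberately simple stand-in for such a model is a nearest-class-mean classifier in which each class mean acts as an ERP template; this is an illustrative sketch under that assumption, not the paper's trained model.

```python
import numpy as np

class ErpNearestMean:
    """Nearest-class-mean classifier for single-trial ERP epochs.

    Epochs are (channels, samples) arrays, flattened into vectors;
    the per-class mean vectors act as ERP templates.
    """
    def fit(self, epochs, labels):
        X = np.asarray([np.asarray(e).ravel() for e in epochs])
        y = np.asarray(labels)
        self.classes_ = np.unique(y)
        # One template per class: the mean of its training epochs.
        self.templates_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, epochs):
        X = np.asarray([np.asarray(e).ravel() for e in epochs])
        # Assign each epoch to the closest template (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - self.templates_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In an iterative generative loop like the one the abstract describes, such a classifier's output would gate which "grown" shapes survive to the next generation.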
_id caadria2020_259
id caadria2020_259
authors Rhee, Jinmo, Veloso, Pedro and Krishnamurti, Ramesh
year 2020
title Integrating building footprint prediction and building massing - an experiment in Pittsburgh
doi https://doi.org/10.52842/conf.caadria.2020.2.669
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 669-678
summary We present a novel method for generating building geometry using deep learning techniques based on contextual geometry in an urban context and explore its potential to support building massing. For contextual geometry, we opted to investigate the building footprint, a main interface between urban and architectural forms. For training, we collected GIS data of building footprints and parcel geometries from Pittsburgh and created a large Diagrammatic Image Dataset (DID). We employed a modified version of a VGG neural network to model the relationship between (c) a diagrammatic image of a building parcel and context without the footprint, and (q) a quadrilateral representing the original footprint. The option for simple geometrical output enables direct integration with custom design workflows because it obviates image processing and increases training speed. After training the neural network with a curated dataset, we explore a generative workflow for building massing that integrates contextual and programmatic data. As the trained model can suggest a contextual boundary for a new site, we used Massigner (Rhee and Chung 2019) to recommend massing alternatives based on the subtraction of voids inside the contextual boundary that satisfy design constraints and programmatic requirements. This new method suggests that learning-based methods can be an alternative to rule-based design methods for grasping the complex relationships between design elements.
keywords Deep Learning; Prediction; Building Footprint; Massing; Generative Design
series CAADRIA
email
last changed 2022/06/07 07:56

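The abstract's point about "simple geometrical output" is that a quadrilateral prediction (eight numbers) plugs straight into a massing workflow with no image post-processing. A minimal sketch of that hand-off, assuming the network emits normalized corner coordinates in [0, 1] (the exact normalization is an assumption, not stated in the abstract):

```python
import numpy as np

def quad_to_world(q_norm, parcel_bounds):
    """Map a normalized quadrilateral prediction to site coordinates.

    q_norm: 8 values in [0, 1] - (x, y) pairs for four footprint corners.
    parcel_bounds: (xmin, ymin, xmax, ymax) of the parcel in world units.
    Returns a (4, 2) array of corner coordinates, directly usable as a
    footprint boundary in a downstream massing tool.
    """
    q = np.asarray(q_norm, dtype=float).reshape(4, 2)
    xmin, ymin, xmax, ymax = parcel_bounds
    scale = np.array([xmax - xmin, ymax - ymin])
    # Rescale from the unit square to the parcel's bounding box.
    return q * scale + np.array([xmin, ymin])
```

Because the output is already a polygon, tools like Massigner can subtract voids inside it without any raster-to-vector step.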
_id acadia20_218
id acadia20_218
authors Rossi, Gabriella; Nicholas, Paul
year 2020
title Encoded Images
doi https://doi.org/10.52842/conf.acadia.2020.1.218
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 218-227.
summary In this paper, we explore conditional generative adversarial networks (cGANs) as a new way of bridging the gap between design and analysis in contemporary architectural practice. By substituting analytical finite element analysis (FEA) modeling with cGAN predictions during the iterative design phase, we develop novel workflows that support iterative computational design and digital fabrication processes in new ways. This paper reports two case studies of increasing complexity that utilize cGANs for structural analysis. Central to both experiments is the representation of information within the data set the cGAN is trained on. We contribute a prototypical representational technique to encode multiple layers of geometric and performative description into false color images, which we then use to train a Pix2Pix neural network architecture on entirely digitally generated data sets as a proxy for the performance of physically fabricated elements. The paper describes the representational workflow and reports the process and results of training and their integration into the design experiments. Lastly, we identify the potentials and limits of this approach within the design process.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

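The false-color encoding the abstract describes packs several scalar layers into the channels of one image so a Pix2Pix-style network can learn from a single image pair. A hedged sketch of that idea, with the specific fields (e.g., displacement, stress, thickness) and per-channel ranges chosen here purely for illustration:

```python
import numpy as np

def encode_false_color(fields, ranges):
    """Pack up to three scalar fields into one RGB 'false color' image.

    fields: list of equally shaped 2D arrays (e.g., displacement, stress);
    ranges: list of (lo, hi) pairs used to normalize each field to [0, 1].
    Unused channels are zero-padded.
    """
    channels = []
    for field, (lo, hi) in zip(fields, ranges):
        norm = np.clip((np.asarray(field, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
        channels.append((norm * 255).astype(np.uint8))
    while len(channels) < 3:
        channels.append(np.zeros_like(channels[0]))
    return np.stack(channels, axis=-1)

def decode_channel(img, channel, lo, hi):
    """Recover a scalar field from one channel of a (predicted) image."""
    return img[..., channel].astype(float) / 255.0 * (hi - lo) + lo
```

The round trip loses only 8-bit quantization precision, which is what makes image-to-image translation usable as a proxy for the underlying FEA fields.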