CumInCAD is a cumulative index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD and CAAD Futures.


Hits 1 to 20 of 367

_id cdrf2019_169
id cdrf2019_169
authors Yubo Liu, Yihua Luo, Qiaoming Deng, and Xuanxing Zhou
year 2020
title Exploration of Campus Layout Based on Generative Adversarial Network Discussing the Significance of Small Amount Sample Learning for Architecture
doi https://doi.org/10.1007/978-981-33-4400-6_16
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary This paper explores the idea and method of using deep learning with small sample sets to generate campus layouts. From the architect's perspective, we construct two small campus-layout data sets through manual screening guided by the preferences of specific architects. These data sets are used to train a Pix2Pix model to automatically generate campus layouts conditioned on a given campus boundary and the surrounding roads. Analysis of the experimental results shows that, provided the collected samples are screened effectively, deep learning with even a small data set can achieve good results.
series cdrf
email
last changed 2022/09/29 07:51

_id cdrf2019_179
id cdrf2019_179
authors Yuzhe Pan, Jin Qian, and Yingdong Hu
year 2020
title A Preliminary Study on the Formation of the General Layouts on the Northern Neighborhood Community Based on GauGAN Diversity Output Generator
doi https://doi.org/10.1007/978-981-33-4400-6_17
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary Recently, the mainstream trend has gradually become the replacement of neighborhood-style communities with high-density residences. The original pleasant scale and enclosed residential spaces have been broken up, and traditional neighborhood relations are disappearing. This research uses machine learning to train a model that generates new general layouts for use in today's residential design. First, to obtain a better generation effect, the study extracts prior information about neighborhood communities in northern China, using roads, buildings, etc. as morphological representations. Compared with pix2pix and pix2pixHD, used in earlier work, GauGAN achieves clearer and more diversified output and fits irregular contours more realistically. An ANN model trained on 167 general layout samples of northern Chinese neighborhood communities from the 1950s to the 1970s can generate varied general layouts in different shapes and scales. The experiments show that GauGAN is more suitable for general layout generation than pix2pix (pix2pixHD), and that distributed training improves the clarity of the generated output and makes later vectorization more convenient.
series cdrf
email
last changed 2022/09/29 07:51

_id acadia20_238
id acadia20_238
authors Zhang, Hang
year 2020
title Text-to-Form
doi https://doi.org/10.52842/conf.acadia.2020.1.238
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 238-247.
summary Traditionally, architects express their thoughts on the design of 3D architectural forms via perspective renderings and standardized 2D drawings. However, as architectural design is always multidimensional and intricate, it is difficult to make others understand the design intention, concrete form, and even spatial layout through simple language descriptions. Benefiting from the fast development of machine learning, especially natural language processing and convolutional neural networks, this paper proposes a Linguistics-based Architectural Form Generative Model (LAFGM) that could be trained to make 3D architectural form predictions based simply on language input. Several related works exist that focus on learning text-to-image generation, while others have taken a further step by generating simple shapes from the descriptions. However, the text parsing and output of these works still remain either at the 2D stage or confined to a single geometry. On the basis of these works, this paper used both the Stanford Scene Graph Parser (Schuster et al. 2015) and graph convolutional networks (Kipf and Welling 2016) to compile the analytic semantic structure for the input texts, then generated the 3D architectural form expressed by the language descriptions, aided by several optimization algorithms. To a certain extent, the training results approached the 3D form intended in the textual description, not only indicating the tremendous potential of LAFGM from linguistic input to 3D architectural form, but also innovating design expression and communication regarding 3D spatial information.
series ACADIA
type paper
email
last changed 2023/10/22 12:06
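
Note on the graph convolution cited in the abstract above (Kipf and Welling 2016): the paper's LAFGM model is not reproduced here, but a minimal NumPy sketch of a single graph-convolution layer over a scene-graph-like structure illustrates the operation. The toy graph, node labels and feature sizes are illustrative assumptions, not the authors' data.

import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^-1/2 as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# Hypothetical toy scene graph parsed from text such as
# "a tower above a podium beside a courtyard": 3 objects, 2 relations.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.rand(3, 8)           # initial node embeddings (e.g. word vectors)
W = np.random.rand(8, 4)           # layer weights (random here, learned in practice)
print(gcn_layer(A, H, W).shape)    # (3, 4): updated per-object features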

_id cdrf2019_159
id cdrf2019_159
authors Hang Zhang and Ye Huang
year 2020
title Machine Learning Aided 2D-3D Architectural Form Finding at High Resolution
doi https://doi.org/10.1007/978-981-33-4400-6_15
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary In the past few years, more architects and engineers have started thinking about applications of machine learning algorithms in the architectural design field, such as the generation of building facades or floor plans. However, due to the relatively slow development of 3D machine learning algorithms, exploring 3D architectural form through machine learning remains difficult for architects, and most of these applications are confined to 2D. Based on a state-of-the-art 2D image generation algorithm and the method of spatial sequence rules, this article proposes a new strategy for encoding, decoding, and form generation between 2D drawings and 3D models, which we name the 2D-3D Form Encoding WorkFlow. The method can provide innovative design possibilities by generating latent 3D forms between several different architectural styles. Benefiting from the advantages of 2D networks and an image-amplification network nested outside the benchmark network, we significantly expand the resolution of the training results compared with existing form-finding algorithms and related achievements of recent years.
series cdrf
email
last changed 2022/09/29 07:51

_id ecaade2020_222
id ecaade2020_222
authors Ikeno, Kazunosuke, Fukuda, Tomohiro and Yabuki, Nobuyoshi
year 2020
title Automatic Generation of Horizontal Building Mask Images by Using a 3D Model with Aerial Photographs for Deep Learning
doi https://doi.org/10.52842/conf.ecaade.2020.2.271
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 271-278
summary Information extracted from aerial photographs is widely used in urban planning and design. An effective method for detecting buildings in aerial photographs is to use deep learning for understanding the current state of a target region. However, the building mask images used to train the deep learning model are manually generated in many cases. To solve this challenge, a method has been proposed for automatically generating mask images by using virtual reality 3D models for deep learning. Because normal virtual models do not have the realism of a photograph, it is difficult to obtain highly accurate detection results in the real world even if the images are used for deep learning training. Therefore, the objective of this research is to propose a method for automatically generating building mask images by using 3D models with textured aerial photographs for deep learning. The model trained on datasets generated by the proposed method could detect buildings in aerial photographs with an accuracy of IoU = 0.622. Work left for the future includes changing the size and type of mask images, training the model, and evaluating the accuracy of the trained model.
keywords Urban planning and design; Deep learning; Semantic segmentation; Mask image; Training data; Automatic design
series eCAADe
email
last changed 2022/06/07 07:50
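
For readers unfamiliar with the IoU figure quoted above (0.622), the following is a minimal sketch of how intersection-over-union could be computed for binary building masks; the array sizes and the handling of empty masks are illustrative assumptions, not the authors' evaluation code.

import numpy as np

def mask_iou(pred, truth):
    # Intersection-over-Union between two binary building masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement (assumption)
    return np.logical_and(pred, truth).sum() / union

# Toy example: a predicted footprint overlapping part of the ground truth.
pred = np.zeros((4, 4), dtype=np.uint8)
truth = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:4] = 1    # 6 predicted building pixels
truth[1:4, 1:3] = 1   # 6 ground-truth building pixels
print(mask_iou(pred, truth))  # 4 / 8 = 0.5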

_id cdrf2019_103
id cdrf2019_103
authors Runjia Tian
year 2020
title Suggestive Site Planning with Conditional GAN and Urban GIS Data
doi https://doi.org/10.1007/978-981-33-4400-6_10
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary In architecture, landscape architecture, and urban design, site planning refers to the organizational process of site layout. A fundamental step in site planning is the design of the building layout across the site. This process is hard to automate due to its multi-modal nature: it must satisfy multiple constraints such as street block shape, orientation, program, density, and plantation. The paper proposes a prototypical and extensive framework for generating building footprints as masterplan references for architects, landscape architects, and urban designers by learning from the existing built environment with artificial neural networks. A Pix2PixHD conditional generative adversarial network is used to learn the mapping from a site boundary geometry, represented as a pixelized image, to an image containing building footprints color-coded by program. A dataset containing the necessary information is collected from open-source GIS (Geographic Information System) portals of the city of Boston, wrangled with geospatial analysis libraries in Python, and trained with the TensorFlow framework. The results are visualized in Rhinoceros and Grasshopper for generating site plans interactively.
series cdrf
email
last changed 2022/09/29 07:51
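
The abstract above describes learning an image-to-image mapping from a site boundary to a colour-coded footprint plan with Pix2PixHD. The authors' TensorFlow pipeline is not reproduced here; the sketch below is only a heavily simplified conditional-GAN training step in PyTorch, with a toy encoder-decoder generator, a small patch discriminator, and random tensors standing in for the GIS-derived images.

import torch
import torch.nn as nn

def down(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2))

def up(c_in, c_out):
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU())

class Generator(nn.Module):
    # Maps a boundary image to a footprint image (toy encoder-decoder, no skip connections).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(down(3, 64), down(64, 128), down(128, 256),
                                 up(256, 128), up(128, 64),
                                 nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    # Patch-style critic scoring (boundary, footprint) image pairs.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(down(6, 64), down(64, 128), nn.Conv2d(128, 1, 4, padding=1))
    def forward(self, boundary, footprint):
        return self.net(torch.cat([boundary, footprint], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

boundary = torch.randn(4, 3, 256, 256)    # conditioning image: site boundary (dummy data)
footprint = torch.randn(4, 3, 256, 256)   # target image: colour-coded footprints (dummy data)

# Discriminator step: real pairs vs. generated pairs.
fake = G(boundary).detach()
d_real, d_fake = D(boundary, footprint), D(boundary, fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the critic and stay close to the target footprint (L1 term).
fake = G(boundary)
d_fake = D(boundary, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, footprint)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()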

_id cdrf2019_134
id cdrf2019_134
authors Zhen Han, Wei Yan, and Gang Liu
year 2020
title A Performance-Based Urban Block Generative Design Using Deep Reinforcement Learning and Computer Vision
doi https://doi.org/10.1007/978-981-33-4400-6_13
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary In recent years, generative design methods have been widely used to guide urban and architectural design. Some performance-based generative design methods also combine simulation and optimization algorithms to obtain optimal solutions. In this paper, a performance-based automatic generative design method is proposed that incorporates deep reinforcement learning (DRL) and computer vision for urban planning, demonstrated through a case study generating an urban block based on its direct sunlight hours, solar heat gains, and the aesthetics of the layout. The method was tested on the redesign of an old industrial district located in Shenyang, Liaoning Province, China. A DRL agent - a deep deterministic policy gradient (DDPG) agent - was trained to guide the generation of the schemes. In each training episode the agent places one building at a time on the site according to its observation. Rhino/Grasshopper and a computer vision algorithm, the Hough Transform, were used to evaluate the performance and the aesthetics, respectively. After about 150 h of training, the proposed method generated 2,179 satisfactory design solutions. Episode 1936, which had the highest reward, was chosen as the final solution after manual adjustment. The test results show that the method is a potentially effective way of assisting urban design.
series cdrf
email
last changed 2022/09/29 07:51
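
The Hough Transform mentioned above is used in the paper to score the aesthetics of the generated layout; the reward formulation itself is not reproduced here. The sketch below only illustrates the underlying idea with OpenCV, measuring how tightly the edges of a rasterised block layout cluster around a few dominant directions; the toy layout and the angle-spread measure are assumptions for illustration.

import cv2
import numpy as np

# Hypothetical top-view raster of a generated block layout (three building outlines).
layout = np.zeros((512, 512), dtype=np.uint8)
cv2.rectangle(layout, (60, 60), (180, 200), 255, 2)
cv2.rectangle(layout, (240, 80), (360, 220), 255, 2)
cv2.rectangle(layout, (100, 300), (300, 420), 255, 2)

edges = cv2.Canny(layout, 50, 150)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)

if lines is not None:
    angles = np.degrees(lines[:, 0, 1])  # theta of each detected line
    spread = np.std(angles % 90)         # small spread = edges share few dominant directions
    print(f"{len(lines)} lines detected, angle spread {spread:.1f} degrees")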

_id ascaad2022_102
id ascaad2022_102
authors Turki, Laila; Ben Saci, Abdelkader
year 2022
title Generative Design for a Sustainable Urban Morphology
source Hybrid Spaces of the Metaverse - Architecture in the Age of the Metaverse: Opportunities and Potentials [10th ASCAAD Conference Proceedings] Debbieh (Lebanon) [Virtual Conference] 12-13 October 2022, pp. 434-449
summary This work concerns the application of generative design to sustainable urban fabric. It is an iterative process built around an algorithm that generates solar envelopes satisfying solar and density constraints. In this paper we explore a meta-universe of human-machine interaction aimed at designing urban forms that offer solar access, in order to minimize heating energy expenditure and provide solar well-being. We study the impact of the solar strategy of building morphosis on energy exposure: the layout and shape of the constructions are determined from the shading cut-off time, a period of desirable solar access that we define as a balance between the solar irradiation received in winter and that received in summer. We rely on the concept of the solar envelope, defined in the 1970s by Knowles, and its many derivatives (Koubaa Turki et al., 2020). We propose a parametric model that generates solar envelopes at the scale of an urban block. Generative design makes it possible to create a digital model of the different density solutions by varying the solar access duration. The virtual environment created allows exploring urban morphologies that are resilient to urban densification and make better use of the context's resources. The seasonal energy balance, between overexposure in summer and access to the sun in winter, allows high energy and environmental efficiency of the buildings to be reached. We developed an algorithm in Dynamo for generating the solar envelope by shading exchange. The program detects the boundaries of the parcels imported from Revit, establishes the layout of the building, and generates the solar envelopes for each variation of the shading cut-off time. It also calculates the FAR and the FSI from the variation of the shading cut-off time for each parcel of the block. We compare the generated solutions according to the urban density coefficients and the solar access duration. Once the optimal solution has been determined, we export the results back into the Revit environment to complete the BIM modelling for the solar study. The article proposes a method for designing buildings and neighbourhoods in a virtual environment that acts upstream of the design process and can be extended to the other phases of the building life cycle: detailed design, construction, and use.
series ASCAAD
email
last changed 2024/02/16 13:38

_id caadria2020_306
id caadria2020_306
authors Akizuki, Yuta, Bernhard, Mathias, Kakooee, Reza, Kladeftira, Marirena and Dillenburger, Benjamin
year 2020
title Generative Modelling with Design Constraints - Reinforcement Learning for Object Generation
doi https://doi.org/10.52842/conf.caadria.2020.1.445
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 1, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 445-454
summary Generative design has been explored to produce unprecedented geometries; nevertheless, design constraints are in most cases treated as secondary in the computational process. In this paper, reinforcement learning is deployed to explore the potential of generative design that satisfies design objectives. The aim is to overcome three issues identified in the state of the art: topological inconsistency, limited variation in style, and unpredictability in design. The goal of this paper is to develop a machine learning framework that works as an intelligent design interpreter capable of codifying an input geometry to form a new geometry. Experiments demonstrate that the proposed method can generate a family of tables with unique aesthetics, satisfying topological consistency under the given constraints.
keywords generative design; computational design; data-driven design; reinforcement learning; machine learning
series CAADRIA
email
last changed 2022/06/07 07:54

_id acadia20_228
id acadia20_228
authors Alawadhi, Mohammad; Yan, Wei
year 2020
title BIM Hyperreality
doi https://doi.org/10.52842/conf.acadia.2020.1.228
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 228-236.
summary Deep learning is expected to offer new opportunities and a new paradigm for the field of architecture. One such opportunity is teaching neural networks to visually understand architectural elements from the built environment. However, the availability of large training datasets is one of the biggest limitations of neural networks. Also, the vast majority of training data for visual recognition tasks is annotated by humans. In order to resolve this bottleneck, we present a concept of a hybrid system—using both building information modeling (BIM) and hyperrealistic (photorealistic) rendering—to synthesize datasets for training a neural network for building object recognition in photos. For generating our training dataset, BIMrAI, we used an existing BIM model and a corresponding photorealistically rendered model of the same building. We created methods for using renderings to train a deep learning model, trained a generative adversarial network (GAN) model using these methods, and tested the output model on real-world photos. For the specific case study presented in this paper, our results show that a neural network trained with synthetic data (i.e., photorealistic renderings and BIM-based semantic labels) can be used to identify building objects from photos without using photos in the training data. Future work can enhance the presented methods using available BIM models and renderings for more generalized mapping and description of photographed built environments.
series ACADIA
type paper
email
last changed 2023/10/22 12:06

_id ecaade2020_133
id ecaade2020_133
authors Andrade Zandavali, Barbara, Paul Anderson, Joshua and Patel, Chetan
year 2020
title Embodied Learning through Fabrication Aware Design
doi https://doi.org/10.52842/conf.ecaade.2020.2.145
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 145-154
summary The contemporary culture of geometry-driven design stands as a consequence of an institutionalised segregation between the fields of architecture, structure and construction. In turn, digital design methods that are both material and fabrication aware from the outset create space for uncertainty and the potential for embodied learning. Following this principle, this paper summarises the outcomes of a workshop developed to investigate the contribution of fabrication-aware design methods to the production of a masonry block using both analogue and digital manufacturing. Students developed and investigated a design through assembly techniques and configurations oriented around manual hot-wire cutting, robotic tooling and three-dimensional printing. Outcomes were manufactured and compared with regard to work precision, production time, material efficiency, cost and scalability. The analysis indicated that the most accurate results came from the robotic tooling system, which was simultaneously the most time-efficient, while the three-dimensional printer generated the least material waste owing to the nature of additive production. Fabrication-aware design and comparative analysis enabled students to make more informed decisions, while the use of rapid prototyping established a relationship between digitalisation and materiality in which uncertainty and reflection could be fostered, reinforcing that fabrication-aware design methods can unify the field and provide designers with guidance over multi-lateral aspects of a project.
keywords Fabrication-Aware Design; Rapid Prototyping; Embodiment
series eCAADe
email
last changed 2022/06/07 07:54

_id ecaade2020_499
id ecaade2020_499
authors Ashour, Ziad and Yan, Wei
year 2020
title BIM-Powered Augmented Reality for Advancing Human-Building Interaction
doi https://doi.org/10.52842/conf.ecaade.2020.1.169
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 1, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 169-178
summary The shift from computer-aided design (CAD) to building information modeling (BIM) has made the adoption of augmented reality (AR) promising in the field of architecture, engineering and construction. Despite the potential of AR in this field, the industry and professionals have still not fully adopted it due to registration and tracking limitations and visual occlusions in dynamic environments. We propose our first prototype (BIMxAR), which utilizes existing buildings' semantically rich BIM models and contextually aligns geometrical and non-geometrical information with the physical buildings. The proposed prototype aims to solve registration and tracking issues in dynamic environments by utilizing tracking and motion sensors already available in many mobile phones and tablets. The experiment results indicate that the system can support BIM and physical building registration outdoors and in parts of indoor environments, but cannot maintain accurate alignment indoors when relying only on a device's motion sensors. Therefore, additional computer vision and AI (deep learning) functions need to be integrated into the system to enhance AR model registration in the future.
keywords Augmented Reality; BIM; BIM-enabled AR; GPS; Human-Building Interactions; Education
series eCAADe
email
last changed 2022/06/07 07:54

_id ascaad2021_142
id ascaad2021_142
authors Bakir, Ramy; Sara Alsaadani, Sherif Abdelmohsen
year 2021
title Student Experiences of Online Design Education Post COVID-19: A Mixed Methods Study
source Abdelmohsen, S, El-Khouly, T, Mallasi, Z and Bennadji, A (eds.), Architecture in the Age of Disruptive Technologies: Transformations and Challenges [9th ASCAAD Conference Proceedings ISBN 978-1-907349-20-1] Cairo (Egypt) [Virtual Conference] 2-4 March 2021, pp. 142-155
summary This paper presents findings of a survey conducted to assess students' experiences with the online instruction stage of their architectural education during the lockdown period caused by the COVID-19 pandemic between March and June 2020. The study was conducted in the two departments of architecture at both Cairo branches of the Arab Academy for Science, Technology & Maritime Transport (AASTMT), Egypt, with special focus on courses involving a CAAD component. The objective of this exploratory study was to understand students' learning experiences during the online period and to investigate challenges facing architectural education. A mixed methods design was used, in which a questionnaire-based survey gathered qualitative and quantitative data from a sample of students from both departments. Findings focus on the qualitative component to describe students' experiences, with quantitative data used for triangulation. Results underline students' positive learning experiences and the challenges they faced, and also reveal insights regarding digital tool preferences. The findings are significant not only for understanding an important event that forced architectural education in Egypt to move online, but may also serve as a stepping-stone towards the future of design education in light of newly introduced disruptive online learning technologies made necessary by lockdowns worldwide.
series ASCAAD
email
last changed 2021/08/09 13:13

_id ecaade2020_180
id ecaade2020_180
authors Bolshakova, Veronika, Besançon, Franck, Guerriero, Annie and Halin, Gilles
year 2020
title Use of a Digital Collaboration Tool for Project Review - A pedagogical experiment with multidisciplinary teams
doi https://doi.org/10.52842/conf.ecaade.2020.2.651
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 651-660
summary This paper emphasizes feedback from a pedagogical experiment in the context of teaching collaboration and design to multidisciplinary teams. A digital collaboration tool, a multi-touch table and collaboration software, was used as a support for discussion and decision-making for weekly project review meetings. The experiment participants' feedback on the use and usability of the digital collaboration tool highlights the potential for the use of synchronous collaboration technology and project-based learning for higher-level education. It also highlights the need for a transition towards implementation of digital tools at project review sessions.
keywords Synchronous collaboration; Pedagogical experiment; Project-based learning; CSCW; NUI; BIM
series eCAADe
email
last changed 2022/06/07 07:54

_id ecaade2020_047
id ecaade2020_047
authors Brown, Lachlan, Yip, Michael, Gardner, Nicole, Haeusler, M. Hank, Khean, Nariddh, Zavoleas, Yannis and Ramos, Cristina
year 2020
title Drawing Recognition - Integrating Machine Learning Systems into Architectural Design Workflows
doi https://doi.org/10.52842/conf.ecaade.2020.2.289
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 289-298
summary Machine Learning (ML) has valuable applications that have yet to proliferate in the AEC industry, even though ML arguably offers significant new ways to produce and assist design. However, ML tools are too often out of the reach of designers, severely limiting opportunities to improve the methods by which designers design. To address this, and to optimise designers' practices, the research aims to create an ML tool that can be integrated into architectural design workflows. It investigates how ML can be used to move BIM data across various design platforms through the development of a convolutional neural network (CNN) for the recognition and labelling of rooms within floor plan images of multi-residential apartments. The effects of this computational shift in thinking will have meaningful impacts on future practices spanning all major aspects of our built environment, from design to construction to management.
keywords machine learning; convolutional neural networks; labelling and classification; design recognition
series eCAADe
email
last changed 2022/06/07 07:54

_id sigradi2020_668
id sigradi2020_668
authors Cenci, Laline Elisangela; Pires, Júlio César Pinheiro; Vieira, Stéphane Soares
year 2020
title Measuring the experience of algorithmic thought digital analogue design in architecture teaching
source SIGraDi 2020 [Proceedings of the 24th Conference of the Iberoamerican Society of Digital Graphics - ISSN: 2318-6968] Online Conference 18 - 20 November 2020, pp. 668-675
summary Due to constant technological developments, society's priorities and cultural perspectives have changed, requiring a redefinition of experiences in education. In the field of architecture teaching, a transition can be observed from CAD (Computer-Aided Design) to design systems in other digital media, such as parametric design. This article demonstrates two analog-digital experiences in an architecture school. The methodology consisted of dividing the activities into three stages: analog, logical, and digital. The results are described through quantitative and qualitative data acquired during the experiences. The data allowed reflection on the strategies adopted, lessons learned, and future challenges.
keywords Teaching-learning, Parametric Design, Design Script, Dynamo Studio
series SIGraDi
email
last changed 2021/07/16 11:52

_id ecaade2020_017
id ecaade2020_017
authors Chan, Yick Hin Edwin and Spaeth, A. Benjamin
year 2020
title Architectural Visualisation with Conditional Generative Adversarial Networks (cGAN) - What machines read in architectural sketches
doi https://doi.org/10.52842/conf.ecaade.2020.2.299
source Werner, L and Koering, D (eds.), Anthropologic: Architecture and Fabrication in the cognitive age - Proceedings of the 38th eCAADe Conference - Volume 2, TU Berlin, Berlin, Germany, 16-18 September 2020, pp. 299-308
summary As a form of visual reasoning, sketching is a human cognitive activity instrumental to architectural design. In the process of sketching, abstract sketches invoke new mental imageries and subsequently lead to new sketches. This iterative transformation is repeated until the final design emerges. Artificial Intelligence and Deep Neural Networks have been developed to imitate human cognitive processes. Amongst these networks, the Conditional Generative Adversarial Network (cGAN) has been developed for image-to-image translation and is able to generate realistic images from abstract sketches. To mimic the cyclic process of abstracting and imaging in architectural concept design, a Cyclic-cGAN that consists of two cGANs is proposed in this paper. The first cGAN transforms sketches to images, while the second from images to sketches. The training of the Cyclic-cGAN is presented and its performance illustrated by using two sketches from well-known architects, and two from architecture students. The results show that the proposed Cyclic-cGAN can emulate architects' mode of visual reasoning through sketching. This novel approach of utilising deep neural networks may open the door for further development of Artificial Intelligence in assisting architects in conceptual design.
keywords visual cognition; design computation; machine learning; artificial intelligence
series eCAADe
email
last changed 2022/06/07 07:55

_id caadria2020_446
id caadria2020_446
authors Cho, Dahngyu, Kim, Jinsung, Shin, Eunseo, Choi, Jungsik and Lee, Jin-Kook
year 2020
title Recognizing Architectural Objects in Floor-plan Drawings Using Deep-learning Style-transfer Algorithms
doi https://doi.org/10.52842/conf.caadria.2020.2.717
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 717-725
summary This paper describes an approach to recognizing floor plans by sorting out the essential objects of the plan using deep-learning-based style transfer algorithms. Previously, the recognition of floor plans in the design and remodeling phases was labor-intensive, requiring expert-dependent, manual interpretation. For a computer to take in the imaged architectural plan information, the symbols in the plan must be understood, but the computer has difficulty extracting information directly from preexisting plans because of their differing conditions. The goal is to convert preexisting plans into an integrated format that improves readability by transferring their style into a comprehensible form using Conditional Generative Adversarial Networks (cGAN). About 100 floor plans from a dataset previously constructed by the Ministry of Land, Infrastructure, and Transport of Korea were used. The proposed approach has two steps: (1) define the important objects contained in the floor plan that need to be extracted, and (2) use the defined objects as training input data for the cGAN style-transfer model. In this paper, wall, door, and window objects were selected as targets for extraction. The preexisting floor plans are segmented into parts and altered into a consistent format, which then contributes to automatically extracting information for further use.
keywords Architectural objects; floor plan recognition; deep-learning; style-transfer
series CAADRIA
email
last changed 2022/06/07 07:56

_id cdrf2019_17
id cdrf2019_17
authors Chuan Liu, Jiaqi Shen, Yue Ren, and Hao Zheng
year 2020
title Pipes of AI – Machine Learning Assisted 3D Modeling Design
doi https://doi.org/10.1007/978-981-33-4400-6_2
source Proceedings of the 2020 DigitalFUTURES The 2nd International Conference on Computational Design and Robotic Fabrication (CDRF 2020)
summary Style transfer is a design technique based on artificial intelligence and machine learning: an innovative way to generate new images with the intervention of style images. The output image carries the characteristics of the style image while maintaining the content of the input image. However, the technique has so far been employed to generate 2D images, which limits its range of practical use. The goal of this project is therefore to use style transfer as a toolset for architectural design and to explore its possibilities for 3D modeling design. To implement style transfer in the research, floor plans at different heights are selected within a given design boundary and set as the content images, while a framework of a truss structure is set as the style image. Transferred images are obtained by processing the style transfer neural network, and the geometric images are then translated into floor plans for a new structural design. After selecting the tilt angle and the degree of density, vertical components connecting two adjacent layers are generated to serve as the pillars of the structure. At this stage, 2D style-transferred images are successfully transformed into 3D geometries that can be applied in architectural design processes. Generally speaking, style transfer is an intelligent design tool that provides architects with a variety of idea-generating choices. It has the potential to inspire architects at an early design stage in not only 2D but also 3D formats.
series cdrf
email
last changed 2022/09/29 07:51
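
The abstract above applies 2D neural style transfer to floor plans with a truss framework as the style image. The authors' pipeline is not reproduced here; as an illustration, a compact Gatys-style transfer loop with a pretrained VGG-19 from torchvision is sketched below, with random tensors standing in for the plan (content) and truss (style) images, and layer choices and loss weights that are assumptions rather than the paper's settings.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = {1: "style", 6: "style", 11: "style", 20: "style", 22: "content"}  # ReLU taps

def features(img):
    feats, x = {"style": [], "content": []}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            feats[LAYERS[i]].append(x)
        if i == 22:          # no deeper layers are needed
            break
    return feats

def gram(t):
    b, c, h, w = t.shape
    f = t.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Stand-ins for the real inputs: a rasterised floor plan (content) and a truss image (style).
content = torch.rand(1, 3, 256, 256, device=device)
style = torch.rand(1, 3, 256, 256, device=device)
target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)
c_feats, s_feats = features(content), features(style)

for step in range(200):
    t_feats = features(target)
    content_loss = F.mse_loss(t_feats["content"][0], c_feats["content"][0])
    style_loss = sum(F.mse_loss(gram(a), gram(b))
                     for a, b in zip(t_feats["style"], s_feats["style"]))
    loss = content_loss + 1e4 * style_loss
    opt.zero_grad(); loss.backward(); opt.step()
# `target` now carries the truss-like texture while retaining the plan's layout.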

_id ijac202018403
id ijac202018403
authors Dagmar Reinhardt, Matthias Hank Haeusler, Kerry London, Lian Loke, Yingbin Feng, Eduardo De Oliveira Barata, Charlotte Firth, Kate Dunn, Nariddh Khean, Alessandra Fabbri, Dylan Wozniak-O’Connor and Rin Masuda
year 2020
title CoBuilt 4.0: Investigating the potential of collaborative robotics for subject matter experts
source International Journal of Architectural Computing vol. 18 - no. 4, 353–370
summary Human-robot interactions can offer alternatives and new pathways for construction industries, industrial growth and skilled labour, particularly in the context of Industry 4.0. This research investigates the potential of collaborative robots (CoBots) for the construction industry and for subject matter experts: by surveying industry requirements and assessments of CoBot acceptance; by investigating processes and sequences of work protocols for standard architecture robots; and by exploring motion capture and tracking systems for a collaborative framework between human and robot co-workers. The research investigates CoBots as a labour and collaborative resource for construction processes that require precision, adaptability and variability. Thus, this paper reports on a joint industry, government and academic research investigation in an Australian construction context. Section 1 introduces background to architectural robotics in the context of the construction industries. Section 2 reports on current industry applications and survey results from industry and trade feedback on the adoption of robots, specifically regarding task complexity, perceived safety, and risk awareness. Section 3, building on the research conducted in Section 2, introduces a pilot study of carpentry task sequences with capture of computable actions. Section 4 provides a discussion of results and preliminary findings. Section 5 concludes with an outlook on how the capture of computable actions provides the foundation for future research on capturing motion and machine learning.
keywords Industry 4.0, collaborative robotics, on-site robotic fabrication, industry research, machine learning
series journal
email
last changed 2021/06/03 23:29
