CumInCAD is a Cumulative Index of publications in Computer Aided Architectural Design,
supported by the sibling associations ACADIA, CAADRIA, eCAADe, SIGraDi, ASCAAD, and CAAD Futures.


Hits 1 to 20 of 618

_id acadia19_412
id acadia19_412
authors Del Campo, Matias; Manninger, Sandra; Carlson, Alexandra
year 2019
title Imaginary Plans
source ACADIA 19:UBIQUITY AND AUTONOMY [Proceedings of the 39th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-59179-7] (The University of Texas at Austin School of Architecture, Austin, Texas 21-26 October, 2019) pp. 412-418
doi https://doi.org/10.52842/conf.acadia.2019.412
summary Artificial Neural Networks (NNs) have become ubiquitous across disciplines due to their high performance in modeling the real world and executing complex tasks in the wild. This paper presents a computational design approach that uses the internal representations of deep vision neural networks to generate and transfer stylistic form edits to both 2D floor plans and building sections. The main aim of this paper is to demonstrate and interrogate a design technique based on deep learning. The discussion includes aspects of machine learning, 2D-to-2D style transfer, and generative adversarial processes. The paper examines the meaning of agency in a world where decision-making processes are defined by human/machine collaborations (Figure 1), and their relationship to aspects of a Posthuman design ecology. Taking cues from the language used by experts in AI, such as Hallucinations, Dreaming, Style Transfer, and Vision, the paper strives to clarify the position and role of Artificial Intelligence in the discipline of Architecture.
series ACADIA
type normal paper
email
last changed 2022/06/07 07:55

_id caadria2019_109
id caadria2019_109
authors Kim, Jinsung, Song, Jaeyeol and Lee, Jin-Kook
year 2019
title Approach to Auto-recognition of Design Elements for the Intelligent Management of Interior Pictures
source M. Haeusler, M. A. Schnabel, T. Fukuda (eds.), Intelligent & Informed - Proceedings of the 24th CAADRIA Conference - Volume 2, Victoria University of Wellington, Wellington, New Zealand, 15-18 April 2019, pp. 785-794
doi https://doi.org/10.52842/conf.caadria.2019.2.785
summary This paper explores automated recognition of elements in interior design pictures for an intelligent design reference management system. Precedent design references play a significant role in helping architects, designers, and even clients in the general architectural design process. Pictures are one form of representation that can precisely convey a design idea and its underlying knowledge. Given the velocity, variety, and volume of reference picture data as reference platforms grow, handling the data in the current manual way is hard and time-consuming. To solve this problem, this paper describes a deep learning-based approach to detecting design elements in interior pictures and recognizing their design features, using Faster R-CNN and CNN algorithms. The targets are residential furniture such as tables and seating. Through the proposed application, input pictures can automatically be tagged as follows: seating1(type: sofa, seating capacity: two-seaters, design style: classic)
keywords Interior design picture; Design element; Design feature; Automated recognition; Design Reference management
series CAADRIA
email
last changed 2022/06/07 07:52
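The automatic tagging output described in the summary above (e.g. seating1(type: sofa, ...)) can be illustrated as a small post-processing step. This is a minimal sketch assuming a hypothetical detector output of (class, attributes) pairs; it is not the authors' implementation.

```python
def format_tags(detections):
    """Turn hypothetical (element_class, attributes) detection pairs into
    tag strings like 'seating1(type: sofa, ...)'."""
    counters = {}
    tags = []
    for element_class, attrs in detections:
        # number repeated instances of the same class: seating1, seating2, ...
        counters[element_class] = counters.get(element_class, 0) + 1
        attr_str = ", ".join(f"{k}: {v}" for k, v in attrs.items())
        tags.append(f"{element_class}{counters[element_class]}({attr_str})")
    return tags
```

For example, a single sofa detection yields the tag string quoted in the abstract.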

_id acadia19_298
id acadia19_298
authors Leach, Neil
year 2019
title Do Robots Dream of Digital Sleep?
source ACADIA 19:UBIQUITY AND AUTONOMY [Proceedings of the 39th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-59179-7] (The University of Texas at Austin School of Architecture, Austin, Texas 21-26 October, 2019) pp. 298-309
doi https://doi.org/10.52842/conf.acadia.2019.298
summary AI is playing an increasingly important role in everyday life. But can AI actually design? This paper takes its point of departure from Philip K. Dick’s novel, Do Androids Dream of Electric Sheep?, and refers to Google’s DeepDream software and other AI techniques, such as GANs, Progressive GANs, CANs, and StyleGAN, that can generate increasingly convincing images, a process often described as ‘dreaming’. It notes that although generative AI does not possess consciousness, and therefore cannot literally dream, it can still be a powerful design tool that becomes a prosthetic extension of the human imagination. Although the use of GANs and other deep learning AI tools is still in its infancy, we are at the dawn of an exciting – but also potentially terrifying – new era for architectural design. Most importantly, the paper concludes, the development of AI is also helping us to understand human intelligence and 'creativity'.
series ACADIA
type normal paper
email
last changed 2022/06/07 07:52

_id caadria2019_396
id caadria2019_396
authors Cao, Rui, Fukuda, Tomohiro and Yabuki, Nobuyoshi
year 2019
title Quantifying Visual Environment by Semantic Segmentation Using Deep Learning - A Prototype for Sky View Factor
source M. Haeusler, M. A. Schnabel, T. Fukuda (eds.), Intelligent & Informed - Proceedings of the 24th CAADRIA Conference - Volume 2, Victoria University of Wellington, Wellington, New Zealand, 15-18 April 2019, pp. 623-632
doi https://doi.org/10.52842/conf.caadria.2019.2.623
summary Sky view factor (SVF) is the ratio of radiation received by a planar surface from the sky to that received from the entire hemispheric radiating environment. Over the past 20 years it has mostly been applied in urban-climatic areas such as urban air temperature analysis. With urbanization and the development of cities, SVF has received growing attention as an important parameter in urban construction and city planning, because rising building coverage ratios shape urban form and affect how comfortable and sustainable the urban residential environment is for citizens. An efficient, low-cost, high-precision, easy-to-operate, and rapid building-wide SVF estimation method is therefore necessary. In the field of image processing, semantic segmentation based on deep learning has attracted considerable research attention. This study presents a new method to estimate the SVF of residential environments by constructing a deep learning network that segments sky areas from 360-degree camera images. As a result of this research, an easy-to-operate SVF estimation system based on an efficiently built database of sky label mask images was developed.
keywords Visual environment; Sky view factor; Semantic segmentation; Deep learning; Landscape simulation
series CAADRIA
email
last changed 2022/06/07 07:54
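As a rough illustration of the quantity estimated above, SVF can be approximated from a binary sky mask of the upper hemisphere of an equirectangular 360-degree image by weighting each pixel by its solid angle, which is proportional to the cosine of elevation. This sketches the metric only, not the authors' segmentation pipeline; the mask layout is an assumption.

```python
import numpy as np

def sky_view_factor(sky_mask):
    """Approximate SVF from a binary sky mask covering the upper hemisphere
    of an equirectangular image (rows run from zenith at the top to the
    horizon at the bottom). Pixel solid angle is proportional to
    cos(elevation), so each row is weighted accordingly."""
    h, w = sky_mask.shape
    # elevation at each row centre, from near 90 degrees down to near 0
    elev = (np.arange(h)[::-1] + 0.5) / h * (np.pi / 2)
    weights = np.cos(elev)[:, None]          # shape (h, 1), broadcast over columns
    return float((sky_mask * weights).sum() / (weights.sum() * w))
```

A fully clear mask gives an SVF of 1.0; a fully obstructed one gives 0.0.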

_id ecaadesigradi2019_514
id ecaadesigradi2019_514
authors de Miguel, Jaime, Villafañe, Maria Eugenia, Piškorec, Luka and Sancho-Caparrini, Fernando
year 2019
title Deep Form Finding - Using Variational Autoencoders for deep form finding of structural typologies
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 1, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 71-80
doi https://doi.org/10.52842/conf.ecaade.2019.1.071
summary In this paper, we present a methodology for the generation, manipulation, and form finding of structural typologies using variational autoencoders, a machine learning model based on neural networks. We give a detailed description of the neural network architecture used, as well as of the data representation, which is based on the concept of a 3D canvas with voxelized wireframes. In this 3D canvas, the input geometry of the building typologies is represented through its connectivity map and subsequently augmented to increase the size of the training set. Our variational autoencoder model then learns a continuous latent distribution of the input data from which we can sample to generate new geometry instances, essentially hybrids of the initial input geometries. Finally, we present the results of these computational experiments and lay out conclusions as well as an outlook for future research in this field.
keywords artificial intelligence; deep neural networks; variational autoencoders; generative design; form finding; structural design
series eCAADeSIGraDi
email
last changed 2022/06/07 07:55
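The generative mechanism the summary describes, sampling from a learned latent distribution and blending latent codes into hybrids, rests on the VAE reparameterization trick and latent interpolation. A minimal numpy sketch; the encoder/decoder networks are omitted, and names and shapes are illustrative:

```python
import numpy as np

def reparameterize(mu, logvar, rng=None):
    """VAE reparameterization trick: sample z = mu + sigma * eps with
    eps ~ N(0, I), keeping the sampling step differentiable in training."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * logvar) * eps

def lerp(z_a, z_b, t):
    """Linear interpolation between two latent codes; decoding points along
    the path yields hybrids of the two input geometries."""
    return (1.0 - t) * np.asarray(z_a) + t * np.asarray(z_b)
```

Decoding `lerp(z_a, z_b, 0.5)` would give a geometry halfway between the two training instances in latent space.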

_id ecaadesigradi2019_065
id ecaadesigradi2019_065
authors Fukuda, Tomohiro, Novak, Marcos and Fujii, Hiroyuki
year 2019
title Development of Segmentation-Rendering on Virtual Reality for Training Deep-learning, Simulating Landscapes and Advanced User Experience
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 2, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 433-440
doi https://doi.org/10.52842/conf.ecaade.2019.2.433
summary Virtual reality (VR) has been suggested for various purposes in the field of architecture, engineering, and construction (AEC). This research explores new roles for VR in the coming super-smart society. In particular, we develop post-processing, segmentation-rendering, and shadow-casting rendering algorithms that enable novel VR expressions, allowing more versatile approaches than normal photorealistic red, green, and blue (RGB) rendering. After implementation, we successfully applied a wide variety of VR renderings in urban-design projects. The developed system can create images in real time to train deep-learning algorithms, and can also be applied to landscape analysis and contribute to an advanced user experience.
keywords Super-smart society; Virtual Reality; Segmentation; Deep-learning; Landscape simulation; Shader
series eCAADeSIGraDi
email
last changed 2022/06/07 07:50

_id ecaadesigradi2019_357
id ecaadesigradi2019_357
authors Gönenç Sorguç, Arzu, Özgenel, Çağlar Fırat, Kruşa Yemişcioğlu, Müge, Küçüksubaşı, Fatih, Yıldırım, Soner, Antonini, Ernesto, Bartolomei, Luigi, Ovesen, Nis and Steinø, Nicolai
year 2019
title STEAM Approach for Architecture Education
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 1, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 137-146
doi https://doi.org/10.52842/conf.ecaade.2019.1.137
summary Starting with the first founded university, higher education has evolved continuously, yet the pace of this evolution is not as fast as the changes we observe in practice. Today, this discrepancy is not limited to the content of curricula but extends to the expected skills and competencies. It is evident that 21st-century skills and competencies should differ substantially from those delivered in the 20th century, due to rapidly developing and spreading design and information technologies. Every discipline has been in continuous search of the "right" way to formalize education, both content- and skill-wise. This paper focuses on architectural design education, incorporating discussions on the role of STEAM (Science, Technology, Engineering, Art and Mathematics). The study presents the outcomes of the ArchiSTEAM project, funded by the EU Erasmus+ Programme, which aims to re-position STEAM in architectural design education by contemplating the 21st-century skills (a.k.a. survival skills) of architects. Three educational modules, together with the andragogic approaches, learning objectives, contents, learning/teaching activities, and assessment methods, were determined with respect to the skill sets defined for 21st-century architects.
keywords STEAM; Architectural Education; Survival Skills
series eCAADeSIGraDi
email
last changed 2022/06/07 07:50

_id acadia19_16
id acadia19_16
authors Hosmer, Tyson; Tigas, Panagiotis
year 2019
title Deep Reinforcement Learning for Autonomous Robotic Tensegrity (ART)
source ACADIA 19:UBIQUITY AND AUTONOMY [Proceedings of the 39th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-59179-7] (The University of Texas at Austin School of Architecture, Austin, Texas 21-26 October, 2019) pp. 16-29
doi https://doi.org/10.52842/conf.acadia.2019.016
summary The research presented in this paper is part of a larger body of emerging research into embedding autonomy in the built environment. We develop a framework for designing and implementing effective autonomous architecture defined by three key properties: situated and embodied agency, facilitated variation, and intelligence. We present a novel application of deep reinforcement learning to learn adaptable behaviours related to autonomous mobility, self-structuring, self-balancing, and spatial reconfiguration. Architectural robotic prototypes are physically developed with principles of embodied agency and facilitated variation. Physical properties and degrees of freedom are applied as constraints in a simulated physics-based environment where our simulation models are trained to achieve multiple objectives in changing environments. This holistic and generalizable approach to aligning deep reinforcement learning with physically reconfigurable robotic assembly systems takes into account both computational design and physical fabrication. Autonomous Robotic Tensegrity (ART) is presented as an extended case study project for developing our methodology. Our computational design system is developed in Unity3D with simulated multi-physics and deep reinforcement learning using Unity’s ML-Agents framework. Topological rules of tensegrity are applied to develop assemblies with actuated tensile members. Single units and assemblies are trained for a series of policies using reinforcement learning in single-agent and multi-agent setups. Physical robotic prototypes are built and actuated to test simulated results.
series ACADIA
type normal paper
email
last changed 2022/06/07 07:50
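The paper's policies are trained with deep RL via Unity's ML-Agents; as a generic illustration of the value update underlying reinforcement learning (a tabular sketch for intuition, not the authors' deep RL setup), one Q-learning step looks like:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q[state][action] toward the
    bootstrapped target r + gamma * max_a' Q[next_state][a'].
    Deep RL replaces the table with a neural network approximator."""
    target = reward + gamma * max(Q[next_state].values())
    Q[state][action] += alpha * (target - Q[state][action])
    return Q
```

Repeated updates over experienced transitions drive the value estimates, and hence the policy, toward behaviour that maximises cumulative reward.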

_id acadia20_382
id acadia20_382
authors Hosmer, Tyson; Tigas, Panagiotis; Reeves, David; He, Ziming
year 2020
title Spatial Assembly with Self-Play Reinforcement Learning
source ACADIA 2020: Distributed Proximities / Volume I: Technical Papers [Proceedings of the 40th Annual Conference of the Association of Computer Aided Design in Architecture (ACADIA) ISBN 978-0-578-95213-0]. Online and Global. 24-30 October 2020. edited by B. Slocum, V. Ago, S. Doyle, A. Marcus, M. Yablonina, and M. del Campo. 382-393.
doi https://doi.org/10.52842/conf.acadia.2020.1.382
summary We present a framework to generate intelligent spatial assemblies from sets of digitally encoded spatial parts designed by the architect with embedded principles of prefabrication, assembly awareness, and reconfigurability. The methodology includes a bespoke constraint-solving algorithm for autonomously assembling 3D geometries into larger spatial compositions for the built environment. A series of graph-based analysis methods are applied to each assembly to extract performance metrics related to architectural space-making goals, including structural stability, material density, spatial segmentation, connectivity, and spatial distribution. Together with the constraint-based assembly algorithm and analysis methods, we have integrated a novel application of deep reinforcement learning (RL) for training the models, through self-play, to improve at matching the multi-performance goals established by the user. RL is applied to improve the selection and sequencing of parts while considering local and global objectives. The user’s design intent is embedded through the design of partial units of 3D space with embedded fabrication principles, their relational constraints over how they connect to each other, and the quantifiable goals that drive the distribution of effective features. The methodology has been developed over three years through three case study projects called ArchiGo (2017–2018), NoMAS (2018–2019), and IRSILA (2019–2020). Each demonstrates the potential for buildings with reconfigurable and adaptive life cycles.
series ACADIA
type paper
email
last changed 2023/10/22 12:06
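Among the graph-based metrics the summary lists, connectivity of an assembly is the simplest to state: every part must be reachable from every other through touching parts. A generic breadth-first-search sketch (not the authors' bespoke solver; the adjacency-map format is an assumption):

```python
from collections import deque

def is_connected(adjacency):
    """Return True if every part in the assembly graph is reachable from the
    first part. `adjacency` maps part id -> list of touching part ids."""
    if not adjacency:
        return True
    start = next(iter(adjacency))
    seen = {start}
    queue = deque([start])
    while queue:
        for nbr in adjacency[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(adjacency)
```

A disconnected assembly would score poorly on such a metric, steering the RL agent toward structurally coherent compositions.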

_id ecaadesigradi2019_117
id ecaadesigradi2019_117
authors Kido, Daiki, Fukuda, Tomohiro and Yabuki, Nobuyoshi
year 2019
title Development of a Semantic Segmentation System for Dynamic Occlusion Handling in Mixed Reality for Landscape Simulation
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 1, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 641-648
doi https://doi.org/10.52842/conf.ecaade.2019.1.641
summary The use of mixed reality (MR) for landscape simulation has attracted attention recently. MR can produce a realistic landscape simulation by merging a three-dimensional computer graphics (3DCG) model of a new building into real space. One challenge with MR that remains to be tackled is occlusion. Properly handling occlusion is important for understanding the spatial relationship between physical and virtual objects. When occlusion targets move or their shapes change, depth-based methods using special cameras have been applied for dynamic occlusion handling. However, these methods are limited in the distance over which they can obtain depth information and are unsuitable for outdoor landscape simulation. This study focuses on a dynamic occlusion handling method for MR-based landscape simulation. We developed a real-time semantic segmentation system to perform dynamic occlusion handling. We designed this system for use on mobile devices, with client-server communication for real-time semantic segmentation processing. Additionally, we used a normal monocular camera for practical use.
keywords Mixed Reality; Dynamic occlusion handling; Semantic segmentation; Deep learning; Landscape simulation
series eCAADeSIGraDi
email
last changed 2022/06/07 07:52

_id cf2019_004
id cf2019_004
authors Kim, Jinsung; Jaeyeol Song and Jin-Kook Lee
year 2019
title Recognizing and Classifying Unknown Object in BIM using 2D CNN
source Ji-Hyun Lee (Eds.) "Hello, Culture!"  [18th International Conference, CAAD Futures 2019, Proceedings / ISBN 978-89-89453-05-5] Daejeon, Korea, p. 23
summary This paper proposes an approach to automatically classifying building element instances in BIM using a deep learning-based 3D object classification algorithm. Recently, studies on checking or validating engines for BIM objects, to ensure the data integrity of BIM instances, have been getting attention. As part of this research, this paper trains recognition models targeted at basic building elements and interior elements, using a 3D object recognition technique that takes images of objects as inputs. Object recognition is executed in two stages: 1) the class of the object (e.g. wall, window, seating furniture, toilet fixture, etc.), and 2) the sub-type within specific classes (e.g. toilet or urinal). Using the trained models, a BIM plug-in prototype is developed, and the performance of this AI-based approach is checked with a test BIM model. We expect this recognition approach to help ensure the integrity of BIM data and contribute to the practical use of BIM.
keywords 3D object classification, Building element, Building information modeling, Data integrity, Interior element
series CAAD Futures
email
last changed 2019/07/29 14:08

_id ecaadesigradi2019_339
id ecaadesigradi2019_339
authors Kinugawa, Hina and Takizawa, Atsushi
year 2019
title Deep Learning Model for Predicting Preference of Space by Estimating the Depth Information of Space using Omnidirectional Images
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 2, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 61-68
doi https://doi.org/10.52842/conf.ecaade.2019.2.061
summary In this study, we developed a method for generating omnidirectional depth images from corresponding omnidirectional RGB images of streetscapes, by learning pairs of omnidirectional RGB and depth images created with computer graphics, using pix2pix. Models trained on different series of images, shot under different site and weather conditions, were then applied to Google Street View images to generate depth images. The validity of the generated depth images was evaluated visually. In addition, we conducted experiments in which multiple participants evaluated Google Street View images. We constructed a model that estimates the evaluation value of these images, with and without the depth images, using a learning-to-rank method with a deep convolutional neural network. The results demonstrate the extent to which the generalization performance of the streetscape evaluation model changes depending on the presence or absence of depth images.
keywords Omnidirectional image; depth image; Unity; Google street view; pix2pix; RankNet
series eCAADeSIGraDi
email
last changed 2022/06/07 07:52
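The learning-to-rank method named in the keywords, RankNet, optimises the pairwise probability that one image outranks another. A minimal sketch of its loss, with scalar scores standing in for the CNN outputs:

```python
import math

def ranknet_loss(s_i, s_j, i_preferred=True):
    """RankNet pairwise cross-entropy: model P(i beats j) = sigmoid(s_i - s_j)
    and return -log of the probability assigned to the observed preference."""
    p = 1.0 / (1.0 + math.exp(-(s_i - s_j)))
    return -math.log(p if i_preferred else 1.0 - p)
```

Equal scores give the maximum-uncertainty loss of ln 2; training lowers the loss by pushing the preferred image's score above the other's.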

_id cf2019_022
id cf2019_022
authors Koh, Immanuel and Jeffrey Huang
year 2019
title Citizen Visual Search Engine: Detection and Curation of Urban Objects
source Ji-Hyun Lee (Eds.) "Hello, Culture!"  [18th International Conference, CAAD Futures 2019, Proceedings / ISBN 978-89-89453-05-5] Daejeon, Korea, p. 170
summary Increasingly, the ubiquity of satellite imagery has made the data analysis and machine learning of large geographical datasets one of the building blocks of visuospatial intelligence. It is the key to discovering current (and predicting future) cultural, social, financial, and political realities. How can we, as designers and researchers, empower citizens to understand and participate in the design of our cities amid this technological shift? As an initial step towards this broader ambition, a series of creative web applications, in the form of visual search engines, has been developed and implemented to mine large datasets. Using open-source deep learning and computer vision libraries, these applications facilitate the searching, detection, and curation of urban objects. In turn, the paper proposes and formulates a framework for designing truly citizen-centric creative visual search engines -- a contribution to citizen science and citizen journalism in spatial terms.
keywords Deep Learning, Computer Vision, Satellite Imagery, Citizen Science, Artificial Intelligence
series CAAD Futures
email
last changed 2019/07/29 14:08

_id ecaadesigradi2019_173
id ecaadesigradi2019_173
authors Kulcke, Matthias and Martens, Bob
year 2019
title Digital Empowerment for the "Experimental Bureau" - Work Based Learning in Architectural Education
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 1, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 117-126
doi https://doi.org/10.52842/conf.ecaade.2019.1.117
summary This paper describes the concept of the "Experimental Bureau" as a didactic environment aiming to deal with real-life design tasks within the framework of architectural education. Its main focus lies on the specific opportunities for digital empowerment of students who learn about the design process - sometimes even in the role of contractors - in real-life oriented project work. Thus the following questions come under scrutiny and discussion from an angle of work based learning: What kind of design problems are tackled in a meaningful way by students through the utilization of a digital strategy? What kind of software (or software mix) is chosen and what problems are addressed by the choice and handling of these digital tools? These questions are answered in a different way applying the format of the Experimental Bureau, driven by its real-life projects and client communication, in comparison to largely artificial tasks confined to the academic realm.
keywords design education; real-life case study; stakeholder communication; real-world experience; didactic approach
series eCAADeSIGraDi
email
last changed 2022/06/07 07:58

_id ecaadesigradi2019_135
id ecaadesigradi2019_135
authors Newton, David
year 2019
title Deep Generative Learning for the Generation and Analysis of Architectural Plans with Small Datasets
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 2, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 21-28
doi https://doi.org/10.52842/conf.ecaade.2019.2.021
summary The field of generative architectural design has explored a wide range of approaches to automating design production, but these approaches have demonstrated limited artificial intelligence. Generative Adversarial Networks (GANs) are a leading deep generative model that use deep neural networks (DNNs) to learn from a set of training examples in order to create new design instances, with a degree of flexibility and fidelity that outperforms competing generative approaches. Their application to generative tasks in architecture, however, has been limited. This research contributes new knowledge on the use of GANs for architectural plan generation and analysis in relation to the work of specific architects. Specifically, GANs are trained to synthesize architectural plans from the work of the architect Le Corbusier and are used to provide analytic insight. Experiments demonstrate the efficacy of different augmentation techniques that architects can use when working with small datasets.
keywords generative design; deep learning; artificial intelligence; generative adversarial networks
series eCAADeSIGraDi
email
last changed 2022/06/07 07:58
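The abstract does not list the specific augmentation techniques tested, but one standard family for plan rasters is the dihedral group of flips and 90-degree rotations, which expands a small plan dataset eightfold. A minimal sketch, offered as an illustration rather than the paper's method:

```python
import numpy as np

def dihedral_augment(plan):
    """Return the 8 flip/rotation variants of a 2D plan raster
    (4 rotations by 90 degrees, each with and without a horizontal flip)."""
    variants = []
    for k in range(4):
        rotated = np.rot90(plan, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants
```

For an asymmetric plan all eight variants are distinct, so a dataset of N plans becomes 8N training examples without changing the underlying spatial logic.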

_id caadria2019_650
id caadria2019_650
authors Papasotiriou, Tania
year 2019
title Identifying the Landscape of Machine Learning-Aided Architectural Design - A Term Clustering and Scientometrics Study
source M. Haeusler, M. A. Schnabel, T. Fukuda (eds.), Intelligent & Informed - Proceedings of the 24th CAADRIA Conference - Volume 2, Victoria University of Wellington, Wellington, New Zealand, 15-18 April 2019, pp. 815-824
doi https://doi.org/10.52842/conf.caadria.2019.2.815
summary Recent advances in Machine Learning and Deep Learning are revolutionising many industry disciplines and underpin new ways of problem-solving. This paradigm shift hasn't left Architecture unaffected. To investigate the impact on architectural design, this study utilises two approaches. First, a text mining method for content analysis is employed to perform a robust review of the field's literature. This allows current trends and possible future directions of this research domain to be identified and discussed in a systematic manner. Second, a Scientometrics study based on bibliometric reviews is employed to obtain quantitative measures of global research activity in the described domain. Insights on research trends and identification of the most influential networks in this dataset were acquired by analysing term co-occurrence, scientific collaborations, geographic distribution, and co-citations. The paper concludes with a discussion of the limitations, opportunities, and future research directions in the field of Machine Learning-aided architectural design.
keywords Machine Learning; Text mining; Scientometrics
series CAADRIA
email
last changed 2022/06/07 08:00

_id ecaadesigradi2019_462
id ecaadesigradi2019_462
authors Perelli Soto, Bruno and Soza Ruiz, Pedro
year 2019
title CoDesign Spaces - Experiences of EBD research at an industrial design makerspace
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 1, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 417-422
doi https://doi.org/10.52842/conf.ecaade.2019.1.417
summary In recent years, technology has accelerated its incursion into both the design process and the teaching-learning process. Design education has gone through different visions: some hold a vision of design education oriented toward professional training; others have chosen to study the roots and problems of the training process, with the ultimate goal of turning future designers into experts. One element that is consistently absent from such discussions is the role played by prototypes in the teaching-learning process. This research reviews the role the prototype has played, as a central element, in the process of collecting evidence with a view to informing decision making during the development of project design. The paper discusses the role that prototypes, from the standpoint of CoDesign, Evidence-Based Design, and evolutionary design, have played in the teaching experiences of the last four semesters within a computer lab for students of industrial design. The systematization of information extracted from these research experiences has evolved from the lab model to the makerspace experience.
keywords Prototype; FSB Framework; Makerspace; Industrial Design
series eCAADeSIGraDi
email
last changed 2022/06/07 08:00

_id ecaadesigradi2019_549
id ecaadesigradi2019_549
authors Reinhardt, Dagmar, Haeusler, M. Hank, Loke, Lian, de Oliveira Barata, Eduardo, Firth, Charlotte, Khean, Nariddh, London, Kerry, Feng, Yingbin and Watt, Rodney
year 2019
title CoBuilt - Towards a novel methodology for workflow capture and analysis of carpentry tasks for human-robot collaboration
source Sousa, JP, Xavier, JP and Castro Henriques, G (eds.), Architecture in the Age of the 4th Industrial Revolution - Proceedings of the 37th eCAADe and 23rd SIGraDi Conference - Volume 3, University of Porto, Porto, Portugal, 11-13 September 2019, pp. 207-216
doi https://doi.org/10.52842/conf.ecaade.2019.3.207
summary Advanced manufacturing and robotic fabrication for the housing construction industry are mainly focused on the use of industrial robots in the pre-fabrication stage. Yet to be fully developed is the on-site use of collaborative robots able to work cooperatively with humans in a range of construction trades. Our study focuses on the carpentry trade in small-to-medium-size enterprises in the Australian construction industry, seeking to understand and identify opportunities for collaborative robots in the current workflows of carpenters. Prior to presenting solutions to this problem, we first developed a novel methodology for capturing and analysing the body movements of carpenters, resulting in a suite of visual resources to aid us in thinking through where, what, and how a collaborative robot could participate in a carpentry task. We report on the challenges involved and outline how the results of applying this methodology will inform the next stage of our research.
keywords Robotic Fabrication; Collaborative Robots; Training Methodology; Machine Learning; Interaction Analysis
series eCAADeSIGraDi
email
last changed 2022/06/07 08:00

_id caadria2021_053
id caadria2021_053
authors Rhee, Jinmo and Veloso, Pedro
year 2021
title Generative Design of Urban Fabrics Using Deep Learning
source A. Globa, J. van Ameijde, A. Fingrut, N. Kim, T.T.S. Lo (eds.), PROJECTIONS - Proceedings of the 26th CAADRIA Conference - Volume 1, The Chinese University of Hong Kong and Online, Hong Kong, 29 March - 1 April 2021, pp. 31-40
doi https://doi.org/10.52842/conf.caadria.2021.1.031
summary This paper describes the Urban Structure Synthesizer (USS), a research prototype based on deep learning that generates diagrams of morphologically consistent urban fabrics from context-rich urban datasets. This work is part of larger research on the computational analysis of the relationship between urban context and morphology. USS relies on a data collection method that extracts GIS data and converts it to diagrams with context information (Rhee et al., 2019). The resulting dataset of context-rich diagrams is used to train a Wasserstein GAN (WGAN) model, which learns how to synthesize novel urban fabric diagrams with the morphological and contextual qualities present in the dataset. The model is also trained with a random vector in the input, which is later used to enable parametric control and variation of the urban fabric diagram. Finally, the resulting diagrams are translated into 3D geometric entities using computer vision techniques and geometric modeling. The diagrams generated by USS suggest that a learning-based method can be an alternative to methods that rely on experts to build rule sets or parametric models to grasp the morphological qualities of the urban fabric.
keywords Deep Learning; Urban Fabric; Generative Design; Artificial Intelligence; Urban Morphology
series CAADRIA
email
last changed 2022/06/07 07:56

_id caadria2020_259
id caadria2020_259
authors Rhee, Jinmo, Veloso, Pedro and Krishnamurti, Ramesh
year 2020
title Integrating building footprint prediction and building massing - an experiment in Pittsburgh
source D. Holzer, W. Nakapan, A. Globa, I. Koh (eds.), RE: Anthropocene, Design in the Age of Humans - Proceedings of the 25th CAADRIA Conference - Volume 2, Chulalongkorn University, Bangkok, Thailand, 5-6 August 2020, pp. 669-678
doi https://doi.org/10.52842/conf.caadria.2020.2.669
summary We present a novel method for generating building geometry using deep learning techniques based on contextual geometry in the urban context, and explore its potential to support building massing. For contextual geometry, we opted to investigate the building footprint, a main interface between urban and architectural forms. For training, we collected GIS data of building footprints and parcel geometries from Pittsburgh and created a large Diagrammatic Image Dataset (DID). We employed a modified version of a VGG neural network to model the relationship between (c) a diagrammatic image of a building parcel and context without the footprint, and (q) a quadrilateral representing the original footprint. The option for simple geometrical output enables direct integration with custom design workflows, because it obviates image processing and increases training speed. After training the neural network with a curated dataset, we explore a generative workflow for building massing that integrates contextual and programmatic data. As the trained model can suggest a contextual boundary for a new site, we used Massigner (Rhee and Chung 2019) to recommend massing alternatives based on the subtraction of voids inside the contextual boundary that satisfy design constraints and programmatic requirements. This new method suggests the potential for learning-based methods to serve as an alternative to rule-based design methods in grasping the complex relationships between design elements.
keywords Deep Learning; Prediction; Building Footprint; Massing; Generative Design
series CAADRIA
email
last changed 2022/06/07 07:56
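The model's output above is a simple quadrilateral footprint, which makes geometric post-processing direct; for instance, its area follows from the shoelace formula. This is a generic sketch, and the vertex-list representation is an assumption, not the paper's data format.

```python
def shoelace_area(quad):
    """Area of a simple polygon (e.g. a predicted footprint quadrilateral)
    given as a list of (x, y) vertices in order around the boundary."""
    n = len(quad)
    s = 0.0
    for i in range(n):
        x0, y0 = quad[i]
        x1, y1 = quad[(i + 1) % n]   # wrap around to close the polygon
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0
```

Such a measure could, for example, compare a predicted footprint's area against the parcel's buildable area before massing.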
