id |
acadia23_v2_92 |
authors |
Pinochet, Diego |
year |
2023 |
title |
A Computational Gestural Making Framework: A Multi-modal Approach to Digital Fabrication Mapping Human Gestures to Machine Actions |
source |
ACADIA 2023: Habits of the Anthropocene: Scarcity and Abundance in a Post-Material Economy [Volume 2: Proceedings of the 43rd Annual Conference for the Association for Computer Aided Design in Architecture (ACADIA), ISBN 979-8-9891764-0-3], Denver, 26-28 October 2023, edited by A. Crawford, N. Diniz, R. Beckett, J. Vanucchi, and M. Swackhamer, 92-103. |
summary |
This research project implements a multimodal, body-centric approach to interactive fabrication aimed at testing the conversational aspects of a design framework (Figure 1). It focuses on the development of a gesture language as the primary mode of communication, as well as the means to establish effective communication with a machine for design endeavors. To do so, we first developed a gesture recognition system intended to support fluid communication with a machine based on three types of gestures: symbolic, exploratory, and sequential. Second, we developed a machine vision system to detect, recognize, and calculate the positions of physical objects in space. Third, we developed a system for robotic motion that uses path-planning algorithms and reinforcement learning for collision-free machine movement. Finally, these three modules were integrated into a real-time, gesture-based human-robot interaction system. The ultimate goal of this implementation is to establish a multimodal framework for interactive design grounded in human-robot interaction, using gestures as a communication mechanism for exploring computational design potential toward unique and original creations. |
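As an illustration only, the following minimal Python sketch shows the kind of three-module loop the summary describes (gesture recognition, machine vision, collision-free motion planning); all class names, thresholds, and placeholder logic are hypothetical assumptions and do not reflect the authors' implementation.

# Hypothetical sketch of the gesture-to-machine-action loop described above.
# Every name and value here is illustrative, not the paper's code.
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple

class GestureType(Enum):
    SYMBOLIC = "symbolic"        # discrete commands (e.g., start/stop)
    EXPLORATORY = "exploratory"  # continuous probing of form or material
    SEQUENTIAL = "sequential"    # ordered gesture phrases mapped to toolpaths

@dataclass
class Gesture:
    kind: GestureType
    keypoints: List[Tuple[float, float, float]]  # tracked hand/body points

@dataclass
class SceneObject:
    label: str
    position: Tuple[float, float, float]

def recognize_gesture(keypoints: List[Tuple[float, float, float]]) -> Gesture:
    """Toy classifier standing in for the learned gesture recognizer."""
    kind = GestureType.SYMBOLIC if len(keypoints) < 5 else GestureType.EXPLORATORY
    return Gesture(kind=kind, keypoints=keypoints)

def detect_objects(frame) -> List[SceneObject]:
    """Stand-in for the machine vision module (detection + localization)."""
    return [SceneObject(label="workpiece", position=(0.4, 0.1, 0.0))]

def plan_motion(target: Tuple[float, float, float],
                obstacles: List[SceneObject]) -> List[Tuple[float, float, float]]:
    """Stand-in for the path-planning / RL policy: returns collision-free waypoints."""
    return [(0.0, 0.0, 0.2), (target[0], target[1], 0.2), target]

def interaction_step(keypoints, frame) -> List[Tuple[float, float, float]]:
    """One cycle of the real-time gesture-based human-robot interaction loop."""
    gesture = recognize_gesture(keypoints)
    objects = detect_objects(frame)
    if gesture.kind is GestureType.SYMBOLIC:
        return []  # symbolic gestures toggle modes rather than drive motion
    target = objects[0].position
    return plan_motion(target, obstacles=objects)

if __name__ == "__main__":
    waypoints = interaction_step(keypoints=[(0.1, 0.2, 0.3)] * 8, frame=None)
    print(waypoints)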
series |
ACADIA |
type |
paper |
email |
|
full text |
file.pdf (1,520,005 bytes) |
references |
|
last changed |
2024/12/20 09:12 |
|