Modeling of predictive human movement coordination patterns for applications in computer graphics
| German translated title: | Modellierung von prädiktiven Koordinationsmustern der menschlichen Bewegung für Anwendungen in der Computergrafik |
|---|---|
| Authors: | Land, William M.; Schack, Thomas; Giese, Martin; Mukovskiy, Albert |
| Published in: | Journal of WSCG |
| Published: | 23 (2015), 2, pp. 139-146; with references |
| Format: | Literature (SPOLIT) |
| Publication type: | Journal article |
| Media type: | Electronic resource (online); electronic resource (data carrier); printed resource |
| Language: | English |
| ISSN: | 1213-6972, 1213-6980, 1213-6964 |
| Record number: | PU201710009176 |
| Source: | BISp |
Author's abstract
The planning of human body movements is highly predictive. Within a sequence of actions, the anticipation of a final task goal modulates the individual actions within the overall pattern of motion. An example is a sequence of steps that is coordinated with the grasping of an object at the end of the step sequence. In contrast to this property of natural human movements, real-time animation systems in computer graphics often model complex activities by a sequential concatenation of individual pre-stored movements, where only the movement immediately before accomplishing the goal is adapted. We present a learning-based technique that models the highly adaptive predictive movement coordination in humans, illustrated for the example of the coordination of walking and reaching. The proposed system for the real-time synthesis of human movements models complex activities by a sequential concatenation of movements, which are approximated by the superposition of kinematic primitives that have been learned from trajectory data by anechoic demixing, using a step-wise regression approach. The kinematic primitives are then approximated by stable solutions of nonlinear dynamical systems (dynamic primitives) that can be embedded in control architectures. We present a control architecture that generates highly adaptive predictive full-body movements for reaching while walking, with a highly human-like appearance. We demonstrate that the generated behavior is highly robust, even in the presence of strong perturbations that require the online insertion of additional steps in order to accomplish the desired task.
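The anechoic mixture model mentioned in the abstract represents each joint-angle trajectory as a weighted superposition of source signals with source- and trajectory-specific time delays, x_i(t) ≈ Σ_j w_ij · s_j(t − τ_ij). The following is a minimal illustrative sketch of that superposition step only (not the demixing or the paper's actual implementation); all function names, array shapes, and values are hypothetical:

```python
import numpy as np

def anechoic_mix(sources, weights, delays):
    """Reconstruct trajectories from kinematic primitives by delayed superposition.

    sources: (n_sources, T) array of learned primitive signals s_j
    weights: (n_traj, n_sources) mixing weights w_ij
    delays:  (n_traj, n_sources) integer time delays tau_ij (in samples)
    returns: (n_traj, T) reconstructed trajectories x_i
    """
    n_traj, n_sources = weights.shape
    T = sources.shape[1]
    out = np.zeros((n_traj, T))
    for i in range(n_traj):
        for j in range(n_sources):
            # circular shift stands in for the time delay t - tau_ij,
            # which is exact for periodic primitives sampled over one cycle
            out[i] += weights[i, j] * np.roll(sources[j], delays[i, j])
    return out

# Toy example: two periodic primitives mixed into three joint trajectories.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
sources = np.vstack([np.sin(t), np.sin(2 * t)])
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
delays = np.array([[0, 0], [10, 25], [0, 40]])
trajectories = anechoic_mix(sources, weights, delays)
```

Learning then amounts to estimating the weights, delays, and sources from motion-capture data (the demixing direction), which the paper reports doing with a step-wise regression approach.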