{Juvenile} Flippancy


"If we knew what it was we were doing, it would not be called research, would it? -- Albert Einstein

Learning Two-Person Interaction Models

We address the problem of creating believable animations for virtual humans and humanoid robots that must react to the body movements of a human interaction partner in real time. Our data-driven approach uses prerecorded motion capture data of two interacting persons and performs motion adaptation during the live human-agent interaction. Extending the interaction mesh approach, our main contribution is a new scheme for efficiently identifying motions in the prerecorded animation data that are similar to the live interaction. A global low-dimensional posture space serves to select the most similar interaction example, while local, more detail-rich posture spaces are used to identify poses closely matching the human motion. Using the interaction mesh of the selected motion example, an animation can then be synthesized that accounts for both spatial and temporal similarities between the prerecorded and live interactions.
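The selection step lends itself to a brief sketch. The Python snippet below illustrates only the general idea of projecting postures into a low-dimensional space and picking the nearest prerecorded example; PCA stands in here for whatever embedding the publications below actually use, and all names (PostureSpace, closest_example) are illustrative, not taken from the papers.

```python
# Minimal sketch: selecting the closest prerecorded interaction example
# by comparing poses in a low-dimensional posture space.
# PCA is used as a stand-in for the dimensionality-reduction step;
# the actual method in the papers may differ. All names are hypothetical.
import numpy as np

class PostureSpace:
    """Low-dimensional embedding of joint-angle posture vectors via PCA."""

    def __init__(self, n_dims=3):
        self.n_dims = n_dims
        self.mean = None
        self.basis = None

    def fit(self, postures):
        # postures: (n_frames, n_joint_dofs) matrix of recorded poses
        self.mean = postures.mean(axis=0)
        centered = postures - self.mean
        # Principal directions from the SVD of the centered data
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.basis = vt[: self.n_dims]
        return self

    def project(self, posture):
        # Map a single posture vector into the low-dimensional space
        return self.basis @ (posture - self.mean)

def closest_example(live_posture, example_postures, space):
    """Return the index of the prerecorded example whose projected
    posture is nearest (Euclidean distance) to the live posture."""
    z = space.project(live_posture)
    dists = [np.linalg.norm(space.project(p) - z) for p in example_postures]
    return int(np.argmin(dists))

# Usage sketch with random stand-in data:
rng = np.random.default_rng(0)
recorded = rng.normal(size=(200, 45))        # 200 frames, 45 joint DOFs
space = PostureSpace(n_dims=3).fit(recorded)
examples = [recorded[0], recorded[50], recorded[120]]  # one key pose per example
live = recorded[51] + 0.01 * rng.normal(size=45)
print(closest_example(live, examples, space))          # likely selects example 1
```

In the published method, the local detail-rich posture spaces and the interaction-mesh synthesis add further structure on top of this kind of nearest-neighbor selection.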

Selected Publications (since late 2012)

D. Vogt, B. Lorenz, S. Grehl, B. Jung. Behavior Generation for Interactive Virtual Humans Using Context-Dependent Interaction Meshes and Automated Constraint Extraction. Computer Animation and Virtual Worlds (CAVW), 2015. DOI: 10.1002/cav.1648.

D. Vogt, S. Grehl, E. Berger, H. Ben Amor, B. Jung. A Data-Driven Method for Real-Time Character Animation in Human-Agent Interaction. Intelligent Virtual Agents, 14th International Conference, IVA 2014, Boston, MA, USA, August 27-29, 2014, Proceedings, Lecture Notes in Computer Science, Vol. 8637, Springer, pp. 463-476. DOI: 10.1007/978-3-319-09767-1_57.

D. Vogt, H. Ben Amor, E. Berger, B. Jung. Learning Two-Person Interaction Models for Responsive Synthetic Humanoids. Journal of Virtual Reality and Broadcasting, 11(2014), No. 1.

H. Ben Amor, D. Vogt, M. Ewerton, E. Berger, B. Jung, J. Peters. Learning Responsive Robot Behavior by Imitation. Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, pp. 3257-3264.

D. Vogt, E. Berger, H. Ben Amor, B. Jung. A Task-Space Two-Person Interaction Model for Human-Robot Interaction. 10. Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe Virtuelle Realität und Augmented Reality, 2013, pp. 77-84.

Z. Wang, M.P. Deisenroth, H. Ben Amor, D. Vogt, B. Schölkopf, J. Peters. Probabilistic Modeling of Human Movements for Intention Inference. Robotics: Science and Systems (RSS), 2012.

See all publications

See all videos


"Children see magic because the they look for it." -- Christopher Moore

Web Development

"If you can't make it good, at least make it look good." -- Bill Gates


david.vogt at flippancy.de