Task Learning through Visual Demonstration and Situated Dialogue

Changsong Liu (1), Joyce Y. Chai (1), Nishant Shukla (2), Song-Chun Zhu (2)

(1) Language and Interaction Research, Michigan State University

(2) Center for Vision, Cognition, Learning and Autonomy, UCLA

Introduction

To enable effective collaboration between humans and cognitive robots, it is important for robots to continuously acquire and learn task knowledge from their human partners. To this end, we are developing a framework that supports task learning through visual demonstration and natural language dialogue. A core component of this framework is dialogue-driven integration of language and vision for building task representations. This paper describes our ongoing effort, in particular grounded task acquisition through joint processing of discourse and video using And-Or Graphs (AOGs).
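To make the AOG idea concrete, below is a minimal illustrative sketch in Python of how a learned task might be encoded as an And-Or Graph: And-nodes capture ordered decompositions, Or-nodes capture learned alternatives with preference weights, and terminal nodes are atomic actions grounded in video or dialogue. The node classes, the "boil-water" example, and the branch weights are all hypothetical illustrations, not the implementation from the paper.

```python
# Illustrative And-Or Graph (AOG) for task representation.
# All names and numbers here are hypothetical, for exposition only.

from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class Node:
    label: str

@dataclass
class Terminal(Node):
    """An atomic action grounded in vision or dialogue."""

@dataclass
class AndNode(Node):
    """All children must be realized, in order (task decomposition)."""
    children: List[Node] = field(default_factory=list)

@dataclass
class OrNode(Node):
    """Exactly one child is chosen; weights encode learned preferences."""
    children: List[Node] = field(default_factory=list)
    weights: List[float] = field(default_factory=list)

def sample_parse(node: Node) -> List[str]:
    """Sample one concrete action sequence (a parse) from the AOG."""
    if isinstance(node, Terminal):
        return [node.label]
    if isinstance(node, AndNode):
        return [action for child in node.children for action in sample_parse(child)]
    if isinstance(node, OrNode):
        chosen = random.choices(node.children, weights=node.weights, k=1)[0]
        return sample_parse(chosen)
    raise TypeError(f"Unknown node type: {node!r}")

# Hypothetical task learned from demonstration and dialogue.
task = AndNode("boil-water", children=[
    Terminal("grasp-kettle"),
    OrNode("fill-kettle",
           children=[Terminal("fill-from-tap"), Terminal("fill-from-jug")],
           weights=[0.7, 0.3]),
    Terminal("turn-on-stove"),
])

print(sample_parse(task))
# e.g. ['grasp-kettle', 'fill-from-tap', 'turn-on-stove']
```

In this framing, task learning amounts to growing the graph (adding Or-branches for newly observed alternatives) and updating branch weights as further demonstrations and dialogue corrections arrive.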

Paper and Demo

Demo

Paper

Changsong Liu, Joyce Y. Chai, Nishant Shukla, and Song-Chun Zhu. "Task Learning through Visual Demonstration and Situated Dialogue." AAAI 2016 Workshop on Symbiotic Cognitive Systems. [ pdf ]

Contact

For questions about this work, please contact Nishant Shukla (nxs@ucla.edu).