## Cost-Sensitive Top-down/Bottom-up Inference for Multiscale Activity Recognition

### Abstract

This paper addresses a new problem, that of multiscale activity recognition. Our goal is to detect and localize a wide range of activities, including individual actions and group activities, which may simultaneously co-occur in high-resolution video. The video resolution allows for digital zoom-in (or zoom-out) for examining fine details (or coarser scales), as needed for recognition. The key challenge is how to avoid running a multitude of detectors at all spatiotemporal scales, and yet arrive at a holistically consistent video interpretation. To this end, we use a three-layered AND-OR graph to jointly model group activities, individual actions, and participating objects. The AND-OR graph allows a principled formulation of efficient, cost-sensitive inference via an explore-exploit strategy. Our inference optimally schedules the following computational processes: 1) direct application of activity detectors, called the $\alpha$ process; 2) bottom-up inference based on detecting activity parts, called the $\beta$ process; and 3) top-down inference based on detecting activity context, called the $\gamma$ process. The scheduling iteratively maximizes the log-posteriors of the resulting parse graphs. For evaluation, we have compiled and benchmarked a new dataset of high-resolution videos of group and individual activities co-occurring in a courtyard of the UCLA campus.
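The core scheduling idea, picking whichever of the $\alpha$, $\beta$, or $\gamma$ processes promises the largest increase in log-posterior per unit of computational cost, can be illustrated with a minimal greedy sketch. This is only a hedged illustration of the cost-sensitive explore-exploit strategy, not the paper's actual algorithm; all move names, gains, and costs below are hypothetical.

```python
import heapq

def schedule(moves, budget):
    """Greedily run inference moves with the best gain/cost ratio until
    the computational budget is exhausted.

    moves: list of (name, expected_log_posterior_gain, cost) tuples.
    Returns the ordered list of executed move names and the total gain.
    """
    # Max-heap on gain/cost ratio (negate because heapq is a min-heap).
    heap = [(-gain / cost, name, gain, cost) for name, gain, cost in moves]
    heapq.heapify(heap)
    executed, total_gain = [], 0.0
    while heap and budget > 0:
        _, name, gain, cost = heapq.heappop(heap)
        if cost > budget:
            continue  # move too expensive for the remaining budget; skip it
        budget -= cost
        total_gain += gain
        executed.append(name)
    return executed, total_gain

# Hypothetical candidate moves: alpha = direct detection, beta = bottom-up
# inference from detected parts, gamma = top-down inference from context.
moves = [
    ("alpha: run group-activity detector",    2.0, 4.0),
    ("beta: detect parts, infer parent",      1.5, 1.0),
    ("gamma: detect context, predict part",   1.2, 1.5),
]
order, gain = schedule(moves, budget=3.0)
```

With these made-up numbers, the cheap bottom-up ($\beta$) and top-down ($\gamma$) moves are scheduled first, and the expensive direct detector ($\alpha$) is deferred beyond the budget, which is the qualitative behavior the cost-sensitive formulation is after.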

### Paper and Slides

• Cost-Sensitive Top-down/Bottom-up Inference for Multiscale Activity Recognition [pdf] [web with dataset]
M. R. Amer, D. Xie, M. Zhao, S. Todorovic, and S.C. Zhu
European Conf. on Computer Vision (ECCV), 2012.

### BibTeX

```bibtex
@inproceedings{Amer2012,
  author    = {Amer, Mohamed R. and Xie, Dan and Zhao, Mingtian and Todorovic, Sinisa and Zhu, Song-Chun},
  booktitle = {ECCV},
  title     = {{Cost-Sensitive Top-down/Bottom-up Inference for Multiscale Activity Recognition}},
  year      = {2012}
}
```