Common Spatial Patterns for Real-Time Classification of Human Actions


Ronald Poppe
Copyright: © 2010 | Pages: 19
ISBN13: 9781605669007 | ISBN10: 1605669008 | ISBN13 Softcover: 9781616922160 | EISBN13: 9781605669014
DOI: 10.4018/978-1-60566-900-7.ch004
Cite Chapter

MLA

Poppe, Ronald. "Common Spatial Patterns for Real-Time Classification of Human Actions." Machine Learning for Human Motion Analysis: Theory and Practice, edited by Liang Wang, et al., IGI Global, 2010, pp. 55-73. https://doi.org/10.4018/978-1-60566-900-7.ch004

APA

Poppe, R. (2010). Common Spatial Patterns for Real-Time Classification of Human Actions. In L. Wang, L. Cheng, & G. Zhao (Eds.), Machine Learning for Human Motion Analysis: Theory and Practice (pp. 55-73). IGI Global. https://doi.org/10.4018/978-1-60566-900-7.ch004

Chicago

Poppe, Ronald. "Common Spatial Patterns for Real-Time Classification of Human Actions." In Machine Learning for Human Motion Analysis: Theory and Practice, edited by Liang Wang, Li Cheng, and Guoying Zhao, 55-73. Hershey, PA: IGI Global, 2010. https://doi.org/10.4018/978-1-60566-900-7.ch004


Abstract

We present a discriminative approach to human action recognition. At the heart of our approach is the use of common spatial patterns (CSP), a spatial filtering technique that transforms temporal feature data by exploiting differences in variance between two classes. Such a transformation emphasizes the differences between classes rather than modeling each class individually. As a result, two classes can be distinguished using simple distance metrics in the low-dimensional transformed space. The most likely class is found by pairwise evaluation of all discriminant functions, which can be done in real time. Our image representations are silhouette boundary gradients, spatially binned into cells. We achieve scores of approximately 96% on the Weizmann human action dataset, and show that reasonable results can be obtained when training on only a single subject. We further compare our results with a recent exemplar-based approach. Future work is aimed at combining our approach with automatic human detection.
