Action Recognition Framework using Saliency Detection and Random Subspace Ensemble Classifier

  • Sai Maung Maung Zaw, Faculty of Computer Systems and Technologies, University of Computer Studies, Mandalay, Myanmar
  • Hnin Mya Aye, Image Processing Lab, University of Computer Studies, Mandalay, Myanmar

Abstract

Action recognition is the problem of determining what kind of action is taking place in a video: an observation is matched against previously labelled samples and assigned the corresponding label. In this paper, an action recognition framework based on saliency detection and a random subspace ensemble classifier is introduced to improve recognition performance. The proposed framework is organized into three main processing phases. The first phase detects salient foreground objects by considering the pattern and color distinctness of pixels in each video frame. The second phase uses changing gradient orientation features as the feature representation. The third phase recognizes actions with a random subspace ensemble classifier built on a discriminant learner. Experimental results are evaluated on the UIUC action dataset, where the proposed framework achieves satisfactory recognition accuracy.
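
The classification phase described in the abstract can be prototyped with standard tools. The sketch below is a minimal illustration, not the authors' implementation: it builds a random subspace ensemble with a linear discriminant base learner using scikit-learn, assuming the saliency detection and changing gradient orientation steps have already produced a per-clip feature matrix. The file names, ensemble size, and subspace fraction are illustrative assumptions.

```python
# Minimal sketch of the classification phase only: a random subspace
# ensemble with a discriminant (LDA) base learner, using scikit-learn.
# Feature extraction (saliency detection + changing gradient orientation
# descriptors) is assumed to have produced X already; variable names and
# parameter values are illustrative, not taken from the paper.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# X: one feature vector per video clip, y: action labels
# (hypothetical files holding the extracted descriptors)
X = np.load("features.npy")
y = np.load("labels.npy")

# Random subspace method: every base learner is trained on a random
# subset of the feature dimensions, with no bootstrap resampling of
# the training videos themselves.
clf = BaggingClassifier(
    LinearDiscriminantAnalysis(),  # discriminant base learner
    n_estimators=50,               # illustrative ensemble size
    max_features=0.5,              # each learner sees 50% of the features
    bootstrap=False,               # keep all training samples
    bootstrap_features=False,      # feature subsets drawn without replacement
    random_state=0,
)

print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

The predicted label for a test clip is obtained by majority vote over the per-subspace discriminant learners, which is what `BaggingClassifier.predict` does once the ensemble is fitted.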


Published
2019-03-15
How to Cite
ZAW, Sai Maung Maung; AYE, Hnin Mya. Action Recognition Framework using Saliency Detection and Random Subspace Ensemble Classifier. International Journal of Research and Engineering, v. 6, n. 2, p. 580-588, Mar. 2019. ISSN 2348-7860. Available at: <https://digital.ijre.org/index.php/int_j_res_eng/article/view/374>. doi: https://doi.org/10.21276/ijre.2019.6.2.2.