New Insights of Background Estimation and Region Localization

  • Htet Htet Lin, Software Department, Computer University (BanMaw), Myanmar

Abstract

Background subtraction in a crowded scene is a crucial and challenging task in surveillance monitoring systems. Because of the similarity between foreground objects and the background, separating moving foreground objects from the background is known to be difficult. Most previous works address this problem, but they fail to distinguish the foreground from the background under gradual or sudden illumination changes, high-frequency background motion, background geometry changes, and noise. Once the foreground objects are obtained, segmentation is needed to localize the object regions. Image segmentation is a useful tool in many areas, such as object recognition, image processing, medical image analysis, and 3D reconstruction. To provide a reliable foreground image, a carefully estimated background model is required. To tackle illumination and motion changes, this paper establishes an effective new insight into background subtraction and segmentation that accurately detects and segments foreground people. The scene background is estimated by a new method, namely Mean Subtraction Background Estimation (MS), which identifies and modifies the pixels extracted from the difference between the background and the current frame. Unlike other works, the initial background is calculated by MS instead of taking the first frame directly as the initial background. The paper then performs foreground segmentation in the noisy scene by foreground detection and localizes the detected areas by analyzing various segmentation methods. Experiments on a challenging public crowd counting dataset achieve higher accuracy than state-of-the-art results, which indicates the effectiveness of the proposed work.
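The abstract describes the MS approach only at a high level. The following is a minimal sketch, assuming a running-mean background update with a fixed difference threshold and connected-component labeling standing in for the "various segmentation methods" mentioned above; the parameters alpha, tau, and min_area are illustrative assumptions, not values from the paper.

```python
# Sketch of mean-subtraction background estimation plus region localization.
# The exact MS update rule, threshold, and segmentation used in the paper are
# not given in the abstract; everything below is an assumed, simplified form.
import cv2
import numpy as np

def mean_subtraction_foreground(frames, alpha=0.05, tau=30):
    """Yield a binary foreground mask for each BGR frame in the sequence."""
    background = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            # The paper computes the initial background with MS rather than
            # taking the first frame verbatim; here the running mean simply
            # starts from the first frame as a placeholder.
            background = gray.copy()
            yield np.zeros(gray.shape, dtype=np.uint8)
            continue
        # Difference between the estimated background and the current frame.
        diff = cv2.absdiff(gray, background)
        # Pixels whose difference exceeds tau are labeled foreground.
        fg_mask = (diff > tau).astype(np.uint8) * 255
        # Update the background mean only where the scene appears static,
        # so moving people are not absorbed into the model.
        static = fg_mask == 0
        background[static] = (1 - alpha) * background[static] + alpha * gray[static]
        yield fg_mask

def localize_regions(fg_mask, min_area=100):
    """Return bounding boxes of connected foreground regions."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(fg_mask, connectivity=8)
    boxes = []
    for i in range(1, num):  # label 0 is the background component
        x, y, w, h, area = stats[i]
        if area >= min_area:  # discard small noisy blobs
            boxes.append((x, y, w, h))
    return boxes
```

In use, frames could be read from a video with cv2.VideoCapture and fed to mean_subtraction_foreground; each returned mask is then passed to localize_regions to obtain candidate person regions for counting or further segmentation.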


Published
2019-01-04
How to Cite
Lin, Htet Htet. "New Insights of Background Estimation and Region Localization." International Journal of Research and Engineering, vol. 6, no. 1, pp. 556–562, Jan. 2019. ISSN 2348-7860. Available at: https://digital.ijre.org/index.php/int_j_res_eng/article/view/367 (accessed 16 Dec. 2019). DOI: https://doi.org/10.21276/ijre.2019.6.1.2