Thursday, January 13, 2011

A Spatiotemporal Saliency Model for Video Surveillance

Sent to you by Frouke via Google Reader:

Abstract  
A video sequence is more than a sequence of still images: it contains a strong spatial–temporal correlation between the regions of consecutive frames. The most important characteristic of videos is the perceived motion of foreground objects across the frames. The motion of foreground objects dramatically changes the importance of those objects in a scene and leads to a different saliency map for the frame representing the scene. This makes the saliency analysis of videos much more complicated than that of still images. In this paper, we investigate saliency in video sequences and propose a novel spatiotemporal saliency model devoted to video surveillance applications. Compared to classical saliency models based on still images, such as Itti's model, and to space–time saliency models, the proposed model is more strongly correlated with the visual saliency perception of surveillance videos. Both bottom-up and top-down attention mechanisms are involved in this model. Stationary saliency and motion saliency are analyzed separately. First, a new method for background subtraction and foreground extraction is developed based on content analysis of the scene in the video surveillance domain. Then, a stationary saliency model is set up based on multiple features computed from the foreground. Every feature is analyzed with a multi-scale Gaussian pyramid, and all the feature conspicuity maps are combined using different weights. The stationary model integrates faces as a supplementary feature to other low-level features such as color, intensity, and orientation. Second, a motion saliency map is calculated using the statistics of the motion vector field. Third, the motion saliency map and the stationary saliency map are merged within a center-surround framework defined by an approximated Gaussian function. The video saliency maps computed by our model have been compared to gaze maps obtained from subjective experiments with an SMI eye tracker on surveillance video sequences.
The results show a strong correlation between the output of the proposed spatiotemporal saliency model and the experimental gaze maps.
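As a rough, self-contained sketch (NumPy only) of the kind of pipeline the abstract describes, the snippet below builds a per-feature Gaussian pyramid, takes center-surround differences across scales, and then fuses a stationary map with a motion map under an approximated Gaussian center weighting. The pyramid depth, the nearest-neighbour upsampling, the max-fusion rule, and all function names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur(img, k):
    """Naive 'same' 2-D convolution with edge padding (fine for small demos)."""
    pad = k.shape[0] // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def feature_saliency(feature, levels=3):
    """Multi-scale center-surround saliency for one feature map.

    Assumes a square input whose side is a power of two.
    """
    k = gaussian_kernel()
    pyr = [feature.astype(float)]
    for _ in range(levels):
        pyr.append(blur(pyr[-1], k)[::2, ::2])   # blur, then 2x downsample
    sal = np.zeros_like(pyr[0])
    for lvl in pyr[1:]:
        # upsample the coarse (surround) level back to full resolution
        f = pyr[0].shape[0] // lvl.shape[0]
        up = np.kron(lvl, np.ones((f, f)))
        sal += np.abs(pyr[0] - up)               # center-surround difference
    m = sal.max()
    return sal / m if m > 0 else sal

def merge_maps(stationary, motion, sigma=0.3):
    """Fuse stationary and motion saliency with a Gaussian center bias.

    The center-weighted max fusion here is an assumed form, not the paper's.
    """
    h, w = stationary.shape
    y, x = np.mgrid[0:h, 0:w]
    g = np.exp(-(((x - w / 2) / (sigma * w))**2 +
                 ((y - h / 2) / (sigma * h))**2) / 2)
    merged = g * np.maximum(stationary, motion)
    m = merged.max()
    return merged / m if m > 0 else merged

# Demo on random stand-ins for an intensity feature and a motion-magnitude map.
rng = np.random.default_rng(0)
frame_feature = rng.random((32, 32))
motion_magnitude = rng.random((32, 32))
s = merge_maps(feature_saliency(frame_feature),
               feature_saliency(motion_magnitude))
print(s.shape)
```

In a real setting the inputs would come from the extracted foreground features (color, intensity, orientation, faces) and from block motion vectors rather than random arrays, and each feature's map would get its own combination weight.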

  • Content Type Journal Article
  • Pages 1-23
  • DOI 10.1007/s12559-010-9094-8
  • Authors
    • Tong Yubing, Laboratoire Hubert Curien UMR 5516, Université Jean Monnet, 42000 Saint-Etienne, France
    • Faouzi Alaya Cheikh, Faculty of Computer Science and Media Technology, Gjøvik University College, Gjøvik, Norway
    • Fahad Fazal Elahi Guraya, Faculty of Computer Science and Media Technology, Gjøvik University College, Gjøvik, Norway
    • Hubert Konik, Laboratoire Hubert Curien UMR 5516, Université Jean Monnet, 42000 Saint-Etienne, France
    • Alain Trémeau, Laboratoire Hubert Curien UMR 5516, Université Jean Monnet, 42000 Saint-Etienne, France
