
Please use this identifier to cite or link to this item: http://acervodigital.unesp.br/handle/11449/8292
Title: 
IFTrace: Video segmentation of deformable objects using the Image Foresting Transform
Author(s): 
Institution: 
  • Universidade Estadual de Campinas (UNICAMP)
  • Universidade Estadual Paulista (UNESP)
ISSN: 
1077-3142
Sponsorship: 
  • Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
  • Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
  • Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Sponsorship Process Number: 
  • FAPESP: 07/54201-6
  • FAPESP: 09/11908-8
  • FAPESP: 07/52015-0
  • FAPESP: 09/16206-1
  • CNPq: 481556/2009-5
  • CNPq: 472402/2007-2
  • CNPq: 306631/2007-5
  • CNPq: 303673/2010-9
  • CAPES: 592/08
Abstract: 
We introduce IFTrace, a method for video segmentation of deformable objects. The algorithm makes minimal assumptions about the nature of the tracked object: basically, that it consists of a few connected regions, and has a well-defined border. The objects to be tracked are interactively segmented in the first frame of the video, and a set of markers is then automatically selected in the interior and immediate surroundings of the object. These markers are then located in the next frame by a combination of KLT feature finding and motion extrapolation. Object boundaries are then identified from these markers by the Image Foresting Transform (IFT). These steps are repeated for all subsequent frames until the end of the movie. Thanks to the IFT and a special boundary detection operator, IFTrace can reliably track deformable objects in the presence of partial and total occlusions, camera motion, lighting and color changes, and other complications. Tests on real videos show that the IFT is better suited to this task than Graph-Cut methods, and that IFTrace is more robust than other state-of-the-art algorithms, namely the OpenCV Snake and Cam-Shift algorithms, Hess's Particle-Filter, and Zhong and Chang's method based on spatio-temporal consistency. (C) 2011 Elsevier B.V. All rights reserved.
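The following is a minimal illustrative sketch (not the authors' implementation) of the per-frame marker tracking step described in the abstract, assuming OpenCV's pyramidal Lucas-Kanade (KLT) tracker and a simple constant-velocity motion extrapolation for markers the tracker loses; the IFT delineation and the paper's boundary operator are not reproduced here.

  import cv2
  import numpy as np

  def track_markers(prev_gray, next_gray, markers, prev_velocity):
      """Locate the markers of frame t in frame t+1.

      markers:       (N, 2) float32 array of marker positions in prev_gray.
      prev_velocity: (N, 2) float32 array of each marker's displacement in the
                     previous step, used for extrapolation when KLT fails.
      Returns the new positions and the new per-marker displacements.
      """
      pts = markers.reshape(-1, 1, 2).astype(np.float32)
      # KLT feature finding: pyramidal Lucas-Kanade optical flow.
      new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
          prev_gray, next_gray, pts, None, winSize=(21, 21), maxLevel=3)
      new_pts = new_pts.reshape(-1, 2)
      found = status.reshape(-1).astype(bool)

      # Motion extrapolation: markers KLT could not follow are moved by their
      # previous displacement instead of being discarded (hypothetical choice,
      # standing in for the paper's extrapolation rule).
      new_pts[~found] = markers[~found] + prev_velocity[~found]

      velocity = new_pts - markers
      return new_pts, velocity

  # The delineation stage (not shown) would grow optimum-path forests from the
  # inner and outer markers with the Image Foresting Transform and take the
  # frontier between the two forests as the object contour for this frame.

In the pipeline described by the abstract, this tracking step and the IFT delineation would be repeated for every subsequent frame of the video.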
Issue Date: 
1-Feb-2012
Citation: 
Computer Vision and Image Understanding. San Diego: Academic Press Inc. Elsevier B.V., v. 116, n. 2, p. 274-291, 2012.
Pages: 
274-291
Publisher: 
Academic Press Inc. Elsevier B.V.
Keywords: 
  • Segmentation/tracking of moving objects
  • Object delineation
  • Image/video segmentation
  • Image Foresting Transform
  • Graph-based image segmentation
Source: 
http://dx.doi.org/10.1016/j.cviu.2011.10.003
URI: 
Access Rights: 
Restricted access
Type: 
Other
Source:
http://repositorio.unesp.br/handle/11449/8292
Appears in Collections: Artigos, TCCs, Teses e Dissertações da Unesp

There are no files associated with this item.