
Please use this identifier to cite or link to this item: http://acervodigital.unesp.br/handle/11449/8292
Full metadata record
DC Field | Value | Language
dc.contributor.author | Minetto, R. | -
dc.contributor.author | Spina, T. V. | -
dc.contributor.author | Falcao, A. X. | -
dc.contributor.author | Leite, N. J. | -
dc.contributor.author | Papa, João Paulo | -
dc.contributor.author | Stolfi, J. | -
dc.date.accessioned | 2014-05-20T13:25:57Z | -
dc.date.accessioned | 2016-10-25T16:46:12Z | -
dc.date.available | 2014-05-20T13:25:57Z | -
dc.date.available | 2016-10-25T16:46:12Z | -
dc.date.issued | 2012-02-01 | -
dc.identifier | http://dx.doi.org/10.1016/j.cviu.2011.10.003 | -
dc.identifier.citation | Computer Vision and Image Understanding. San Diego: Academic Press Inc. Elsevier B.V., v. 116, n. 2, p. 274-291, 2012. | -
dc.identifier.issn | 1077-3142 | -
dc.identifier.uri | http://hdl.handle.net/11449/8292 | -
dc.identifier.uri | http://acervodigital.unesp.br/handle/11449/8292 | -
dc.description.abstract | We introduce IFTrace, a method for video segmentation of deformable objects. The algorithm makes minimal assumptions about the nature of the tracked object: basically, that it consists of a few connected regions and has a well-defined border. The objects to be tracked are interactively segmented in the first frame of the video, and a set of markers is then automatically selected in the interior and immediate surroundings of the object. These markers are then located in the next frame by a combination of KLT feature finding and motion extrapolation. Object boundaries are then identified from these markers by the Image Foresting Transform (IFT). These steps are repeated for all subsequent frames until the end of the movie. Thanks to the IFT and a special boundary detection operator, IFTrace can reliably track deformable objects in the presence of partial and total occlusions, camera motion, lighting and color changes, and other complications. Tests on real videos show that the IFT is better suited to this task than Graph-Cut methods, and that IFTrace is more robust than other state-of-the-art algorithms, namely the OpenCV Snake and Cam-Shift algorithms, Hess's Particle-Filter, and Zhong and Chang's method based on spatio-temporal consistency. (C) 2011 Elsevier B.V. All rights reserved. | en
dc.description.sponsorship | Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) | -
dc.description.sponsorship | Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) | -
dc.description.sponsorship | Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) | -
dc.format.extent | 274-291 | -
dc.language.iso | eng | -
dc.publisher | Academic Press Inc. Elsevier B.V. | -
dc.source | Web of Science | -
dc.subject | Segmentation/tracking of moving objects | en
dc.subject | Object delineation | en
dc.subject | Image/video segmentation | en
dc.subject | Image Foresting Transform | en
dc.subject | Graph-based image segmentation | en
dc.title | IFTrace: Video segmentation of deformable objects using the Image Foresting Transform | en
dc.type | other | -
dc.contributor.institution | Universidade Estadual de Campinas (UNICAMP) | -
dc.contributor.institution | Universidade Estadual Paulista (UNESP) | -
dc.description.affiliation | Univ Estadual Campinas, UNICAMP, Inst Comp, Campinas, SP, Brazil | -
dc.description.affiliation | Univ Estadual Paulista, Dept Comp, Bauru, Brazil | -
dc.description.affiliationUnesp | Univ Estadual Paulista, Dept Comp, Bauru, Brazil | -
dc.description.sponsorshipId | FAPESP: 07/54201-6 | -
dc.description.sponsorshipId | FAPESP: 09/11908-8 | -
dc.description.sponsorshipId | FAPESP: 07/52015-0 | -
dc.description.sponsorshipId | FAPESP: 09/16206-1 | -
dc.description.sponsorshipId | CNPq: 481556/2009-5 | -
dc.description.sponsorshipId | CNPq: 472402/2007-2 | -
dc.description.sponsorshipId | CNPq: 306631/2007-5 | -
dc.description.sponsorshipId | CNPq: 303673/2010-9 | -
dc.description.sponsorshipId | CAPES: 592/08 | -
dc.identifier.doi | 10.1016/j.cviu.2011.10.003 | -
dc.identifier.wos | WOS:000299365400009 | -
dc.rights.accessRights | Restricted access | -
dc.relation.ispartof | Computer Vision and Image Understanding | -
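The abstract above outlines the IFTrace pipeline step by step: interactive segmentation of the first frame, automatic selection of markers inside and just outside the object, KLT-based relocation of those markers in the next frame, and IFT-based boundary delineation, repeated frame by frame. The sketch below is a minimal illustration of that loop, not the authors' implementation: it uses OpenCV's corner detector and pyramidal KLT tracker, omits the motion-extrapolation refinement, and represents the Image Foresting Transform step with a hypothetical placeholder (ift_delineate), since no IFT routine is assumed to exist in OpenCV. The function and parameter names (select_markers, iftrace_like, max_markers) are illustrative only.

    # Illustrative IFTrace-style tracking loop (sketch only, assumes OpenCV).
    import cv2
    import numpy as np

    def select_markers(mask, gray, max_markers=200):
        """Pick KLT-trackable corners inside the object mask and in a ring just outside it."""
        kernel = np.ones((15, 15), np.uint8)
        ring = cv2.subtract(cv2.dilate(mask, kernel), mask)  # immediate surroundings of the object
        inner = cv2.goodFeaturesToTrack(gray, max_markers, 0.01, 5, mask=mask)
        outer = cv2.goodFeaturesToTrack(gray, max_markers, 0.01, 5, mask=ring)
        return inner, outer

    def ift_delineate(frame, inner_pts, outer_pts):
        """Hypothetical placeholder for the IFT boundary delineation described in the paper."""
        raise NotImplementedError("Image Foresting Transform delineation is not implemented here")

    def iftrace_like(video_path, first_mask):
        """Track an object given its interactively segmented first-frame mask (uint8, 0/255)."""
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        inner, outer = select_markers(first_mask, prev_gray)
        masks = [first_mask]
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Locate the markers in the next frame with pyramidal KLT optical flow
            # (the paper additionally extrapolates marker motion, omitted in this sketch).
            inner, st_i, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, inner, None)
            outer, st_o, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, outer, None)
            inner = inner[st_i.ravel() == 1].reshape(-1, 1, 2)
            outer = outer[st_o.ravel() == 1].reshape(-1, 1, 2)
            # The competing inner/outer marker sets define the object boundary (IFT step).
            mask = ift_delineate(frame, inner, outer)
            inner, outer = select_markers(mask, gray)  # refresh markers for the next iteration
            masks.append(mask)
            prev_gray = gray
        cap.release()
        return masks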
Appears in Collections:Artigos, TCCs, Teses e Dissertações da Unesp

There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.