Please use this identifier to cite or link to this item: http://hdl.handle.net/2381/41190
Title: A Biologically Inspired Appearance Model for Robust Visual Tracking
Authors: Zhang, Shengping
Lan, Xiangyuan
Yao, Hongxun
Zhou, Huiyu
Tao, Dacheng
Li, Xuelong
First Published: 15-Sep-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2017, 28 (10), pp. 2357-2370 (14)
Abstract: In this paper, we propose a biologically inspired appearance model for robust visual tracking. Motivated in part by the success of the hierarchical organization of the primary visual cortex (area V1), we establish an architecture consisting of five layers: whitening, rectification, normalization, coding, and pooling. The first three layers stem from the models developed for object recognition. In this paper, our attention focuses on the coding and pooling layers. In particular, we use a discriminative sparse coding method in the coding layer along with spatial pyramid representation in the pooling layer, which makes it easier to distinguish the target to be tracked from its background in the presence of appearance variations. An extensive experimental study shows that the proposed method has higher tracking accuracy than several state-of-the-art trackers.
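Note: the five-layer pipeline described in the abstract can be illustrated with a short sketch. The following Python code is an illustrative reconstruction only, not the authors' implementation; the ZCA whitening, the ISTA solver for the coding layer, the dictionary `D`, the regularization weight `lam`, and the pyramid levels are all assumptions introduced for clarity.

```python
import numpy as np

# Minimal sketch of a five-layer appearance pipeline: whitening, rectification,
# normalization, sparse coding, and spatial pyramid pooling (assumed details).

def whiten(patches):
    """ZCA-whiten a set of flattened patches (one patch per row)."""
    patches = patches - patches.mean(axis=0)
    cov = np.cov(patches, rowvar=False)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + 1e-5)) @ U.T
    return patches @ W

def rectify(x):
    """Half-wave rectification: keep positive responses only."""
    return np.maximum(x, 0.0)

def normalize(x, eps=1e-8):
    """L2-normalize each row (each patch descriptor)."""
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def sparse_code(x, D, lam=0.1, n_iter=50):
    """Encode rows of x on dictionary D (atoms in columns) via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros((x.shape[0], D.shape[1]))
    for _ in range(n_iter):
        grad = (a @ D.T - x) @ D
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def spatial_pyramid_pool(codes, positions, grid_levels=(1, 2, 4)):
    """Max-pool codes over grid cells at several pyramid levels and concatenate."""
    feats = []
    for g in grid_levels:
        cells = [[np.zeros(codes.shape[1]) for _ in range(g)] for _ in range(g)]
        for c, (px, py) in zip(codes, positions):
            i, j = min(int(px * g), g - 1), min(int(py * g), g - 1)
            cells[i][j] = np.maximum(cells[i][j], np.abs(c))
        feats.extend(cell for row in cells for cell in row)
    return np.concatenate(feats)

# Usage on random data: 100 patches of dimension 64 with normalized (x, y) positions.
rng = np.random.default_rng(0)
patches = rng.standard_normal((100, 64))
positions = rng.uniform(size=(100, 2))
D = normalize(rng.standard_normal((128, 64))).T   # 128 unit-norm atoms as columns
x = normalize(rectify(whiten(patches)))
codes = sparse_code(x, D)
feature = spatial_pyramid_pool(codes, positions)  # final appearance descriptor
print(feature.shape)                              # (128 * (1 + 4 + 16),) = (2688,)
```

In this sketch the pooled pyramid vector would serve as the appearance descriptor that a tracker compares against candidate image regions; the discriminative training of the dictionary described in the paper is not reproduced here.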
DOI Link: 10.1109/TNNLS.2016.2586194
ISSN: 2162-237X
eISSN: 2162-2388
Links: http://ieeexplore.ieee.org/document/7516592/
http://hdl.handle.net/2381/41190
Version: Post-print
Status: Peer-reviewed
Type: Journal Article
Rights: Copyright © 2016, IEEE. Deposited with reference to the publisher’s open access archiving policy.
Appears in Collections:Published Articles, Dept. of Computer Science

Files in This Item:
File: main.pdf | Description: Post-review (final submitted author manuscript) | Size: 414.38 kB | Format: Adobe PDF


Items in LRA are protected by copyright, with all rights reserved, unless otherwise indicated.