Please use this identifier to cite or link to this item: http://hdl.handle.net/2381/43630
Full metadata record
dc.contributor.author: Lan, Xiangyuan
dc.contributor.author: Ye, Mang
dc.contributor.author: Shao, Rui
dc.contributor.author: Zhong, Bineng
dc.contributor.author: Yuen, Pong C.
dc.contributor.author: Zhou, Huiyu
dc.date.accessioned: 2019-03-25T17:08:33Z
dc.date.available: 2019-03-25T17:08:33Z
dc.date.issued: 2019-02-15
dc.identifier.citation: IEEE Transactions on Industrial Electronics, 2019
dc.identifier.issn: 0278-0046
dc.identifier.uri: https://ieeexplore.ieee.org/abstract/document/8643077
dc.identifier.uri: http://hdl.handle.net/2381/43630
dc.description.abstract: With a large number of video surveillance systems installed to meet the requirements of industrial security, object tracking, which aims to locate objects of interest in videos, is a very important task. Although numerous tracking algorithms for RGB videos have been developed over the past decade, the performance and robustness of these systems may degrade dramatically when the information from the RGB video is unreliable (e.g., poor illumination conditions or very low resolution). To address this issue, this paper presents a new tracking system that combines information from the RGB and infrared modalities for object tracking. The proposed tracking system is based on our proposed machine learning model. In particular, the learning model can alleviate the modality discrepancy issue, under the proposed modality consistency constraint, in terms of both representation patterns and discriminability, and can generate discriminative feature templates for collaborative representation and discrimination across heterogeneous modalities. Experiments on a variety of challenging RGB-infrared videos demonstrate the effectiveness of the proposed algorithm.
dc.description.sponsorship: This work was supported in part by Hong Kong Research Grants Council RGC/HKBU12254316 and a Hong Kong Baptist University Tier 1 Start-up Grant. The work of H. Zhou was supported by the UK EPSRC under Grant EP/N011074/1, a Royal Society-Newton Advanced Fellowship under Grant NA160342, and the European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No. 720325. The work of B. Zhong was supported by the National Natural Science Foundation of China under Grant 61572205.
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.rights: Copyright © 2019, Institute of Electrical and Electronics Engineers (IEEE). Deposited with reference to the publisher's open access archiving policy. (http://www.rioxx.net/licenses/all-rights-reserved)
dc.subject: Multimodal sensor fusion
dc.subject: Tracking system
dc.subject: Video surveillance system
dc.title: Learning Modality-Consistency Feature Templates: A Robust RGB-Infrared Tracking System
dc.type: Journal Article
dc.identifier.doi: 10.1109/TIE.2019.2898618
dc.description.status: Peer-reviewed
dc.description.version: Post-print
dc.type.subtype: Article
pubs.organisational-group: /Organisation
pubs.organisational-group: /Organisation/COLLEGE OF SCIENCE AND ENGINEERING
pubs.organisational-group: /Organisation/COLLEGE OF SCIENCE AND ENGINEERING/Department of Informatics
dc.dateaccepted: 2019-01-18
Appears in Collections: Published Articles, Dept. of Computer Science

Files in This Item:
File: 91640.pdf
Description: Post-review (final submitted author manuscript)
Size: 4.14 MB
Format: Adobe PDF


Items in LRA are protected by copyright, with all rights reserved, unless otherwise indicated.