Please use this identifier to cite or link to this item: http://hdl.handle.net/2381/45168
Title: How Deep Should be the Depth of Convolutional Neural Networks: a Backyard Dog Case Study
Authors: Gorban, Alexander N.
Mirkes, Evgeny M.
Tyukin, Ivan Y.
First Published: 7-Aug-2019
Publisher: Springer (part of Springer Nature)
Citation: Cognitive Computation, 2019
Abstract: The work concerns the problem of reducing a pre-trained deep neural network to a smaller network, with just a few layers, whilst retaining the network’s functionality on a given task. In this particular case study, we focus on networks developed for the purposes of face recognition. The proposed approach is motivated by the observation that the aim to deliver the highest accuracy possible in the broadest range of operational conditions, which many deep neural network models strive to achieve, may not always be needed, desired or even achievable due to the lack of data or technical constraints. In relation to the face recognition problem, we formulated an example of such a use case, the ‘backyard dog’ problem. The ‘backyard dog’, implemented by a lean network, should correctly identify members from a limited group of individuals, a ‘family’, and should distinguish between them. At the same time, the network must produce an alarm in response to an image of an individual who is not a member of the family, i.e. a ‘stranger’. To produce such a lean network, we propose a network shallowing algorithm. The algorithm takes an existing deep learning model on its input and outputs a shallowed version of the model. The algorithm is non-iterative and is based on advanced supervised principal component analysis. Performance of the algorithm is assessed in exhaustive numerical experiments. Our experiments revealed that in the above use case, the ‘backyard dog’ problem, the method is capable of drastically reducing the depth of deep learning neural networks, albeit at the cost of mild performance deterioration. In this work, we proposed a simple non-iterative method for shallowing down pre-trained deep convolutional networks. The method is generic in the sense that it applies to a broad class of feed-forward networks, and is based on advanced supervised principal component analysis.
The method enables the generation of families of smaller, shallower, specialized networks, tuned for specific operational conditions and tasks, from a single larger and more universal legacy network.
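To make the idea concrete, the sketch below illustrates one way a supervised-PCA-style projection could replace the deep tail of a network: intermediate features from a truncated network are projected onto directions that separate the classes, and a simple classifier operates in the reduced space. This is a hypothetical simplification for illustration (using eigenvectors of the between-class scatter matrix), not the authors' exact algorithm; the function names and the nearest-centroid classifier are assumptions.

```python
import numpy as np

def supervised_pca_projection(X, y, n_components):
    """Hypothetical simplified supervised PCA: project features X
    (n_samples x n_features) onto the leading eigenvectors of the
    between-class scatter matrix, so class means are well separated."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    n_features = X.shape[1]
    S_b = np.zeros((n_features, n_features))
    for c in classes:
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mu)[:, None]
        S_b += len(Xc) * (d @ d.T)  # weighted between-class scatter
    # S_b is symmetric, so eigh applies; take the top components.
    vals, vecs = np.linalg.eigh(S_b)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order]  # (n_features, n_components) projection matrix

def shallow_classify(X_train, y_train, X_test, n_components):
    """Stand-in for the shallowed network's head: project features,
    then assign each test point to the nearest class centroid."""
    W = supervised_pca_projection(X_train, y_train, n_components)
    Z_train, Z_test = X_train @ W, X_test @ W
    classes = np.unique(y_train)
    centroids = np.stack([Z_train[y_train == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(Z_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
```

In the 'backyard dog' setting, a rejection rule (e.g. a distance threshold to the nearest 'family' centroid) would additionally flag 'strangers'; that thresholding step is omitted here for brevity.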
DOI Link: 10.1007/s12559-019-09667-7
ISSN: 1866-9956
eISSN: 1866-9964
Links: https://link.springer.com/article/10.1007%2Fs12559-019-09667-7
http://hdl.handle.net/2381/45168
Version: Publisher Version
Status: Peer-reviewed
Type: Journal Article
Rights: Copyright © the authors, 2019. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Appears in Collections:Published Articles, Dept. of Mathematics

Files in This Item:
File: Gorban2019_Article_HowDeepShouldBeTheDepthOfConvo.pdf
Description: Published (publisher PDF)
Size: 466.99 kB
Format: Adobe PDF


Items in LRA are protected by copyright, with all rights reserved, unless otherwise indicated.