Please use this identifier to cite or link to this item: http://hdl.handle.net/2381/38711
Title: Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning
Authors: Gorban, A. N.
Mirkes, E. M.
Zinovyev, A.
First Published: 30-Aug-2016
Publisher: Elsevier for European Neural Network Society (ENNS), International Neural Network Society (INNS), Japanese Neural Network Society (JNNS)
Citation: Neural Networks, 2016, 84, pp. 28-38
Abstract: Most machine learning approaches stem from the principle of minimizing the mean squared distance, which relies on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals exhibit many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited the properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0<p<1). The drawback of these approaches is an increased computational cost of optimization. So far, no approach has been suggested to handle arbitrary error functionals in a flexible and computationally efficient framework. In this paper, we develop a theory and basic universal data approximation algorithms (k-means, principal components, principal manifolds and graphs, regularized and sparse regression) based on piece-wise quadratic error potentials of subquadratic growth (PQSQ potentials). We develop a new and universal framework to minimize arbitrary sub-quadratic error potentials using an algorithm with guaranteed fast convergence to a local or global error minimum. The theory of PQSQ potentials is based on the notion of the cone of minorant functions and represents a natural approximation formalism grounded in min-plus algebra. The approach can be applied in most existing machine learning methods, including methods of data approximation and regularized and sparse regression, improving the computational cost/accuracy trade-off. We demonstrate on synthetic and real-life datasets that PQSQ-based machine learning methods achieve orders of magnitude faster computational performance than the corresponding state-of-the-art methods, with similar or better approximation accuracy.
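As a concrete illustration of the construction described in the abstract, the following minimal Python sketch builds a trimmed piece-wise quadratic (PQSQ) approximation of a subquadratic potential (here the L1 potential f(x) = |x|) from a set of interval thresholds, and evaluates it as a pointwise minimum over the quadratic pieces, in the min-plus spirit of the cone of minorant functions. The function names, threshold choices, and interpolation details are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def pqsq_coefficients(f, thresholds):
        # Quadratic coefficients (a_k, b_k) so that each piece
        # b_k + a_k * x**2 interpolates f at consecutive thresholds
        # 0 = r_0 < r_1 < ... < r_p (an assumed construction).
        r = np.asarray(thresholds, dtype=float)
        a = np.zeros(len(r))
        b = np.zeros(len(r))
        for k in range(len(r) - 1):
            a[k] = (f(r[k + 1]) - f(r[k])) / (r[k + 1] ** 2 - r[k] ** 2)
            b[k] = f(r[k]) - a[k] * r[k] ** 2
        a[-1] = 0.0          # last piece is flat beyond r_p ("trimming")
        b[-1] = f(r[-1])
        return a, b

    def pqsq(x, a, b):
        # Evaluate the potential as the pointwise minimum over all
        # quadratic pieces (min-plus form); valid for potentials that
        # are concave as functions of x**2, such as |x|.
        x = np.atleast_1d(np.abs(x))
        pieces = b[:, None] + a[:, None] * x[None, :] ** 2
        return pieces.min(axis=0)

    # Example: an L1-like potential approximated by four pieces.
    a, b = pqsq_coefficients(abs, [0.0, 0.5, 1.0, 2.0])
    print(pqsq(np.array([0.25, 0.75, 1.5, 5.0]), a, b))

Note that the last piece stays constant beyond the largest threshold, which is what makes the potential trimmed and hence insensitive to large outliers, while each quadratic piece keeps optimization as cheap as in the standard least-squares setting.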
DOI Link: 10.1016/j.neunet.2016.08.007
ISSN: 0893-6080
eISSN: 1879-2782
Links: http://www.sciencedirect.com/science/article/pii/S0893608016301113
http://hdl.handle.net/2381/38711
Version: Pre-print
Status: Peer-reviewed
Type: Journal Article
Rights: Creative Commons Attribution Non-Commercial No Derivatives (CC BY-NC-ND) licence; further details are available at: http://creativecommons.org/licenses/by-nc-nd/4.0/ Archived with reference to SHERPA/RoMEO and the publisher's website.
Appears in Collections:Published Articles, Dept. of Mathematics

Files in This Item:
File: 1605.06276v2.pdf
Description: Pre-review (submitted draft)
Size: 478.03 kB
Format: Adobe PDF


Items in LRA are protected by copyright, with all rights reserved, unless otherwise indicated.