
Accelerated learning of Generalized Sammon Mappings

Title: Accelerated learning of Generalized Sammon Mappings
Publication Type: Conference Paper
Year of Publication: 2011
Authors: Huang Y, Georgiopoulos M, Anagnostopoulos GC
Conference Name: The 2011 International Joint Conference on Neural Networks (IJCNN)
Date Published: July
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Conference Location: San Jose, California, USA
Keywords: accelerated learning, acceleration, convergence, data visualization, dimensionality reduction, Euclidean inter-sample distances, exploratory data analysis, generalized Sammon mappings, inter-sample dissimilarities, iterative majorization, kernel, kernel Sammon mapping, learning (artificial intelligence), manifold learning, parallel tangents acceleration, projection space, prototypes, standard gradient-based methods, stress, successive over-relaxation, training
Abstract

The Sammon Mapping (SM) has established itself as a valuable tool in dimensionality reduction, manifold learning, exploratory data analysis and, particularly, in data visualization. The SM projects high-dimensional data into a low-dimensional space, so that they can be visualized and interpreted. This is accomplished by representing inter-sample dissimilarities in the original space by Euclidean inter-sample distances in the projection space. Recently, Kernel Sammon Mapping (KSM) has been shown to subsume the SM and a few other related extensions to SM. Both of the aforementioned models feature a set of linear weights that are estimated via Iterative Majorization (IM). While IM is significantly faster than other standard gradient-based methods, tackling data sets of larger-than-moderate size becomes a challenging learning task, as IM's convergence slows down significantly with increasing data set cardinality. In this paper we derive two improved training algorithms, based on Successive Over-Relaxation (SOR) and Parallel Tangents (PARTAN) acceleration, that, while still being first-order methods, exhibit faster convergence than IM. Both algorithms are relatively easy to understand, straightforward to implement and, performance-wise, as robust as IM. We also present comparative results that illustrate their computational advantages on a set of benchmark problems.
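Since the abstract outlines both the objective (the Sammon stress) and the flavor of the acceleration (PARTAN extrapolation on top of a first-order update), here is a minimal sketch of those two ideas in Python. This is a hypothetical illustration, not the paper's method: the authors accelerate Iterative Majorization updates of the models' linear weights, whereas the sketch below applies plain gradient descent directly to the point configuration, uses a fixed step size `lr`, and replaces PARTAN's line search with a fixed extrapolation factor `beta`. It also assumes all input samples are distinct, so no dissimilarity is zero.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sammon_stress_and_grad(D_star, Y):
    """Sammon stress and its gradient for a low-dimensional configuration Y,
    given the matrix of original-space dissimilarities D_star."""
    D = squareform(pdist(Y))            # Euclidean distances in projection space
    mask = ~np.eye(len(Y), dtype=bool)  # all off-diagonal (i != j) pairs
    c = D_star[mask].sum() / 2.0        # normalizing constant over pairs i < j
    stress = ((D_star[mask] - D[mask]) ** 2 / D_star[mask]).sum() / (2.0 * c)
    A = np.zeros_like(D)
    A[mask] = (D_star[mask] - D[mask]) / (D_star[mask] * D[mask])
    grad = (-2.0 / c) * (A.sum(axis=1)[:, None] * Y - A @ Y)
    return stress, grad

def sammon_partan(D_star, dim=2, iters=200, lr=0.1, beta=0.5, seed=0):
    """Gradient descent on the Sammon stress with a PARTAN-style
    extrapolation through the iterate from two steps back."""
    rng = np.random.default_rng(seed)
    Y_prev = rng.standard_normal((len(D_star), dim))
    _, g = sammon_stress_and_grad(D_star, Y_prev)
    Y = Y_prev - lr * g                 # one plain step to initialize the pair
    for _ in range(iters):
        _, g = sammon_stress_and_grad(D_star, Y)
        Z = Y - lr * g                  # ordinary first-order step
        # True PARTAN line-searches along the ray from Y_prev through Z;
        # the fixed factor beta stands in for that search here.
        Y_prev, Y = Y, Z + beta * (Z - Y_prev)
    return Y

# Example: embed 100 random 10-dimensional points into the plane.
X = np.random.default_rng(1).random((100, 10))
Y = sammon_partan(squareform(pdist(X)))
```

With `beta = 0` the loop reduces to ordinary gradient descent, which makes the effect of the extrapolation easy to compare; in practice both `lr` and `beta` would need tuning to the scale of the dissimilarities.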

Notes

Acceptance rate: 75% (468/620).

DOI: 10.1109/IJCNN.2011.6033609
