Accelerated learning of Generalized Sammon Mappings
|Title|Accelerated learning of Generalized Sammon Mappings|
|Publication Type|Conference Paper|
|Year of Publication|2011|
|Authors|Huang Y, Georgiopoulos M, Anagnostopoulos GC|
|Conference Name|The 2011 International Joint Conference on Neural Networks (IJCNN)|
|Publisher|Institute of Electrical and Electronics Engineers (IEEE)|
|Conference Location|San Jose, California, USA|
|Keywords|accelerated learning, acceleration, convergence, data visualization, dimensionality reduction, Euclidean inter-sample distances, exploratory data analysis, generalized Sammon mappings, inter-sample dissimilarities, iterative majorization, kernel, kernel Sammon mapping, learning (artificial intelligence), manifold learning, parallel tangents acceleration, projection space, prototypes, standard gradient-based methods, stress, successive over-relaxation, training|
The Sammon Mapping (SM) has established itself as a valuable tool in dimensionality reduction, manifold learning, exploratory data analysis and, particularly, data visualization. The SM projects high-dimensional data into a low-dimensional space, where they can be visualized and interpreted, by representing inter-sample dissimilarities in the original space as Euclidean inter-sample distances in the projection space. Recently, the Kernel Sammon Mapping (KSM) has been shown to subsume the SM and several related extensions of it. Both models feature a set of linear weights that are estimated via Iterative Majorization (IM). While IM is significantly faster than standard gradient-based methods, its convergence slows down markedly with increasing data set cardinality, which makes training on data sets of larger than moderate size a challenging task. In this paper we derive two improved training algorithms based on Successive Over-Relaxation (SOR) and Parallel Tangents (PARTAN) acceleration, which, while still being first-order methods, exhibit faster convergence than IM. Both algorithms are easy to understand, straightforward to implement and, performance-wise, as robust as IM. We also present comparative results that illustrate their computational advantages on a set of benchmark problems.
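For readers who want the shape of the method, here is a minimal sketch of PARTAN-accelerated first-order descent applied to the classical Sammon stress. It is illustrative only: the function names (`sammon_stress`, `sammon_stress_grad`, `partan_sammon`), the fixed step size, and the 0.5 extrapolation factor are assumptions made for this sketch; the paper's actual contribution applies SOR and PARTAN to the IM updates of the KSM's linear weights, which is not reproduced here.

```python
import numpy as np

def sammon_stress(D_star, Y):
    """Sammon stress: normalized mismatch between the original
    dissimilarities D_star and Euclidean distances in the projection Y."""
    D = np.sqrt(((Y[:, None] - Y[None]) ** 2).sum(-1))
    iu = np.triu_indices_from(D_star, k=1)
    d_star, d = D_star[iu], D[iu]
    return ((d_star - d) ** 2 / d_star).sum() / d_star.sum()

def sammon_stress_grad(D_star, Y, eps=1e-9):
    """Gradient of the Sammon stress with respect to the projection Y."""
    diff = Y[:, None, :] - Y[None, :, :]               # pairwise difference vectors
    D = np.sqrt((diff ** 2).sum(-1)) + np.eye(len(Y))  # eye() guards the zero diagonal
    c = D_star[np.triu_indices_from(D_star, k=1)].sum()
    W = (D_star - D) / (np.maximum(D_star, eps) * D)   # per-pair weights
    np.fill_diagonal(W, 0.0)                           # no self-interactions
    return (-2.0 / c) * (W[:, :, None] * diff).sum(axis=1)

def partan_sammon(D_star, Y0, step=0.5, iters=300):
    """PARTAN-style acceleration of plain gradient descent: each cycle
    pairs a gradient step with an extrapolation along the direction from
    the iterate two steps back (the 'parallel tangent'). The step size
    and 0.5 extrapolation factor are ad hoc choices for this sketch."""
    Y_prev, Y = Y0, Y0 - step * sammon_stress_grad(D_star, Y0)
    for _ in range(iters):
        Z = Y - step * sammon_stress_grad(D_star, Y)   # gradient step
        Y_prev, Y = Y, Z + 0.5 * (Z - Y_prev)          # acceleration step
    return Y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 10))                      # toy high-dimensional data
    D_star = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    Y0 = rng.normal(size=(60, 2))                      # random 2-D initialization
    Y = partan_sammon(D_star, Y0)
    print(sammon_stress(D_star, Y0), "->", sammon_stress(D_star, Y))
```

The toy run only shows the mechanics of the extrapolation step; the paper's experiments instead compare SOR- and PARTAN-accelerated IM against plain IM on benchmark problems.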
Acceptance rate: 75% (468/620).