Antenna Beamforming using Support Vector Machines


Abstract

Support Vector Machines (SVMs) offer improved generalization performance over classical optimization techniques. Here, we introduce an SVM-based approach to linear array processing and beamforming. We present the development of a modified cost function and show how it can be applied to the problem of linear beamforming. Finally, comparison examples are included to show the validity of the new minimization approach.


Introduction

Support Vector Machines (SVMs) have shown a clear advantage over classical approaches in prediction, regression, and estimation across a wide range of applications, owing to their improved generalization performance. Here we introduce a framework for applying the SVM approach to linear antenna array processing.

Antenna array signal processing involves complex signals, for which a complex formulation of the SVM is needed. We obtain this formulation by introducing the real and imaginary parts of the error into the primal optimization and then proceeding as usual to solve a complex-valued constrained optimization problem. The algorithm is formulated using a modified cost function which includes the numerical regularization that is in any case unavoidable in the optimization. Adjusting the parameters of this cost function improves the robustness of the method against thermal and multiuser noise.

The resulting algorithm is a natural counterpart of the real-valued Support Vector Regressor and can be applied immediately to array signal processing.

We use the derived formulation as the optimizer for a linear beamformer. Several experiments illustrate the advantage of the SVM over minimum mean squared error (MMSE) based algorithms, owing to its robustness against outliers and its improved generalization ability. The first experiments compare the behavior of both algorithms in an environment in which interfering signals arrive close to the desired ones, thus producing non-Gaussian noise. The last experiment illustrates the improved generalization ability of the SVM when small data sets are used for training, a common situation in communications applications.


The Support Vector Approach and the Cost Function

Let <math>\mathbf{x}[n]</math> be spatially sampled data. A linear beamformer can be written as

<math>d[n]=\mathbf{w}^T \mathbf{x}[n]+\epsilon[n] \qquad\qquad (1)</math>

where <math>{\mathbf{x}}[n]</math> is the vector of the <math>M</math> array element outputs, <math>d[n]</math> is the desired (reference) signal, and <math>\epsilon[n]</math> is the output error.

The coefficients <math>\mathbf{w}</math> are usually estimated through the minimization of a certain cost function of <math>\epsilon[n]</math>.
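
As a concrete illustration of the model (1), and of estimating <math>\mathbf{w}</math> by minimizing a cost function on <math>\epsilon[n]</math>, the following Python sketch fits the weights with an ordinary least-squares cost. The half-wavelength uniform linear array, the chosen angle of arrival, and names such as <code>steering_vector</code> are illustrative assumptions and do not come from the article.

<pre>
import numpy as np

def steering_vector(theta, M):
    """Array response of a half-wavelength uniform linear array toward angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

M, N = 6, 64                                        # array elements, snapshots
rng = np.random.default_rng(0)

d = rng.choice([-1.0, 1.0], N).astype(complex)      # desired signal d[n]
a = steering_vector(np.deg2rad(10.0), M)            # desired direction of arrival
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, d) + noise                          # array output vectors x[n], one per column

# Estimate w by minimizing a quadratic cost on epsilon[n] (ordinary least squares).
w, *_ = np.linalg.lstsq(X.T, d, rcond=None)
eps = d - X.T @ w                                   # output error epsilon[n] from (1)
print("mean squared error:", np.mean(np.abs(eps) ** 2))
</pre>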

The SVM approach can be applied to the adjustment of this model. The main idea of the SVM is to obtain the solution with the minimum norm of <math>\mathbf{w}</math>. Due to the minimization of the weight vector norm, the solution will be regularized in the sense of Tikhonov [1], improving the generalization performance. For the solution not to be trivial, the minimization has to be subject to the constraints

<math>d[n]-{\mathbf{w}}^T {\mathbf{x}}[n] \leq \varepsilon + \xi_n</math>
<math>-d[n]+{\mathbf{w}}^T {\mathbf{x}}[n]\leq \varepsilon + \xi'_n</math>
<math>\xi_n, \xi'_n \geq 0 </math>

Here <math>\xi_n</math> and <math>\xi'_n</math> are the "slack" variables, or losses. The optimization minimizes a cost function over these variables. The parameter <math>\varepsilon</math> allows those <math>\xi_n</math> or <math>\xi'_n</math> for which the error is less than <math>\varepsilon</math> to be zero. This is equivalent to the minimization of the so-called <math>\varepsilon</math>-insensitive or Vapnik loss function [2], given by

<math>L_{\varepsilon}(e)=
\begin{cases} |e|-\varepsilon & |e| \geq \varepsilon\\ 0 & |e| < \varepsilon
\end{cases}</math>
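
For reference, the <math>\varepsilon</math>-insensitive loss can be implemented directly. This is a minimal sketch; the function name <code>vapnik_loss</code> is ours, not from the article.

<pre>
import numpy as np

def vapnik_loss(e, eps):
    """Epsilon-insensitive (Vapnik) loss: zero inside the eps tube, linear outside it."""
    e = np.abs(np.asarray(e, dtype=float))
    return np.where(e < eps, 0.0, e - eps)

# Errors inside the tube contribute nothing to the cost.
print(vapnik_loss([0.05, 0.2, -0.5], eps=0.1))   # [0.  0.1 0.4]
</pre>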

The functional to be minimized is then

<math>L_p=||{\mathbf{w}}||^2+C\sum_n L_\varepsilon({\xi_n},{\xi'_n})</math>

subject to <math>\xi_n, \xi'_n \geq 0</math>, where <math>C</math> is the trade-off parameter between the minimization of the norm (to improve the generalization ability) and the minimization of the errors [2].

The optimization of the above constrained problem through the Lagrange multipliers <math>\alpha_i</math>, <math>\alpha'_i</math> leads to the dual formulation [3]

<math>L_d=-({\boldsymbol \alpha}-{\boldsymbol \alpha'})^T{\mathbf{R}}({\boldsymbol \alpha}-{\boldsymbol \alpha'})+({\boldsymbol \alpha}-{\boldsymbol \alpha'})^T{\mathbf{y}}-({\boldsymbol \alpha}+{\boldsymbol \alpha'})^T{\mathbf{1}}\varepsilon</math>

to be maximized with respect to <math>({\alpha}_i-{\alpha'}_i)</math>, where <math>\mathbf{y}</math> is the vector of desired outputs <math>d[n]</math>.

The dual involves the Gram matrix <math>\mathbf{R}</math> of the dot products of the data vectors <math>\mathbf{x}[n]</math>. This matrix may be singular, which produces an ill-conditioned problem. To avoid this numerical inconvenience, a small diagonal matrix <math>\gamma {\mathbf{I}}</math> is added to <math>\mathbf{R}</math> prior to the numerical optimization.
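
The dual objective and the regularized Gram matrix are straightforward to write down for real-valued data. The sketch below evaluates <math>L_d</math> for given multipliers; the name <code>dual_objective</code> is illustrative, and solving the constrained maximization itself (a quadratic program) is omitted.

<pre>
import numpy as np

def dual_objective(alpha, alpha_p, X, y, eps, gamma):
    """Evaluate L_d for real-valued snapshots X (one column per x[n]) and targets y."""
    R = X.T @ X + gamma * np.eye(X.shape[1])    # Gram matrix of dot products, plus gamma*I
    beta = alpha - alpha_p                      # (alpha - alpha')
    return -beta @ R @ beta + beta @ y - eps * np.sum(alpha + alpha_p)
</pre>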

We present here a modified derivation of the SVM regressor which leads to a more convenient equivalent cost function (Fig. 1):

<math>L_{R}(e)=
\begin{cases} 0 & |e|<\varepsilon\\ \frac{1}{2\gamma}(|e|-\varepsilon)^2 & \varepsilon \leq |e| \leq e_C\\ C(|e|-\varepsilon)-\frac{1}{2}\gamma C^2 & e_C \leq |e|
\end{cases}</math>

where <math>e_C=\varepsilon+\gamma C</math>.
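
A direct implementation of <math>L_R</math> makes the three zones explicit (a sketch; <code>robust_cost</code> is an illustrative name). Note that at <math>|e|=e_C</math> both the quadratic and the linear branches equal <math>\tfrac{1}{2}\gamma C^2</math>, so the cost is continuous.

<pre>
import numpy as np

def robust_cost(e, eps, gamma, C):
    """Modified SVM cost: insensitive zone, quadratic zone, then linear zone."""
    a = np.abs(np.asarray(e, dtype=float))
    e_c = eps + gamma * C                       # boundary between quadratic and linear zones
    quad = (a - eps) ** 2 / (2.0 * gamma)       # applied for eps <= |e| <= e_c
    lin = C * (a - eps) - 0.5 * gamma * C ** 2  # applied for |e| >= e_c
    return np.where(a < eps, 0.0, np.where(a <= e_c, quad, lin))
</pre>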

[[File:Figure 1 Beamforming.ps|thumb|Figure 1: The modified (robust) cost function <math>L_R(e)</math>.]]


This cost function provides a functional that is numerically regularized through the matrix <math>\gamma \mathbf{I}</math>. As can be seen, the cost function is quadratic for the data which produce errors between <math>\varepsilon</math> and <math>e_C</math>, and linear for errors above <math>e_C</math>. Thus, one can adjust the parameter <math>e_C</math> so that a quadratic cost is applied to the samples which are mainly affected by thermal noise (i.e., for which the quadratic cost is maximum likelihood). The linear cost is then applied to the samples which are outliers [4], [5]. With a linear cost, the contribution of an outlier to the solution does not depend on its error value, but only on its sign, thus avoiding the bias that a quadratic cost would produce.

Finally, we generalize the derivation to the complex-valued case, as is necessary for array processing.
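
One way to carry this out, as outlined in the introduction, is to treat the real and imaginary parts of the error separately, which recasts the complex model as an equivalent real-valued regression. The sketch below illustrates that recasting only; the name <code>to_real_problem</code> is ours and the full complex SVM derivation is not reproduced here.

<pre>
import numpy as np

def to_real_problem(X, d):
    """Recast d[n] = w^T x[n] + e[n] with complex data as a real-valued regression.

    X : (M, N) complex array of snapshots, one column per x[n]
    d : (N,) complex vector of desired outputs

    Returns a real design matrix A and target vector b such that the real
    regression b = A [Re(w); Im(w)] + error is equivalent to the complex model.
    """
    Xr, Xi = X.real, X.imag
    # Re(w^T x) pairs (Re w, Im w) with (Re x, -Im x); Im(w^T x) pairs them with (Im x, Re x).
    A = np.vstack([np.hstack([Xr.T, -Xi.T]),
                   np.hstack([Xi.T,  Xr.T])])     # (2N, 2M) real design matrix
    b = np.concatenate([d.real, d.imag])          # (2N,) real targets
    return A, b
</pre>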