Element Failure Diagnostics for Planar Antenna Arrays using Support Vector Machines


Introduction

With the increasing demands on the range and capacity of wireless communication systems, growing attention has been given to the design and implementation of robust, self-recovering antenna systems. A malfunction of one or more radiating elements deforms the overall radiation pattern of the antenna array. The goal here is to detect the elements that have failed and then compensate for their failure to restore the original radiation pattern.

Many papers have dealt with the issue of antenna array failure correction and compensation or recovery [1] [2] [3]. In this wiki, a Support Vector Machine (SVM)-based multi-class classifier is used to match any possible deformed radiation pattern with a unique spatial distribution of failed elements, which means that we can locate those faulty elements within the antenna array based on real-time measurements of the radiated field.

In our case, we assume that each element in the antenna array can take only one of two states, {1, 0}, i.e., the antenna element is either “ON” or “OFF”. Even with this assumption, diagnosing an array of practical size involves an extremely large number of possible failure modes: an <math>N \times M</math> array has <math>2^{N \times M}</math> of them, about <math>1.2 \times 10^{24}</math> for the <math>8 \times 10</math> array considered below.

The content of this wiki is organized as follows: In Section II, we introduce the array factor for a 2-dimensional array, from which 2-D radiation patterns with randomly failed elements can be extracted. In Sections III and IV, the basics of SVM theory and the mechanism of combining binary SVCs to form multi-class SVCs are presented. In Section V, we show how the multi-class SVC is applied to distinguish the different patterns when the antenna array has faulty elements.

Radiation Patterns for Partially Failed Planar Antenna Arrays

Let us consider an <math>8 \times 10</math> planar antenna array with element spacing of <math>0.25 \lambda</math> in both the x and y directions (see Fig. 1) and a progressive phase shift of <math>\beta_x=20^\circ</math> along the x dimension and <math>\beta_y=0^\circ</math> along y.

Fig. 1 Spatial distribution of an 8 by 10 planar antenna array.

Using finite element analysis, the radiation pattern of the planar antenna array is shown below.

Fig. 2  3-D radiation pattern of the Array Factor for the 8 by 10 planar array.

The array factor for this antenna is given by:

<math> AF_{M \times N}(\theta,\phi)=I_0\sum_{m=1}^{M}e^{j(m-1)(kd_x\sin\theta \cos\phi+\beta_x)}\sum_{n=1}^{N}e^{j(n-1)(kd_y\sin\theta \sin\phi +\beta_y)} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(1) </math>

where <math>d_x</math> and <math>d_y</math> are the distances of separation between the elements along x and y, respectively. Equation (1) can also be rewritten in normalized form as:

<math> AF_{M \times N}(\theta,\phi)=I_0\left\{\frac{\sin(\frac{M}{2}\Psi_x)}{M\sin(\frac{\Psi_x}{2})}\right\}\left\{\frac{\sin(\frac{N}{2}\Psi_y)}{N\sin(\frac{\Psi_y}{2})}\right\} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(2) </math>

where

<math> \Psi_x=kd_x \sin\theta \cos\phi +\beta_x \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(3) </math>

<math> \Psi_y=kd_y \sin\theta \sin\phi + \beta_y \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(4) </math>
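
As a quick numerical check, the following minimal MATLAB sketch (an illustration, assuming <math>\lambda = 1</math> so that <math>k = 2\pi</math>, and the array parameters above) evaluates the double sum of equation (1) and the product form of equation (2) without the <math>1/(MN)</math> normalization at one sample observation angle; the two magnitudes agree.

% Minimal sketch: compare the double-sum AF (1) with the closed form (2)
% (without the 1/(M*N) normalization) at one observation angle; lambda = 1.
M = 10; N = 8;                  % elements along x and y
d_x = 0.25; d_y = 0.25;         % spacing in wavelengths
beta_x = 20*pi/180; beta_y = 0; % progressive phase shifts
k = 2*pi;                       % wavenumber for lambda = 1
theta = pi/3; phi = pi/4;       % sample observation angle
Psi_x = k*d_x*sin(theta)*cos(phi) + beta_x;  % eq. (3)
Psi_y = k*d_y*sin(theta)*sin(phi) + beta_y;  % eq. (4)
AF_sum  = sum(exp(1i*(0:M-1)*Psi_x)) * sum(exp(1i*(0:N-1)*Psi_y));        % eq. (1), I0 = 1
AF_form = (sin(M*Psi_x/2)/sin(Psi_x/2)) * (sin(N*Psi_y/2)/sin(Psi_y/2));  % eq. (2) times M*N
fprintf('|AF| from (1): %.6f, from (2): %.6f\n', abs(AF_sum), abs(AF_form));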

The idea is to arrange the element contributions of the AF into an M by N matrix and then express the AF of a partially failed array as an inner product between this AF matrix and a fail mode matrix. Here we assume isotropic radiating elements for simplicity. The 2-D patterns (polar and rectangular) for different phi-plane cuts are shown below.

Fig. 3  2-D polar radiation patterns (elevation plane) of the Array.
Fig. 4  2-D rectangular radiation patterns of the Array.

Next, we randomly pick some radiating elements within this array to fail. The empty spots in Fig. 5 denote the antenna elements that have failed.

Fig. 5 Spatial distribution of a partially failed 8 by 10 planar antenna array.

If we use “1” to represent the elements that still radiate and “0” for the failed elements, then we obtain a matrix, labeled the fail mode matrix, expressed as:

<math> fail\_mode=\begin{bmatrix}

1&  1&  1&  1&  1&  1&  0&  0&  0& 0\\ 
1&  1&  1&  1&  1&  1&  0&  0&  0& 0\\ 
1&  1&  1&  1&  1&  1&  0&  0&  0& 0\\ 
1&  1&  1&  1&  1&  1&  0&  0&  0& 0\\ 
0&  0&  0&  0&  0&  0&  0&  1&  1& 1\\ 
0&  0&  0&  0&  0&  0&  0&  1&  1& 1\\ 
1&  1&  1&  1&  1&  1&  1&  1&  1& 1\\ 
1&  1&  0&  0&  0&  0&  0&  1&  0& 0

\end{bmatrix} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(5) </math>

The inner product between this fail mode matrix and the AF matrix described above is used to determine the radiation pattern of the antenna array with failed elements. Both the 3-D and 2-D radiation patterns for this case are shown below:

Fig. 6    3-D radiation pattern of the Array Factor for the 8 by 10 partially failed planar array.
Fig. 7   2-D elevation-plane polar radiation pattern of the Array Factor for the partially failed 8 by 10 array.
Fig. 8  2-D elevation-plane radiation pattern of the Array Factor for the partially failed 8 by 10 array.
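
To make the inner product concrete, here is a minimal MATLAB sketch (an illustration reusing the angle and parameters of the sketch above, with a hypothetical single failed element rather than the full pattern of Fig. 5). It forms the AF matrix as an outer product of the x and y phase vectors and takes its Frobenius inner product with the fail mode matrix:

% Minimal sketch: AF of a partially failed array at one observation angle.
% The outer product of the per-element phase vectors gives the M-by-N AF
% matrix; the element-wise product with fail_mode followed by a double sum
% (a Frobenius inner product) removes the failed elements' contributions.
fail_mode = ones(M, N);         % 1 = radiating, 0 = failed
fail_mode(3, 5) = 0;            % hypothetical example: fail element (3,5)
AF_matrix = exp(1i*(0:M-1).'*Psi_x) * exp(1i*(0:N-1)*Psi_y); % M-by-N AF matrix
AF_failed = sum(sum(AF_matrix .* fail_mode));                % inner product
fprintf('|AF| healthy: %.4f, with failure: %.4f\n', abs(AF_sum), abs(AF_failed));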

One can of course use other radiating elements, such as dipoles, monopoles, etc. The above example and patterns shed some light on how faulty radiating elements in an array change the overall radiation pattern. Next, the Support Vector Classifier is introduced.

Support Vector Classifier

The simplest type of SVC problem is the 2-class classification problem. All the input data in this case belong exclusively to one of two classes, class I or class II. In our particular case (On or Off elements), this means that by diagnosing the detected radiation field (input data), we can judge the current status (output data) of an element to be either “working” (class I) or “failed” (class II). We express the input data as n-dimensional vectors <math>\overline{x_i}(i=1,2,3...)</math> and assign each input vector a label <math>y_i \in \{+1,-1\}</math> to indicate which class the <math>i</math>th input belongs to. For the case of linearly separable data, given a group of verified input-output data pairs <math>\{(x_1,y_1),(x_2,y_2),...,(x_N,y_N)\}</math>, we make use of these known pairs to construct (or “train”) a classifier function <math>f(\overline{x})</math>, which maps newly arriving i.i.d. (independent and identically distributed) input vectors <math>\overline{x_i}</math> into the space of labels, <math>\{+1,-1\}</math>, with minimized error.

The separating boundary <math><\overline{w},\overline{x}>+b=0</math> is a hyperplane in n-dimensional space; its left-hand side is positive if the input <math>\overline{x}</math> belongs to the positive class and negative if it belongs to the negative class. Thus <math>f(\overline{x})</math> can be made a decision function of the following form:

<math> f(\overline{x})=sgn(<\overline{w},\overline{x}>+b)=sgn(\sum_iw_ix_i+b) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(6) </math>

where <math>\overline{w}</math> is an n-dimensional vector and <math>b</math> is a scalar. Note that, <math><\overline{w},\overline{x_i}></math> represents a dot product between the vector <math>\overline{w}</math> and <math>\overline{x_i}</math>. The weighting vector <math>\overline{w}</math> defines the direction of the separation hyperplane <math>f(\overline{x})</math> and the bias <math>b</math> defines the hyperplane's distance from the origin.

Fig. 9  Optimal Hyperplane (Support Vectors are in bold).

From Fig. 9, one can see that among all hyperplanes separating the data, there exists a unique optimal hyperplane, distinguished by the maximum margin of separation between any training point and the hyperplane; that is, it maximizes:

<math> min\{\left \| \overline{x}-\overline{x_i} \right \|\ | \overline{x} \in H, <\overline{w},\overline{x}>+b=0,i=1,...,N\} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(7) </math>

The capacity of the class of separating hyperplanes decreases with increasing margin, and we always need to restrict the capacity of our function class to be small enough (compared with the available amount of training data) to obtain good generalization for our predictions [4]. Thus, to construct the optimal hyperplane, we have to solve:


<math> Minimize \; \tau(\overline{w})=\frac{1}{2}\left \| \overline{w} \right \|^2 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(8) </math>

<math> Subject \;to\; y_i(<\overline{w},\overline{x_i}>+b) \geq 1 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(9) </math>

The function <math>\tau</math> is called the objective function, and equation (9) represents the inequality constraints. Together they form a so-called constrained optimization problem. Problems of this kind are dealt with by introducing Lagrange multipliers <math>\alpha_i \geq 0</math> and the Lagrangian:

<math> L(\overline{w},b,\overline{\alpha})=\frac{1}{2}\left \| \overline{w} \right \|^2-\sum_{i=1}^{N}\alpha_i(y_i(<\overline{x_i},\overline{w}>+b)-1)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(10) </math>

The original problem is then transformed into minimizing <math>L</math> with respect to the primal variables <math>\overline{w}</math> and <math>b</math> and, simultaneously, maximizing <math>L</math> with respect to the dual variables <math>\alpha_i</math>. At the saddle point, we have:

<math> \frac{\partial}{\partial b}L(\overline{w},b,\overline{\alpha})=0 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(11) </math>

<math> \frac{\partial}{\partial \overline{w}}L(\overline{w},b,\overline{\alpha})=0 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(12) </math>

which yields:

<math> \sum_{i=1}^{N}\alpha_iy_i=0 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(13) </math>

<math> \overline{w}=\sum_{i=1}^{N}\alpha_iy_i\overline{x_i} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(14) </math>

According to the Karush-Kuhn-Tucker (KKT) complementarity conditions of optimization theory [5], equation (14) means that the solution vector <math>\overline{w}</math> has an expansion in terms of a subset of the training data, namely those inputs with non-zero <math>\alpha_i</math>, called Support Vectors (SVs). By substituting equations (13) and (14) into the Lagrangian (10), we can eliminate the primal variables <math>\overline{w}</math> and <math>b</math> and obtain the so-called dual optimization problem, which is the one usually solved in practice:

<math> Maximize \;\; \sum_{i=1}^{N}\alpha_i-\frac{1}{2}\sum_{i,j=1}^{N} \alpha_i \alpha_j y_i y_j <\overline{x_i},\overline{x_j}> \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(15) </math>

<math> Subject\;to\;\; \alpha_i \geq 0 \;for\;all\;i=1,...,N\;and\;\sum_{i=1}^{N}\alpha_iy_i=0 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(16) </math>

Solving the dual problem (15)-(16) for <math>\overline{\alpha}=(\alpha_1,...,\alpha_N)</math> yields the support vectors for classes I and II. The optimal separating hyperplane is then placed at equal distances from the support vectors of the two classes. The hyperplane decision function can thus be written as:

<math> f(\overline{x})=sgn(\sum_{i=1}^{N}y_i\alpha_i<\overline{x},\overline{x_i}>+b) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(17) </math>

where the bias <math>b</math> can be computed as:

<math> b=-\frac{1}{2}\sum_{i=1}^{N}y_i\alpha_i(<\overline{s_1},\overline{x_i}>+<\overline{s_2},\overline{x_i}>) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(18) </math>

<math>\overline{s_1}</math> and <math>\overline{s_2}</math> are arbitrary support vectors for class I and class II, respectively.
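
For illustration, the following minimal MATLAB sketch (a toy example, assuming the Optimization Toolbox's quadprog and linearly separable data; the bias is computed here from the KKT conditions on the support vectors, which is equivalent to eq. (18)) solves the dual problem (15)-(16) and evaluates the decision function (17):

% Minimal sketch: hard-margin linear SVC trained by solving the dual (15)-(16).
X = [2 2; 2 3; 3 3; 0 0; 0 1; 1 0];   % toy training inputs, one sample per row
y = [1; 1; 1; -1; -1; -1];            % class labels in {+1,-1}
n = numel(y);
H = (y*y') .* (X*X');                 % H_ij = y_i*y_j*<x_i,x_j>
f = -ones(n, 1);                      % maximizing (15) = minimizing -sum(alpha) + quadratic term
% quadprog minimizes 0.5*a'*H*a + f'*a  subject to  y'*a = 0 and a >= 0, i.e. eq. (16)
alpha = quadprog(H, f, [], [], y', 0, zeros(n,1), []);
sv = alpha > 1e-6;                    % support vectors: non-zero multipliers
w = X' * (alpha .* y);                % eq. (14)
b = mean(y(sv) - X(sv,:)*w);          % bias from the KKT conditions on the SVs
x_new = [2.5 2.5];
label = sign(x_new*w + b);            % decision function, eqs. (6)/(17)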

So far, our discussion has been based on the assumption that the training data are linearly separable, but this is not always the case. For cases where the training data cannot be linearly separated, we introduce the Kernel Trick: the input data <math>x_1,...,x_N \in X</math> are nonlinearly transformed into a high-dimensional feature space by a map <math>\Phi</math>, where a linear separation can then be achieved. An example is given in Fig. 10.

Fig. 10 Linearly inseparable case in <math>R^2</math> becomes separable after non-linear transform <math>\Phi</math>.

After doing the non-linear transformation <math>\Phi</math>, our decision function becomes:

<math> f(\overline{x})=sgn(\sum_{i=1}^{N}\alpha_iy_i<\Phi(\overline{x}),\Phi(\overline{x_i})>+b) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(19) </math>

In an SVM, the actual form of this non-linear mapping <math>\Phi</math> need not be known. What we really require is the computation of dot products <math><\Phi(\overline{x}),\Phi(\overline{x_i})></math> in the higher-dimensional space. These calculations are reduced significantly by applying a positive definite kernel <math>k</math> such that

<math> <\Phi(\overline{x}),\Phi(\overline{x_i})>=k(\overline{x},\overline{x_i}) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(20) </math>

and the decision function then takes the form:

<math> f(\overline{x})=sgn(\sum_{i=1}^{N}\alpha_iy_ik(\overline{x},\overline{x_i})+b) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(21) </math>

The learning process in the feature space then does not involve <math>\Phi</math> explicitly. Newly arriving i.i.d. input data are classified as Class I if <math>f(\overline{x})=+1</math> and Class II if <math>f(\overline{x})=-1</math>. A detailed analysis of how to construct the optimal separating hyperplane for linearly inseparable cases can be found in [5].
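
As an illustration of equations (20)-(21), here is a minimal MATLAB sketch of the kernelized decision function with a Gaussian RBF kernel, reusing the trained alpha, y, X, b, and x_new from the sketch above (the kernel width sigma is a hypothetical choice):

% Minimal sketch: kernel decision function (21) with an RBF kernel
% k(x, x_i) = exp(-||x - x_i||^2 / (2*sigma^2)).
sigma = 1;                                           % hypothetical kernel width
k_vals = exp(-sum((X - x_new).^2, 2) / (2*sigma^2)); % k(x_new, x_i) for every training x_i
label = sign(sum(alpha .* y .* k_vals) + b);         % eq. (21)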

Multi-class SVC

So far we have talked about binary classification, where the class labels can take only two values: +1 or -1. In fault diagnostics of antenna arrays, there are usually far too many faulty classes for a simple 2-class classifier to handle, so a multi-class classifier is needed. Here we show how a majority voting approach can be applied to combine several 2-class SVCs into one multi-class SVC.

First, we construct several binary pairwise classifiers, each of which separates only two faulty classes at a time. For a total of <math>M</math> faulty classes (including the healthy class), we end up with <math>\frac{(M-1)M}{2}</math> pairwise classifiers covering all pairs of classes. Each classifier is trained on the subset of the training data containing only the examples of those two particular classes. When the number of faulty classes is large, this may produce a large number of binary SVCs; however, the individual problems that we need to deal with are significantly smaller.

The final classification result for this M-class problem is based on the results of all <math>\frac{(M-1)M}{2}</math> 2-class classifications: the input is assigned to the class that receives the highest number of votes, where a vote for a class is a classifier putting the input into that class. This is called the majority voting approach.

We define the decision function for class i against class j to be:

<math> f_{ij}(\overline{x})=<\overline{w_{ij}},\overline{x}>+b_{ij} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(22) </math>

Note that we also define:

<math> f_{ij}(\overline{x})=-f_{ji}(\overline{x}) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(23) </math>

The class that wins the maximum number of votes among all classifiers becomes the final decision. For class i we get the decision function:

<math> f_i(\overline{x})=\sum_{j=1}^Msgn(f_{ij}(\overline{x})) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(24) </math>

and the final decision is made in the following way:

<math> class=arg(max_{i=1,..,M}f_i(\overline{x})) \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(25) </math>
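
A minimal MATLAB sketch of this voting scheme (22)-(25), assuming a hypothetical cell array f_pair in which f_pair{i,j} (for i &lt; j) holds a trained pairwise decision function returning the signed score <math>f_{ij}(\overline{x})</math> for an input x_new:

% Minimal sketch: one-vs-one majority voting over M classes, eqs. (22)-(25).
M_classes = 4;                              % hypothetical number of classes
votes = zeros(M_classes, 1);
for ii = 1:M_classes-1
    for jj = ii+1:M_classes
        s = f_pair{ii,jj}(x_new);           % f_ij(x), eq. (22)
        if s >= 0                           % f_ij(x) > 0: a vote for class ii
            votes(ii) = votes(ii) + 1;
        else                                % f_ji = -f_ij (eq. (23)): a vote for class jj
            votes(jj) = votes(jj) + 1;
        end
    end
end
[~, class_label] = max(votes);              % final decision, eq. (25)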

Applying the Multi-class SVC to Faulty Pattern Diagnosis

As discussed earlier, sensors arranged in the far field around the antenna array measure the real-time radiation intensity. We can therefore construct the training set for the multi-class SVC by manually setting the status of the array and measuring the field intensity, which yields pairwise data relating the different patterns to the status of the radiating elements. This training set originally appears in the form of a three-dimensional matrix, which is impractical to feed into the SVC directly; however, a properly chosen kernel function can map these inseparable measurements from the input space into a higher-dimensional feature space, where the data become separable and an optimal hyperplane can be found by applying the algorithm described in the last two sections. The whole process is shown in Fig. 11 below:

Fig. 11  The process for multi-class SVC diagnosis of array element failures.

The training set will be huge even if an excellent kernel function has been selected; however, the multi-class SVC can be trained by training the binary SVCs individually, which simplifies the problem.

As a simple example, we can train one of the binary SVCs. Suppose its training set consists of 100 data pairs, all i.i.d. and drawn from a normal distribution with mean zero. If we choose an RBF (radial basis function) kernel and set the penalty factor of the SVM soft-margin formulation to 10, we obtain the classification result shown in the figure below:

Fig. 12  Classification result of a binary SVC using an RBF kernel, MATLAB toolbox provided by [6].
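
A minimal sketch of such a binary training run, assuming MATLAB's Statistics and Machine Learning Toolbox (fitcsvm) rather than the toolbox of [6], with 100 normally distributed samples (shifted here so the two classes are visibly separable) and penalty factor C = 10:

% Minimal sketch: train one binary SVC with an RBF kernel and C = 10.
rng(0);                                          % reproducible synthetic data
Xtrain = [randn(50,2) + 1.5; randn(50,2) - 1.5]; % 100 iid Gaussian samples
ytrain = [ones(50,1); -ones(50,1)];              % two classes of 50 samples each
mdl = fitcsvm(Xtrain, ytrain, 'KernelFunction', 'rbf', 'BoxConstraint', 10);
label = predict(mdl, [0.5 0.5]);                 % classify a new input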

Conclusions

A multi-class SVC was applied to fault diagnostics of planar antenna arrays. The antenna patterns of arrays with failed elements were derived by taking the inner product of the array factor matrix with the fail mode matrix, which encodes the exact position of every failed element in the array. One of the most important and challenging problems is finding an appropriate kernel function that maps the input space into a higher-order feature space so as to best increase the margin between the different classes. Examples and MATLAB code are also presented.

Student Project

Run the MATLAB code below for a 4 by 4 element array using finite dipoles as the radiating elements.

• Plot the 2-D and 3-D patterns for the array antenna as shown in the example above.

• Now assume several elements are not operating (Off) in the antenna array.

• Plot the new 2-D and 3-D radiation patterns and compare.

• Using the methodology explained above and the radiation patterns that you have produced, determine the elements that have failed.

Wiki Assessment

What is the function of a support vector classifier?

What is the difference between a 2-class classifier and a multi-class classifier?

Why do we need a multi-class classifier for antenna array problems?

What is the Kernel Trick and why do we need it?

What is the difference between linear and non-linear SVCs?

What is the majority decision vote?

What other methods can one use to determine the number and position of failed elements in an antenna array?

References and Resources

  1. J. Redvik, “Simulated annealing optimization applied to antenna arrays with failed elements,” IEEE Antennas and Propagation Society International Symposium, vol. 1, July 1999.
  2. M. Levitas, D. A. Horton, and T. C. Cheston, “Practical failure compensation in active phased arrays,” IEEE Transactions on Antennas and Propagation, vol. 47, no. 3, March 1999.
  3. G. Castaldi, V. Pierro, and I. M. Pinto, “Neural net aided fault diagnostics of large antenna arrays,” IEEE Antennas and Propagation Society International Symposium, vol. 4, July 1999.
  4. B. Scholkopf and A. J. Smola, “Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond,” Chapter 1, The MIT Press, Cambridge, Massachusetts, 2002.
  5. B. Scholkopf and A. J. Smola, “Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond,” Chapters 6-7, The MIT Press, Cambridge, Massachusetts, 2002.
  6. SVM toolbox provided by Fernando Pérez-Cruz, http://www.tsc.uc3m.es/~fernando/.

Appendix: MATLAB Program

%====================================================================================
% This program plots the 2-D and 3-D far-field radiation patterns of a planar antenna
% array of any size, with user-defined faulty elements within the array (assuming an
% On-Off fault, i.e., a faulty element does not radiate at all). The user can also
% choose the radiating element to be one of three dipole types according to the
% user-defined element length L:
%   1. Infinitesimal dipole (L <= 0.02*lambda)
%   2. Small dipole         (0.02*lambda < L <= 0.1*lambda)
%   3. Finite dipole        (0.1*lambda < L <= lambda)
%====================================================================================


clear all;

N_x = input('Number of elements in x = ?? ');
N_y = input('Number of elements in y = ?? ');
d_x = input('Element spacing (in wavelengths) in x = ?? ');
d_y = input('Element spacing (in wavelengths) in y = ?? ');
beta_x_d = input('Progressive phase shift in x (in degrees) = ?? ');
beta_y_d = input('Progressive phase shift in y (in degrees) = ?? ');
L = input('Length of each array element in wavelengths, between 0 and 1: ');
%======================== input parameters setup =============================

I0 = 1;                   % maximum current value, uniform magnitude
beta_x = beta_x_d*pi/180; % convert degrees to radians
beta_y = beta_y_d*pi/180; % convert degrees to radians
eta = 120*pi;             % intrinsic impedance of free space
R = 100*L;                % distance between far-field observation point and array
M = 300;                  % number of segments dividing the theta angle (0 to pi)
K = 2*M;                  % number of segments dividing the phi angle (0 to 2*pi)
% together, M and K define an (M by K) mesh describing the radiation pattern
%==================== constants and derived variables ========================

fail_mode = ones(N_x, N_y); % spatial map of faulty elements (1 = on, 0 = off)
while input('Any more elements in the array failed? (1---Yes, 0---No) ') == 1
    x_pos = input('x index of failed element: ');
    y_pos = input('y index of failed element: ');
    fail_mode(x_pos, y_pos) = 0;
end
fprintf('The antenna array with faulty elements is:');
fail_mode
%================ user-defined array with failed elements ====================


for theta_ = 0:M % divide theta into M levels (0 to pi) to generate the plotting mesh
    if L <= 0.02 % element type: infinitesimal dipole
        EF = eta*2*pi*L*I0.*sin(theta_/M*pi)./(4*pi*R); % infinitesimal dipole element factor
    elseif (0.02 < L) && (L <= 0.1) % element type: small dipole
        EF = eta*2*pi*L*I0.*sin(theta_/M*pi)./(8*pi*R); % small dipole element factor
    elseif (0.1 < L) && (L <= 1) % element type: finite dipole
        % the tiny constant in the denominator avoids division by zero at sin(theta) = 0
        EF = eta*I0.*((cos(pi*L*cos(theta_/M*pi))-cos(pi*L))./(sin(theta_/M*pi)+(1.9763e-323)))/(2*pi*R); % finite dipole element factor
        % for elements along the z direction the EF is exactly as above;
        % for elements perpendicular to the z axis, replace sin(theta_/M*pi) by cos(theta_/M*pi)
    else
        fprintf('*********** ERROR **********\n');
        fprintf('Warning: please enter a correct element length!\n');
        break
    end
    %============================ element type decided ========================
    for phi_ = 0:K % divide phi into K levels (0 to 2*pi) to generate the plotting mesh
        thi_x = 2*pi*d_x*sin(theta_/M*pi)*cos(2*phi_/K*pi) + beta_x;
        thi_y = 2*pi*d_y*sin(theta_/M*pi)*sin(2*phi_/K*pi) + beta_y;
        for n = 1:N_x
            X_AF(n) = exp(1i*(n-1)*thi_x);
        end
        for n = 1:N_y
            Y_AF(n) = exp(1i*(n-1)*thi_y);
        end
        % note the non-conjugate transpose .' below: a plain ' would conjugate
        % the complex exponentials and mirror the pattern in x
        Total_AF = sum(sum(((X_AF).'*(Y_AF)).*fail_mode)); % array factor with failed elements
        radint = abs(abs(Total_AF)*EF); % total radiation intensity via pattern multiplication
        r = theta_ + 1; % row index of coordinate
        c = phi_ + 1;   % column index of coordinate
        % convert spherical coordinates into rectangular coordinates
        X(r,c) = radint.*sin(theta_/M*pi).*cos(2*phi_/K*pi);
        Y(r,c) = radint.*sin(theta_/M*pi).*sin(2*phi_/K*pi);
        Z(r,c) = radint.*cos(theta_/M*pi);
        % record phi, rho and theta to construct the polar plots afterward
        phi(r,c) = 2*phi_/K*pi;
        rho(r,c) = radint;
        theta(r,c) = theta_/M*pi;
    end
end

r1 = max(max(rho)); % maximum value, used to normalize rho in the plots
X1 = X'; Y1 = Y'; Z1 = Z';

surfl(X/r1, Y/r1, Z/r1); grid; shading flat;
axis equal
colormap(gray); hold on;
surfl(X1/r1, Y1/r1, Z1/r1); grid; shading flat;
axis equal
colormap(gray); hold off
title('3-D radiation pattern of the planar array with failed elements; element type specified by user');
xlabel('x-axis'); ylabel('y-axis'); zlabel('z-axis');
%=============== 3-D plotting of the total radiation pattern ==================


%===========================================================================
% Polar plots of the AF for phi = pi/2, 3*pi/4, pi and 5*pi/4 respectively,
% taking the fail mode into account
%===========================================================================
figure(2);
for jj = 2:5
    phi = jj*pi/4;
    THETA = -pi + (0:299)*2*pi/299; % 300 theta samples from -pi to pi
    if L <= 0.02 % infinitesimal dipole
        EF = eta*2*pi*L*I0.*sin(THETA)./(4*pi*R); % infinitesimal dipole element factor
    elseif (0.02 < L) && (L <= 0.1) % small dipole
        EF = eta*2*pi*L*I0.*sin(THETA)./(8*pi*R); % small dipole element factor
    elseif (0.1 < L) && (L <= 1) % finite dipole
        EF = eta*I0.*((cos(pi*L*cos(THETA))-cos(pi*L))./(sin(THETA)+(1.9763e-323)))/(2*pi*R); % finite dipole element factor
    end
    thi_x = 2*pi*d_x*sin(THETA)*cos(phi) + beta_x; % psi for the x dimension
    thi_y = 2*pi*d_y*sin(THETA)*sin(phi) + beta_y; % psi for the y dimension
    % construct the array factor exponential series for the x dimension
    for n = 1:N_x
        for j = 1:300
            AF_x(1,n,j) = exp(1i*(n-1)*thi_x(j));
        end
    end
    % construct the array factor exponential series for the y dimension
    for m = 1:N_y
        for j = 1:300
            AF_y(1,m,j) = exp(1i*(m-1)*thi_y(j));
        end
    end
    % calculate the 2-D array factor: for a column vector A (N by 1) and a row
    % vector B (1 by M), sum(A)*sum(B) = sum(sum(A*B)); we form the matrix
    % (AF_x.')*AF_y, take its element-wise product with fail_mode, and sum to
    % obtain the AF with failed elements (.' avoids conjugating the exponentials)
    AF = zeros(N_x, N_y, 300);
    for j = 1:300
        AF_x_transpose(:,:,j) = AF_x(:,:,j).';
        AF(:,:,j) = (AF_x_transpose(:,:,j)*AF_y(:,:,j)).*fail_mode;
        AF_final(j) = abs(sum(sum(AF(:,:,j))));
    end
    RAD_INTENSITY = abs(AF_final.*EF); % pattern multiplication
    subplot(2,2,jj-1)
    polar(THETA, RAD_INTENSITY/max(RAD_INTENSITY))
    title(['2-D radiation pattern of the phi = ', num2str(jj), '*pi/4 plane'])
end

%===========================================================================
% Rectangular plots of the AF for phi = pi/2, 3*pi/4, pi and 5*pi/4
% respectively, taking the fail mode into account
%===========================================================================
figure(3);
for jj = 2:5
    phi = jj*pi/4;
    THETA = -pi + (0:299)*2*pi/299; % 300 theta samples from -pi to pi
    if L <= 0.02 % infinitesimal dipole
        EF = eta*2*pi*L*I0.*sin(THETA)./(4*pi*R); % infinitesimal dipole element factor
    elseif (0.02 < L) && (L <= 0.1) % small dipole
        EF = eta*2*pi*L*I0.*sin(THETA)./(8*pi*R); % small dipole element factor
    elseif (0.1 < L) && (L <= 1) % finite dipole
        EF = eta*I0.*((cos(pi*L*cos(THETA))-cos(pi*L))./(sin(THETA)+(1.9763e-323)))/(2*pi*R); % finite dipole element factor
    end
    thi_x = 2*pi*d_x*sin(THETA)*cos(phi) + beta_x; % psi for the x dimension
    thi_y = 2*pi*d_y*sin(THETA)*sin(phi) + beta_y; % psi for the y dimension
    % construct the array factor exponential series for both dimensions
    for n = 1:N_x
        for j = 1:300
            AF_x(1,n,j) = exp(1i*(n-1)*thi_x(j));
        end
    end
    for m = 1:N_y
        for j = 1:300
            AF_y(1,m,j) = exp(1i*(m-1)*thi_y(j));
        end
    end
    % 2-D array factor with failed elements, computed as in the polar section above
    AF = zeros(N_x, N_y, 300);
    for j = 1:300
        AF_x_transpose(:,:,j) = AF_x(:,:,j).';
        AF(:,:,j) = (AF_x_transpose(:,:,j)*AF_y(:,:,j)).*fail_mode;
        AF_final(j) = abs(sum(sum(AF(:,:,j))));
    end
    RAD_INTENSITY = abs(AF_final.*EF); % pattern multiplication
    subplot(2,2,jj-1)
    plot(THETA, RAD_INTENSITY/max(RAD_INTENSITY))
    title(['2-D radiation pattern of the phi = ', num2str(jj), '*pi/4 plane'])
    grid on;
end
%=======================2-D plot===========================