
Decompositions Based on Power Reflection Matrices

In the previous section we discussed deterministic scatterers, which are described completely by a single constant scattering matrix [S] or scattering vector $ \vec{k}$. For remote sensing applications the assumption of purely deterministic scatterers is not always valid. The resolution cell is usually much larger than the wavelength, and natural terrain surfaces contain many spatially distributed scattering centers, so the measured [S]-matrix for a given resolution cell is the coherent superposition of all scattering contributions within that cell. To deal with such statistical scattering effects and with the analysis of extended scatterers, the concept of power reflection matrices is traditionally preferred. The classical and first decomposition of this class was presented by Huynen [2] in terms of an average single-target component matrix and a distributed residue component matrix. In recent years, however, approaches based on the coherence and covariance matrices have been preferred because of their more compact form. The coherence and covariance matrices as well as the Müller/Kennaugh matrices all belong to this class of power reflection matrices. They carry identical information, only in different representations, and can therefore be transformed into one another without loss of information. Depending on the desired information, one representation or the other may offer advantages. For radar remote sensing applications, however, the coherence matrix representation is usually preferred, because its analysis by means of the Cloude decomposition theorem is straightforward. We shall therefore focus on the coherence matrix and the Cloude decomposition theorem in the following.

The Cloude decomposition theorem, which is capable of covering the whole range of scattering mechanisms, is based on the eigenvector analysis of the coherence matrix [T] and was first presented by Cloude [Cloude85], [Cloude92]. This approach has the important advantage of being automatically basis invariant, due to the invariance of the eigenvalue problem under unitary transformations. Because the coherence matrix [T] is a Hermitian, positive semidefinite matrix, it can always be diagonalized by a unitary similarity transformation [Cloude97]. That is, the coherency matrix may be written as

$\displaystyle \left<[T]\right>=[U_3][\Lambda][U_3]^{\dagger}=[U_3] \left[ \begin{array}{ccc} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{array} \right] [U_3]^{\dagger} \qquad \textrm{where}$
$\displaystyle \left[U_3 \right] = \left[ \begin{array}{ccc} \cos(\alpha_1) & \cos(\alpha_2) & \cos(\alpha_3) \\ \sin(\alpha_1)\cos(\beta_1)e^{i\delta_1} & \sin(\alpha_2)\cos(\beta_2)e^{i\delta_2} & \sin(\alpha_3)\cos(\beta_3)e^{i\delta_3} \\ \sin(\alpha_1)\sin(\beta_1)e^{i\gamma_1} & \sin(\alpha_2)\sin(\beta_2)e^{i\gamma_2} & \sin(\alpha_3)\sin(\beta_3)e^{i\gamma_3} \end{array} \right]$     (4.17)

where $ [\Lambda]$ is the diagonal eigenvalue matrix of [T], $ \lambda_1 \ge \lambda_2 \ge \lambda_3 \ge 0$ are the real eigenvalues, and $ [U_3]$ is a unitary matrix whose columns correspond to the orthonormal eigenvectors $ \vec{e}_1, \vec{e}_2, \vec{e}_3$ of [T].
Thus the coherence matrix [T] can be decomposed into a sum of three coherence matrices $ [T_n]$, each weighted by its corresponding eigenvalue.

$\displaystyle [T] = \sum_{n=1}^3 \lambda_n [T_n] = \lambda_1 (\vec{e}_1 \cdot \vec{e}_1^{\dagger}) + \lambda_2 (\vec{e}_2 \cdot \vec{e}_2^{\dagger}) + \lambda_3 (\vec{e}_3 \cdot \vec{e}_3^{\dagger})$     (4.18)

Each matrix $ [T_n] = \vec{e}_n \cdot \vec{e}_n^{\dagger}$ is a rank-one coherence matrix representing a deterministic scattering contribution. The strength of each contribution is given by the eigenvalue $ \lambda_n$, while the type of scattering is determined by the corresponding eigenvector.
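As an illustration of how this eigendecomposition can be carried out numerically, the following minimal sketch uses NumPy; it is not part of the original derivation, and the example coherence matrix as well as all variable names are arbitrary illustrative choices.

import numpy as np

# Arbitrary Hermitian, positive semidefinite example coherence matrix <[T]>
T = np.array([[2.0,        0.5 + 0.2j, 0.1j],
              [0.5 - 0.2j, 1.0,        0.3 ],
              [-0.1j,      0.3,        0.5 ]])

# eigh handles Hermitian matrices: real eigenvalues, orthonormal eigenvectors
lam, U = np.linalg.eigh(T)
order = np.argsort(lam)[::-1]           # enforce lambda_1 >= lambda_2 >= lambda_3
lam, U = lam[order], U[:, order]

# Reconstruct [T] as the weighted sum of rank-one matrices [T_n] (Eq. 4.18)
T_rec = sum(lam[n] * np.outer(U[:, n], U[:, n].conj()) for n in range(3))
assert np.allclose(T, T_rec)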

Two important physical features arising from this decomposition were introduced by Cloude and Pottier [Cloude97]. The first is the polarimetric scattering entropy H, a global measure of the distribution of the components of the scattering process, defined as

$\displaystyle H=-P_1 \log_{3} P_1 - P_2 \log_{3} P_2 - P_3 \log_{3} P_3 \qquad \textrm{where} \qquad P_i=\frac{\lambda_{i}}{\sum_{j=1}^{3} \lambda_j}$     (4.19)

where $ P_i$ may be interpreted as the relative intensity of a scattering process 'i'. Due to this definition H is restricted to the interval 0 $ \leq$ H $ \leq$ 1, where H = 0 indicates that [T] has only one nonzero eigenvalue and represents one deterministic scattering process, while H = 1 means that all $ \lambda_n$ are equal.
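A corresponding sketch for the entropy of Eq. (4.19), reusing lam from the decomposition example above (again an illustrative snippet, not the author's implementation):

P = lam / lam.sum()                      # relative intensities P_i
# Terms with P_i = 0 are taken as zero (the limit of p*log(p) as p -> 0).
H = -sum(p * np.log(p) / np.log(3) for p in P if p > 0)
# The base-3 logarithm keeps H within [0, 1]: H = 0 for a single nonzero
# eigenvalue, H = 1 for three equal eigenvalues.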

Since the entropy is mainly an indicator of the relationship between $ \lambda _1$ and the two other eigenvalues $ \lambda_2$, $ \lambda_3$, no direct information about the relation between the latter can be extracted from the entropy. The second feature, the polarimetric anisotropy A introduced by Pottier [Pottier98a],

$\displaystyle A=\frac {\lambda_2 - \lambda_3}{\lambda_2 + \lambda_3}$ (4.20)

closes this gap. For high entropy the anisotropy yields no additional information, since in that case the eigenvalues are nearly equal ( $ H \cong 1 \rightarrow \lambda_1 \cong \lambda_2 \cong \lambda_3$). For very low entropy the minor eigenvalues $ \lambda_2$ and $ \lambda_3$ are close to zero. For low or medium entropy ( $ \lambda_1 > \lambda_2 , \lambda_3$), however, the entropy yields no information about the relation between the two minor eigenvalues $ \lambda_2 , \lambda_3$; in that case the anisotropy contains additional information. A medium entropy indicates that more than one scattering mechanism contributes to the signal, but not whether one or two additional scattering mechanisms are present. A high anisotropy indicates that only the second scattering mechanism is significant, while a low anisotropy indicates that the third scattering mechanism also plays a role. Possible applications of these features for classification purposes are given in [Pottier98b] and shall be discussed in the following chapter.
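The anisotropy of Eq. (4.20) follows directly from the two minor eigenvalues; a short sketch under the same assumptions as the snippets above:

# lam is assumed sorted in descending order, as in the decomposition sketch.
A = (lam[1] - lam[2]) / (lam[1] + lam[2])
# As discussed above, A adds information mainly in the low-to-medium entropy
# regime; for very low entropy both minor eigenvalues approach zero and the
# ratio becomes numerically unreliable.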