Exercise

In order to gain some practice in SAR processing, in this chapter a SAR raw data set is processed. The provided data are already range compressed, so only the azimuth focussing step is necessary. In this exercise we use the Mount Britney raw data set, which can be found in the 'data' section of Epsilon Nought (8.7MB download). The data are simulated with parameters typical for an airborne sensor, i.e. a relatively low flight altitude and velocity. Nevertheless, for simplification, range cell migration was not simulated and does not have to be taken into account during the processing. All programming examples are given in the language IDL, but I would be happy if someone provided an alternative formulation in C.

Warning: To process the demo data set with the presented programs, you might need around 100MB of memory. Of course, more memory-efficient programming is possible.

Reading and visualizing the raw data

The first step is of course to read the data correctly from disk and to have a first look at them. A common technique to store complex data efficiently is an interleaved I/Q (real/imaginary) byte format: for every complex value, one byte is stored for its real part and one byte for its imaginary part. This consumes only 2 bytes instead of 8 or 16 bytes per complex value. For compatibility between different platforms, we always use the XDR (IEEE) data representation.

The data file starts with a header of two long integer values (4 bytes each), describing the dimensions of the raw data in range and azimuth. We start by opening the raw data file, defining the two long integer variables size_rg and size_az, reading them and printing their values.

   openr,lun,'mt_britney.dat',/xdr,/get_lun
   size_rg = 0l
   size_az = 0l 
   readu,lun,size_rg
   readu,lun,size_az
   print,'Range   size :',size_rg
   print,'Azimuth size :',size_az

If everything is correct, you should get size_rg = 2048 and size_az = 4096. With this information we can now generate a complex array to hold the raw data, as well as a temporary byte array used to read one line of the interleaved byte data from disk. It has twice the length of size_rg because it holds the real parts as well as the imaginary parts of the raw data.

   line = bytarr(2*size_rg)
   data = complexarr(size_rg,size_az)

With a for-loop we can now read the data line by line and put them into the complex array data. The problem here is to correctly separate the interleaved real and imaginary parts to construct the complex values. To do so, we first define two index arrays index_q and index_i which contain the index values of the real and imaginary bytes, respectively.

        index_q = findgen(size_rg)*2
        index_i = findgen(size_rg)*2+1

In a descriptive way, index_q contains [0,2,4,...] while index_i contains [1,3,5,...]. Additionally, the I/Q offset of 128 has to be subtracted from both the real and the imaginary part. This is because bytes can hold only positive values, and in order to convert the measured raw data to bytes, this offset has been added.

   re   = line[index_q]-128.0
   im   = line[index_i]-128.0
   data = complex(re,im)

Finally, here is the whole program to read the raw data file into the variable data:

   openr,lun,'mt_britney.dat',/xdr,/get_lun   ; open raw data file (XDR format)
   size_rg = 0l
   size_az = 0l
   readu,lun,size_rg                          ; read header: range size
   readu,lun,size_az                          ; read header: azimuth size
   line = bytarr(2*size_rg)                   ; buffer for one interleaved I/Q line
   data = complexarr(size_rg,size_az)         ; array holding the complex raw data
   index_q = findgen(size_rg)*2               ; positions of the real-part bytes
   index_i = findgen(size_rg)*2+1             ; positions of the imaginary-part bytes
   for i=0,size_az-1 do begin
      readu,lun,line                          ; read one interleaved line
      re = line[index_q]-128.0                ; remove I/Q offset
      im = line[index_i]-128.0
      data[*,i] = complex(re,im)              ; build the complex values
   endfor
   free_lun,lun

Now we can have a look at the data. As they are complex, it is appropriate to take the absolute value before visualizing. An image can be generated with tv,bytscl(rebin(abs(data),512,512)). The result is depicted in Fig. 4.1. Obviously, not many structures can be recognized in the image yet. Of course not, as the data are still completely unfocused in azimuth, with a resolution of only slightly below one kilometer. In range direction the image already has a high resolution of 1.5 meters, which can be recognized on some linear structures along azimuth.
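
For reference, here is this visualization step as a small snippet. The explicit creation of a 512x512 display window is an addition for convenience; the scaling itself is exactly the command given above:

   window, xsize=512, ysize=512               ; open a 512 x 512 display window
   tv, bytscl(rebin(abs(data), 512, 512))     ; amplitude, downsampled and byte-scaled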

\includegraphics [scale=0.5]{raw_brit.eps}

Figure 4.1: Amplitude of range-compressed raw data

Azimuth focussing

Now the data should be compressed in azimuth. To do so, some sensor/imaging parameters are necessary:

   wavelength            0.056           [m]
   sensor velocity       100             [m/s]
   sensor height         3000            [m]
   range distance        4000            [m]
   PRF                   500             [Hz]
   range sampling        $1\cdot 10^8$   [Hz]
   azimuth resolution    0.5             [m]

With this information, first of all, we should calculate the length of the synthetic aperture in meters and pixels that is necessary to obtain the given azimuth resolution. Since the achievable azimuth resolution is $\delta_{az} = \lambda r / (2 L_{SA})$, the required aperture length is $L_{SA} = \lambda r / (2 \delta_{az})$; the number of pixels follows from the sensor velocity and the PRF:

   lambda = 0.056
   r      = 4000.0
   v      = 100.0   
   prf    = 500.0
   res_az = 0.50        
   len_sa = lambda*r/2/res_az
   nr_sa  = floor(len_sa/v*prf)
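
As a quick numerical check, these two lines evaluate with the given parameters to

$L_{SA} = \frac{\lambda r}{2\,\delta_{az}} = \frac{0.056 \cdot 4000}{2 \cdot 0.5} = 224\ \mathrm{m}$, $\qquad N_{SA} = \left\lfloor \frac{L_{SA}}{v}\,\mathrm{PRF} \right\rfloor = \left\lfloor \frac{224}{100} \cdot 500 \right\rfloor = 1120$ pixels,

i.e. the synthetic aperture is 224 m long and covers 1120 azimuth samples at the given PRF.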

The next step is to generate the correct reference function. It can be calculated from the imaging geometry. The azimuth positions $x$ of every pixel inside the synthetic aperture are:

   x = (findgen(nr_sa)-nr_sa/2)*len_sa/nr_sa

As we know the range distance, the true sensor-scatterer distance for every azimuth position is obtained as follows (the shift by nr_sa/2 moves the center of the aperture to index zero, which is needed for the wrap-around arrangement used below):

  
   dr = shift(sqrt(r^2+x^2),nr_sa/2)

This distance corresponds to a two-way phase, given first as a real value and then in its complex representation:

  
   phase = 4*!pi/lambda*dr
   cpha  = exp(complex(0,phase))

The array cpha already represents the desired reference function. But as we want to use the FOURIER-domain formulation for efficiency reasons, we have to perform a so-called zero-padding to expand this function to the full azimuth length of the data. The two halves of the reference function are placed at the very beginning and the very end of the padded array, so that it stays centered at index zero, as required for the circular convolution performed in the FOURIER domain:

  
   ref = complexarr(size_az)
   ref[0:nr_sa/2-1] = cpha[0:nr_sa/2-1]
   ref[size_az-nr_sa/2:size_az-1] = cpha[nr_sa/2:*]

The rest is quite simple: we just have to multiply the spectrum of every azimuth line with the conjugate spectrum of the reference function and then perform an inverse FOURIER-transform:

  
   for i=0,size_rg-1 do begin
      data[i,*] = fft(fft(data[i,*],-1)*conj(fft(ref,-1)),1)    
   endfor
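
A small remark on efficiency (this variant is only a sketch, not part of the original listing): the spectrum of the reference function does not change inside this loop, so it can be computed once beforehand:

   ref_spec = conj(fft(ref,-1))                ; reference spectrum, computed once
   for i=0,size_rg-1 do begin
      data[i,*] = fft(fft(data[i,*],-1)*ref_spec,1)
   endfor

In the range-adaptive version presented further below this shortcut no longer applies, as the reference function has to be recomputed for every range line.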

Finally, again the whole program:

   lambda = 0.056
   r      = 4000.0
   v      = 100.0   
   prf    = 500.0
   res_az = 0.50        
   len_sa = lambda*r/2/res_az
   nr_sa  = floor(len_sa/v*prf)

   x = (findgen(nr_sa)-nr_sa/2)*len_sa/nr_sa
   dr = shift(sqrt(r^2+x^2),nr_sa/2)
   phase = 4*!pi/lambda*dr
   cpha  = exp(complex(0,phase))        
   ref = complexarr(size_az)
   ref[0:nr_sa/2-1] = cpha[0:nr_sa/2-1]
   ref[size_az-nr_sa/2:size_az-1] = cpha[nr_sa/2:*]             

   for i=0,size_rg-1 do begin
      data[i,*] = fft(fft(data[i,*],-1)*conj(fft(ref,-1)),1)    
   endfor

The data should now be focused, so let us have a look at them. They are still complex, and the image information is contained in their absolute value. So, just generate an image window by:

   tva,rotate(rebin(abs(data),512,512),1),/m

The program tva.pro used here is not part of IDL; it can be downloaded in the IDL section of Epsilon.Nought. The /m keyword scales the data from 0 to 2.5 times the average of the data array. This is not too important for this simulated data set, but it is absolutely necessary when visualizing real SAR data, as otherwise single bright points would dominate the brightness scaling.
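
If tva.pro is not at hand, a rough stand-in using only built-in IDL routines might look like this (a sketch, assuming the same clipping at 2.5 times the mean that the /m keyword performs):

   img = rotate(rebin(abs(data),512,512),1)    ; amplitude image, downsampled and rotated
   tv, bytscl(img, min=0, max=2.5*mean(img))   ; clip at 2.5 times the mean amplitude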

The new result is depicted in Fig. 4.2. The image now clearly shows some structures in azimuth, but it still appears a bit defocused. In the next section we will address this problem. Another feature of SAR data can already be observed: the characteristic speckle. The image does not appear smooth but kind of noisy. As already discussed in this tutorial, this is a result of the coherent imaging technique and has nothing to do with low resolution.

\includegraphics [scale=0.5]{raw_brit2.eps}

Figure 4.2: Image focussed with a fixed reference function.

Range adaptation

So, what did we do wrong? When calculating the reference function, the range distance of 4000m was used. In fact, this is only correct for the first azimuth line (the first range bin); the following lines have a growing distance to the sensor. The spacing between two adjacent lines can be obtained from the range sampling rate of the sensor and the speed of light (compare Eq. ):

   rg_sam = 1.0e8
   c  = 2.99e8
   dist_r = c/2 / rg_sam

The resulting value of dist_r ≈ 1.5 shows that every 'range bin' has a size of approximately 1.5 meters.

The growing distance to the target has two main effects: the chirp rate of the reference function changes, and so does the length of the synthetic aperture in azimuth. In order to obtain a high-quality image, our SAR processor has to adapt these two parameters with range. With the experience we have by now, this should be no problem; we just have to include their calculation in the main processing loop.

One important thing to take care of is that the length of the synthetic aperture has an even number of pixels. This is because of the splitting of the reference function into two parts (needed for the FOURIER-domain formulation), and can be ensured by a simple if-condition.

Finally, the whole modified processing:

   rg_sam = 1.0e8
   c      = 2.99e8
   dist_r = c/2 / rg_sam                        ; range bin spacing [m]
   for i=0,size_rg-1 do begin
      r_real = r+i*dist_r                       ; true range distance of this line
      len_sa = lambda*r_real/2/res_az           ; adapted aperture length [m]
      nr_sa  = floor(len_sa/v*prf)              ; adapted aperture length [pixels]
      if nr_sa mod 2 eq 1 then nr_sa=nr_sa+1    ; force an even number of pixels
      x = (findgen(nr_sa)-nr_sa/2)*len_sa/nr_sa
      dr = shift(sqrt(r_real^2+x^2),nr_sa/2)
      phase = 4*!pi/lambda*dr
      cpha  = exp(complex(0,phase))
      ref = complexarr(size_az)                 ; zero-padded reference function
      ref[0:nr_sa/2-1] = cpha[0:nr_sa/2-1]
      ref[size_az-nr_sa/2:size_az-1] = cpha[nr_sa/2:*]
      data[i,*] = fft(fft(data[i,*],-1)*conj(fft(ref,-1)),1)
   endfor
   tva,rotate(rebin(abs(data),512,512),1),/m

The final result is depicted in Fig. 4.3. And yes, the image now looks much better, doesn't it? Well, maybe you even already know Mt. Britney.

I apologize for the speckled nature of the image, but SAR is like that. To make everybody happy, we'll approach this problem in the next section and try to get the image a bit smoother.

\includegraphics [scale=0.5]{raw_brit3.eps}

Figure 4.3: Image focussed with an adaptive reference function.

If you want, you can play a bit with the parameters to see what happens if you use lower or higher resolution, wrong forward velocities and so on. Britney will do all you want....

For the lazy ones: Download here the full program sar_proz.pro

Hint: Also have a look at "Britney's Introduction to Semiconductor Physics".

Multilooking

Coming soon.....
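
Until this section is written, here is a rough sketch of what a simple multilook step could look like. This is only an illustration, not the final content of this section: the intensities are averaged over 8 adjacent azimuth samples, trading azimuth resolution for reduced speckle.

   looks_az  = 8                                        ; number of looks in azimuth
   intensity = abs(data)^2                              ; detected intensity image
   ml = rebin(intensity, size_rg, size_az/looks_az)     ; box-car average along azimuth
   tv, bytscl(rebin(sqrt(ml),512,512))                  ; display the multilooked amplitude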



Andreas Reigber
2001-05-24