In microwave remote sensing, terms associated with optical remote sensing, such as georeferencing and digitization, become largely obsolete. Information extraction is based purely on analysing the pattern in which microwaves interact with the objects of interest. Performing such an analysis calls for a clear understanding of the chain of steps that lead to a microwave image, the way microwaves interact with surface objects, and the reasons why they do so. This three-part lecture series is intended as an introduction to microwaves. In this document the reader is taken, step by step, from the generation of microwaves to the characteristics of a radar image.
Microwaves are electromagnetic waves with wavelengths ranging from 1 mm to 1 m, or frequencies between 0.3 GHz and 300 GHz.
The existence of electromagnetic waves was predicted by James Clerk Maxwell in 1864 from his equations. In 1888, Heinrich Hertz was the first to demonstrate the existence of
electromagnetic waves by building an apparatus that produced and detected microwaves in the UHF region. The design necessarily used horse-and-buggy materials, including a horse trough, a wrought-iron point spark, Leyden jars, and a length of zinc gutter whose parabolic cross-section served as a reflector antenna. In 1894 J. C. Bose publicly demonstrated radio control of a bell using millimeter wavelengths, and conducted research into the propagation of microwaves.
Microwaves can be generated using vacuum tube devices that operate on the ballistic motion of electrons in a vacuum under the influence of controlling electric or magnetic fields.
A few examples include the magnetron, klystron, traveling-wave tube (TWT), and gyrotron.
Microwave frequency bands, as defined by the Radio Society of Great Britain (RSGB), are shown in the table below. A frequency given in GHz can be quickly converted to wavelength using the formula λ (cm) = 30 / f (GHz), which follows from λ = c / f with c = 3 × 10^10 cm/s.
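As a quick check of this relation, the short sketch below converts a couple of radar frequencies to wavelength; the example frequencies are illustrative and are not taken from the table.

# Minimal sketch of the frequency-to-wavelength conversion (free space, c = 3e8 m/s)
def wavelength_cm(frequency_ghz):
    """Return the free-space wavelength in centimeters for a frequency in GHz."""
    c = 3e8                      # speed of light in m/s
    f = frequency_ghz * 1e9      # GHz -> Hz
    return (c / f) * 100         # meters -> centimeters

# Illustrative frequencies (assumed, not from the table)
print(wavelength_cm(5.3))        # ~5.66 cm, a typical C-band frequency
print(wavelength_cm(9.6))        # ~3.13 cm, a typical X-band frequency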
Once microwaves are generated using the above-mentioned devices, an antenna or aerial is required to focus the waves into a beam and propagate it toward the target of interest. The size, shape and type of antenna vary with the wavelength, the gain factor and the purpose for which it is intended. A portable FM radio uses a half-dipole antenna to receive radio signals; the other half of the dipole is effectively provided by the radio casing, which acts as the ground plane.
Microwave bands and frequencies
An antenna works best when its physical size corresponds to the wavelength of the radio waves being sent or received. When the length of an antenna is a major fraction of the corresponding wavelength (a quarter-wavelength or half-wavelength is often used), the radio waves oscillating back and forth along the antenna will encounter each other in such a way that the wave crests do not interfere with one another.
One of the simplest types of antennas is called a dipole. A dipole is made of two lengths of metal, each of which is attached to one of two wires leading to a radio or other communications device. The two lengths of metal are usually arranged end to end, with the cable from the transmitter or receiver feeding each length of the dipole in the middle. The dipoles can be adjusted to form a straight line or a V-shape to enhance reception. Each length of metal in the dipole is usually a quarter-wavelength long, so that the combined length of the dipole from end to end is a half-wavelength. The familiar "rabbit-ear" antenna on top of a television set is a dipole antenna.

Satellites and radar telescopes use microwave signals. Microwaves have extremely high frequencies and, thus, very short wavelengths (less than 30 cm). Microwaves travel in straight lines, much like light waves do. Dish antennas are often used to collect and focus microwave signals. The dish focuses the microwaves and aims them at a receiver antenna in the middle of the dish.
The dish is generally a parabolic surface and the feed/receptor is placed at its focus. Parallel microwaves impinge on the dish and are reflected and focused onto the feed. In transmission mode, diverging rays from the feed impinge on the parabolic surface and are reflected into a parallel beam. Such beams can travel long distances without suffering attenuation due to divergence. In the picture above, a scanning radar sends out microwaves to track the heading and speed of aircraft at a London airport. A non-continuous parabolic surface is sufficient to focus the returns onto the receptor here, since the distance to the target is of the order of a few kilometers, unlike in earth-station receptors. Further, this structure can withstand strong winds and storms (because the gaps between the bars reduce wind resistance) and is light, making it suitable for rotation in scanning mode. The vertical bars indicate that the radar
sends out vertically polarized microwaves.
Terrestrial radar systems use a single large antenna that stays in one place but can rotate on a base to change the direction of the radar beam. However, on satellite platforms this is not possible owing to the loss of stability involved in rotating a huge antenna. To overcome this, a phased-array antenna is used. This radar antenna actually comprises many small separate antennas, each of which is pointed at a different angle. The system combines the signals gathered from all the small antennas. The receiver can change the way it combines the signals from the antennas to change the direction of the beam. A huge phased-array radar antenna can change its beam direction electronically many times faster than any mechanical radar system can. The picture shows the SIR (Shuttle Imaging Radar) phased-array antenna. The antenna onboard RISAT-1 is similar to this. [1]
A dielectric, or insulator, is a poor conductor of electricity that can nevertheless sustain the force of an electric field passing through it. This property is not exhibited by conducting substances.
In most instances the properties of a dielectric are caused by the polarization of the substance. When the dielectric is placed in an electric field, the electrons and protons of its constituent atoms reorient themselves, and in some cases molecules become similarly polarized.
In the classical approach to the dielectric model, a material is made up of atoms. Each atom consists of a cloud of negative charge bound to and surrounding a positive point charge at its centre. Because of the comparatively large distances between them, the atoms in the dielectric material do not interact with one another. In the presence of an electric field the charge cloud is distorted, as shown in the top right of the figure. This distortion can be reduced to a simple dipole using the superposition principle. A dipole is characterized by its dipole moment, a vector quantity shown in the figure as the blue arrow labeled M. It is the relationship between the electric field and the dipole moment that gives rise to the behavior of the dielectric. When the electric field is removed, the atom returns to its original state; the time required to do so is the so-called relaxation time, and the decay is exponential.
As a result of this polarization, the dielectric is under stress, and it stores energy that becomes available when the electric field is removed. The polarization of a dielectric resembles the polarization that takes place when a piece of iron is magnetized. As in the case of a magnet, a certain amount of polarization remains when the polarizing force is removed. A dielectric composed of a wax disk that has hardened while under electric stress will retain its polarization for years. Such dielectrics are known as electrets.[2]
Dielectric Constant (ε) is a number relating the ability of a material to carry alternating current to the ability of vacuum (whose value is 1) to carry alternating current. The values of
this constant for usable dielectrics vary from slightly more than 1 for air up to 100 or more for certain ceramics containing titanium oxide. Glass, mica, porcelain, and mineral oils, often used as dielectrics, have constants ranging from about 2 to 9. [3] The dielectric value is analogous to radiance and dielectric constant is analogous to reflectance in optical images.
The dielectric constant of pure (distilled) water is 80, while that of sand is 3 to 4. Thus in radar images water absorbs microwaves and appears dark. As the dielectric constant represents the amount of absorption, materials with higher dielectric constants appear darker in radar images. This characteristic gives microwave remote sensing a distinct advantage in mapping soil moisture, differentiating snow from ice, and so on.
In the real world, surface water is rarely pure. It contains salts and other dissolved minerals making it either acidic or basic. Water in such cases is a solution and can conduct electricity by virtue of its ions; this would increase the transmittance of microwaves through such waters. Thus, the dielectric constant can vary between water bodies, and brackish water should appear different from a clear water body.
Permittivity
Permittivity is a physical quantity that describes how an electric field affects, and is affected by, a dielectric medium. It is determined by the ability of a material to polarize in response to the field and thereby reduce the total electric field inside the material. Thus, permittivity relates to a material's ability to transmit (or "permit") an electric field. [4] Permittivity is analogous to transmittance in optical remote sensing.
Food for thought
Water has a high dielectric constant, by virtue of which it is a good absorber. How, then, can clouds be transparent to microwaves?
This question can be answered with another question: "Water is transparent to optical rays; if so, how can clouds be opaque? Glass is transparent; if so, why does powdered glass glitter? Diamond is transparent; if so, why does it glitter?" The answer is that in clouds the water droplets are isolated and behave as objects. Their sizes are comparable to the wavelength of optical waves (a few micrometers), and they scatter the light from their surfaces. This scattered light undergoes multiple internal reflections before reaching either the satellite flying above or the observer on the ground. A similar effect takes place in pulverized glass and in diamonds. The scattering by objects is a function of their size relative to the wavelength; thus shorter wavelengths, such as optical rays, are readily scattered by the water droplets in clouds.
But the same droplets are far too small to stop and scatter electromagnetic waves of long wavelengths such as microwaves. Thus clouds are transparent to microwaves, allowing the longer wavelengths to penetrate them.
Polarized light consists of individual photons whose electric field vectors are all aligned in the same direction. Ordinary light is unpolarized because the photons are emitted in a random manner, while laser light is polarized because the photons are emitted coherently. When light passes through a polarizing filter, the electric field interacts more strongly with molecules having certain orientations. This causes the incident beam to separate into two beams whose electric vectors are perpendicular to each other. A horizontal filter, such as the one shown, absorbs photons whose electric vectors are vertical. The remaining photons are absorbed by a second filter turned 90° to the first. At other angles the intensity of transmitted light is proportional to the square of the cosine of the angle between the two filters. In the language of quantum mechanics, polarization is called state selection [1]. The electric field vector of a plane wave may be arbitrarily divided into two perpendicular components labeled x and y (with z indicating the direction of travel). For a simple harmonic wave, where the amplitude of the electric vector varies sinusoidally in time, the two components have exactly the same frequency. However, the two components may not have the same amplitude and may not have the same phase; that is, they may not reach their maxima and minima at the same time.
The figures show electric field vector (blue), with time (the vertical axes), at a particular point in
space, along with its x and y components (red/left and green/right), and the path traced by the tip of the vector in the plane (purple). In the leftmost figure above, the two orthogonal (perpendicular) components are in phase. In this case the ratio of the strengths of the two components is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarization. The direction of this line depends on the relative amplitudes of the two components.
In the middle figure, the two orthogonal components have exactly the same amplitude and are exactly ninety degrees out of phase. In this case one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the x component can be ninety degrees ahead of the y component or it can be ninety degrees behind the y component. In this special case the electric vector traces out a circle in the plane, so it is called circular polarization. The direction in which the field rotates depends on which of the two phase relationships exists. These cases are called right-hand circular polarization and left-hand circular polarization, depending on which way the electric vector rotates.
In all other cases, where the two components are neither in phase nor of equal amplitude and exactly ninety degrees out of phase, the polarization is called elliptical polarization, because the electric vector traces out an ellipse in the plane (the polarization ellipse). This is shown in the figure above on the right.[5]
Light reflected by shiny transparent materials is partly or fully polarized, except when the light is normal (perpendicular) to the surface. A polarizing filter, such as a pair of polarizing sunglasses, can be used to observe this effect by rotating the filter while looking through it at the reflection off of a distant horizontal surface. At certain rotation angles, the reflected light will be reduced or eliminated.
The effect of a polarizer on reflection from mud flats is shown in the picture. In the picture on the left, the polarizer is rotated to transmit the reflections; by rotating the polarizer by 90° (picture on the right) almost all specularly reflected sunlight is blocked.
Polarization by scattering is observed as light passes through the atmosphere. The scattered light
produces the brightness and color in clear skies. This partial polarization of scattered light can be used to darken the sky in photographs, increasing the contrast. This effect is easiest to observe at sunset, on the horizon at a 90° angle from the setting sun. Another easily observed effect is the drastic reduction in brightness of images of the sky and clouds reflected from horizontal surfaces which is the main reason polarizing filters are often used in sunglasses.[5]
Thus, from these examples, it is apparent that polarization depends on the orientation of the reflecting surface with respect to the flat earth and the observer. Hence, comparing the brightness of objects in two different polarizations gives information about the general orientation of the object.
The appearance of a feature in a radar image depends on three independent properties: surface roughness, dielectric constant and orientation.
The first factor to consider is surface roughness. Roughness is a relative term that varies with the wavelength of the electromagnetic wave used for imaging. If the surface is smooth, then no matter what the dielectric constant or orientation, the incident rays never return to the side-looking sensor because they undergo specular reflection. Thus, for objects with similar dielectric constant and orientation, brightness increases with surface roughness.
The dielectric constant decides what portion of the incident microwave energy gets absorbed. A low value indicates less absorption, implying either higher reflection or higher transmission.
The orientation of the object decides which polarization would have maximum intensity.
Phase is the unfinished cycle (portion) of a wave. Consider the figure: the wave completes one oscillation and a portion of it is left incomplete; this portion sweeps an angle of π/2.
Thus, the phase of this portion is π/2, and the total phase of the entire wave is 2π + π/2 = 5π/2. If the total phase of a wave, its velocity and its wavelength are known, the distance travelled by it can be found using the relationship R = (Φ / 2π) · λ,
where R is the range travelled, λ is the wavelength and Φ is the phase angle.
A practical use of this relationship is found in EDMs (Electronic Distance Measurement instruments) used in surveying. Further, the phase difference between the waves arriving from two different objects, or from two different positions of the same object along the same plane, is proportional to the distance between them. This displacement (between the two objects or object positions) can be found using the formula d = (ΔΦ / 2π) · λ,
derived from the relationship mentioned earlier.
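The short sketch below works through both relations. It assumes a simple one-way path; radar interferometry must additionally account for the two-way travel of the pulse, and the wavelength used here is an assumed illustrative value.

# Minimal sketch of the phase-to-distance relations (one-way path assumed)
import math

def range_from_phase(total_phase_rad, wavelength_m):
    # R = (phi / 2*pi) * lambda: distance travelled for a given total (unwrapped) phase
    return (total_phase_rad / (2 * math.pi)) * wavelength_m

def displacement_from_phase_difference(delta_phase_rad, wavelength_m):
    # d = (delta_phi / 2*pi) * lambda: displacement corresponding to a phase difference
    return (delta_phase_rad / (2 * math.pi)) * wavelength_m

# Example: total phase of 5*pi/2 at an assumed C-band wavelength of 5.6 cm
print(range_from_phase(5 * math.pi / 2, 0.056))            # 1.25 wavelengths = 0.07 m
print(displacement_from_phase_difference(math.pi, 0.056))  # half a wavelength = 0.028 m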
Phase is a frequency-domain or Fourier-transform-domain concept and, as such, can be readily understood in terms of simple harmonic motion. Simple harmonic motion is a displacement that varies cyclically, as depicted in the waveform in the picture. It is described by the formula y(t) = A sin(2πft + θ),
where A is the amplitude of oscillation, f is the frequency, t is the elapsed time, and θ is the phase of the oscillation. The phase determines, or is determined by, the initial displacement at time t = 0.[7]
When displacement y is plotted against time t, the plot takes the shape of a sinusoidal curve called a simple harmonic curve. The rest of our discussion on interference and coherence is based on these curves.
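The following sketch generates such a curve from the formula above; the amplitude, frequency and initial phase chosen are illustrative values.

# Simple harmonic motion: y(t) = A * sin(2*pi*f*t + theta)
import numpy as np
import matplotlib.pyplot as plt

A, f, theta = 1.0, 1.0, np.pi / 2          # illustrative amplitude, frequency (Hz), initial phase
t = np.linspace(0, 3, 500)                 # three seconds of elapsed time
y = A * np.sin(2 * np.pi * f * t + theta)

plt.plot(t, y)
plt.xlabel("time t (s)")
plt.ylabel("displacement y")
plt.title("Simple harmonic curve")
plt.show()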
Interference is the interaction between two or more waves and the nature of the resultant wave formed. It is the addition or superposition of waves to result in a new wave pattern. To understand better, let us plot two waves using the simple harmonic wave formula and observe how they interact.
The plot on the left displays two waves with equal amplitude, frequency and initial phase. They overlap each other and cannot be differentiated. When they interfere, they produce a wave whose amplitude is the algebraic sum of their amplitudes at every instant. Thus the amplitude of the resultant wave is twice that of each individual wave. Here both waves are in phase, or, in other words, their phase difference is 0. The resultant is therefore built up from their combined amplitudes, and the process is called constructive interference.
When the individual waves have different amplitudes, they appear as shown in the next figure to the left. The amplitude of the blue wave is 1 and that of the green is 0.5. Hence, the maximum amplitude of the constructive interference is 1.5, which is less than that shown in the earlier plot.
It should be noted that interference of two waves does not produce a new or third wave. Instead, it is the manifestation of the two waves together (the resultant) as a single interference wave.
Now in the formula for simple harmonic motion, if the initial phase is changed, we observe a transition from constructive to destructive interference.
In the next plot to the right, the amplitudes and frequency are kept constant. The green wave is advanced (made leading) by an initial phase of π/2, while the initial phase of the blue wave is 0. Hence the phase difference between the two interfering waves is π/2. Thus, although the amplitudes of the two waves are both equal to 1, this phase difference causes a decrease in the amplitude of the resultant wave. The interference is still constructive, as the combined amplitude is greater than the individual amplitudes.
When the initial phase of the green wave is increased to π, keeping other factors constant, complete destructive interference occurs. When the blue wave reaches its crest (maximum in the positive direction), the leading green wave has already reached its trough (maximum in the negative direction). This causes the algebraic sum of their amplitudes to become 0 at all instants, as can be seen from the red wave. This process is called destructive interference. When two light waves of equal amplitude and wavelength interfere with a phase difference of π, there is complete darkness due to destructive interference.
As the phase difference varies from 0 to π, the interference changes from constructive to destructive. Intermediate values produce resultant amplitudes varying between the sum of the individual amplitudes and 0.
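A minimal sketch of these superpositions, reproducing the three cases discussed (phase differences of 0, π/2 and π) with unit-amplitude waves:

# Superposition of two equal-amplitude waves for three phase differences
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2, 1000)
f = 1.0                                          # both waves share the same frequency (Hz)

for dphi, label in [(0.0, "in phase"), (np.pi / 2, "pi/2 apart"), (np.pi, "pi apart")]:
    blue = np.sin(2 * np.pi * f * t)             # initial phase 0
    green = np.sin(2 * np.pi * f * t + dphi)     # leading by dphi
    resultant = blue + green                     # algebraic sum at every instant
    print(label, "-> peak amplitude:", round(resultant.max(), 3))  # ~2.0, ~1.414, ~0
    plt.plot(t, resultant, label=label)

plt.legend()
plt.xlabel("time (s)")
plt.ylabel("amplitude")
plt.show()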
The phenomenon of interference is observed on several occasions in our day-to-day life. It forms an integral part of optics, astronomy and remote sensing. In microwave remote sensing, interference fringes/patterns are used largely to prepare topographic maps and to study relative displacements in height. As a precursor, it is instructive to study the experiment conducted by Thomas Young. The objective of the experiment is to produce two wave trains of the same amplitude, wavelength and phase and to observe them interfere with each other. Getting two waves to start with the same phase is the most difficult task; hence, as a workaround, he used one light source collimated through a pinhole (a) and stopped by a plane with two slits (b and c). Since the waves emerge from the same source, they reach the slits at the same time. Thus the waves emerging from the slits start at the same time, in other words, with the same phase. To achieve temporal coherence, he used a monochromatic light source at (a). (Coherence is explained in a following section.)
The waves emerging from the two slits were expected to form two bright strips corresponding to the slits. On the contrary, a pattern of alternating bright and dark bands was observed. Today this is known as the celebrated interference pattern.
On the screen, at the midpoint between the two slits, there was a bright band. Since this spot is equidistant from the two slits, the waves reaching it were in phase; hence they interfered constructively and formed a bright band. Adjacent to it were dark bands, because those regions are closer to one slit than to the other. Owing to this difference in range, a phase difference exists, and when the difference reaches π the waves interfere destructively, causing a dark band. Beyond each dark band, bright bands appear where the phase difference reaches 2π, causing the waves to interfere constructively again. This pattern repeats on either side of the centre.
Between two bright bands, the brightness decreases gradually towards the midpoint, where it is completely dark. The gradual change is attributed to the interference changing gradually from constructive to destructive, with a corresponding decrease in the amplitude of the resultant wave as the phase difference changes from 0 to π.
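The fringe pattern can be sketched directly from the phase difference between the two slit paths. The slit separation, wavelength and screen distance below are illustrative values, and the single-slit diffraction envelope is ignored.

# Two-slit interference pattern from the path (phase) difference between the slits
import numpy as np
import matplotlib.pyplot as plt

wavelength = 550e-9                 # illustrative green light, m
d = 0.5e-3                          # slit separation, m
D = 1.0                             # slit-to-screen distance, m

y = np.linspace(-5e-3, 5e-3, 2000)              # position on the screen, m
path_difference = d * y / D                     # extra distance travelled from the farther slit
phase_difference = 2 * np.pi * path_difference / wavelength
intensity = np.cos(phase_difference / 2) ** 2   # bright where the waves arrive in phase

plt.plot(y * 1e3, intensity)
plt.xlabel("position on screen (mm)")
plt.ylabel("relative intensity")
plt.show()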
This experiment was fundamental to all further studies on interference. Today, about 30 different interferometers exist in industry for a wide variety of applications. The one of pertinence to us is IFSAR (Interferometric SAR), which will be explained in the forthcoming classes.
Two waves are said to be coherent if their phase difference remains constant with time. Thus, we can predict the value of the second wave from the first wave as their phase retains a fixed relationship over time. In practical sense, coherence enables two waves to interfere. Coherence can be measured as the extent to which the waves can undergo complete destructive interference. The coherence of two waves follows from how well correlated the waves are as quantified by the cross-correlation function. Thus, highly coherent waves have a higher degree of cross-correlation between them.[8]
Coherence is classified into three types: temporal, spatial and spectral coherence.
Temporal coherence is the measure of the average correlation between the values of a wave at any pair of times separated by a delay τ. Temporal coherence tells us how monochromatic a source is. In other words, it characterizes how well a wave can interfere with itself at a different time. The delay over which the phase or amplitude wanders by a significant amount (and hence the correlation decreases significantly) is defined as the coherence time τc. At τ = 0 the degree of coherence is perfect, whereas it drops significantly by a delay of τc. The coherence length Lc is defined as the distance the wave travels in time τc. In the plot on the right, the red and green waves differ in frequency by 0.1 Hz. Although they appear to start in phase, their phase difference increases until, after 5 seconds, they are out of phase. Thus interference that begins as constructive becomes destructive, even though the amplitudes, frequencies and initial phases remain constant. This is a classic example explaining why white light is said to be incoherent: its different wavelengths lose temporal coherence with one another, unlike in a monochromatic source.
Further, temporal coherence is used as a measure of the monochromaticity of a source. When a wave is copied and started after a known phase delay, the original and the copy will interfere predictably at any point in time as long as their frequencies remain constant, in other words, as long as their phase difference remains constant up to that point in time.
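A short sketch of the drift described above, using two waves whose frequencies differ by 0.1 Hz (the absolute frequencies are assumed for illustration):

# Two waves differing by 0.1 Hz drift from in phase to out of phase after 5 s
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 2000)
red = np.sin(2 * np.pi * 1.0 * t)        # 1.0 Hz (assumed)
green = np.sin(2 * np.pi * 1.1 * t)      # 1.1 Hz, i.e. 0.1 Hz higher

# The phase difference grows as 2*pi*0.1*t, reaching pi (out of phase) at t = 5 s
plt.plot(t, red, "r", label="1.0 Hz")
plt.plot(t, green, "g", label="1.1 Hz")
plt.plot(t, red + green, "k--", label="resultant")
plt.legend()
plt.xlabel("time (s)")
plt.show()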
Spatial coherence is the ability of two waves separated on a 2D plane to remain in phase. Two points equidistant from the same source but at different locations should be in phase; this property is called spatial coherence. It takes on special relevance in the double-slit experiment: for the bands to be equally spaced, there must be absolute spatial coherence. Thus it is the average correlation over a 2D area.
The average correlation of multiple frequencies is given by spectral coherence. When a composite light source containing multiple frequencies produces waves that are in phase but of different frequencies and amplitudes, there exists spectral coherence. By virtue of this, the constituent waves interfere and form a pulse. In the absence of this coherence there is no predictable interference pattern.
In the plot on the left, 7 waves with equal amplitude and initial phase but different frequencies are created. As observed, they appear to start in phase, but as time progresses there appears to be no direct relationship between them. When plotted over an extended period (100 s), the resultant wave shows the pattern seen in the next plot.
At fixed intervals of time, the amplitude of the resultant wave rises sharply to a peak equal to the sum of all the amplitudes. At these instants, all the constituent waves are in phase and thus they interfere constructively.
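A minimal sketch of this behaviour; the seven frequencies below are assumed (equally spaced by 0.1 Hz), so the constituents realign every 10 s:

# Seven equal-amplitude waves of different frequencies, summed over 100 s
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 100, 20000)
frequencies = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6]   # Hz, illustrative, spaced 0.1 Hz apart

resultant = sum(np.sin(2 * np.pi * f * t) for f in frequencies)

# With a 0.1 Hz spacing, all components come back into phase every 1/0.1 = 10 s,
# producing the periodic peaks described in the text.
plt.plot(t, resultant)
plt.xlabel("time (s)")
plt.ylabel("resultant amplitude")
plt.show()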
This pattern is similar to the famous experiment with simple pendulums of different string lengths. The apparatus consists of a longitudinal bar from which several simple pendulums are suspended. A lever allows all of them to be raised together and released at the same time. When released, they appear to be in phase for at least the first oscillation and thereafter go chaotic. But amidst this chaos, at fixed intervals of time, all of them come back to the equilibrium position at the same time, in phase. After that instant, the chaos returns. This repeats until the oscillations are damped out by air resistance and other frictional effects. The point is that it is possible to predict the time at which the pendulums arrive in phase at the equilibrium position, because at that instant each pendulum has completed a whole number of oscillations. The minimum time required for a pendulum to complete a whole number of oscillations is a function of its time period, and each pendulum has a different time period owing to the difference in string length, causing them to oscillate at different rates. The shortest time in which all the pendulums have completed a whole number of oscillations (the LCM of the individual time periods) gives this equilibrium time.
This principle can be extended to our wave theory as a simple pendulum is a simple harmonic oscillator. A shorter string length corresponds to a shorter wavelength and as expected, a shorter pendulum would oscillate faster – higher frequency.
The direction of the aircraft flight line is called the azimuth direction. The radar mounted on the belly of the aircraft always images in the direction perpendicular to azimuth called
the range direction. This illumination extends from the near range (the illuminated point nearest to the aircraft) to the far range (the farthest illuminated point) in the range direction, and extends in the azimuth direction as the aircraft flies along.
The angle the illuminating radar beam makes with the horizontal extending from the aircraft fuselage is called the depression angle (γ). The angle between the incident radar pulse and the normal to the Earth's surface at the point of contact is called the incident angle (θ). For flat terrain, the depression and incident angles are complementary, so θ + γ = 90°. However, when the terrain is undulating, the normal to that surface is not perpendicular to the horizontal from the aircraft fuselage; the incident angle measured to that local normal is called the local incident angle. In most cases such terrain effects are ignored and calculations are performed using the incident angle. Both depression and incident angles vary from near to far range.[6]
Image geometry
Radar is in essence a ranging device. It measures the time taken for a pulse to return from an object and the intensity of the pulse received. As the radar is side-looking, the image formed is in slant-range geometry. In the picture, objects A and B are of the same size in the real world, but in the slant-range image formed by the radar, A is compressed compared to B. [6] This happens because the time difference between the radar pulses returning from the two ends of A is much less than that between the pulses returning from the ends of B, since A is in the near range and B is in the far range. Hence the radar maps A as shorter than B, and a radar image is therefore compressed in the near range. The slant-range geometry of a radar image has to be converted to ground-range geometry if measurements are to be made. The relation GR = √(SR² − H²), where SR is the slant range, GR the ground range and H the platform altitude, can be used for the conversion.[6]
The plot on the right shows the variation of slant range with respect to ground range. It can be seen that in the near range the slant range increases only mildly with ground range, making features appear compressed, while in the far range it increases almost in proportion to ground range, so features look stretched relative to those in the near range.
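A small sketch of this curve under the flat-terrain relation given above; the platform altitude used is an assumed value.

# Slant range versus ground range for flat terrain: SR = sqrt(H^2 + GR^2)
import numpy as np
import matplotlib.pyplot as plt

H = 5000.0                                     # assumed platform altitude, m
ground_range = np.linspace(0, 20000, 500)      # horizontal distance from nadir, m
slant_range = np.sqrt(H**2 + ground_range**2)

# Near range: slant range changes slowly per metre of ground range (compression);
# far range: the slope approaches 1, so slant and ground distances track each other.
plt.plot(ground_range / 1000, slant_range / 1000)
plt.xlabel("ground range (km)")
plt.ylabel("slant range (km)")
plt.show()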
To determine the resolution of a radar image, it is required to compute the resolution in range and azimuth directions.
Range resolution
The range resolution is proportional to the pulse length (the duration for which the radar sends out an illuminating pulse). The pulse length is the speed of light (c) multiplied by the duration of illumination (τ). The radar sends out bursts of pulses, just like the flash in an optical camera. The formula for the range resolution Rr is
Rr = (c · τ) / (2 · cos γ).
The product is divided by 2 because the pulse travels to and from the object of interest, so the time taken is doubled, and it is divided by cos γ to scale the resolution to the ground range. The range resolution can be refined by reducing the pulse length, but too short a pulse results in a return that is too weak for the antenna to resolve; a balance is therefore struck to maintain a healthy SNR. Typical pulse lengths are of the order of 0.4 to 1 microsecond (10⁻⁶ s), which corresponds to a pulse length (c·τ) of roughly 120 to 300 m.
As Rr is a function of the depression angle, it varies from the near range to the far range. Consider the hypothetical situation shown in the figure. Towers 1 and 2, and towers 3 and 4, are separated by an equal distance of 30 m. When illuminated by a radar beam with a fixed pulse duration of 0.1 µs, Rr in the near range for γ = 65° is 35.5 m, while Rr in the far range for γ = 40° is 19.58 m. Thus the tower pair in the near range is imaged as a single long object, while the identical pair in the far range is resolved as two individual objects.[6]
This is because, in the near range, the time difference between the arrivals of the return pulses is much smaller than in the far range. Hence tower group 1, 2 is imaged as one long object.
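The numbers quoted above follow directly from the range-resolution formula, as this small sketch verifies:

# Ground-range resolution Rr = (c * tau) / (2 * cos(gamma))
import math

def range_resolution(pulse_length_s, depression_deg, c=3e8):
    return (c * pulse_length_s) / (2 * math.cos(math.radians(depression_deg)))

tau = 0.1e-6                                    # 0.1 microsecond pulse
print(round(range_resolution(tau, 65), 2))      # near range, gamma = 65 deg -> 35.49 m
print(round(range_resolution(tau, 40), 2))      # far range,  gamma = 40 deg -> 19.58 m
# Towers 30 m apart merge in the near range (30 < 35.5 m) but are
# resolved in the far range (30 > 19.6 m).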
Azimuth resolution
Azimuth resolution Ra is a function of the width of the radar beam illuminating the terrain. The radar beam is lobe-shaped, being narrow in the near range and broad in the far range. The angular beam width of the lobe is proportional to the wavelength of the radar pulse: a shorter wavelength results in a narrower beam. But too short a λ would be scattered by the atmosphere and clouds, reducing the penetration capability. Further, the beam width is inversely proportional to the antenna length (L); a longer antenna focuses the beam more, making it narrower. Thus the azimuth resolution is given by the formula
Ra = (R · λ) / L,
where R is the slant range, λ is the wavelength and L is the antenna length. Ra can be expressed in ground geometry by substituting R = H / sin γ. As expected, Ra is finer in the near range (since the footprint is narrow) and coarser in the far range. This is apparent because Ra varies inversely with the sine of the depression angle in the formula.
Consider another hypothetical situation, where objects 1,2 and 3, 4 in the near and far ranges are separated by an equal distance of 200m. For an X band radar (λ = 3cm) with 5m
antenna at a slant range of 20km and 40km in the near and far ranges,
Ra in the near range = (20 km × 3 cm) / 5 m = 120 m, and Ra in the far range = (40 km × 3 cm) / 5 m = 240 m. Thus objects 1 and 2 are resolved as two different targets, while 3 and 4 are imaged as one long target.[6]
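The same arithmetic in code form, using the real-aperture azimuth-resolution formula:

# Real-aperture azimuth resolution Ra = R * lambda / L
def azimuth_resolution(slant_range_m, wavelength_m, antenna_length_m):
    return slant_range_m * wavelength_m / antenna_length_m

# X-band (lambda = 3 cm) with a 5 m antenna
print(azimuth_resolution(20000, 0.03, 5))   # near range (20 km): 120 m
print(azimuth_resolution(40000, 0.03, 5))   # far range (40 km): 240 m
# Objects 200 m apart are resolved at 20 km (200 > 120 m) but merge at 40 km (200 < 240 m).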
Unfortunately, there is a limit to the size of antenna that could be flown in airborne and space borne platforms, limiting the azimuth resolution. Yet, by a technique of combining the Doppler shifted signals from various successive lobes, an extremely long antenna can be virtually synthesized. This principle is used in SAR (Synthetic Aperture Radar) to achieve fine azimuth resolutions.
The plot on the left maps the variation of the resolutions from the near to the far range. As expected, the range resolution follows a decreasing curve governed by the cosine of the depression angle, while the azimuth resolution is an increasing, almost linear curve. Range resolution is of the order of tens of meters, while azimuth resolution is of the order of hundreds of meters. Further, the range resolution is independent of the sensor altitude, whereas the azimuth resolution degrades with altitude; usable azimuth resolutions therefore cannot be achieved with a real aperture radar (RAR) on space-borne platforms. This is another reason why synthetic aperture radar is required on space-borne platforms. Care should be taken while analyzing radar images, as the pixels are rectangular owing to the different resolutions in the range and azimuth directions.
Geometric distortions exist in all radar images. When the terrain is flat, the relationship between slant and ground ranges is straightforward, but when the terrain is undulating there is an unavoidable relief displacement.
Radar is a ranging device: it measures the distance between the terrain and the antenna. The signal from the peak (B) of a hill reaches the antenna sooner than it would if B were on flat ground. This causes the fore slope (AB) of a hill to be shortened. Similarly, the aft slope of the hill is elongated, as the aft foot of the hill is much farther from the antenna than the peak. The foreshortening factor Ff is given by the formula Ff = sin(θ − α).
Here θ is the incident angle and α is the slope of the terrain; α is positive when the slope is inclined toward the antenna and negative when inclined away from it.[6] As expected, foreshortening is worse in the near range and less severe in the far range, as Ff varies as a sine function of the incident angle. Unlike in aerial photographs, where relief displacement is two-dimensional and extends radially outwards from the principal point, in radar images relief is displaced only in the range direction, and towards the antenna. [6]
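A minimal sketch of this factor as reconstructed above; the incident and slope angles used are illustrative assumptions.

# Foreshortening factor Ff = sin(theta - alpha); apparent slope length = true length * Ff
import math

def foreshortening_factor(incident_deg, slope_deg):
    return math.sin(math.radians(incident_deg - slope_deg))

print(round(foreshortening_factor(30, 15), 3))   # near range (small theta) -> 0.259, strong foreshortening
print(round(foreshortening_factor(60, 15), 3))   # far range (large theta)  -> 0.707, milder foreshortening
print(round(foreshortening_factor(30, 30), 3))   # alpha = theta -> 0, the slope collapses to a point
# For alpha > theta the factor is negative: the peak is imaged before the foot (layover).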
Layover is an extreme case of foreshortening in which the fore slope is so steep that the backscattered signal from the peak reaches the antenna before the signal from the foot of the fore slope. Thus, in the slant-range image, the peak appears ahead of the foot. Layover occurs when the fore slope α (taken positive) exceeds θ. As with foreshortening, layover effects are worse in the near range than in the far range.[6]
Since the radar is side-looking, the rays from the antenna may not reach the aft slope of a hill if that slope is steep. Thus, as shown in the figure, radar shadows occur when the aft slope α (inclined away from the antenna) exceeds the depression angle γ. Radar shadows give useful information about the relief of the terrain under study. But unlike optical imagery, where objects in shadow still receive diffuse irradiance, objects in radar shadow are completely dark. The regions lost in radar shadow can be recovered by imaging the terrain from the other side of the hill. Radar shadows increase in the far range and reduce in the near range.
The following table lists how different factors vary from near to far range in a radar image.
[1] Microsoft Encarta Encyclopedia.
[2] www.en.wikipedia.org/wiki/dielectric
[3] www.clippercontrols.com/info/dielectric_constants.html – a table of different compounds and their dielectric constants measured at standard temperatures.
[4] www.en.wikipedia.org/wiki/permittivity
[5] www.en.wikipedia.org/wiki/Polarization
[6] John R. Jensen, Remote Sensing of the Environment, Pearson Education, 2000, pp. 310–318.