Wiley Encyclopedia of Electrical and Electronics Engineering
John G. Webster (Editor)

25. Geoscience and Remote Sensing
Electromagnetic Subsurface Remote Sensing

Electromagnetic Subsurface Remote Sensing. Standard Article. S. Y. Chen and W. C. Chew, University of Illinois at Urbana-Champaign. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3602. Online Posting Date: December 27, 1999.


The sections in this article are: Borehole EM Methods; Ground Penetrating Radar; Magnetotelluric Methods; Airborne Electromagnetic Methods.


ELECTROMAGNETIC SUBSURFACE REMOTE SENSING

Subsurface electromagnetic (EM) methods are applied to obtain underground information that is not available from surface observations. Since electrical parameters such as dielectric permittivity and conductivity of subsurface materials may vary dramatically, the response of electromagnetic waves can be used to map the underground structure. This technique is referred to as geological surveying. Another major application of subsurface EM methods is to detect and locate underground anomalies such as mineral deposits. Subsurface EM methods include a variety of techniques depending on the application, surveying method, system, and interpretation procedure, and thus a ''best'' method simply does not exist. Even though each system has its own characteristics, they still share some common features. In general, each system has a transmitter, which can be either natural or artificial, to send out the electromagnetic energy that serves as an input signal. A receiver is needed to collect the response signal. The underground can be viewed as a system characterized by its material parameters and geometry. The task of subsurface EM methods is to derive the underground information from the response signal. The EM transmitter radiates the primary field into the subsurface, which consists of conductive earth material. This primary field induces currents, which in turn radiate a secondary field. Either the secondary field or the total field is detected by the receiver. After data interpretation, one obtains the underground information. One of the most challenging parts of subsurface EM methods is interpretation of the data. Since the incident field interacts with the subsurface in a very complex manner, it is never easy to extract the information from the receiver signal. Many definitions, such as apparent conductivity, are introduced to facilitate this procedure. Data interpretation is also a critical factor in evaluating the effectiveness of a system: how good the system is always depends on how well the data can be explained. In the early development of subsurface EM systems, data interpretation depended largely on the personal experience of the operator, owing to the complexity of the problem. Only with the aid of powerful computers and improvements in computational EM techniques has it become possible to analyze such a complicated problem in a reasonable time. Computer-based interpretation and inversion methods are attracting more and more attention. Nevertheless, data interpretation is still ''an artful balance of physical understanding, awareness of the geological constraints, and pure experience'' (1).

In the following sections, we use several typical applications to outline the basic principles of subsurface EM methods. Physical insight is emphasized rather than rigorous mathematical analysis. Details of each method can be found in the references.

BOREHOLE EM METHODS

Borehole EM methods are an important part of well-logging methods. Since water is conductive and oil is an insulator, resistivity measurements are good indicators of oil presence. Water also has an unusually high dielectric constant, so permittivity measurement is a good detector of moisture content. Early borehole EM methods consisted mainly of electrical measurements using very simple low-frequency electrode arrays such as the short normal and the long normal. More sophisticated electrode tools were developed later. Some of these tools are mounted on a mandrel, which performs measurements centered in the borehole; these are called mandrel tools. Alternatively, the sensors can be mounted on a pad, and the corresponding tool is called a pad tool. One of the most successful borehole EM methods is induction logging. Since Doll published his first paper in 1949 (2), this technique has been used widely with confidence in the petroleum industry. Extensive research work has been done in this area. The systems in use now are so sophisticated that many modern electrical techniques are involved. Nevertheless, the principles still remain the same and can be understood by studying a simple case.

The induction logging technique, as proposed by Doll, makes use of several coils wound on an insulating mandrel, called a sonde. Some of the coils, referred to as transmitters, are powered with alternating current (ac). The transmitters radiate the field into the conductive formation and induce a secondary current, which is nearly proportional to the formation conductivity. The secondary current radiates a secondary field, which can be detected by the receiver coils. The receiver signal (voltage) is normalized with respect to the transmitter current and represented as an apparent conductivity, which serves as an indication of the underground conductivity. To obtain information from the apparent conductivity, we need to understand how the apparent conductivity and the true conductivity are related. According to Doll's theory, the relation in cylindrical coordinates is given by

σa = ∫_{−∞}^{+∞} dz′ ∫_0^{+∞} dρ′ gD(ρ′, z′) σ(ρ′, z′)   (1)

where σ(ρ′, z′) is the formation conductivity. The kernel gD(ρ′, z′) is the so-called Doll geometrical factor, which weights the contribution of the conductivity from various regions in the vicinity of the sonde. We notice that gD(ρ′, z′) is not a function of the true conductivity and hence is determined only by the tool configuration. The interpretation of the data would be simple if Doll's theory were exact. Unfortunately, this is rarely the case. Further studies show that Eq. (1) is true only in some extreme cases. The significance of Doll's theory, however, is that it relates the apparent conductivity and the formation conductivity, even though the theory is not exact. In the early development of induction logging techniques, tool design and data interpretation were based on Doll's theory, and in most cases it gives reasonable answers.

To establish a firm understanding of induction logging theory, we need to perform a rigorous analysis by using Maxwell's equations as follows:

∇ × H = −iωεE + Js + σE   (2)
∇ × E = iωμH   (3)
∇ · H = 0   (4)
∇ · D = ρ   (5)

where ∇ · Js = iωρ. In the preceding equations, the time dependence e^{−iωt} is assumed, and Js corresponds to the impressed current source. The parameters μ, ε, σ are the magnetic permeability, dielectric permittivity, and electric conductivity, respectively. To simplify the analysis, we assume that both the impressed source and the geometry of the problem are axisymmetric; consequently, all the field components are independent of the azimuthal angle. Furthermore, it can be shown that there is no stored charge under the preceding assumption. The working frequency of induction logging is about 20 kHz, so the displacement current −iωεE is very small compared to the conduction current σE and hence is neglected in the following discussion. After these simplifications, we have

∇ × H − σE = Js   (6)
∇ × E − iωμH = 0   (7)
∇ · H = 0   (8)
∇ · E = 0   (9)

where we now assume ∇ · Js = iωρ = 0. For convenience, the auxiliary vector potential is introduced. Since ∇ · H = 0 and ∇ · (∇ × A) = 0, it is possible to define H = ∇ × A. To specify the field uniquely, we choose E = iωμA, which is only true when there is no charge accumulation. Substituting these expressions into Eq. (6), we have

∇ × ∇ × A − iωμσA = Js   (10)

By using the vector identity, we have

∇²A + k²A = −Js   (11)

where

k² = iωμσ   (12)

To demonstrate how the apparent conductivity and the formation conductivity are related, we first write down the solution of Eq. (11) in a homogeneous medium as follows (3,4):

A(ρ, z, φ) = (1/4π) ∫_V Js(ρ′, z′, φ′) (e^{ikr₁}/r₁) dV′   (13)

where

r₁ = {(z − z′)² + ρ² + ρ′² − 2ρρ′ cos(φ − φ′)}^{1/2}   (14)

The volume integration is evaluated over the regions containing the impressed current sources, and the coordinate system used in Eq. (13) is shown in Fig. 1.

[Figure 1. Induction logging tool transmitter and receiver coil pair of radius a, used to explain the geometric factor theory; r₁ and r₂ are the distances from a formation element at (ρ′, z′, φ′) to the transmitter and the receiver, respectively. (Redrawn from Ref. 4.)]

Usually, a small current loop is used as an excitation, which implies that only Aφ exists. Hence, Eq. (13) can be further simplified as

Aφ(ρ, z) = (1/4π) ∫_V Jφ(ρ′, z′) cos(φ − φ′) (e^{ikr₁}/r₁) dV′   (15)

When the radius of the current loop becomes infinitely small, it can be viewed as a magnetic dipole, and thus the preceding integration can be approximated as

Aφ = (m/4π) ρ(1 − ikr₁)e^{ikr₁}/r₁³   (16)

where m = N_T I(πa²) is the magnetic dipole moment and N_T is the number of turns wound on the mandrel. At the receiver point, the voltage induced on a receiver coil with N_R turns can be represented as

V = 2πaN_R Eφ = iωμ [2N_T N_R (πa²)² I/(4π)] (1 − ikL) e^{ikL}/L³   (17)

where

Eφ = iωμ Aφ(a, L)   (18)

and L is the distance between the transmitter and the receiver. Since the voltage is a complex quantity, it can be separated into real and imaginary parts and expanded in powers of kL as follows (3):

V_R = −Kσ [1 − 2L/(3δ) + ···]   (19)

V_X = Kσ (δ²/L²)[1 − (2/3)(L³/δ³) + ···]   (20)

where

K = (ωμ)²(πa²)² N_T N_R I/(4πL)   (21)

and

δ = [2/(ωμσ)]^{1/2}   (22)

The quantity K is known as the tool constant and is totally determined by the configuration of the tool; δ is the so-called skin depth, which describes the attenuation in a conductor in terms of the field penetration distance. The quantity V_R is called the R signal. The apparent conductivity is defined as (3)

σa = −V_R/K ≅ σ[1 − 2L/(3δ)]   (23)

In the preceding analysis, there are some important facts that need to be mentioned. In Eq. (19), we see that the apparent conductivity is a nonlinear function of the true conductivity, even in a homogeneous medium. The lower the working frequency or the lower the true conductivity, the more linear it will be. The difference between the true conductivity and the apparent conductivity is defined as the skin effect signal,

σs = σ − σa   (24)

The leading term of the imaginary part V_X is not a function of the true conductivity. In fact, it corresponds to the direct coupling field, which does not contain any formation information. What remains in V_X is the so-called X signal. Since the direct term is much larger than the residual part, including V_R, it is difficult to separate the X signal. The importance of the X signal is seen by comparing Eqs. (19) and (20), from which we find that the X signal is the first-order approximation of the nonlinear term in V_R, the R signal. This fact can be used to compensate for the skin effect.

So far we have introduced the concept of apparent conductivity by studying the homogeneous case. In practice, the formation conductivity distribution is far more complicated. The apparent conductivity and the formation conductivity are related through a nonlinear convolution. As a proof, we derive the solution in an integral form, instead of directly solving the differential equations. To this end, we first rewrite Eq. (11) as

∇²A = −Js − Ji   (25)

where Ji = k²A is the induced current. The solution of Eq. (25) can be written in integral form as

A = (1/4π) ∫_V (Js/r_s) dV′ + (1/4π) ∫_V (Ji/r₂) dV′   (26)

The first integral is evaluated over the regions containing the impressed sources, and the second one is performed over the entire formation. Under the same assumptions as in the preceding analysis, the receiver voltage can be written as (4)

V = [i2πaN_R ωμ/(4π)] ∫_V (Jφ/r_s) dV′ − [2πaN_R ω²μ²/(4π)] ∫_V σ(ρ′, z′)Aφ(ρ′, z′)/r₂ dV′   (27)

The vector potential can also be separated into real and imaginary parts:

Aφ = AφR + iAφI   (28)

Substituting Eq. (28) into Eq. (27) and separating out the real part of the receiver voltage, we have

V_R = [−(ωμ)²(2πaN_R)/(4π)] ∫_{−∞}^{∞} dz′ ∫_0^{∞} dρ′ σ(ρ′, z′) AφR ∫_0^{2π} [cos(φ − φ′)/r₂] dφ′   (29)

Applying the same procedure, we obtain the apparent conductivity as

σa = −V_R/K = ∫_0^{∞} dρ′ ∫_{−∞}^{∞} dz′ σ(ρ′, z′) g_P(ρ′, z′)   (30)

where

g_P = [2πLρ′/((πa)³ N_T I)] AφR ∫_0^{2π} [cos(φ − φ′)/r₂] dφ′   (31)

The function g_P is the exact definition of the geometrical factor. In comparison with Doll's geometrical factor, g_P depends not only on the tool configuration but also on the formation conductivity, since the vector potential depends on the formation conductivity. The integral-form solution does not provide any computational advantage, since the differential equation for the vector potential AφR must still be solved. But it is now clear from Eq. (30) that the apparent conductivity is the result of a nonlinear convolution. Equation (30) also represents the starting point of inverse filtering techniques, which make use of both the R and X signals to reconstruct the formation conductivity. Finding the vector potential A is still a challenge. Analytic solutions are available only for a few simple geometries. In most cases, we have to use numerical techniques such as the finite element method (FEM), the finite difference method (FDM), numerical mode matching (NMM), or the volume integral equation method (VIEM). Interested readers may find Refs. 5 through 8 useful.

Previously, we mentioned that Doll's geometrical factor theory is valid only under some extreme conditions. In fact, it can be derived from the exact geometrical factor as a special case (4). In a homogeneous medium, the vector potential AφR can be calculated as

AφR ≅ [(πa²)N_T I/(4π)] (ρ′/r₁³) Re{(1 − ikr₁)e^{ikr₁}}   (32)

The integration with respect to φ′ in Eq. (31) can also be performed for r₂ ≫ a. The final result is

σa = σ ∫_{−∞}^{∞} ∫_0^{∞} g_D(ρ′, z′) Re{(1 − ikr₁)e^{ikr₁}} dρ′ dz′   (33)

where

g_D(ρ′, z′) = (L/2) ρ′³/(r₁³ r₂³)   (34)

It is now clear that Doll’s geometric factor and the exact geometric factor are the same when the medium is homogeneous and the wave number approaches zero. So far we have discussed the basic theory of induction logging. We now use a simple example to show some practical concerns and briefly discuss the solutions. In Fig. 2, we show an apparent resistivity (the inverse of apparent conductivity) response of a commercial logging tool 6FF40 (trademark of the Schlumberger Company) in the Oklahoma benchmark. The black line is the formation resistivity, and the red line is the unprocessed data of 6FF40. We notice that the apparent resistivity data roughly indicate the variation of the true resistivity, but around 4850 ft the apparent resistivity Ra is much higher than the true resistivity Rt, which results from the ‘‘skin effect’’ (9). From 4927 to 4955 ft, Ra is substantially lower than Rt, which is caused by the so-called shoulder effect. The shoulder effect arises when two adjacent low-resistance layers generate strong signals, even though the tool is not in these two regions. Around 5000 ft, there are a number of thin layers, but the tool’s response fails to indicate them. This failure results from the tool’s limited resolution, which is represented in terms of the smallest thickness that can be identified by the tool.
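Equation (34) can be checked numerically: in the k → 0 limit, Eq. (33) must return σa = σ for a homogeneous formation, which requires Doll's geometric factor to integrate to unity over the formation half-plane. The sketch below assumes unit coil spacing and an arbitrary grid extent and step; these are illustrative choices, not parameters of any real tool.

```python
import numpy as np

# Doll's geometric factor, Eq. (34): g_D = (L/2) * rho'^3 / (r1^3 * r2^3),
# with r1, r2 the distances from the formation element to the transmitter
# (placed at z = 0) and the receiver (at z = L) on the borehole axis.
L = 1.0                                        # transmitter-receiver spacing
step = 0.03
rho = np.arange(step / 2, 30.0, step)          # radial midpoint grid
z = np.arange(-30.0 + step / 2, 31.0, step)    # vertical midpoint grid
R, Z = np.meshgrid(rho, z, indexing="ij")
r1 = np.sqrt(R**2 + Z**2)                      # distance to transmitter
r2 = np.sqrt(R**2 + (Z - L)**2)                # distance to receiver
g_D = 0.5 * L * R**3 / (r1**3 * r2**3)
total = g_D.sum() * step * step
# total falls slightly below 1 only because the grid truncates the slowly
# decaying far-field tail of g_D; it does not depend on sigma at all.
```

Because g_D carries no dependence on the conductivity, the same weights apply to every formation; removing exactly that approximation is what the exact factor g_P of Eq. (31) accomplishes.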

[Figure 2. HRI log and 6FF40 log: apparent resistivity responses of different tools in the Oklahoma benchmark, plotted as resistivity (Ω · m, 10⁻¹ to 10³) versus depth (4850 ft to 5000 ft). Curves: true resistivity; raw data of 6FF40; 6FF40 data with DC & SB (deconvolution and skin-effect boosting); HRI data. The improvement in resolution of the HRI tool is significant.]


The blue line is the processed 6FF40 data after skin effect boosting and a three-point deconvolution. Skin effect boosting is based on Eq. (19), which is solved iteratively for the true conductivity from the apparent conductivity. The three-point deconvolution is performed under the assumption that the convolution in Eq. (30) is almost linear (10). These two methods do improve the final results to some degree, but they also cause the spurious artifacts observed near 4880 ft, since the two effects are treated separately. The green curve is the response of the HRI (high-resolution induction) tool (trademark of Halliburton) (11). A complex coil configuration is used to optimize the geometrical factor. After the raw data are obtained, a nonlinear deconvolution based on the X signal is performed. The improvement in the final results is significant. Schlumberger recently released its AIT (array induction image tool), which uses eight induction-coil arrays operating at different frequencies (12). The deconvolutions are performed in both the radial and vertical directions, and a quantitative two-dimensional image of formation resistivity is possible after a large number of measurements (13,14). The aforementioned data processing techniques are based on inverse deconvolution filtering, which is computationally efficient and easily run in real time on a logging truck computer. An alternative approach is to use inverse scattering theory, which is becoming increasingly practical and promising with the development of high-speed computers (8,15). Besides the induction method, there are other methods, such as electrode methods and propagation methods. Induction methods are suitable for fresh-water mud, oil-base mud, or air-filled boreholes, since the low or zero conductivity in the borehole then has little effect on the measurement.
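The skin effect boosting mentioned above can be sketched as a fixed-point iteration on Eq. (19)/(23): given a measured σa, repeatedly divide by the correction factor 1 − 2L/(3δ) evaluated at the current conductivity estimate. The coil spacing, frequency, and conductivity below are illustrative values, not the parameters of any particular tool.

```python
import math

MU0 = 4e-7 * math.pi               # magnetic permeability of free space, H/m

def skin_depth(omega, sigma):
    """Skin depth, Eq. (22): delta = sqrt(2 / (omega * mu * sigma))."""
    return math.sqrt(2.0 / (omega * MU0 * sigma))

def apparent_conductivity(sigma, omega, L):
    """Homogeneous-medium R signal, Eq. (23): sigma_a = sigma (1 - 2L/(3 delta))."""
    return sigma * (1.0 - 2.0 * L / (3.0 * skin_depth(omega, sigma)))

def boost(sigma_a, omega, L, iters=30):
    """Invert Eq. (23) for the true conductivity by fixed-point iteration."""
    sigma = sigma_a
    for _ in range(iters):
        sigma = sigma_a / (1.0 - 2.0 * L / (3.0 * skin_depth(omega, sigma)))
    return sigma

omega = 2.0 * math.pi * 20e3       # ~20 kHz operating frequency, as in the text
L = 1.0                            # coil spacing in meters (illustrative)
sigma_true = 1.0                   # true conductivity, S/m (illustrative)
sigma_a = apparent_conductivity(sigma_true, omega, L)   # reduced by skin effect
sigma_rec = boost(sigma_a, omega, L)                    # recovers sigma_true
```

The iteration converges quickly here because the correction term 2L/(3δ) is small at induction-logging frequencies; for very conductive formations the full nonlinear series would be needed.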
If the mud is very conductive, it will generate a strong signal at the receiver and hence seriously degrade the tool's ability to make a deep reading. In such a case, electrode methods are preferable, since the conductive mud places the electrodes into better electrical contact with the formation. In the electrode methods, very low frequencies (≪1000 Hz) are used, and Laplace's equation is solved instead of the Helmholtz equation. Typical tools are the DLL (dual laterolog) and the SFL (spherical focusing log), both from Schlumberger. The dual laterolog is intended for both deep and shallow measurements, while the SFL is for shallow measurements (16–19). In addition, there are many tools mounted on pads to perform shallow measurements on the borehole wall. These may be just button electrodes mounted on a metallic pad. Because of their small size, they have high resolution but a shallow depth of investigation. Their high resolution can be used to map out fine stratifications on the borehole wall. When four pads are equipped with these button electrodes, the resistivity logs they measure can be correlated to obtain the dip of a geological bed. An example of this is the SHDT (stratigraphic high-resolution dip meter tool), also from Schlumberger (20). When an array of buttons is mounted on a pad, it can be used to generate a resistivity image of the borehole wall for formation evaluation, such as dips, cracks, and stratigraphy. Such a tool is called an FMS (formation microscanner) and is available from Schlumberger (21). For oil-based mud the SHDT does not work well, and microinduction sensors have been mounted on a pad for dipping-bed evaluation. Such a tool is known as the OBDT (oil-based mud dip meter tool) and is manufactured by Schlumberger (22,23). Sometimes information is needed not only on the conductivity but also on the dielectric permittivity. In such cases, the EPT (electromagnetic wave propagation tool), from Schlumberger, can be used. The working frequency of the EPT can be as high as hundreds of megahertz to 1 GHz. At such high frequencies, the real part of the complex permittivity ε′ is dominant:

ε′ = ε + iσ/ω   (35)
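As a numerical illustration of Eq. (35), the complex permittivity can be evaluated for fresh water and oil at an assumed 1 GHz operating frequency; the conductivities below are representative round numbers, not measured data.

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def complex_permittivity(eps_r, sigma, freq):
    """Eq. (35): eps' = eps + i * sigma / omega (e^{-i omega t} convention)."""
    omega = 2.0 * math.pi * freq
    return complex(eps_r * EPS0, sigma / omega)

freq = 1e9                                           # 1 GHz
eps_water = complex_permittivity(80.0, 1e-2, freq)   # fresh water, assumed sigma
eps_oil = complex_permittivity(2.0, 1e-6, freq)      # oil, assumed sigma
loss_tangent_water = eps_water.imag / eps_water.real # conduction vs displacement
```

At this frequency the real part dominates for both fluids, and it is the 80ε0-versus-2ε0 contrast in the real part that lets a permittivity measurement separate water from oil.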

EPT measurements provide information about the dielectric permittivity and hence can better distinguish fresh water from oil: water has a much higher dielectric constant (80ε0) than oil (2ε0). Phase delays at two receivers are used to infer the wave phase velocity and hence the permittivity. Interested readers can find materials on these methods in Refs. 24 and 25. Other techniques in electrical well logging include the use of borehole radar. In such a case, a pulse is sent to a transmitting antenna in the borehole, and the pulse echo from the formation is measured at the receiver. Borehole radar finds application in salt domes, where the electromagnetic loss is low. In addition, the nuclear magnetic resonance (NMR) technique can be used to detect the percentage of free water in a rock formation. The NMR signal in a rock formation is proportional to the spin echoes from the free protons that abound in free water. An example of such a tool is the PNMT (pulsed nuclear magnetic resonance tool), from Schlumberger (26).

GROUND PENETRATING RADAR

Another outgrowth of subsurface EM methods is ground penetrating radar (GPR). Because of its numerous advantages, GPR has been widely used in geological surveying, civil engineering, artificial target detection, and some other areas. The GPR design is largely application oriented. Even though various systems have different applications and considerations, their advantages can be summarized as follows: (1) because the frequency used in GPR is much higher than that used in the induction method, GPR has a higher resolution; (2) since the antennas do not need to touch the ground, rapid surveying can be achieved; (3) the data retrieved by some GPR systems can be interpreted in real time; and (4) GPR is potentially useful for organic contaminant detection and nondestructive detection (27–31). On the other hand, GPR has some disadvantages, such as shallow investigation depth and site-specific applicability.

The working frequency of GPR is much higher than that used in the induction method. At such high frequencies, the soil is usually very lossy. Even though there is always a tradeoff between investigation depth and resolution, a typical depth is no more than 10 m and is highly dependent on soil type and moisture content. The working principle of GPR is illustrated in Fig. 3(a) (28). The transmitter T generates transient or continuous EM waves that propagate in the underground. Whenever a change in the electrical properties of an underground region is encountered, the wave is reflected and refracted. The receiver R detects and records the reflected waves. From the recorded data, information pertaining to the depth, geometry, and material type can be obtained. As a simple example, we use Figs. 3(b) and 3(c) to illustrate how the data are recorded and interpreted. The underground contains one interface, one cavity, and one lens. At a single position, the receiver signals at different times are stacked along the time axis. After completing the measurement at one position, the procedure is iterated at all subsequent positions. The final results are presented in a two-dimensional map, which is called an echo sounder–type display.

[Figure 3. Working principle of the GPR: (a) transmitter T and receiver R above an interface; (b) survey over an interface, a cavity, and a lens; (c) the resulting time-versus-distance record. (Redrawn from Ref. 20.)]

To locate objects or interfaces, we need to know the wave speed in the underground medium. The wave speed in a medium of relative dielectric permittivity εr is

Cs = C0/√εr   (36)

where C0 = 3 × 10⁸ m/s. Usually, the transmitter and the receiver are close enough that the wave's propagation path can be considered vertical. The depth of the interface is then approximated as

D = 0.5 × (Cs × Ttotal)   (37)

where Ttotal is the total wave propagation time.

A practical GPR system is much more complicated, and a block diagram of a typical baseband GPR system is shown in Fig. 4.

[Figure 4. Block diagram showing operation of a typical baseband GPR system: source and modulation; transmit antenna; ground (soil, water, ice, etc.) containing the targets; receive antenna; signal sampling and digitization; data storage; signal processing; display. (Redrawn from Ref. 19.)]

Generally, a successful system design should meet the following requirements (27): (1) efficient coupling of the EM energy between antenna and ground; (2) adequate penetration with respect to the target depth; (3) a sufficiently large return signal for detection; and (4) adequate bandwidth for the desired resolution and noise control. The working frequency of typical GPR ranges from a few tens of megahertz to several gigahertz, depending on the application. The usual tradeoff holds: the wider the bandwidth, the higher the resolution but the shallower the penetration depth. A good choice is usually a tradeoff between resolution and depth. Soil properties are also critical in determining the penetration depth. It is observed experimentally that the attenuation of different soils can vary substantially. For example, dry desert sand and nonporous rocks have very low attenuation (about 1 dB/m at 1 GHz), while the attenuation of sea water can be as high as 300 dB/m at 1 GHz. Some typical

applications and preferred operating frequencies are listed in Table 1 (27). To meet the requirements of different applications, a variety of modulation schemes have been developed and can be classified in the following three categories: amplitude modulation (AM), frequency modulated continuous wave (FMCW), and continuous wave (CW). We will briefly discuss the advantages and limitations of each modulation scheme. There are two types of AM transmission used in GPR. For investigation of low-conductivity medium, such as ice and fresh water, a pulse modulated carrier is preferred (32,33). The carrier frequency can be chosen as low as tens of megahertz. Since the reflectors are well spaced, a relatively narrow transmission bandwidth is needed. The receiver signal is demodulated to extract the pulse envelope. For shallow and high-resolution applications, such as the detection of buried artifacts, a baseband pulse is preferred to avoid the problems caused by high soil attenuation, since most of the energy is in the low-frequency band. A pulse train with a duration of 1 to 2 ns, a peak amplitude of about 100 V, and a repetition rate of 100 kHz is applied to the broadband antenna. The received signal is downconverted by sampling circuits before being displayed. There are three primary advantages of the AM scheme: (1) It provides a real-time display without the need for subsequent signal processing; (2) the measurement time is short; and (3) it is implemented with small equipment but without synthesized sources and hence is cost effective. But for the AM scheme, it is difficult to control the transmission spectrum, and the signal-to-noise ratio (SNR) is not as good as that of the FMCW method. For the FMCW scheme, the frequency of the transmitted signal is continuously swept, and the receiver signal is mixed with a sample of transmitted signals. 
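This dechirp idea can be sketched numerically: an echo delayed by the two-way travel time τ appears after mixing as a beat tone at frequency kτ (k being the sweep rate), so a Fourier transform maps delay, and via Eqs. (36) and (37) depth, onto the frequency axis. All parameters below (sweep, soil permittivity, target depth, sample rate) are illustrative assumptions.

```python
import numpy as np

C0 = 3e8                       # free-space wave speed, m/s
eps_r = 9.0                    # assumed relative permittivity of the soil
Cs = C0 / np.sqrt(eps_r)       # Eq. (36): wave speed in the ground
B, T = 100e6, 1e-3             # swept bandwidth (Hz) and sweep duration (s)
k = B / T                      # sweep rate, Hz/s
depth_true = 10.0              # assumed reflector depth, m
tau = 2.0 * depth_true / Cs    # two-way travel time, s

fs = 2e6                       # sample rate of the dechirped (beat) signal
t = np.arange(0.0, T, 1.0 / fs)
# After mixing the echo with a sample of the transmitted sweep and low-pass
# filtering, what remains is a beat tone at frequency k * tau (constant
# phase terms omitted for clarity).
beat = np.cos(2.0 * np.pi * (k * tau) * t)
spectrum = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_beat = freqs[np.argmax(spectrum)]
depth_est = 0.5 * Cs * (f_beat / k)   # Eq. (37) with Ttotal = f_beat / k
```

The frequency resolution 1/T of the sweep sets the delay resolution, which is the spectral view of the usual bandwidth-resolution tradeoff.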
The Fourier transform of the received signal results in the time domain pulse that would represent the receiver signal if a time domain pulse were transmitted. The frequency sweep must be linear in time to minimize signal degradation, and a stable output is required to facilitate signal processing. The major advantage of the FMCW scheme is easier control of the signal spectrum; filter techniques can be applied to obtain a better SNR. A shortcoming of the FMCW system is the use of a synthesized frequency source, which makes the system expensive and bulky. Additional data processing is also needed before display (34,35).

Table 1. Desired Frequencies for Different Applications^a

Material                  | Typical Desired Penetration Depth^b | Approximate Maximum Frequency at Which Operation May Be Usefully Performed
Cold pure fresh-water ice | 10 km                               | 10 MHz
Temperate pure ice        | 1 km                                | 2 MHz
Saline ice                | 10 m                                | 50 MHz
Fresh water               | 100 m                               | 100 MHz
Sand (desert)             | 5 m                                 | 1 GHz
Sandy soil                | 3 m                                 | 1 GHz
Loam soil                 | 3 m                                 | 500 MHz
Clay (dry)                | 2 m                                 | 100 MHz
Salt (dry)                | 1 km                                | 250 MHz
Coal                      | 20 m                                | 500 MHz
Rocks                     | 20 m                                | 50 MHz
Walls                     | 0.3 m                               | 10 GHz

^a Redrawn from Ref. 19.
^b The figures under this heading are the depths at which radar probing gives useful information, taking into account the attenuation normally encountered and the nature of the reflectors of interest.

A continuous wave scheme was used in the early development of GPR, but now it is mainly employed in synthetic aperture and subsurface holography techniques (36–38). In these techniques, measurements are performed at a single frequency or a few well-spaced frequencies over an aperture at the ground surface. The wave front extrapolation technique is applied to reconstruct the underground region, with the resolution depending on the size of the aperture. Narrowband transmission is used, and hence high-speed data capture is avoided. The difficulty of the CW scheme comes from the requirement for accurate scanning of the two-dimensional aperture. The operation frequencies should be carefully chosen to minimize resolution degradation (27).

Antennas play an important role in system performance. An ideal antenna should introduce the least distortion of the signal spectrum, or else a distortion that can be easily compensated. Unlike the antennas used in atmospheric radar, the antennas used in GPR should be considered as loaded. The radiation pattern of a GPR antenna can be quite different from its free-space pattern because of the strong interaction between the antenna and the ground. Separate antennas for transmission and reception are commonly used, because it is difficult to make a switch fast enough to protect the receiver from the direct coupling signal. The direct breakthrough signals would seriously reduce the SNR and hence degrade the system performance. Moreover, in a separate-antenna system, the orientation of the antennas can be carefully chosen to reduce the cross-coupling level further.
Except for the CW scheme, the other modulation types require wideband transmission, which greatly restricts the choice of antenna. Four types of antennas, including element antennas, traveling wave antennas, frequency-independent antennas, and aperture antennas, have been used in GPR designs. Element antennas, such as monopoles, cylindrical dipoles, and biconical dipoles, are easy to fabricate and hence widely used in GPR systems. An orthogonal arrangement is usually chosen to maintain a low level of cross coupling. To overcome the narrow transmission bandwidth of thin dipole or monopole antennas, the distributed loading technique is used to expand the bandwidth at the expense of reduced efficiency (39–42). Another commonly used type is the traveling wave antenna, such as long wire antennas, V-shaped antennas, and rhombic antennas. Traveling wave antennas distinguish themselves from standing wave antennas in that the current pattern is a traveling wave rather than a standing wave. Standing wave antennas, such as half-wave dipoles, are also referred to as resonant antennas and are narrowband, while traveling wave antennas are broadband. The disadvantage of traveling wave antennas is that half of the power is wasted in the matching resistor (43,44). Frequency-independent antennas are often preferred in impulse GPR systems. It has been proved that if the antenna geometry is specified only by angles, its performance will be independent of frequency. In practice, the antenna must be truncated because of its limited outer size and inner feeding region, which determine the lower bound and upper bound of the frequency, respectively. In general, this type of antenna introduces nonlinear phase distortion, which results in an extended pulse response in the time domain (27,45). A phase correction procedure is needed if the antenna is used in a high-resolution GPR system. A wire antenna is a one-dimensional antenna that has a small effective area and hence lower gain. Some GPR systems require higher gain or a more directive radiation pattern; aperture antennas, such as horn antennas, are then preferred because of their large effective area. A ridge design is used to improve the bandwidth and reduce the size. Ridged horns with gain better than 10 dB over a range of 0.3 GHz to 2 GHz and VSWR lower than 1.5 over a range of 0.2 GHz to 1.8 GHz have been reported (46). Since many aperture antennas are fed via waveguides, the phase distortion associated with the different modulation schemes needs to be considered. Generally, antennas used in GPR systems require broad bandwidth and linear phase over the operating frequency range. Since the antennas work in close proximity to the ground surface, the interaction between the antennas and the ground must be taken into account.

Signal processing is one of the most important parts of a GPR system. Some modulation schemes directly give time domain data, while the signals of other schemes need to be demodulated before the information is available. Signal processing can be performed in the time domain, frequency domain, or space domain. A successful signal processing scheme usually consists of a combination of several processing techniques applied at different stages. Here, we outline some basic signal processing techniques involved in the GPR system. The first commonly used method is noise reduction by time averaging. It is assumed that the noise is random, so that it can be reduced by a factor of 1/√N by averaging N identical measurements spaced in time.
This technique works only for random noise and has no effect on clutter. Clutter reduction can be achieved by subtracting the mean. This technique is performed under the assumption that the statistics of the underground are independent of position. A number of measurements are made at a set of locations over the same material type to obtain the mean, which can be considered a measure of the system clutter. The frequency filter technique is commonly used in the FMCW system: signals outside the desired information bandwidth are rejected, so the SNR of the FMCW scheme is usually higher than that of the AM scheme. In some very lossy soils, the return signal is highly attenuated, which makes interpretation of the data difficult. If the material attenuation information is available, the results can be improved by exponentially weighting the time traces to counter the decrease in signal level due to the loss. In practice, this is done with a specially designed amplifier. Caution is needed when using this method, since the noise can also be amplified in such a system (27).
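As a numerical sketch of the time-averaging idea (all trace and noise values here are hypothetical), the snippet below stacks N noisy copies of a synthetic radar trace and shows the random noise dropping by roughly 1/√N:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical noise-free trace: one buried reflector as a Gaussian pulse
t = np.linspace(0.0, 1.0, 500)
trace = np.exp(-((t - 0.4) / 0.02) ** 2)

# N repeated measurements, each corrupted by random noise
N = 100
noisy = trace + 0.5 * rng.standard_normal((N, t.size))

# Time averaging ("stacking") the N traces
stacked = noisy.mean(axis=0)

noise_single = np.std(noisy[0] - trace)    # noise level in one measurement
noise_stacked = np.std(stacked - trace)    # after averaging, roughly 1/sqrt(N) smaller
```

As the text notes, this helps only against random noise; the clutter term, being the same in every trace, survives the average untouched.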

MAGNETOTELLURIC METHODS

The basic idea of the magnetotelluric (MT) method is to use natural electromagnetic fields to investigate the electrical conductivity structure of the earth. This method was first proposed by Tikhonov in 1950 (47). In his paper, the author assumed that the earth's crust is a planar layer of finite conductivity lying upon an ideally conducting substrate, so that a simple relation between the horizontal components of the E and H fields at the surface can be found (48):

iμ0ω Hx ≅ Ey γ coth(γl)   (38)

where

γ = (iσμ0ω)^(1/2)   (39)
The author used the data observed at Tucson (Arizona) and Zui (USSR) to compute the values of conductivity and thickness of the crust that best fit the first four harmonics. For Tucson, the conductivity and thickness were about 4.0 × 10⁻³ S/m and 1000 km, respectively; for Zui, the corresponding values were 3.0 × 10⁻¹ S/m and 100 km. The MT method distinguishes itself from other subsurface EM methods in that very low frequency natural sources are used. The actual mechanisms of the natural sources were under discussion for a long time, but it is now well accepted that the sources above 1 Hz are thunderstorms, while the sources below 1 Hz are due to the current system in the magnetosphere caused by solar activity. In comparison with other EM methods, the use of a natural source is a major advantage. The frequencies used range from 0.001 Hz to 10⁴ Hz, so investigation depths from 50–100 m down to several kilometers can be achieved. Installation is much simpler and has less impact on the environment. The MT method has also proved very useful in some extreme areas where conventional seismic methods are expensive or ineffective. The main shortcomings of the MT method are limited resolution and the difficulty of achieving a high SNR, especially in electrically noisy areas (49). In MT measurements, the time-varying horizontal electric and magnetic fields at the surface are recorded simultaneously. The data recorded in the time domain are first converted into frequency domain data by using a fast Fourier transform (FFT). An apparent conductivity is then defined as a function of frequency. To interpret the data, theoretical apparent conductivity curves are generated by model studies. The model whose apparent conductivity curve best matches the measurement data is taken as an approximate model of the subsurface.
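The FFT step above can be sketched as follows (the sampling rate and signal content are hypothetical, chosen only for illustration): a recorded field component is moved into the frequency domain, where the spectral line of the source stands out and the spectral ratios can then be formed.

```python
import numpy as np

fs = 8.0                                   # sampling rate in Hz (assumed)
n = 512
t = np.arange(n) / fs

# Hypothetical recorded horizontal E field: a 0.25 Hz line plus noise
rng = np.random.default_rng(2)
ex = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(n)

# Convert the time series into frequency domain data
EX = np.fft.rfft(ex)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# The dominant (non-DC) spectral line sits at the source frequency
peak = freqs[np.argmax(np.abs(EX[1:])) + 1]
```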
Since it is more convenient and meaningful to represent the apparent conductivity in terms of skin depth, we first introduce the concept of skin depth by studying a simple case. The model we use is shown in Fig. 5; it consists of a homogeneous medium with conductivity σ and a uniform current sheet flowing along the x direction in the xy plane.

Figure 5. Current sheet flowing on the earth's surface, used to explain the magnetotelluric method.


If the density of the current at the ground (z = 0) is represented as (50)

Ix = cos ωt,  Iy = Iz = 0   (40)

then the current density at depth z is

Ix = e^(−z√(ωμσ/2)) cos(ωt − z√(ωμσ/2)),  Iy = Iz = 0   (41)

As z increases, we notice that the amplitude of the current decreases exponentially, while the phase retardation progressively increases. To describe the amplitude attenuation, we introduce the skin depth p as (50)

p = √(2/(ωμσ))   (42)

the depth at which the current amplitude decreases to e⁻¹ of the current at the surface. Since the units in Eq. (42) are not convenient, some prospectors prefer the following formula:

p = (1/2π) √(10ρT)   (43)

where T is the period in seconds, ρ is the resistivity in Ω·m, and p is in kilometers. The skin depth indicates how deep the wave can penetrate the ground. For example, if the resistivity of the underground is 10 Ω·m and the period of the wave is 3 s, the skin depth is 2.76 km. Subsurface methods seldom have such a great penetration depth.

The data interpretation of the MT method is based on model studies. The earth is modeled as a two- or three-layer medium. For the two-layer model shown in Fig. 6, the general expression for the field can be written as (50)

0 ≤ z ≤ h:

Ex = A e^(a√σ1 z) + B e^(−a√σ1 z)   (44a)
Hy = e^(iπ/4) √(2σ1T) [−A e^(a√σ1 z) + B e^(−a√σ1 z)]   (44b)

h ≤ z < ∞:

Ex = e^(−a√σ2 z)   (45a)
Hy = e^(iπ/4) √(2σ2T) e^(−a√σ2 z)   (45b)

Figure 6. Two-layer model of the earth's crust, used to demonstrate the responses of the magnetotelluric method.

where h is the thickness of the upper layer, and σ1, σ2 are the conductivities of the upper and lower layers, respectively. Matching the boundary conditions at z = h, we have

A = [(√σ1 − √σ2)/(2√σ1)] e^(−ah(√σ1 + √σ2))   (46)
B = [(√σ1 + √σ2)/(2√σ1)] e^(ah(√σ1 − √σ2))   (47)

Since we are interested in the ratio between the E and H fields on the surface, Eq. (44) can be rewritten for z = 0 as

Ex/Hy = [1/√(2σ1T)] (M/N) e^(−i(π/4+φ+ψ))   (48)

where M, N, φ, and ψ satisfy the following equations:

M cos φ = cosh(h/p1) cos(h/p1) + (p1/p2) sinh(h/p1) cos(h/p1)   (49a)
M sin φ = sinh(h/p1) sin(h/p1) + (p1/p2) cosh(h/p1) sin(h/p1)   (49b)
N cos ψ = sinh(h/p1) cos(h/p1) + (p1/p2) cosh(h/p1) cos(h/p1)   (49c)
N sin ψ = cosh(h/p1) sin(h/p1) + (p1/p2) sinh(h/p1) sin(h/p1)   (49d)

where p1, p2 are the skin depths of the upper and lower layers, respectively. For a multilayer medium, after applying the same procedure, we can obtain exactly the same relation between Ex and Hy as shown in Eq. (48), except that the expressions for M, N, φ, and ψ are much more complicated. Because of this similarity, we have

|Ex/Hy| = 1/√(2σaT) = [1/√(2σ1T)] (M/N)   (50)

where σa is defined as the apparent conductivity. If the medium is homogeneous, the apparent conductivity is equal to the true conductivity. In a multilayer medium the apparent conductivity is an average effect of all layers. To obtain a better understanding of the preceding formulas, we first study two two-layer models and their corresponding apparent conductivity curves, as shown in Fig. 7 (51).
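A numerical sketch of the preceding formulas (the layer conductivities and thickness below are hypothetical): the snippet evaluates the practical skin-depth formula of Eq. (43) and a complex-arithmetic form of the two-layer ratio M/N of Eqs. (48)–(49), then checks that the apparent conductivity of Eq. (50) tends to σ2 at low frequency and to σ1 at high frequency, as described in the text.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space

def skin_depth_km(rho, T):
    # Eq. (43): p [km] = sqrt(10 * rho * T) / (2 pi), rho in ohm-m, T in s
    return np.sqrt(10.0 * rho * T) / (2.0 * np.pi)

def apparent_conductivity(sigma1, sigma2, h, T):
    # Two-layer ratio M/N of Eqs. (48)-(49), written in complex form:
    # M e^{i phi} = cosh(gh) + (p1/p2) sinh(gh),
    # N e^{i psi} = sinh(gh) + (p1/p2) cosh(gh), with gh = (1+i) h / p1.
    # Then Eq. (50) gives sigma_a = sigma1 * (N / M)**2.
    omega = 2.0 * np.pi / T
    p1 = np.sqrt(2.0 / (omega * MU0 * sigma1))   # Eq. (42), upper layer
    p2 = np.sqrt(2.0 / (omega * MU0 * sigma2))   # Eq. (42), lower layer
    gh = (1.0 + 1.0j) * h / p1
    M = np.cosh(gh) + (p1 / p2) * np.sinh(gh)
    N = np.sinh(gh) + (p1 / p2) * np.cosh(gh)
    return sigma1 * (abs(N) / abs(M)) ** 2

# Worked example from the text: 10 ohm-m and a 3 s period give about 2.76 km
p = skin_depth_km(10.0, 3.0)

# Hypothetical two-layer earth: 0.01 S/m over 1 S/m, upper layer 1 km thick
sigma_lo = apparent_conductivity(0.01, 1.0, 1000.0, 1.0e5)   # very long period
sigma_hi = apparent_conductivity(0.01, 1.0, 1000.0, 1.0e-3)  # very short period
```

At the long period the result approaches the lower-layer conductivity (1 S/m); at the short period it approaches the upper-layer conductivity (0.01 S/m), reproducing the asymptotic behavior of the curves in Fig. 7.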

Figure 7. Diagrammatic two-layer apparent resistivity curves for the models shown (conductive sediments ρ1 over a resistive basement ρ2; curves A and B plotted against frequency). (Redrawn from Ref. 43.)


Ex = Zxx Hx + Zxy Hy   (51a)
Ey = Zyx Hx + Zyy Hy   (51b)

Since Ex, Ey, Hx, and Hy are generally out of phase, the Zij are complex numbers. It can also be shown that the Zij have the following properties:

Zxx + Zyy = 0   (52)
Zxy − Zyx = constant   (53)

Figure 8. Diagrammatic three-layer apparent resistivity curve for the model shown (resistive sediments ρ1 over conductive sediments ρ2 over a resistive basement ρ3). (Redrawn from Ref. 43.)


At very low frequencies, the wave can easily penetrate the upper layer, and thus its conductivity has little effect on the apparent conductivity. Consequently, the apparent resistivity approaches the true resistivity of the lower layer. As the frequency increases, less energy can penetrate the upper layer because of the skin effect, and thus the effect of the upper layer is dominant. As a result, the apparent resistivity is asymptotic to ρ1. Comparing the two curves, we note that both change smoothly, and that for the same frequency, case A has lower apparent resistivity than case B, since the conductive sediments of case B are thicker. Our next example is the three-layer model shown in Fig. 8 (51). The center layer is more conductive than the two adjacent ones. As expected, the curve approaches ρ1 and ρ3 at each end. The existence of the center conductive bed is obvious from the curve, but the apparent resistivity never reaches the true resistivity of the center layer, since its effect is averaged with the effects of the other two layers. So far we have discussed only the horizontally layered medium, which is a one-dimensional model. In practice, two-dimensional or even three-dimensional structures are often encountered. In a 2-D case, the conductivity changes not only along the z direction but also along one of the horizontal directions. The other horizontal direction is called the "strike" direction. If the strike direction is not in the x or y direction, we obtain the general relation between the horizontal field components given in Eqs. (51a) and (51b) (51).

Figure 9. Diagrammatic response curves for a simple vertical contact at frequency f. (Redrawn from Ref. 43.)

A simple vertical contact model and its corresponding curves are shown in Fig. 9 (51). In Fig. 9(b), the apparent resistivity with respect to E∥ changes slowly from ρ1 to ρ2 because of the continuity of H⊥ and E∥ across the interface. On the other hand, the apparent resistivity corresponding to E⊥ changes abruptly across the contact, since E⊥ is discontinuous at the interface. The relative amplitude of H⊥ varies significantly around the interface and approaches a constant at a large distance, as shown in Fig. 9(d). This is caused by the change in current density near the interface, as shown in Fig. 9(f). We also observe that Hz appears near the interface, as shown in Fig. 9(c); the reason is that the partial derivative of E∥ with respect to the ⊥ direction is nonzero. We have discussed the responses of some idealized models. For more complicated cases, the response curves can be obtained by forward modeling. Since the measurement data are in the time domain, we need to convert them into frequency domain data by using a Fourier transform. In practice, five components are measured. There are four unknowns in Eqs. (51a) and (51b), but only two equations. This difficulty can be overcome by making use of the fact that Zij changes very slowly with frequency. In fact, Zij is computed as an average over a frequency band that contains several frequency sample points. A commonly used method is given in Ref. 52,


according to which Eq. (51a) is rewritten as

⟨Ex A*⟩ = Zxx ⟨Hx A*⟩ + Zxy ⟨Hy A*⟩   (54)

and

⟨Ex B*⟩ = Zxx ⟨Hx B*⟩ + Zxy ⟨Hy B*⟩   (55)

where A* and B* are the complex conjugates of any two of the horizontal field components, and ⟨AB*⟩ denotes the cross power, defined as

⟨AB*⟩(ω1) = (1/Δω) ∫ from ω1−Δω/2 to ω1+Δω/2 of AB* dω   (56)

There are six possible combinations, and the pair (Hx, Hy) is preferred in most cases because of its greater degree of independence. Solving Eqs. (54) and (55), we have

Zxx = (⟨Ex A*⟩⟨Hy B*⟩ − ⟨Ex B*⟩⟨Hy A*⟩) / (⟨Hx A*⟩⟨Hy B*⟩ − ⟨Hx B*⟩⟨Hy A*⟩)   (57a)

and

Zxy = (⟨Ex A*⟩⟨Hx B*⟩ − ⟨Ex B*⟩⟨Hx A*⟩) / (⟨Hy A*⟩⟨Hx B*⟩ − ⟨Hy B*⟩⟨Hx A*⟩)   (57b)

Applying the same procedure to Eq. (51b), we have

Zyx = (⟨Ey A*⟩⟨Hy B*⟩ − ⟨Ey B*⟩⟨Hy A*⟩) / (⟨Hx A*⟩⟨Hy B*⟩ − ⟨Hx B*⟩⟨Hy A*⟩)   (57c)

and

Zyy = (⟨Ey A*⟩⟨Hx B*⟩ − ⟨Ey B*⟩⟨Hx A*⟩) / (⟨Hy A*⟩⟨Hx B*⟩ − ⟨Hy B*⟩⟨Hx A*⟩)   (57d)

After the Zij are obtained, they can be substituted into Eqs. (51a) and (51b) to solve for the pair (Ex, Ey), which is then compared with the measurement data. Any difference is due either to noise or to measurement error. This procedure is usually used to verify the quality of the measured data.
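A small numerical sketch of Eqs. (54)–(57) (the impedance values and synthetic spectra below are hypothetical): cross powers are formed with the preferred pair (A, B) = (Hx, Hy), and the two-equation system is solved for Zxx and Zxy. With noise-free data the true values are recovered exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" impedance elements at one frequency
Zxx_true = 0.1 + 0.05j
Zxy_true = 2.0 - 1.0j

# Synthetic spectra at several sample points inside the averaging band
n = 64
Hx = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Hy = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Ex = Zxx_true * Hx + Zxy_true * Hy            # Eq. (51a), noise-free

def cross(a, b):
    # Band-averaged cross power <A B*>, a discrete stand-in for Eq. (56)
    return np.mean(a * np.conj(b))

A, B = Hx, Hy                                  # the preferred pair

# Eq. (57a): Zxx from the cross powers
den_x = cross(Hx, A) * cross(Hy, B) - cross(Hx, B) * cross(Hy, A)
Zxx = (cross(Ex, A) * cross(Hy, B) - cross(Ex, B) * cross(Hy, A)) / den_x

# Eq. (57b): Zxy from the cross powers
den_y = cross(Hy, A) * cross(Hx, B) - cross(Hy, B) * cross(Hx, A)
Zxy = (cross(Ex, A) * cross(Hx, B) - cross(Ex, B) * cross(Hx, A)) / den_y
```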

secondary field to the primary field becomes very small, and thus the resolution of airborne EM methods is not very high. The operating frequency is usually chosen between 300 Hz and 4000 Hz; the lower limit is set by the transmission effectiveness, and the upper limit by the skin depth. Based on different design principles and application requirements, many systems have been built and operated all over the world since the 1940s. Despite this tremendous diversity, most airborne EM systems can be classified into one of the following categories according to the quantities measured: phase component measuring systems, quadrature systems, rotating field systems, and transient response systems (54).

For a phase component measuring system, the in-phase and quadrature components are measured at a single frequency and recorded as parts per million (ppm) of the primary field. In the system design, vertical loop arrangements are preferred, since they are more sensitive to a steeply dipping conductor and less sensitive to a horizontally layered conductor (55). Accurate maintenance of the transmitter-receiver separation is essential and can be achieved by fixing the transmitter and receiver at the two wing tips. Once this requirement is satisfied, a sensitivity of a few ppm can be achieved (54). A diagram of the phase component measuring system is shown in Fig. 10 (55). A balancing network associated with the reference loop is used to buck the primary field at the receiver. The receiver signal is then fed to two phase-sensitive demodulators to obtain the in-phase and quadrature components. Low-pass filters are used to reject very high frequency signals that do not originate from the earth. The data are interpreted by matching the curves obtained from modeling. Some response curves of typical structures are given in Ref. 56.
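The phase-sensitive demodulation step can be sketched numerically (the transmitter frequency, sampling rate, and ppm levels below are hypothetical): the received signal is multiplied by the reference and by its 90-degree shifted copy, and low-pass filtering (here a simple mean over whole cycles) yields the in-phase and quadrature components.

```python
import numpy as np

f0 = 1000.0                          # transmitter frequency in Hz (assumed)
fs = 100_000.0                       # sampling rate in Hz (assumed)
t = np.arange(int(0.05 * fs)) / fs   # 50 full cycles of f0

# Hypothetical received secondary field: 12 ppm in phase and
# 7 ppm in quadrature with the primary-field reference
sig = (12.0 * np.cos(2 * np.pi * f0 * t)
       + 7.0 * np.sin(2 * np.pi * f0 * t)) * 1e-6

# Phase-sensitive demodulation: multiply by the reference (and its
# 90-degree shifted copy), then low-pass filter (here, the mean)
in_phase_ppm = 2.0 * np.mean(sig * np.cos(2 * np.pi * f0 * t)) * 1e6
quadrature_ppm = 2.0 * np.mean(sig * np.sin(2 * np.pi * f0 * t)) * 1e6
```

Averaging over an integer number of cycles makes the cross terms vanish, so the two outputs recover the 12 ppm and 7 ppm components directly.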


AIRBORNE ELECTROMAGNETIC METHODS

Airborne EM methods (AEM) are widely used in geological surveys and in prospecting for conductive ore bodies. These methods are suitable for large-area surveys because of their speed and cost effectiveness. They are also preferred in areas where access is difficult, such as swamps or ice-covered areas. In contrast to ground EM methods, airborne EM methods are usually used to outline large-scale structures, while ground EM methods are preferred for more detailed investigations (53). The difference between airborne and ground EM systems results from the technical limitations inherent in the use of aircraft. The limited separation between transmitter and receiver determines the shallow investigation depth, usually from 25 m to 75 m. Even though greater penetration depth can be achieved by placing the transmitter and receiver on different aircraft, the disadvantages are obvious. The transmitters and receivers are usually 200 ft to 500 ft above the surface. Consequently, the amplitude ratio of the


Figure 10. Block diagram showing operation of a typical phase component measuring system. (Redrawn from Ref. 43.)


duced PUlsed Transient) (59), which was designed by Barringer during the 1950s. In the INPUT system, a large horizontal transmitting coil is placed on the aircraft, and a vertical receiving coil is towed in the bird with its axis aligned with the flight direction. The working principle of INPUT is shown in Fig. 12 (60). A half sine wave with a duration of about 1.5 ms and a quiet period of about 2.0 ms is generated as the primary field, as shown in Fig. 12(a). If there are no conducting zones, the current in the receiver is induced only by the primary field, as shown in Fig. 12(b). In the presence of conductive anomalies, the primary field induces an eddy current. After the primary field is cut off, the eddy current decays exponentially. The duration of the eddy current depends on the conductivity of the anomaly, as shown in Fig. 12(c): the higher the conductivity, the longer the decay. The decay curve in the quiet period is sampled successively in time by six channels and then displayed on a strip chart, as shown in Fig. 13. As we can see, the distortion caused by a good conductor ap-

Figure 11. Working principle of the rotary field AEM system. (Redrawn from Ref. 50.)

The quadrature system employs a horizontal coil placed on the airplane as a transmitter and a vertical coil towed behind the plane as a receiver. The vertical coil is referred to as a "towed bird." Since only the quadrature component is measured, the separation distance is less critical. To reduce the noise further, an auxiliary horizontal coil, powered with a current 90 degrees out of phase with the main transmitter current, is used to cancel the secondary field caused by the metal body of the aircraft. Since the response at a single frequency may have two interpretations, two frequencies are used to eliminate the ambiguity. The lower frequency is about 400 Hz, and the higher one is chosen between 2000 Hz and 2500 Hz. The system responses in different environments can be obtained from model studies; Reference 57 gives a number of curves for thin sheets and shows the effects of variation in depth, dipping angle, and conductivity.

In an airborne system, it is hard to control the relative rotation of receiver and transmitter. The rotating field method was introduced to overcome this difficulty. Two transmitter coils are placed perpendicular to each other on the plane, and a similar arrangement is used for the receiver. The two transmitters are powered with currents of the same frequency, shifted 90 degrees out of phase, so that the resultant field rotates about the axis, as shown in Fig. 11 (58). The two receiver signals are phase shifted by 90 degrees with respect to each other, and then the in-phase and quadrature differences at the two receivers are amplified and recorded by two different channels. Over a barren area, the outputs are set to zero. When the system is within a conducting zone, anomalies in the conductivity are indicated by nonzero outputs in both the in-phase and quadrature channels.
The noise introduced by fluctuations of orientation can be reduced by this scheme, but it is relatively expensive, and the data interpretation is complicated by the complex coil system (58). The fundamental problem of airborne EM systems is the difficulty of detecting the relatively small secondary field in the presence of a strong primary field. This difficulty can be alleviated by using the transient field method. A well-known system based on the transient field method is INPUT (IN-

Figure 12. Working principle of the INPUT system: (a) the primary current (and magnetic field), a 1.5 ms half sine wave followed by a 2.0 ms quiet period; (b) the current induced in the receiver by the primary field alone; (c) the total field induced in the receiver, with decay curves due to good and poor conductors and the channel sample periods. (Redrawn from Ref. 52.)

12. T. D. Barber and R. A. Rosthal, Using a multiarray induction tool to achieve high resolution logs with minimum environmental effects, Trans. SPE, paper SPE 22725, 1991.

Figure 13. Responses of different anomalies appearing on different channels: good-conductor anomalies appear on all channels, while poor-conductor anomalies appear only on the early channels. (Redrawn from Ref. 52.)

pears in all the channels, while the distortion corresponding to a poor conductor registers only on the early channels. Since the secondary field can be measured more accurately in the absence of the primary field, transient systems provide greater investigation depths, which may reach 100 m under favorable conditions. In addition, they can also provide a direct indication of the type of conductor encountered (58). On the other hand, this system design gives rise to other problems inherent in the transient method. Since the eddy current in the quiet period becomes very small, a more intense source has to be used in order to obtain the same signal level as in the continuous wave method. The circuitry for the transient system is much more complicated, and it is more difficult to reject noise because of the wideband nature of the transient signal.

BIBLIOGRAPHY

1. C. M. Swift, Jr., Fundamentals of the electromagnetic method, in M. N. Nabighian (ed.), Electromagnetic Methods in Applied Geophysics: Theory, Tulsa, OK: Society of Exploration Geophysicists, 1988.
2. H. G. Doll, Introduction to induction logging and application to logging of wells drilled with oil based mud, J. Petroleum Tech., 1: 148–162, 1949.
3. J. H. Moran and K. S. Kunz, Basic theory of induction logging and application to study of two-coil sondes, Geophysics, 27 (6): 829–858, 1962.
4. S. J. Thandani and H. E. Hall, Propagated geometric factors in induction logging, Trans. SPWLA, 2: paper WW, 1981.
5. B. Anderson, Induction sonde response in stratified medium, The Log Analyst, XXIV (1): 25–31.
6. W. C. Chew, Response of a current loop antenna in an invaded borehole, Geophysics, 49: 81–91, 1984.
7. J. R. Lovell, Finite Element Method in Resistivity Logging, Ridgefield, CT: Schlumberger Technology Corporation, 1993.
8. W. C. Chew and Q. H. Liu, Inversion of induction tool measurements using the distorted Born iterative method and CG-FFHT, IEEE Trans. Geosci. Remote Sens., 32 (4): 878–884, 1994.
9. S. Gianzero and B. Anderson, A new look at skin effect, The Log Analyst, 23 (1): 20–34, 1982.
10. L. C. Shen, Effects of skin-effect correction and three-point deconvolution on induction logs, The Log Analyst, July–August issue, pp. 217, 1989.
11. R. Strickland et al., New developments in the high resolution induction log, Trans. SPWLA, 2: paper ZZ, 1991.

13. G. P. Grove and G. N. Minerbo, An adaptive borehole correction scheme for array induction tools, presented at the 32nd Ann. SPWLA Symposium, Midland, TX, 1991.
14. Schlumberger Educational Services, AIT Array Induction Image Tool, 1992.
15. R. Freedman and G. N. Minerbo, Maximum entropy inversion of induction log data, Trans. SPE, 5: 381–394, paper SPE 19608, 1989.
16. S. J. Grimaldi, P. Poupon, and P. Souhaite, The dual laterolog-Rxo tool, Trans. SPE, 2: 1–12, paper SPE 4018, 1972.
17. R. Chemali et al., The shoulder bed effect on the dual laterolog and its variation with the resistivity of the borehole fluid, Trans. SPWLA, paper UU, 1983.
18. Q. Liu, B. Anderson, and W. C. Chew, Modeling low frequency electrode-type resistivity tools in invaded thin beds, IEEE Trans. Geosci. Remote Sens., 32 (3): 494–498, 1994.
19. B. Anderson and W. C. Chew, SFL interpretation using high speed synthetic computer generated logs, Trans. SPWLA, paper K, 1985.
20. Y. Chauvel, D. A. Seeburger, and C. O. Alfonso, Application of the SHDT stratigraphic high resolution dipmeter to the study of depositional environments, Trans. SPWLA, paper G, 1984.
21. A. R. Badr and M. R. Ayoub, Study of a complex carbonate reservoir using the Formation MicroScanner (FMS) tool, Proc. 6th Middle East Oil Show, Bahrain, March 1989, pp. 507–516.
22. R. L. Kleinberg et al., Microinduction sensor for the oil-based mud dipmeter, SPE Formation Evaluation, vol. 3, pp. 733–742, December 1988.
23. W. C. Chew and R. L. Kleinberg, Theory of microinduction measurements, IEEE Trans. Geosci. Remote Sens., 26: 707–719, 1988.
24. W. C. Chew and S. Gianzero, Theoretical investigation of the electromagnetic wave propagation tool, IEEE Trans. Geosci. Remote Sens., GE-19: 1–7, 1981.
25. W. C. Chew et al., An effective solution for the response of electrical well-logging tool in a complex environment, IEEE Trans. Geosci. Remote Sens., 29: 303–313, 1991.
26. D. D. Griffin, R. L. Kleinberg, and M. Fukuhara, Low-frequency NMR spectrometer measurement, Science & Technology, 4: 968–975, 1993.
27. D. J. Daniels, D. J. Gunton, and H. F. Scott, Introduction to subsurface radar, IEEE Proc., 135: 278–320, 1988.
28. D. K. Butler, Elementary GPR overview, Proc. Government Users Workshop on Ground Penetrating Radar Application and Equipment, pp. 25–30, 1992.
29. W. H. Weedon and W. C. Chew, Broadband microwave inverse scattering for nondestructive evaluation, Proc. Twentieth Annu. Review Progress Quantitative Nondestructive Evaluation, Brunswick, ME, 1993.
30. F. C. Chen and W. C. Chew, Time-domain ultra-wideband microwave imaging radar system, Proc. IEEE Instrum. Meas. Technol. Conf., St. Paul, MN, pp. 648–650, 1998.
31. F. C. Chen and W. C. Chew, Development and testing of the time-domain microwave nondestructive evaluation system, Review of Progress in Quantitative Evaluation, vol. 17, New York: Plenum, 1998, pp. 713–718.
32. M. Walford, Exploration of temperate glaciers, Phys. Bull., 36: 108–109, 1985.
33. D. K. Hall, A review of the utility of remote sensing in Alaskan permafrost studies, IEEE Trans. Geosci. Remote Sens., GE-20: 390–394, 1982.

34. A. Z. Botros et al., Microwave detection of hidden objects in walls, Electron. Lett., 20: 379–380, 1984.
35. P. Dennis and S. E. Gibbs, Solid-state linear FM/CW radar systems—their promise and their problems, Proc. IEEE MTT Symp., 1974, pp. 340–342.
36. A. P. Anderson and P. J. Richards, Microwave imaging of subsurface cylindrical scatters from cross-polar backscatter, Electron. Lett., 13: 617–619, 1977.
37. K. Lizuka et al., Hologram matrix radar, Proc. IEEE, 64: 1493–1504, 1976.
38. N. Osumi and K. Ueno, Microwave holographic imaging method with improved resolution, IEEE Trans. Antennas Propag., AP-32: 1018–1026, 1984.
39. M. C. Bailey, Broad-band half-wave dipole, IEEE Trans. Antennas Propag., AP-32: 410–412, 1984.
40. R. P. King, Antennas in material media near boundaries with application to communication and geophysical exploration, IEEE Trans. Antennas Propag., AP-34: 483–496, 1986.
41. M. Kanda, A relatively short cylindrical broadband antenna with tapered resistive loading for picosecond pulse measurements, IEEE Trans. Antennas Propag., AP-26: 439–447, 1978.
42. C. A. Balanis, Antenna Theory, chapter 9, New York: Wiley, 1997.
43. P. Degauque and J. P. Thery, Electromagnetic subsurface radar using the transient radiated by a wire antenna, IEEE Trans. Geosci. Remote Sens., GE-24: 805–812, 1986.
44. C. A. Balanis, Antenna Theory, chapter 10, New York: Wiley, 1997.
45. C. A. Balanis, Antenna Theory, chapter 11, New York: Wiley, 1997.
46. J. L. Kerr, Short axial length broadband horns, IEEE Trans. Antennas Propag., AP-21: 710–714, 1973.
47. A. N. Tikhonov, Determination of the electrical characteristics of the deep strata of the earth's crust, Proc. Acad. Sci. USSR, 83 (2): 295–297, 1950.
48. J. R. Wait, Theory of magneto-telluric field, J. Res. Natl. Bur. Stand.-D, Radio Propagation, 66D: 509–541, 1962.
49. K. Vozoff, The magnetotelluric method, in M. N. Nabighian (ed.), Electromagnetic Methods in Applied Geophysics: Application, Tulsa, OK: Society of Exploration Geophysicists, 1988.
50. L. Cagniard, Basic theory of the magneto-telluric method of geophysical prospecting, Geophysics, 18: 605–635, 1952.
51. K. Vozoff, The magnetotelluric method in the exploration of sedimentary basins, Geophysics, 37: 98–141, 1972.
52. T. Madden and P. Nelson, A defense of Cagniard's magnetotelluric method, Geophysics Reprint Series No. 5: Magnetotelluric Methods, Tulsa, OK: Society of Exploration Geophysicists, 1985, pp. 89–102.
53. G. J. Palacky and G. F. West, Airborne electromagnetic methods, in M. N. Nabighian (ed.), Electromagnetic Methods in Applied Geophysics: Theory, Tulsa, OK: Society of Exploration Geophysicists, 1988.
54. J. C. Gerkens, Foundation of Exploration Geophysics, New York: Elsevier, 1989.
55. G. V. Keller and F. Frischknecht, Electrical Methods in Geophysical Prospecting, New York: Pergamon, 1966.
56. D. Boyd and B. C. Roberts, Model experiments and survey results from a wing tip-mounted electromagnetic prospecting system, Geophys. Pro., 9: 411–420, 1961.
57. N. R. Patterson, Experimental and field data for the dual frequency phase-shift method of airborne electromagnetic prospecting, Geophysics, 26: 601–617, 1961.
58. P. Kearey and M. Brooks, An Introduction to Geophysical Exploration, Boston, MA: Blackwell Scientific, 1984.
59. A. R. Barringer, The INPUT electrical pulse prospecting system, Min. Cong. J., 48: 49–52, 1962.
60. A. E. Beck, Physical Principles of Exploration Methods, New York: Macmillan, 1981.

S. Y. CHEN
W. C. CHEW
University of Illinois at Urbana-Champaign


Wiley Encyclopedia of Electrical and Electronics Engineering
Geographic Information Systems
Standard Article. J. Raul Ramirez, The Ohio State University, Columbus, OH. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3604. Online Posting Date: December 27, 1999.


Abstract. The sections in this article are: Hardware and its Use; Software and its Use; Using GIS; Quality and its Impact in GIS; The Future of GIS.


GEOGRAPHIC INFORMATION SYSTEMS

A Geographic Information System (GIS) is a set of computer-based tools that collects, stores, retrieves, manipulates, displays, and analyzes geographic information. Some definitions of GIS include institutions and people in addition to the computer-based tools and the geographic data; these definitions refer more to a total GIS implementation than to the technology. Here, computer-based tools are hardware (equipment) and software (computer programs). Geographic information describes facts about the earth's features, for example, the location and characteristics of rivers, lakes, buildings, and roads.

Collection of geographic information refers to the process of gathering, in computer-compatible form, facts about features of interest. Facts usually collected are the location of features, given by sets of coordinate values (such as latitude, longitude, and sometimes elevation), and attributes such as feature type (highway), name (Interstate 71), and unique characteristics (the northbound lane is closed).

Storing of geographic information is the process of electronically saving the collected information in permanent computer memory (such as a computer hard disk). Information is saved in structured computer files. These files are sequences of only two characters, 0 and 1, called bits, organized into bytes (eight bits) and words (16–64 bits). These bits represent information stored in the binary system.

Retrieving geographic information is the process of accessing the computer-compatible files, extracting sets of bits, and translating them into information we can understand (for example, information given in our national language).

Manipulation of geographic data is the process of modifying, copying, or removing selected sets of information bits or complete files from permanent computer memory.

Display of geographic information is the process of generating and making visible a graphic (and sometimes textual) representation of the information.
Analysis of geographic information is the process of studying the geographic information, computing facts from it, and asking questions (and obtaining answers from the GIS) about features and their relationships. For example: what is the shortest route from my house to my place of work?
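A shortest-route question like this one is typically answered by a graph search over the road network stored in the GIS. A minimal sketch using Dijkstra's algorithm; the network, street names, and distances below are invented for illustration:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a road network stored as an
    adjacency dict: node -> {neighbor: distance}."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, dist in graph.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(queue, (cost + dist, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical road network; distances in kilometers.
roads = {
    "home":    {"Main St": 1.0, "Oak Ave": 2.5},
    "Main St": {"home": 1.0, "work": 4.0},
    "Oak Ave": {"home": 2.5, "work": 1.5},
    "work":    {"Main St": 4.0, "Oak Ave": 1.5},
}
cost, path = shortest_route(roads, "home", "work")
# The cheaper route goes via Oak Ave (2.5 + 1.5 = 4.0 km).
```

Real GIS network analysis adds turn restrictions, one-way streets, and travel-time weights, but the underlying query is this same graph search.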

HARDWARE AND ITS USE

The main component is the computer (or computers) on which the GIS runs. Currently, GIS run on platforms ranging from desktop computers to mainframes, used stand-alone or in a network configuration. In general, GIS operations require handling large amounts of information (file sizes of fifty megabytes or more are common), and in many cases GIS queries and graphic displays must be generated very quickly. Therefore, important characteristics of computers used for GIS are processing speed, quantity of random access memory (RAM), size of permanent storage devices, resolution of display devices, and speed of communication protocols.


Several peripheral hardware components may be part of the system: printers, plotters, scanners, digitizing tables, and other data collection devices.

Printers and plotters are used to generate text reports and graphics (including maps). High-speed printers with graphics and color capabilities are commonplace today. The number and sophistication of the printers in a GIS organization depend on the amount of text reports to be generated. Plotters allow the generation of oversized graphics. The most common graphic products of a GIS are maps. As defined by Thompson (1), "Maps are graphic representations of the physical features (natural, artificial, or both) of a part or the whole of the earth's surface. This representation is made by means of signs and symbols or photographic imagery, at an established scale, on a specified projection, and with the means of orientation indicated." As this definition indicates, there are two different types of maps: (1) line maps, composed of lines, the type of map we are most familiar with, usually in paper form, for example a road map; and (2) image maps, which are similar to a photograph. Plotters able to plot only line maps are usually less sophisticated (and less expensive) than those able to plot high-quality line and image maps. Plotting size and resolution are other important characteristics of plotters. Some plotters can produce maps larger than one meter on a side. Higher plotting resolution allows a greater amount of detail to be plotted and is especially important for images. Usually, the larger the map size needed and the higher the plotting resolution, the more expensive the plotter.

Scanners are devices that sense and decompose a hardcopy image or scene into equal-sized units called pixels and store
each pixel in computer-compatible form with corresponding attributes (usually a color value per pixel). The most common use of scanning technology is in fax machines: they sense a hardcopy document and generate a set of electric pulses, which are either stored for later transfer or transferred right away. In the case of scanners used in GIS, these pulses are stored as bits in a computer file. The image generated is called a raster image. A raster image is composed of pixels, generally square units. Pixel size (the scanner resolution) ranges from a few micrometers to hundreds of micrometers. The smaller the pixel size, the better the quality of the scanned image, but the larger the resulting computer file and the higher the scanner cost.

Scanners are used in GIS to convert hardcopy documents, especially paper maps, to computer-compatible form. Some GIS cannot use raster images to answer geographic questions (queries), and those that can are usually limited in the types of queries they can perform (they can perform queries about individual locations but not geographic features). Most queries need information in vector form. Vector information represents individual geographic features (or parts of features) as an ordered list of vertex coordinates. Figure 1 shows the differences between raster and vector.

Digitizing tables are devices that collect vector information from hardcopy documents (especially maps). They consist of a flat surface, on which documents can be attached, and a cursor or puck with several buttons, used to locate and input coordinate values (and sometimes attributes) into the computer. The result of digitizing is a computer file with a list of coordinate values

[Figure 1. The different structures of raster and vector information, feature representation, and data storage: the raster stores a finite number of fixed-area, fixed-dimension pixels covering the feature, while the vector stores ordered lists of vertex coordinates (X1, Y1), (X2, Y2), ... representing, in principle, an infinite number of dimensionless, arealess geometric points.]


and attributes per feature. This method of digitizing is called "heads-down digitizing."

Currently, there is a different technique for generating vector information. This method uses a raster image as a backdrop on the computer terminal. Usually, the image has been georeferenced (transformed into a coordinate system related in some way to the earth). The operator uses the computer mouse to collect the vertices of a geographic feature and to attach attributes. As in the previous case, the output is a computer file with a list of coordinate values and attributes for each feature. This method is called "heads-up digitizing."

SOFTWARE AND ITS USE

Software, as defined by the AGI dictionary (2), is the collection of computer programs, procedures, and rules for the execution of specific tasks on a computer system. A computer program is a logical set of instructions that tells a computer to perform a sequence of tasks. GIS software provides the functions to collect, store, retrieve, manipulate, query and analyze, and display geographic information. An important component of software today is the graphical user interface (GUI). A GUI is a set of graphic tools (icons, buttons, and dialogue boxes) that can be used to communicate with a computer program to input, store, retrieve, manipulate, display, and analyze information and to generate different types of output. Most GUI graphic tools are operated by pointing with a device such as a mouse to select a particular software application. Figure 2 shows a GUI.

[Figure 2. A graphical user interface (GUI) for a GIS in a restaurant setting, showing graphic answers to questions about table occupancy, service status, and the shortest route to and from table 18.]

GIS software can be divided into five major components (besides the GUI): input, manipulation, database management system, query and analysis, and visualization.

Input software allows the import of geographic information (location and attributes) into the appropriate computer-compatible format. Two different issues need to be considered: how to transform (convert) analog (paper-based) information into digital form, and how to store information in the appropriate format. Scanning, and heads-down and heads-up digitizing software with different levels of automation, transform paper-based information (especially graphics) into computer-compatible form. Text information (attributes) can be imported by a combination of scanning and character-recognition software, and/or by manual input using a keyboard and/or voice-recognition software. In general, each commercial GIS software package has a proprietary format used to store locations and attributes, and only information in that particular format can be used in that particular GIS. When information is converted from paper into digital form using the tools of that GIS, the result is in the appropriate format. When information is collected using other alternatives, a file-format translation needs to be made. Translators are computer programs that take information stored in a given format and generate a new file (with the same information) in a different format. In some cases, translation results in information loss.

Manipulation software allows changing the geographic information by adding, removing, modifying, or duplicating pieces or complete sets of information. Many tools in manipulation software are similar to those in word processors: create, open, and save a file; cut, copy, and paste; undo graphic and attribute changes. Many other manipulation tools support drafting operations on the information, such as drawing parallel lines, squares, rectangles, circles, and ellipses; moving a graphic element; and changing color, line width, or line style. Other tools allow the logical connection of different geographic features. For example, geographic features that are physically different and unconnected can be grouped as part of the same layer, level, or overlay (usually, these words have the same meaning).
By doing this, they are considered part of a common theme (for example, all rivers in a GIS can be considered part of the same layer: hydrography). One can then manipulate all features in this layer with a single command; for example, one could change the color of all rivers in the hydrography layer from light to dark blue with a single command.

A database management system (DBMS) is a collection of software for organizing information in a database. This software performs three fundamental operations: storage, manipulation, and retrieval of information from the database. A database is a collection of information organized according to a conceptual structure describing the characteristics of the information and the relationships among the corresponding entities (2). Usually, a database contains at least two computer files or tables and a set of known relationships, which allows efficient access to specific entities. Entities in this context are geographic objects (such as a road, a house, or a tree). Multipurpose DBMS are classified into four categories: inverted list, hierarchical, network, and relational. Healey (3) indicates that for GIS there are two common approaches to DBMS: the hybrid and the integrated. The hybrid approach is a combination of a commercial DBMS (usually relational) and direct-access operating-system files. Positional information (coordinate values) is stored in the direct-access files, and attributes in the commercial DBMS. This approach increases access speed to positional information and takes advantage of DBMS functions, minimizing development costs. Guptill (4) indicates that in the integrated approach the Standard Query Language (SQL) used to ask questions about the database is


replaced by an expanded SQL with spatial operators able to handle points, lines, polygons, and even more complex structures and graphic queries. This expanded SQL sits on top of the relational database and simplifies geographic information queries.

Query and analysis software provides new explicit information about the geographic environment. The distinction between query and analysis is somewhat unclear. Maguire and Dangermond (5) indicate that the difference is a matter of emphasis: "Query functions are concerned with inventory questions such as 'Where is . . .?' Analysis functions deal with questions such as 'What if . . .?'." In general, query and analysis use the locations of geographic features, distances, directions, and attributes to generate results.

Two characteristic operations of query and analysis are buffering and overlay. Buffering is the operation that finds and highlights an area of user-defined dimension (a buffer) around a geographic feature (or a portion of a geographic feature) and retrieves information inside the buffer or generates a new feature. Overlay is the operation that compares layers: layers are compared two at a time by location and/or attributes. Query and analysis are the capabilities that differentiate GIS from other geographic data applications such as computer-aided mapping, computer-aided drafting (CAD), photogrammetry, and mobile mapping.

Visualization in this context refers to the software for the visual representation of geographic data and related facts, facilitating the understanding of geographic phenomena, their analysis, and their interrelations. The term visualization in GIS encompasses a larger meaning. As defined by Buttenfield and Mackaness (6), "visualization is the process of representing information synoptically for the purpose of recognizing, communicating, and interpreting pattern and structure.
Its domain encompasses the computational, cognitive, and mechanical aspects of generating, organizing, manipulating, and comprehending such representation. Representation may be rendered symbolically, graphically, or iconically and is most often differentiated from other forms of expression (textual, verbal, or formulaic) by virtue of its synoptic format and with qualities traditionally described by the term 'Gestalt.'" Visualization is the confluence of computation, cognition, and graphic design. It is accomplished through maps, diagrams, and perspective views, in which a large amount of information is abstracted into graphic symbols. These symbols are endowed with visual variables (size, value, pattern, color, orientation, and shape) that emphasize differences and similarities among the facts represented.

The joint representation of the facts shows explicit and implicit information. Explicit information can be accessed by other means, such as tables and text. Implicit information requires, in some cases, performing operations on the information, such as computing the distance between two points on a road. In other cases, implicit information can be accessed simply by looking at the graphic representation; for example, we may find an unexpected relationship between relief and erosion that is not obvious from the explicit information. This is the power of visualization!
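The buffering operation described above reduces, in its simplest form, to a distance test against a feature. A minimal pure-Python sketch for point features; the well layer, names, and coordinates are invented for illustration:

```python
from math import hypot

def buffer_query(features, center, radius):
    """Return the features whose location lies within `radius` of
    `center` -- the simplest form of a GIS buffer operation."""
    cx, cy = center
    return [f for f in features
            if hypot(f["x"] - cx, f["y"] - cy) <= radius]

# Hypothetical point layer of well locations (map units: meters).
wells = [
    {"name": "well-A", "x": 10.0, "y": 10.0},
    {"name": "well-B", "x": 55.0, "y": 12.0},
    {"name": "well-C", "x": 14.0, "y": 18.0},
]

# Retrieve all wells inside a 15 m buffer around a point of interest.
inside = buffer_query(wells, center=(12.0, 12.0), radius=15.0)
names = [f["name"] for f in inside]   # well-A and well-C qualify
```

Production GIS buffer polygons and line or area features as well, but the core idea is the same distance predicate applied over a layer.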

USING GIS

GIS is widely used. Users include national, state, and local agencies; private businesses (from delivery companies to restaurants, from engineering to law firms); educational institutions (from universities to school districts, from administrators to researchers); and private citizens.

As indicated earlier, the use of GIS requires software (which can be acquired from a commercial vendor), hardware (which allows running the GIS software), and data (with the information of interest). As indicated by Worboys (7), "data are only useful when they are part of a structure of interrelationships that form the context of the data. Such a context is provided by the data model." Depending on the problem of interest, the data model may be simple or complex. In a restaurant, information about seating arrangements, seating times, drinks, and food is well defined and easily expressed by a simple data model. Fundamentally, you have information for each table about its location, the number of people it seats, and its status (empty or occupied). Once a table is occupied, additional information is recorded: how many people occupy the table; when it was occupied; what drinks were ordered; what food was ordered; and the status of the order (drinks are being served, food is being prepared, and so on). Questions such as: What tables are empty? How many people can be seated at a table? What table seats seven people? Has the food ordered by table 11 been served? How long before table 11 is free again? are easily answered from the above information with a simple data model (see Figure 2).

Of course, a more sophisticated data model is required if more complex questions are asked of the system, for example: What is the most efficient route to reach a table based on the current table occupancy? If alcoholic drinks are ordered at a table, how much longer will it be occupied than if nonalcoholic drinks are ordered? How long will it be before food is served to table 11 if the same dish has been ordered nine times in the last few minutes? Many problems require a complex data model.
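The simple restaurant data model above can be sketched as a small table of records, after which the inventory-style questions become plain lookups. The table numbers, seat counts, and statuses below are invented for illustration:

```python
# Each record: table number -> seats and current status.
tables = {
    11: {"seats": 4, "status": "food ordered"},
    12: {"seats": 2, "status": "empty"},
    18: {"seats": 7, "status": "drinks served"},
    21: {"seats": 7, "status": "empty"},
}

def empty_tables(tables):
    """What tables are empty?"""
    return sorted(n for n, t in tables.items() if t["status"] == "empty")

def tables_seating(tables, people):
    """What tables seat at least this many people?"""
    return sorted(n for n, t in tables.items() if t["seats"] >= people)

free = empty_tables(tables)        # -> [12, 21]
big = tables_seating(tables, 7)    # -> [18, 21]
```

The more complex questions in the text (occupancy-aware routing, time-to-free estimates) would extend these records with timestamps, orders, and a floor-plan graph.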
A nonexhaustive list of GIS applications that require complex models is presented next. This list gives an overview of many fields and applications of GIS:

Siting of a new business. Find the best location in this region for a new factory, based on natural and human resources.

Network analysis. Find the shortest bus routes to pick up students for a given school.

Utility services. Find the most cost-efficient way to extend electric service to a new neighborhood.

Land information system. Generate an inventory of the natural resources of a region and the property-tax revenue, using land parcels as the basic unit.

Intelligent car navigation. What are the recommended speeds, geographic coordinates of the path to be followed, street classification, and route restrictions to go from location A to location B?

Tourist information system. What is the difference in driving time from location A to location B following the scenic route instead of the business route, and where along the scenic route are the major places of interest located?

Political campaigns. Set the most time-efficient schedule to visit the largest possible number of cities where undecided voters could make the difference during the last week of a political campaign.


Marketing branch location analysis. Find the location and major services to be offered by a new bank branch, based on population density and consumer preferences.

Terrain analysis. Find the most promising site in a region for oil exploration, based on topographic, geological, seismic, and geomorphological information.

QUALITY AND ITS IMPACT IN GIS

The unique advantage of GIS is the capability to analyze and answer geographic questions. If no geographic data are available for a region, of course, it is not possible to use GIS. On the other hand, the validity of the analysis and the quality of the answers in GIS are closely related to the quality of the geographic data used: if poor-quality or incomplete data are used, query and analysis provide poor or incomplete results. Therefore, it is fundamental to know the quality of the information in a GIS. Of course, the quality of the analysis and query capabilities of a GIS is also very important; perfect geographic data used with poor-quality analysis and query tools generates poor results.

Quality is defined by the U.S. National Committee for Digital Cartographic Data Standards (NCDCDS) (8) as "fitness for use." This definition makes quality a relative term: data may be fit for use in one application but unfit for another. Therefore, we need a very good understanding of the scope of our application to judge the quality of the data to be used. The same committee identifies five quality components in the context of GIS in the Spatial Data Transfer Standard (SDTS): lineage, positional accuracy, attribute accuracy, logical consistency, and completeness. SDTS is the U.S.
Federal Information Processing Standard 173; it states that "lineage is information about the sources and processing history of the data." Positional accuracy is "the correctness of the spatial (geographic) location of features." Attribute accuracy is "the correctness of semantic (nonpositional) information ascribed to spatial (geographic) features." Logical consistency is "the validity of relationships (especially topological ones) encoded in the data," and completeness is "the mapping and selection rules and exhaustiveness of feature representation in the data."

The International Cartographic Association (ICA) has added two more quality components: semantic accuracy and temporal information. As stated by Guptill and Morrison (9), "semantic accuracy describes the number of features, relationships, or attributes that have been correctly encoded in accordance with a set of feature representation rules." Guptill and Morrison (10) also state that "temporal information describes the date of observation, type of update (creation, modification, deletion, unchanged), and validity periods for spatial (geographic) data records."

Most of our understanding of the quality of geographic information is limited to positional accuracy, specifically point positional accuracy. Schmidley (11) has conducted research in line positional accuracy. Research in attribute accuracy has been done mostly in the remote sensing area, and some in GIS [see Chapter 4 of (9)]. Very little research has been done on the other quality components [see (9)]. To make the problem worse, because of limited digital geographic coverage worldwide, GIS users often combine different sets of geographic information, each set of a different quality level. Most commercial GIS products have no tools to judge the quality of the data used; therefore, it is up to the GIS user to judge and keep track of information quality.

Another limitation of GIS technology today is the fact that GIS, including their analysis and query tools, are sold as "black boxes": the user provides the geographic data, and the GIS provides results. In many cases the methods, algorithms, and implementation techniques are considered proprietary, and there is no way for the user to judge their quality. More and more users are starting to recognize the importance of quality GIS data. As a result, many experts are conducting research on the different aspects of GIS quality.

THE FUTURE OF GIS

GIS is in its formative years. All types of users have accepted the technology, and it is a worldwide multibillion-dollar industry. This acceptance will create a great demand for digital geographic information in the near future. Commercial satellites and multisensor platforms generating high-resolution images, mobile mapping technology, and efficient analog-to-digital data conversion systems are some of the promising approaches to the generation of geographic data.

GIS capabilities are improving as a result of a large amount of ongoing research, which includes the areas of visualization, user interfaces, spatial relation languages, spatial analysis methods, geographic data quality, three-dimensional and spatio-temporal information systems, and open software design. These efforts will result in better, more reliable, faster, and more powerful GIS.

BIBLIOGRAPHY

1. M. M. Thompson, Maps for America, 2nd ed., Reston, VA: U.S. Geological Survey, 1981, p. 253.
2. Association for Geographic Information, AGI GIS Dictionary, 2nd ed., http://www.geo.ed.ac.uk/agidexe/term638, 1993.
3. R. G. Healey, Database management systems, in D. J. Maguire, M. F. Goodchild, and D. W. Rhind (eds.), Geographical Information Systems, Harlow, UK: Longman Scientific & Technical, 1991.
4. S. C. Guptill, Desirable characteristics of a spatial database management system, Proc. AUTOCARTO 8, ASPRS, Falls Church, VA, 1987.
5. D. J. Maguire and J. Dangermond, The functionality of GIS, in D. J. Maguire, M. F. Goodchild, and D. W. Rhind (eds.), Geographical Information Systems, Harlow, UK: Longman Scientific & Technical, 1991.
6. B. P. Buttenfield and W. A. Mackaness, Visualization, in D. J. Maguire, M. F. Goodchild, and D. W. Rhind (eds.), Geographical Information Systems, Harlow, UK: Longman Scientific & Technical, 1991.
7. M. F. Worboys, GIS: A Computing Perspective, London: Taylor & Francis, 1995, p. 2.
8. Digital Cartographic Data Standards Task Force, The proposed standard for digital cartographic data, The American Cartographer, 15: 9–140, 1988.
9. S. C. Guptill and J. L. Morrison (eds.), Elements of Spatial Data Quality, Oxford, UK: Elsevier Science, 1995, p. 10.
10. Ref. 9, p. 11.

11. R. W. Schmidley, Framework for the Control of Quality in Automated Mapping, unpublished dissertation, The Ohio State University, Columbus, OH, 1996.

J. RAUL RAMIREZ The Ohio State University

GEOMETRIC CORRECTIONS FOR REMOTE SENSING. See REMOTE SENSING GEOMETRIC CORRECTIONS.




Wiley Encyclopedia of Electrical and Electronics Engineering
Geophysical Signal and Image Processing
Standard Article. Marwan A. Simaan, University of Pittsburgh, Pittsburgh, PA. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3603. Online Posting Date: December 27, 1999.


Abstract. The sections in this article are: Seismic Data Generation and Acquisition; Seismic Wave Propagation; Determination of Seismic Propagation Velocity; Stacking and Velocity Analysis; Seismic Deconvolution; Conclusion.


GEOPHYSICAL SIGNAL AND IMAGE PROCESSING

The science of geophysics is concerned with the application of principles from physics to the study of the earth. Exploration geophysics involves the investigation of properties of the subsurface layers of the earth by taking measurements at or near the earth's surface. Processing and analysis of these measurements may reveal how the physical properties of the earth's interior vary vertically and laterally. Information of this type is extremely important in the search for hydrocarbons, minerals, and water in the earth. This article summarizes some of the basic results that deal with the application of signal and image processing techniques to the field of exploration geophysics, specifically as it relates to the search for hydrocarbons.

Hydrocarbons are typically found in association with sedimentary sequences in major sedimentary basins in the earth. Thus scientific methods for hydrocarbon exploration depend heavily on our ability to image the earth's subsurface geological structures down to about 12,000 m. Potential hydrocarbon deposits, in the form of petroleum or natural gas, are often associated with certain geological formations such as faults, anticlines, salt domes, stratigraphic traps, and others (Fig. 1). Such formations may be detected on a seismic image, also called a seismic section, only if sophisticated data acquisition and processing methods are used to generate this image.

One of the most popular and successful methods for imaging the earth's subsurface is the seismic exploration method. This method involves generating a disturbance of the surface of the earth by means of the detonation of an explosive charge placed either on the ground, in the case of land exploration, or in water, in the case of offshore marine exploration (Fig. 2).
The resulting ground motion propagates downwards inside the earth, gets reﬂected at the various interfaces of the geological strata, and ﬁnally is recorded as a time series, or trace, by sensors placed at some distance from the source of the disturbance. An example of many such traces placed side by side is shown in Fig. 3. Geophysical signal processing is a ﬁeld which deals primarily with computer methods for analyzing and ﬁltering a large number of such time series for the purpose of extracting the information necessary to develop an image of the subsurface layers (or geology) of the earth (1–12). In order to give the reader an idea about the volume of geophysical data available for processing, it is estimated that in the 1990s, on the average, approximately 2 million traces are recorded everyday for the purpose of exploring for petroleum and natural gas. Thus careful processing of this enormous amount of data must take advantage of the state of the art in computer technology and make full use of the most advanced techniques in digital signal and image processing.

SEISMIC DATA GENERATION AND ACQUISITION

The first step in any application of a geophysical data-processing method is to understand the process by which the signals to be processed have been generated and recorded. Seismic data for imaging the subsurface layers of the earth down to possibly 12,000 m are typically generated by a source of seismic energy and recorded by an array of sensors placed on the surface of the earth at some distance from the source. There are several types of seismic sources; the most common are dynamite explosives or vibroseis for land data, and air guns for offshore marine data. Land dynamite explosives and marine air guns, which inject a bubble of highly compressed air into the water, are short-duration (about 0.5 s or less) sources, usually referred to as wavelets (see Fig. 4). Vibroseis land sources, on the other hand, are long-duration (typically 8 s or more) low-amplitude sinusoidal waveforms whose frequency varies continuously from about 10 Hz to 80 Hz. The sweep signal is matched-filtered to give the equivalent of a short-duration correlation function for the outgoing signal. The waveform created by the seismic source propagates downward into the earth, gets reflected, and returns to the surface carrying information about the subsurface structure. Each sensor in the receiving array is a device that transforms the reflected seismic energy into an electrical signal. In the case of land data, recording is done by a geophone, which typically measures particle velocity; in the case of marine data, recording is done by a hydrophone, which typically measures pressure. The recording of every sensor (geophone or hydrophone) is a time series called a seismic trace. Each trace may consist of 1,000 to 12,000 samples of data representing 4 s to 6 s of earth motion sampled at periods that could vary anywhere between 0.5 ms and 8 ms. Typical spacing between the sensors is 10 m to 200 m. The recorded traces can then be sorted so that all traces corresponding to some criterion, such as common shot point or common midpoint, are displayed side by side to form what is known as a shot gather.
Several source–receiver configurations for recording/sorting seismic traces are illustrated in Fig. 5. A sample common-midpoint (CMP) shot gather is shown in Fig. 3. A seismic survey typically consists of a large number of shot gathers collected by moving the combination of source and array of receivers along specified survey lines and repeating the data collection process (see Fig. 2). The ultimate goal of geophysical signal processing is to extract information on the physical properties and construct an image of the subsurface structure of the earth by processing these recorded data. Until about the mid-1980s, emphasis was mostly on 2-D imaging along specified survey lines. However, the advent of very powerful data acquisition, storage, and processing capabilities has made it possible to construct 3-D images of subsurface structures (12). Most geophysical signal processing work in the mid-1990s and beyond has focused on 3-D imaging. Surveys that yield 3-D data, however, are more complicated and costly than those that yield 2-D data. For example, 3-D surveys normally utilize methods in which seismic sensors are distributed along several parallel lines and the seismic sources along other lines, hence building a dense array of seismic data. A typical 3-D survey could involve collecting anywhere between several hundred thousand and a few million traces (12).
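The common-midpoint sorting just described can be sketched in a few lines. The following is illustrative rather than from the article; the coordinates, the 25 m bin size, and the helper name are all hypothetical:

```python
import numpy as np
from collections import defaultdict

def sort_into_cmp_gathers(src_x, rcv_x, bin_size=25.0):
    """Group trace indices by common-midpoint bin along a 2-D line.
    Midpoint = (source + receiver)/2; offset = receiver - source."""
    gathers = defaultdict(list)
    for i, (s, r) in enumerate(zip(src_x, rcv_x)):
        mid = 0.5 * (s + r)
        key = round(mid / bin_size)       # CMP bin index
        gathers[key].append((i, r - s))   # (trace index, offset)
    return dict(gathers)

# Hypothetical line: two shots recorded into the same four-receiver spread.
src = [0.0, 0.0, 0.0, 0.0, 50.0, 50.0, 50.0, 50.0]
rcv = [100.0, 150.0, 200.0, 250.0, 100.0, 150.0, 200.0, 250.0]
gathers = sort_into_cmp_gathers(src, rcv, bin_size=25.0)
# Bins shared by both shots (midpoints 75, 100, 125 m) have fold 2.
```

A real survey would bin in two dimensions and carry full trace headers, but the midpoint/offset bookkeeping is the same.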

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 2007 John Wiley & Sons, Inc.


Figure 1. Some geological formations associated with hydrocarbon deposits (2, 5).

Figure 2. A typical land seismic data acquisition experiment (9).

SEISMIC WAVE PROPAGATION

A seismic wave propagates outwards from the source at a velocity that is determined by the physical properties of the propagation medium (the surrounding rocks). Seismic rays are thin pencils of seismic energy traveling along raypaths that are perpendicular to the wavefronts. At the interface between two rock layers there is a change in the physical properties of the media, which results in a change in the propagation velocity. When encountering such an interface, the energy of an incident seismic wave is partitioned into a reflected wave and a transmitted wave. The relative amplitudes of these waves are determined by the velocities and densities of the two layers. Thus, in order to understand the nature of the recorded seismic data, it is essential to understand the propagation mechanism of a seismic signal through a multilayered medium. Note that, as mentioned earlier, in the case of land seismic data a geophone records vertical displacement velocity, while in the case of marine data a hydrophone records water pressure. Consider a simple model of the earth consisting of a horizontally layered medium (8) in which the seismic propagation velocity α(z) and medium density ρ(z) vary only as a function of depth z. In the case of land data, it is known that the vertical stress component σ(z, t) is related to the vertical displacement velocity v(z, t) by the standard equation


Figure 5. Various source–receiver conﬁgurations.
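The display equations of this section were images in the source and did not survive reproduction. As a hedged reconstruction, the standard one-dimensional acoustic relations they refer to are given below; signs depend on the Fourier convention and on which side of the interface is taken as incident, so this is a sketch of the standard forms, not necessarily the article's exact equations:

```latex
% 1-D acoustic relations between vertical stress \sigma(z,t) and particle
% velocity v(z,t), with density \rho(z) and velocity \alpha(z).
% Equation of motion and Hooke's law:
\frac{\partial \sigma}{\partial z} = \rho(z)\,\frac{\partial v}{\partial t},
\qquad
\frac{\partial \sigma}{\partial t} = \rho(z)\,\alpha^{2}(z)\,\frac{\partial v}{\partial z}.
% Fourier-transform-domain form (e^{i\omega t} convention assumed):
\frac{dS(z,\omega)}{dz} = i\omega\,\rho(z)\,V(z,\omega),
\qquad
\frac{dV(z,\omega)}{dz} = \frac{i\omega}{\rho(z)\,\alpha^{2}(z)}\,S(z,\omega).
% Reflectivity function and interface coefficients in terms of the
% acoustic impedance \lambda(z) = \rho(z)\,\alpha(z):
\gamma(z) = \frac{1}{2}\,\frac{d}{dz}\,\ln\lambda(z),
\qquad
c_k = \frac{\lambda_{k+1}-\lambda_k}{\lambda_{k+1}+\lambda_k},
\qquad
t_k = 1 + c_k.
```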

Figure 3. An example of a common depth point (CDP) shot gather.

Figure 4. An example of a marine, single air gun, wavelet.

of motion

and Hooke's law

In the Fourier transform domain, these expressions are written as

where S(z, ω) and V(z, ω) are the Fourier transforms of σ(z, t) and v(z, t), respectively. The propagating waveform can be decomposed into its upcoming and downgoing components U and D using the linear transformation

This expression can be rewritten in the form

where λ(z) = ρ(z)α(z) is the acoustic impedance. Combining Eq. (5) with Eqs. (4), it can be shown that U and D must satisfy the equations

In the above expressions γ(z) is the reflectivity function, which is related to the acoustic impedance λ(z) by the expression:

The above equations can be used to derive synthetic seismograms at any depth. As such, they are necessary prerequisites for solving the inverse problem, in which the requirement is to determine the reflectivity function from the available seismogram. If one applies the boundary conditions at the interface between the kth and (k + 1)st layers, as illustrated in Fig. 6, the reflection coefficient ck and transmission coefficient tk at the interface zk can be expressed as:

Figure 6. Reflection and transmission coefficients at interface zk.

where λk is the acoustic impedance above the interface and λk+1 is the acoustic impedance below the interface. The seismic trace s(t) recorded at the surface is often modeled as a convolution of the source waveform w(t) and a reflectivity function r(t),

The reflectivity function r(t) is related to λ(z) by observing that two-way travel time t is related to depth z through the seismic propagation velocity.

DETERMINATION OF SEISMIC PROPAGATION VELOCITY

The recorded seismic trace can be processed to estimate the travel times of the reflected ray paths from the source to the receiver. If one knows the velocity distribution as a function of depth between the surface and the reflecting planes, the travel-time information can be transformed into information on the depths of the reflecting boundary planes. To illustrate how this is done, consider the simple model of a single horizontal reflector at a depth z beneath a homogeneous layer with constant velocity V, as shown in Fig. 7. This simple case will probably never occur in practice, but understanding it will make it easier to understand the more complicated cases encountered in real situations. Using simple triangle geometry, the travel time from the source, to the reflector, to a receiver at a distance x from the source can be easily computed as

where t0 = 2z/V is the two-way travel time obtained from Eq. (10) by setting x = 0. This is called the zero-offset travel time and corresponds to a hypothetical source–receiver combination placed exactly at the common midpoint (CMP). The above expression shows that the relationship between tx and x is hyperbolic, as illustrated in Fig. 8. The difference in travel time between a ray path arriving at an offset distance x and one arriving at zero offset is called normal moveout (NMO). For cases where the offset is much smaller than the depth (i.e., x ≪ z), which is normally the case in practice, Eq. (11) can be approximated as follows:

and the NMO can be expressed as:

The above expression can be rearranged as follows:

which means that the velocity of the medium above the reflector can be computed from knowledge of the NMO and the zero-offset travel time t0. In practice, however, this calculation is made using a large number of reflected ray paths to obtain a statistical average of the velocity. As mentioned earlier, once t0 is known, the depth of the reflector can be computed as z = Vt0/2. The above analysis may be easily extended to the case of a multilayer medium, as illustrated in Fig. 9. In this case, it can be shown that the two-way travel time of the ray path reflected from the nth interface at a depth z is given by


Figure 7. Travel time versus offset.
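The travel-time equations of this section were likewise lost in reproduction. The standard relations for the geometry of Figs. 7-9 are reconstructed below; the correspondence to the article's Eqs. (10)-(18) is inferred from the surrounding text and may not match the original numbering exactly:

```latex
% Single horizontal reflector at depth z beneath a constant-velocity
% layer V, receiver at offset x:
t_x = \frac{1}{V}\sqrt{x^{2} + 4z^{2}}
\quad\Longleftrightarrow\quad
t_x^{2} = t_0^{2} + \frac{x^{2}}{V^{2}},
\qquad t_0 = \frac{2z}{V}.
% Small-offset (x \ll z) approximation and normal moveout:
t_x \approx t_0 + \frac{x^{2}}{2V^{2}t_0},
\qquad
\Delta t_{\mathrm{NMO}} = t_x - t_0 \approx \frac{x^{2}}{2V^{2}t_0}
\;\Rightarrow\;
V \approx \frac{x}{\sqrt{2\,t_0\,\Delta t_{\mathrm{NMO}}}}.
% Multilayer generalization and Dix' formula:
t_{x,n}^{2} \approx t_{0,n}^{2} + \frac{x^{2}}{V_{\mathrm{rms},n}^{2}},
\qquad
V_{\mathrm{rms},n}^{2} = \frac{\sum_{i=1}^{n} V_i^{2}\,\tau_i}
                              {\sum_{i=1}^{n} \tau_i},
\qquad
V_n = \left[
\frac{V_{\mathrm{rms},n}^{2}\,t_n - V_{\mathrm{rms},n-1}^{2}\,t_{n-1}}
     {t_n - t_{n-1}}
\right]^{1/2}.
```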

Figure 9. Ray path from source to receiver in a multilayered medium.

STACKING AND VELOCITY ANALYSIS

Figure 8. Travel time versus offset.

where t0,n is the zero-offset two-way travel time down to the nth layer, and Vrms,n is the root-mean-square velocity of the section of the earth down to the nth layer. The expression for Vrms,n is

where Vi is the interval velocity of the ith layer and τi is the one-way travel time of the reflected ray through the ith layer. As in the single-reflector case, the NMO for the nth reflector can be approximated as

and this expression can be used to compute the rms velocity of the layers above the reflector. Once the rms velocities down to the different reflectors have been determined, the interval velocity Vn of the nth layer can be computed using the formula known as Dix' formula,

where Vrms,n and Vrms,n−1 are the rms velocities for layers n and n − 1, respectively, and tn and tn−1 are the corresponding two-way zero-offset travel times (5).
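The rms-velocity definition and Dix' formula can be checked numerically. The sketch below uses a hypothetical three-layer model and invented helper names; it builds Vrms,n from interval velocities and then inverts back with Dix' formula:

```python
import numpy as np

def vrms_from_intervals(v_int, tau):
    """RMS velocity down to layer n: Vrms,n^2 = sum(Vi^2 tau_i)/sum(tau_i),
    with tau_i the one-way interval travel times."""
    num = np.cumsum(np.asarray(v_int) ** 2 * np.asarray(tau))
    den = np.cumsum(tau)
    return np.sqrt(num / den)

def dix_interval(v_rms, t0):
    """Dix' formula: interval velocities from rms velocities and the
    two-way zero-offset times t0,n."""
    v_rms, t0 = np.asarray(v_rms), np.asarray(t0)
    num = np.diff(v_rms ** 2 * t0, prepend=0.0)
    den = np.diff(t0, prepend=0.0)
    return np.sqrt(num / den)

# Hypothetical model (velocities in m/s, one-way interval times in s).
v_int = np.array([1500.0, 2500.0, 3500.0])
tau = np.array([0.4, 0.3, 0.2])
v_rms = vrms_from_intervals(v_int, tau)
t0 = 2 * np.cumsum(tau)            # two-way zero-offset times
v_back = dix_interval(v_rms, t0)   # recovers v_int up to rounding
```

Round-tripping interval velocities through Vrms and back is a useful sanity check, since in practice Dix' formula is applied to noisy stacking-velocity picks.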

As mentioned earlier, the most common configuration for collecting and arranging seismic data is the common-midpoint (CMP) reflection profiling shown in Fig. 8. A typical CMP gather is shown in Fig. 3. It is important to point out that a CMP gather represents the best possible data-sorting configuration for using Eq. (15) to estimate seismic velocities from the effects of NMO. To illustrate how this is done, assume that one wishes to estimate the velocity at a given zero-offset time t0,j (where j refers to a sample position on the zero-offset trace). For each value of rms velocity Vrms,j that may be guessed, there is a hyperbola defined by Eq. (15). The sum (stack) of the trace samples falling on this hyperbola can therefore be computed, and a measure of coherent energy (the square of the sum) can be determined, as illustrated in Fig. 10. The hyperbola that produces the maximum coherent energy represents the best fit to the data, and the corresponding velocity represents the best estimate of the rms velocity at time t0,j. This velocity, denoted by Vs,j, is called the stacking velocity at time t0,j.

If this process is repeated for all possible sample locations j, a three-dimensional plot of coherent energy as a function of velocity and time on the zero-offset trace can be produced. An example of such a plot is shown in Fig. 11. The peaks on this plot, often referred to as a "velocity spectrum," are used to determine a stacking velocity profile versus two-way travel time. This velocity profile can be used to perform NMO corrections for all times t0,j. Interval velocities can then be calculated from the stacking velocities by means of Dix' formula [Eq. (18)]. Once accurate velocity information is available, it becomes possible to correct for the effects of NMO. This is achieved by shifting the samples on each trace (i.e., flattening the hyperbola that corresponds to the stacking velocity) to obtain an estimate of a corresponding zero-offset trace at the CMP of the gather.
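The velocity-spectrum search just described can be sketched as follows. This is a toy, spike-only gather with hypothetical geometry; practical implementations stack within a time window and use a semblance measure rather than a single sample per trace:

```python
import numpy as np

def coherent_energy(gather, offsets, t0, v_trial, dt):
    """Stack the samples lying on tx = sqrt(t0^2 + x^2/v^2) and return
    the squared sum (the coherent-energy measure of the text)."""
    total = 0.0
    for trace, x in zip(gather, offsets):
        tx = np.sqrt(t0 ** 2 + (x / v_trial) ** 2)
        k = int(round(tx / dt))
        if k < len(trace):
            total += trace[k]
    return total ** 2

# Hypothetical CMP gather with one reflector: t0 = 0.5 s, V = 2000 m/s.
dt, t0_true, v_true = 0.004, 0.5, 2000.0
nt = 250
offsets = np.array([100.0, 300.0, 500.0, 700.0])
gather = np.zeros((len(offsets), nt))
for i, x in enumerate(offsets):
    tx = np.sqrt(t0_true ** 2 + (x / v_true) ** 2)
    gather[i, int(round(tx / dt))] = 1.0       # spike on the hyperbola

# Scan trial stacking velocities at the zero-offset time t0_true.
trials = np.arange(1500.0, 2600.0, 100.0)
energies = [coherent_energy(gather, offsets, t0_true, v, dt) for v in trials]
v_stack = trials[int(np.argmax(energies))]     # best-fit stacking velocity
```

The trial velocity whose hyperbola passes through all four spikes maximizes the coherent energy, which is exactly the peak-picking step on the velocity spectrum.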
If done correctly, all reflections coming from the same horizontal reflector will line up at the same zero-offset time on the corrected traces, as illustrated in Fig. 12. Other coherent events, such as multiples, which have different rms velocities, and random noise will not be aligned. These traces can therefore be summed algebraically to produce one trace, corresponding to the CMP, in which the aligned reflected events have been reinforced and the other effects reduced. This process is called stacking, and the output is called a stacked trace. When a large number of stacked traces corresponding to successive common midpoints are placed side by side, the resulting image is called a stacked seismic section. An example of a stacked seismic section is shown in Fig. 13. A stacked section represents an image showing the geologic formations that would be exposed if the earth were sliced along the line of the survey that produced the section.

Figure 10. Computation of stacking velocity from a CDP gather.

Figure 11. An example of a velocity spectrum plot.

Figure 12. NMO corrected traces of data in Fig. 10.

SEISMIC DECONVOLUTION

A very important step in geophysical signal processing that is often (but not always) performed prior to stacking is deconvolution. Deconvolution is a process by which the effect of the source waveform is compressed so as to improve temporal resolution. To understand the concept of deconvolution, go back to the basic model described earlier in Eq. (9), which represents a seismic trace s(t) as the convolution of a source waveform w(t) and a reflectivity sequence r(t); that is, s(t) = r(t)*w(t). Note that, for the sake of brevity, the effects of random noise, which are almost always present, have not been included in this model. The reflectivity sequence r(t) is also called the earth impulse response: it represents what would be recorded if the source waveform were purely an impulse function δ(t) (a spike). Recall that the reflectivity sequence r(t) contains information about the subsurface characteristics of the earth. The source waveform w(t) is therefore a blurring (or smearing) function that makes it difficult to recognize the reflectivity sequence by directly observing the trace s(t). If it were possible to generate a source waveform corresponding to an impulse function δ(t), then, except for the effects of random noise, each trace would indeed be a recording of the reflectivity sequence. Generating a seismic source that is a close approximation of an impulse function (i.e., one in which most of the energy is concentrated over a very short interval of time) is a problem that has, for years, received considerable attention in the geophysical industry and related literature.

Estimating the reflectivity sequence r(t) from s(t) = r(t)*w(t) has probably been one of the most studied problems in geophysical signal processing, and much research effort has been devoted to the development of methods for carrying out this operation. Among the most popular such methods are optimum Wiener filtering, predictive deconvolution, spiking deconvolution, homomorphic deconvolution, and numerous others (15). For the sake of conciseness, the Wiener filtering method will be discussed in some detail, while the others will be briefly summarized.

Figure 13. An example of a stacked seismic section. Note the folded and thrust-faulted structure (9).
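The convolutional trace model s(t) = r(t)*w(t) can be illustrated directly. The wavelet and reflector positions below are hypothetical:

```python
import numpy as np

# Seismic trace model s(t) = r(t) * w(t): a sparse reflectivity sequence
# blurred by the source wavelet (random noise omitted, as in the text).
w = np.array([1.0, -0.7, 0.25])               # hypothetical short wavelet
r = np.zeros(50)
r[[10, 23, 24, 40]] = [0.8, 0.5, -0.6, 0.3]   # hypothetical reflectors
s = np.convolve(r, w)                          # recorded trace, 52 samples
```

The two adjacent reflectors at samples 23 and 24 overlap after convolution, which is exactly the loss of temporal resolution that deconvolution tries to undo.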

Wiener Filtering Method

Assume that one has a signal x(t) and that one wishes to apply a filter f(t) to this signal in order to make it resemble a desired signal d(t). The Wiener filtering method, illustrated in Fig. 14, involves designing the filter f(t) so that the least-squares error between the actual output y(t) = x(t)*f(t) and the desired output d(t) is minimized. For simplicity, the steps for deriving the filter f(t) (for the deterministic case) will be carried out using discrete rather than continuous signals, and with matrix notation. Assume that the input sequence has n samples x0, x1, . . . , xn−1, that the unknown filter has m samples f0, f1, . . . , fm−1, and let

be the n- and m-dimensional vectors of these samples, respectively. The actual output vector, which is the convolution of the input sequence and the filter coefficients, can be expressed in matrix form as:

or

where X is an (n + m − 1) × m lower triangular matrix whose entries are derived from the input sequence, and y is the (n + m − 1)-dimensional vector of actual output samples. The error vector can now be defined as

where d is the vector of desired output samples. The optimum filter is derived by minimizing the norm of the error, that is,

It can be easily shown that the optimum vector f* that minimizes er is the solution of the linear matrix equation

Note that upon computing the entries of the m × m matrix XᵀX, the above equation can be written as:

where φi are the autocorrelation lags of the input signal and gi are the crosscorrelations between the input signal and the desired output; that is,

It is important to mention that the autocorrelation matrix is Toeplitz in nature, and hence the optimum filter coefficients can be calculated using the Levinson recursion algorithm (3). Also, note that in the above analysis, the only requirement for the derivation of the filter coefficients is a priori knowledge of the autocorrelation coefficients of the input signal and the crosscorrelation coefficients of the input signal with the desired output signal. Clearly, the filter length m needs to be specified a priori and cannot be changed during or after the computation of the filter coefficients without repeating the entire computation.
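A minimal numerical sketch of this least-squares design follows. It is illustrative, not from the article: the wavelet and helper name are hypothetical, and a dense `np.linalg.solve` of the normal equations stands in for the Levinson recursion that exploits the Toeplitz structure:

```python
import numpy as np

def wiener_filter(x, d, m):
    """Design a length-m least-squares (Wiener) filter f so that the
    convolution x * f approximates d, via the normal equations
    (X^T X) f = X^T d, where X is the convolution matrix of x."""
    n = len(x)
    # Full convolution output has n + m - 1 samples; zero-pad d to match.
    d = np.concatenate([d, np.zeros(n + m - 1 - len(d))])
    # Convolution matrix X: column j holds x delayed by j samples.
    X = np.zeros((n + m - 1, m))
    for j in range(m):
        X[j:j + n, j] = x
    # X^T X is the Toeplitz autocorrelation matrix of x.
    return np.linalg.solve(X.T @ X, X.T @ d)

# Example: shape a short minimum-phase wavelet toward a spike at lag 0.
w = np.array([1.0, -0.5, 0.2])
desired = np.zeros(6)
desired[0] = 1.0
f = wiener_filter(w, desired, m=4)
y = np.convolve(w, f)   # actual output: close to the desired spike
```

When d is a spike, as here, this same design performs the spiking deconvolution discussed below; production code would use a Toeplitz solver such as `scipy.linalg.solve_toeplitz` instead of the dense solve.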

Prediction Error Filtering

A special case of the above derivation arises when the desired output signal is an advanced version of the input signal; that is, dk = xk+p. In such a case the filter is called a p-step-ahead predictor: at sample k, it predicts xk+p from past values of the input. The derivation of such a filter is essentially the same as described above, except that xk+p should be used in place of dk in Eq. (26). That is,

The desired output in this case is the predictable part of the input series. This, for example, could include events such as multiples. The error signal contains the unpredictable part, which is the uncorrelated reflectivity sequence that we are trying to extract from the measurements. Of special interest is the one-step-ahead predictor (p = 1). In that case, it can be shown that the minimum error can be added as an additional unknown to Eq. (24), which can then be solved for, along with the filter coefficients. Also, the error series (or reflectivity sequence) can now be computed using Eq. (22) as:

where the vector h = [1  −f*0  −f*1  . . .  −f*m−1]ᵀ. It should be noted that this (deconvolution) approach for estimating the reflectivity sequence, also known as predictive deconvolution, is based on two important assumptions: first, that the reflectivity sequence represents a random series (i.e., it has no predictable patterns), and second, that the wavelet is minimum phase.

Spiking Deconvolution Method

Assume that it is possible to find a filter f(t) such that, when applied to the seismic source waveform w(t), one gets the impulse function δ(t); that is,

Then, if one applies this filter to the seismic trace s(t) = r(t)*w(t), one gets:

which means that it would be possible to recover the reflectivity sequence r(t). The filter f(t), if it exists, is called the inverse filter of the seismic source w(t). The nature of this inverse filter can also be examined in the frequency domain. Taking the Fourier transform of both sides of Eq. (29), one obtains

where

From this, it follows that:

This means that the amplitude spectrum of the inverse filter is the inverse of that of the seismic wavelet, and the phase spectrum of the inverse filter is the negative of that of the seismic wavelet. A problem therefore immediately arises if the amplitude spectrum of the wavelet has frequencies at which it is equal to zero. Clearly, at those frequencies the amplitude spectrum of the inverse filter becomes infinite (or undefined), and hence the filter will be unbounded in the time domain. Similar problems arise even if the amplitude spectrum of w(t) is merely very small at some frequencies. Using Eq. (33) to calculate the inverse filter f(t) is therefore not feasible in almost all realistic applications. Suppose instead that the Wiener filtering approach is used, and a filter f(t) is designed which, when applied to the source waveform w(t), produces an output as close as possible to a desired impulse function (a spike). In other words, referring to the Wiener filtering approach discussed earlier, suppose the source waveform has n samples w0, w1, . . . , wn−1, the unknown filter has m samples f0, f1, . . . , fm−1, and the desired output is an impulse with n + m − 1 samples δ0, δ1, . . . , δn+m−2, whose entries are all 0 except for a 1 at one location, say the jth location. Then the filter coefficients are determined so as to minimize the error function:

where

As discussed in the derivation of the Wiener filter, and using Eq. (24), the optimum vector that minimizes er is given by the expression:


Figure 14. The Wiener ﬁltering method.
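The least-squares spiking filter, including the search over spike positions described in the surrounding text, can be sketched as below. This is illustrative: the wavelet, filter length, and helper name are hypothetical, and a dense solve again stands in for the Levinson recursion:

```python
import numpy as np

def spiking_filter(w, m, j):
    """Length-m least-squares filter shaping wavelet w toward a spike at
    lag j; returns the filter and the residual error d^T d - like measure."""
    n = len(w)
    W = np.zeros((n + m - 1, m))          # convolution matrix of w
    for col in range(m):
        W[col:col + n, col] = w
    d = np.zeros(n + m - 1)
    d[j] = 1.0                            # desired spike at lag j
    f = np.linalg.solve(W.T @ W, W.T @ d)
    e = d - W @ f
    return f, float(e @ e)

# Try every spike position j and keep the one with the smallest error,
# mirroring the choice of j* from the diagonal of M = I - W(W^T W)^{-1} W^T.
w = np.array([0.5, 1.0, -0.3])            # hypothetical mixed-phase wavelet
m = 8
errs = [spiking_filter(w, m, j)[1] for j in range(len(w) + m - 1)]
j_star = int(np.argmin(errs))
f_best, e_min = spiking_filter(w, m, j_star)
```

Allowing a delayed spike is what makes spiking deconvolution usable on wavelets that are not minimum phase, at the cost of a bulk time shift in the output.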

This can be written as:

where φi are the autocorrelation lags of the source waveform and gi are the crosscorrelations between the source waveform and the desired output:

Note that it is also possible to choose the optimal location j* of the spike in the δ vector in order to achieve the smallest possible error. This can be done by noting that, when Eq. (36) is substituted into Eq. (34), the minimum value of er reduces to the quadratic expression emin = δᵀMδ, where M = I − W(WᵀW)⁻¹Wᵀ. Given that the δ vector is all zeros except for the number 1 in one location, emin will be smallest when j* is chosen to correspond to the location of the smallest term on the diagonal of the matrix M.

Homomorphic Deconvolution

In the late 1960s a class of nonlinear systems, called homomorphic systems (16, 17), which satisfy a generalization of the principle of superposition, was proposed. Homomorphic filtering is essentially the use of a homomorphic system to remove an undesired component from a signal. Homomorphic deconvolution involves the use of homomorphic filtering to separate two signals that have been convolved in the time domain. An important aspect of the theory of homomorphic deconvolution is that it can be represented as a cascade of three operations. The following summarizes how this theory can be used to separate the reflectivity sequence r(t) and the source waveform w(t) from the seismic trace s(t) = r(t)*w(t). The first operation is taking the Fourier transform of s(t); that is, S(ω) = R(ω)W(ω). The second operation is taking the logarithm of S(ω):

Note that since the Fourier transform is a complex function, it is necessary to define the logarithm of a complex quantity. An appropriate definition for a complex function X(ω) is:

In the above expression the real part, log|X(ω)|, causes no problem. Problems of uniqueness, however, arise in defining the imaginary part, since the phase of X(ω) is defined only to within an integer multiple of 2π. One approach to dealing with this problem is to require that the phase be a continuous odd function of ω; that is, the phase function must be unwrapped. It is important to point out that Eq. (39) shows that the multiplication operation in the frequency domain has now been changed to an addition operation in the log-frequency domain. The third operation is to take the inverse Fourier transform of log[S(ω)]. The resulting function is called the complex cepstrum of s(t). Now, if the characteristics of the two signals r(t) and w(t) are such that they appear nonoverlapping in this domain, then they can be separated by an appropriate window function. This operation is essentially the filtering operation. In general, it is very unlikely for a seismic trace to exhibit complete nonoverlap in the complex cepstrum of s(t). If an acceptable separation is achieved, however, the reverse process can be applied to each of the separated signals: Fourier transform, followed by inverse logarithm, followed by inverse Fourier transform. The first applications of homomorphic systems were in the area of speech processing (18). Homomorphic deconvolution of seismic data was introduced by Ulrych (19) in the early 1970s and later extended by Tribolet (20). It should be pointed out that one of the major problems encountered in using homomorphic deconvolution is the unwrapping of the phase.

CONCLUSION

The field of geophysical signal processing deals primarily with computer methods for processing geophysical data collected for the purpose of extracting information about the subsurface layers of the earth. In this article, some of the main steps involved in acquiring, processing, displaying, and interpreting geophysical signals have been outlined.
Clearly, many other important steps have not been covered, and considerably more can be said about each of the steps that were covered. For example, acquisition of geophysical data also involves issues in geophone/hydrophone array design and placement, field operations, noise control, and digital recording systems. Processing the data is typically an iterative process that also involves static corrections, multiple suppression, numerous deconvolution applications, migration, imaging beneath complex structures, and F-K filtering, to mention several. Displaying and interpreting geophysical data also involves data demultiplexing and sorting, amplitude adjustments and gain applications, 2-D and 3-D imaging, geological modeling, as well as identification of stratigraphic boundaries and structural features on the final image.

BIBLIOGRAPHY

1. A. I. Levorsen, Geology of Petroleum, San Francisco: Freeman, 1958.
2. M. B. Dobrin, Geophysical Prospecting, New York: McGraw-Hill, 1960.
3. J. F. Claerbout, Fundamentals of Geophysical Data Processing, New York: McGraw-Hill, 1976.
4. E. A. Robinson and S. Treitel, Geophysical Signal Analysis, Englewood Cliffs, NJ: Prentice-Hall, 1980.
5. C. H. Dix, Seismic Prospecting for Oil, Boston: IHRDC, 1981.
6. M. A. Simaan, Advances in Geophysical Data Processing, Greenwich, CT: JAI Press, 1984.
7. E. A. Robinson and T. S. Durrani, Geophysical Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1986.
8. O. Yilmaz, Seismic Data Analysis: Processing, Inversion, and Interpretation of Seismic Data, Investigations in Geophysics, No. 10, Tulsa: SEG Press, 2001.
9. L. R. Lines and R. T. Newrick, Fundamentals of Geophysical Interpretation, Tulsa: SEG Press, 2004.
10. L. T. Ikelle and L. Amundsen, Introduction to Petroleum Seismology, Investigations in Geophysics, No. 12, Tulsa: SEG Press, 2005.
11. D. K. Butler, Near Surface Geophysics, Investigations in Geophysics, No. 13, Tulsa: SEG Press, 2005.
12. B. L. Biondi, 3D Seismic Imaging, Investigations in Geophysics, No. 14, Tulsa: SEG Press, 2006.
13. E. S. Robinson and C. Coruh, Basic Exploration Geophysics, New York: Wiley, 1988.
14. B. Ursin and K. A. Berteussen, Comparison of some inverse methods for wave propagation in layered media, Proc. IEEE, 3: 389-400, 1986.
15. V. K. Arya and J. K. Aggarwal, Deconvolution of Seismic Data, Stroudsburg, PA: Hutchinson & Ross, 1982.
16. A. V. Oppenheim, R. W. Schafer, and T. G. Stockham, Jr., Nonlinear filtering of multiplied and convolved signals, Proc. IEEE, 8: 1264-1291, 1968.
17. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1989.
18. L. R. Rabiner and R. W. Schafer, Digital Processing of Speech Signals, Englewood Cliffs, NJ: Prentice-Hall, 1978.
19. T. Ulrych, Application of homomorphic deconvolution to seismology, Geophysics, 36 (4): 650-660, 1971.
20. J. M. Tribolet, Seismic Applications of Homomorphic Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1979.

MARWAN A. SIMAAN University of Pittsburgh, Pittsburgh, PA


Wiley Encyclopedia of Electrical and Electronics Engineering Information Processing for Remote Sensing Standard Article James C. Tilton1, David Landgrebe2, Robert A. Schowengerdt3 1NASA’s Goddard Space Flight Center, Greenbelt, MD 2Purdue University, West Lafayette, IN 3University of Arizona, Tucson, AZ Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3607 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (339K)


Abstract. The sections in this article are: Feature Extraction; Multispectral Image Data Classification; Classification Using Neural Networks; Image Segmentation; Hyperspectral Data.



INFORMATION PROCESSING FOR REMOTE SENSING Remote sensing is a technology through which information about an object is obtained by observing it from a distance.


This article is specifically concerned with obtaining information about Earth through remote sensing. Earth can be observed remotely in many ways. One of the earliest approaches to remote sensing was observing Earth from a hot air balloon using a camera, or just the human eye. Today, remotely sensed Earth observational data are routinely obtained from instruments onboard aircraft and spacecraft. These instruments observe Earth through various means, including optical telescopes and microwave devices at wavelengths from optical through microwave, including the visible, infrared, passive microwave, and radar. Other articles in this series discuss the most widely employed approaches for obtaining remotely sensed data. This article discusses methods for effectively extracting information from the data once they have been obtained. Most information processing of Earth remote sensing data assumes that Earth’s curvature and terrain relief can be ignored. In most practical cases, this is a good assumption. It is beyond the scope of this article to deal with the special cases where it is not, such as with a relatively low flying sensor over mountainous terrain or when the sensor points toward Earth’s horizon. This article deals with information processing of two-dimensional image data from down-looking sensors. Remotely sensed image data can have widely varying characteristics, depending on the sensor employed and the wavelength of radiation sensed. This variation can be very useful, as in most cases this variation corresponds to information about what is being sensed on Earth. A key task of information processing for remote sensing is to extract the information contained in the variations of remotely sensed image data with changes in spatial scale, spectral wavelength, and the time at which the data are collected. Data containing these types of variations are referred to as multiresolution or multiscale data, multispectral data, and multitemporal data, respectively. 
In some cases, Earth scientists may find useful a combined analysis of image data taken at different spatial scales and/or orientations by separate sensors. Such analysis will become even more desirable over the next several years as the number and variety of sensors increase under such programs as NASA's Earth Observing System. This type of analysis requires determining the correspondence of data points in one image to data points in the other image. The process of finding this correspondence and transforming the images to a common spatial scale and orientation is called image registration. More information on image registration can be found in REMOTE SENSING GEOMETRIC CORRECTIONS. Multispectral data are often collected by an instrument that is designed to collect the data in such a way that they are already registered. In other cases, however, small shifts in location need to be corrected by image registration. Multitemporal data must almost always be brought into spatial alignment using image registration, as must multiresolution data when obtained from separate sensors. Several approaches have been developed for analyzing registered multiscale/spectral/temporal data. Because most of these techniques were originally developed for analyzing multispectral image data, they will be discussed in that context. However, many of these techniques can also be used in analyzing multiscale and/or multitemporal data. In the following discussion, each scale, spectral, or temporal

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.


manifestation of the image data is referred to as an image band. Figure 1 gives an example of remotely sensed multispectral image data. Sometimes important information identifying the observed ground objects is contained in the ratios between bands. Ratios taken between spectrally adjacent bands correspond to the discrete derivative of the spectral variation. Such band ratios measure the rate of change in spectral response and distinguish classes with a small rate of change in spectral response from those with a large rate of change. Other spectral ratios have been defined such that they relate to the amount of photosynthetic vegetation on the Earth’s surface.

These are called vegetation indices. Spectral ratios are also useful in the analysis of image data containing significant amounts of topographic shading. The process of spectral ratioing tends to reduce the effect of this shading. The data contained in each band of multispectral image data are often correlated with the data from some of the other bands. When desirable to do so, this correlation can be reduced by transforming the data in such a way that most of the data variation is concentrated in just a few transformed bands. Reducing the number of image bands in this way not only may make the information content more apparent but also serves to reduce the computation time required for analysis; it can be used, in effect, to "compress" the data by discarding transformed bands with low variation. There are many such transformations for accomplishing this concentration of variation. One is called Principal Component Analysis (PCA) or the Principal Component Transform (PCT). Other useful transforms are the Canonical Components Transform (CCT) and the Tasseled Cap Transform (TCT).

Figure 1. An example of remotely sensed multispectral imagery data. Displayed are selected spectral bands from a seven-band Landsat 5 Thematic Mapper image of Washington, DC: (a) spectral band 2 (0.52–0.60 µm), (b) spectral band 4 (0.76–0.90 µm), (c) spectral band 5 (1.55–1.75 µm), and (d) spectral band 7 (2.08–2.35 µm).

The process of labeling individual pixels in the image data as belonging to a particular ground cover class is called image classification. (An image data vector from a particular spatial location is called an image picture element or pixel.) This labeling process can be carried out directly on the remotely sensed image data, on image features derived from the original image data (such as band ratios or data transforms), or on combinations of the original image data and derived features. Whatever the origin of the data, the classification feature space is the n-dimensional vector space spanned by the data vectors formed at each image pixel. The two main types of image classification are unsupervised and supervised. In unsupervised classification, an analysis procedure is used to find natural divisions, or clusters, in the image feature space. After the clustering process is complete, the analyst associates class labels with each cluster. Several clustering algorithms are available, ranging from the simple K-means algorithm, where the analyst must prespecify the number of clusters, to the more elaborate ISODATA algorithm, which automatically determines an appropriate number of clusters. In supervised classification, the first step is to define a description of how the classes of interest are distributed in feature space. Then each pixel is given the class label whose description is closest to its data value. Determining the description of how the classes of interest are distributed in feature space is the training stage of supervised classification.
An approach commonly used in this stage is to identify small areas throughout the image data that contain image pixels of the classes of interest. This is usually done using image interpretation combined with ground reference information (e.g., a map of the locations of areas of classes of interest obtained through a ground survey, knowledge from a previous time, or other generalized knowledge about the area in question). Then the classes are characterized according to the model used for the next step: the classification stage. One of the simplest classification algorithms is the minimum-distance-to-means classifier. When this classifier is used, the vector mean value of each class is calculated in the training stage, and each data pixel is labeled as belonging to the closest class by some distance measure (e.g., the Euclidean distance measure). This classifier can work very well if all classes have similar variance and well-separated means. However, its performance may be poor when the classes of interest have a wide range of variance. A relatively simple classification algorithm that can account for differing ranges of variation of the classes is the parallelepiped classifier. When this classifier is used, the range of pixel values in each band is noted for each class from the training stage, and image data pixels that do not fall uniquely into the range values for just one class are labeled as ‘‘unknown.’’ This classifier gets its name from the fact that the feature space locations of pixels belonging to individual classes form parallelepiped-shaped regions in feature space. The number of pixels in the unknown class can be reduced


Table 1. Accuracy Comparison (Percent Correct Classification) Between Classifications of the Original and Presegmented Landsat Thematic Mapper Images [from (1)]

Ground Cover Class                         Original Image (%)   Presegmented Image (%)
Water/marsh                                73.7                 79.3
Forest                                     74.8                 75.6
Residential                                54.4                 64.9
Agricultural and domestic grasses          81.9                 83.4
Overall                                    79.2                 80.9

by modeling each class by a union of several parallelepiped-shaped regions. One of the most commonly used classification algorithms for remotely sensed data is the Gaussian maximum likelihood classifier (also called the ML classifier). The ML classifier often performs very well in cases where the minimum-distance-to-means classifier or the parallelepiped classifier performs poorly. This is because the ML classifier not only accounts for differences in variance between classes but also accounts for differences in between-band correlations. An even more general classification approach is a neural network classifier. The flexibility of the neural network classifier comes from its ability to generate totally arbitrary feature space partitions.

The analysis approaches discussed to this point have treated the data at each spatial location separately. This per-pixel analysis ignores the information contained in the spatial variation of the image data. One approach that can exploit the spatial information content in the data is image segmentation. Image segmentation is a partitioning of an image into regions based on the similarity or dissimilarity of feature values between neighboring image pixels. An image region is defined as a collection of image pixels in which, for any two pixels in this collection, there exists a spatial path connecting these two pixels that travels only through pixels contained in the region. After an image is segmented into regions, the image can be labeled region by region using one of the classification approaches mentioned previously. The combination of image segmentation and image classification often produces results superior to per-pixel image classification (see Table 1).

A relatively recent development in remote sensing instrumentation is the imaging spectrometer, such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
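As an illustrative sketch (not code from the original article), the minimum-distance-to-means classifier described above can be written in a few lines; the two-band class means below are hypothetical toy values:

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    """Label each pixel with the index of the nearest class mean
    (Euclidean distance), as in the minimum-distance-to-means classifier.

    pixels      : (N, n_bands) array of feature vectors
    class_means : (k, n_bands) array of training-stage mean vectors
    """
    # Distances from every pixel to every class mean: shape (N, k)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Two hypothetical classes in a two-band feature space
means = np.array([[10.0, 10.0], [40.0, 40.0]])
labels = min_distance_classify(np.array([[12.0, 9.0], [38.0, 41.0]]), means)
# labels -> array([0, 1])
```

As noted in the text, this classifier works well only when the classes have similar variance and well-separated means; it ignores class covariance entirely.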
Imaging spectrometers produce hyperspectral data, consisting of hundreds of spectral bands taken at narrow and closely spaced spectral intervals. Two main types of specialized analysis approaches are currently under development for this type of data. One approach attempts to match laboratory or field reflectance spectra with remotely sensed imaging spectrometer data. The success of this approach depends on precise calibration of the remotely sensed data and careful compensation or corrections for atmospheric, solar, and topographic effects. The other approach depends on exploiting the unique mathematical characteristics of very high dimensional data. This approach does not necessarily require corrected data.

FEATURE EXTRACTION

The multispectral image data provided by a remote sensing instrument can be analyzed directly. However, in some cases,


it may be beneficial to analyze features extracted from the original data. Such feature extraction commonly takes the form of subsetting and/or mathematically transforming the original data. It is used to compensate for one or more of the following problems often encountered with remotely sensed data: atmospheric effects, topographic shading effects, spectral band correlation, and lack of optimization for a particular application.

Atmospheric Effects

Most remote sensing data are collected from sensors on satellite platforms orbiting above Earth's atmosphere. Earth's atmosphere can have a significant effect on the quality and characteristics of such satellite-based remote sensing data. For this article, it is sufficient to introduce the following first-order model for the input radiance to an Earth-orbiting sensor (2):

L(x, y, λ) = (1/π) Ts(λ) Tv(λ) E0(λ) cos[θ(x, y)] ρ(x, y, λ) + Lh(λ)   (1)
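As a concrete illustration of Eq. (1), the model can be evaluated numerically for one band and one pixel; the parameter values below are illustrative placeholders, not measured quantities:

```python
import math

def sensor_radiance(T_s, T_v, E0, theta, rho, L_h):
    """First-order input radiance to an Earth-orbiting sensor, Eq. (1).

    T_s, T_v : atmospheric transmittance along the solar and view paths
    E0       : exo-atmospheric solar irradiance
    theta    : local solar incidence angle (radians)
    rho      : diffuse (Lambertian) surface reflectance
    L_h      : additive atmospheric view-path radiance
    """
    return (1.0 / math.pi) * T_s * T_v * E0 * math.cos(theta) * rho + L_h

# Illustrative values for a single band and pixel
L = sensor_radiance(T_s=0.85, T_v=0.90, E0=1500.0,
                    theta=math.radians(30.0), rho=0.25, L_h=2.0)
```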

The solar irradiance from the sun E0(λ) provides the source radiation for the remote sensing process. This is the irradiance as it would be measured at the top of Earth's atmosphere and is referred to as the exo-atmospheric solar irradiance. The atmosphere affects the signal received by the sensor on two paths: (1) between the top of the atmosphere and Earth's surface (solar path) and (2) between the surface and the sensor (view path). The spectral transmittance of the atmosphere Ts(λ) along the solar path or Tv(λ) along the view path is generally high except in prominent molecular absorption bands attributable mainly to carbon dioxide and water vapor, as illustrated in Fig. 2. The cos[θ(x, y)] term is the spatial variation of irradiance at the surface resulting from the solar zenith angle and topography, which determine the angle at which the incident radiation strikes the surface. The spatial and spectral variations in diffuse surface reflectance are modeled by the function ρ(x, y, λ). A Lambertian, or perfectly diffuse, reflecting surface is assumed in Eq. (1). The atmospheric view path radiance Lh(λ) is additive and increases at shorter visible wavelengths as a result of Rayleigh molecular scattering. This is the effect that causes the clear sky to appear blue. A related, second-order effect from down-scattered radiation (skylight) that is subsequently reflected at the surface into the sensor view path is not included in Eq. (1). This effect allows the surface-related signal in shadowed areas to be recovered, although with a spectral bias toward shorter wavelengths.

Figure 2. Atmospheric transmittance for a nadir path as estimated with the atmospheric modeling program MODTRAN (3). The transmittance is generally over 50% throughout the visible to short-wave infrared (SWIR) spectral region, except for prominent absorption bands resulting from atmospheric molecular constituents (CO2 and H2O). Remote sensing of the Earth is not possible at wavelengths corresponding to the strongest absorption bands. The relatively lower transmittance below about 0.6 µm results from Rayleigh scattering losses.

Correction for atmospheric effects requires modeling or measurement of the various independent terms in Eq. (1), namely, Ts(λ), Tv(λ), E0(λ), and Lh(λ) and, given the remotely sensed data measurements L(x, y, λ), solution of Eq. (1) for the surface spatial and spectral variations ρ(x, y, λ). The cos[θ(x, y)] term is a topographic effect that is described in the next section. The path radiance term Lh(λ) is primarily of concern at short, blue-green wavelengths, and the transmittance terms Ts(λ) and Tv(λ) are usually ignored for coarse multispectral sensing, such as with Landsat TM, where the bands are placed within atmospheric "windows" of relatively high and spectrally flat transmittance. For hyperspectral data, however, knowledge of and correction for transmittance is usually required if the data are to be compared to reflectance spectra measured in a laboratory.

Topographic Effects

Most areas of Earth have topographic relief. The irradiance from solar radiation is proportional to the cosine of the angle between the normal vector to the surface and the vector pointing to the sun. A surface element normal to the solar vector receives the maximum possible irradiance. Any element at some other angle will receive less. This spatially variant factor is the same in all solar reflective bands and, therefore, introduces a correlation across these bands.

Spectral Band Ratios.
The pixel-by-pixel ratio of adjacent spectral bands corresponds to the discrete derivative of the spectral function. It therefore measures the rate of change in spectral signature and distinguishes classes with a small rate of change from those with a large rate of change. For example, the ratio of a near infrared (NIR) band to a red band will show a high value across the vegetation edge at 700 nm, whereas a ratio of a red band to a green band will show a small value for both vegetation and soil. For bands where the atmospheric path radiance is small [e.g., in the NIR or short-wave infrared (SWIR) spectral regions], the spectral band ratio will be proportional to the surface reflectance ratio. In this case, the spectral band ratio is insensitive to topographic effects. If the path radiance is not small, then it should be reduced or removed using a technique such as Dark Object Subtraction (DOS) before spectral band ratios are calculated (2).

Vegetation Indices. A number of specific ratio formulae have been defined in attempts to obtain features that relate to the amount of photosynthetic vegetation on the Earth's surface. All depend on the red and NIR spectral reflectances (i.e., calibrated data). They are summarized in Table 2 and plotted as isolines in the NIR-red reflectance space in Fig. 3.
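The vegetation indices of Table 2 are simple per-pixel arithmetic on calibrated reflectances; a minimal sketch (the reflectance values are hypothetical) is:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index (Table 2)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index (Table 2); L is an empirical
    constant, typically 0.5 for partial vegetation cover."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

# Illustrative reflectances for a vegetated pixel
nir, red = 0.45, 0.08
# ndvi(nir, red) is approximately 0.70; savi(nir, red) is approximately 0.54
```

Both functions also work elementwise on whole NumPy image bands, which is how the indices are produced as map products in practice.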


Table 2. Definition of Common Vegetation Indices

Index                                            Formula                                 Remarks
Ratio (R)                                        NIR/red                                 —
Normalized Difference Vegetation Index (NDVI)    (NIR − red)/(NIR + red)                 —
Soil-Adjusted Vegetation Index (SAVI)            [(NIR − red)/(NIR + red + L)](1 + L)    L is an empirical constant, typically 0.5 for partial cover.

Even though vegetation indices can be used as features in classifications, they are commonly produced as an end-product indicating photosynthetic activity, particularly on a global scale from Advanced Very High-Resolution Radiometer (AVHRR) data.

Spectral Band Correlation

Spectral band correlation can result from several factors. First, the sensor spectral sensitivities sometimes overlap between adjacent spectral bands. Second, the spectral reflectance of most natural materials on the earth, particularly over spectral bandwidths of 10 nm or greater, changes slowly with wavelength. Therefore, the reflectance in one band will be similar to that in an adjacent band. A notable exception is the "vegetation edge" at about 700 nm where the reflectance of photosynthetic vegetation increases dramatically from the red to the NIR spectral regions. Finally, topographic shading
can introduce into remotely sensed data an apparent spectral correlation because it affects all solar reflective bands equally.

Principal Components. The Principal Component Transformation is often used to eliminate spectral band correlation. The PCT also produces a redistribution of spectral variance into fewer components, isolates spectrally uncorrelated signal components and noise, and produces features that, in some cases, align with physical variables. It is a data-dependent, linear matrix transform of the original spectral vectors into a new coordinate system that corresponds to a specific coordinate axes rotation in n dimensions (2,5). The PCT for a particular data set is derived from the eigenvalues and eigenvectors of the spectral covariance of the data, which is represented in matrix form as

Σ = [1/(N − 1)] ∑_{j=1}^{N} (xj − µ)(xj − µ)^T   (2)

where N is the number of pixels in the image, xj is the jth image data vector (pixel), the superscript T denotes the vector transpose, and µ is the vector mean value of the image given by

µ = (1/N) ∑_{j=1}^{N} xj   (3)

Figure 3. Isolines for three different vegetation indices (R = 1, 2, 3; NDVI = 0, 0.33, 0.5; SAVI = 0, 0.33, 0.5) in the NIR-red spectral reflectance space (ρ_NIR versus ρ_red). The spectral ratio R and the NDVI are redundant in that either one can be expressed in terms of the other (see Table 2). The SAVI requires an empirically determined constant (4). A value of 0.5 is used for this graph and is appropriate under most conditions of partial vegetation cover with soil background. SAVI has a smaller slope than does NDVI in this graph. Therefore, SAVI is less sensitive to the ratio of the NIR reflectance than NDVI, reflecting the former's adjustment for soil background.

The eigenvalues and eigenvectors of Σ are the solutions of the equation

Σφ = λφ   (4)

assuming φ is not the zero vector (6,7). The eigenvalues are ordered in decreasing order, and the corresponding eigenvectors are combined to form the eigenvector matrix

Φ = [φ1 φ2 ··· φn]   (5)

The PCT is then given by

y = Φ^T x   (6)
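Equations (2), (3), and (6) translate directly into a short NumPy sketch (an illustration of the PCT, not code from the article; the 500 × 7 random "image" is a stand-in for real multispectral pixels):

```python
import numpy as np

def pct(X):
    """Principal Component Transform of multispectral pixels.

    X : (N, n_bands) array of spectral vectors.
    Returns (components, eigenvalues), eigenvalues in decreasing order.
    """
    mu = X.mean(axis=0)                 # Eq. (3): vector mean of the image
    Xc = X - mu
    cov = Xc.T @ Xc / (X.shape[0] - 1)  # Eq. (2): spectral covariance
    lam, Phi = np.linalg.eigh(cov)      # eigenpairs of the symmetric matrix
    order = np.argsort(lam)[::-1]       # decreasing eigenvalue order
    lam, Phi = lam[order], Phi[:, order]
    return Xc @ Phi, lam                # Eq. (6): y = Phi^T x, per pixel

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))           # 500 pixels, 7 "bands"
Y, lam = pct(X)
```

The output components are mutually uncorrelated, and the eigenvalues are the component variances, so keeping only the leading components "compresses" the data variation as described in the text.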

Each output axis is a linear combination of the input axes (e.g., the spectral bands) and is orthogonal to the other output axes (this characteristic can isolate uncorrelated noise in the original bands). The weights on the inputs x are the eigenvectors, and the variances of the output axes y are the eigenvalues. Because the eigenvalues are ordered in decreasing order,


the PCT achieves a compression of data variation into fewer dimensions when a subset of PCT components corresponding to the larger eigenvalues is selected. A disadvantage of the PCT is that it is a global, data-dependent transform and must be recalculated for each image. The greatest computation burden is usually the covariance matrix for the input features. Figure 4 displays the first four principal components of the Landsat 5 TM scene displayed in Fig. 1.

Figure 4. The first four principal components from the PCT of the seven-band Landsat 5 TM image of Washington, DC (see Fig. 1). These four principal components contain 98.95% of the data variance contained in the original seven spectral bands.

Canonical Components. The Canonical Components Transform is similar to the PCT, except that the data are not lumped into one distribution in n-dimensional space when deriving the transformation matrix. Rather, training data for each class are used to find the transformation that maximizes the separability of the defined classes. A compression of significant information into fewer dimensions results, but it is not optimal as in the case of the PCT. Selection of the first three canonical components for a three-band color composite produces a color image that visually separates the classes better than any combination of three of the original bands. The CCT is a linear transformation on the original feature space such that the transformed features are optimized and arranged in order of decreasing maximum separability of the classes. The optimization is accomplished by maximizing the ratio of the between-class variance to the within-class variance. The specific quantities are

ΣW = ∑_{i=1}^{n} P(ωi) Σi   (within-class scatter matrix)   (7)

ΣB = ∑_{i=1}^{n} P(ωi)(µi − µ0)(µi − µ0)^T   (between-class scatter matrix)   (8)

µ0 = ∑_{i=1}^{n} P(ωi) µi   (9)

where µi, Σi, and P(ωi) are the mean vector, covariance matrix, and prior probability, respectively, for class ωi. The optimality criterion then is defined as

J1 = tr(ΣW⁻¹ ΣB)   (10)
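The scatter matrices and criterion of Eqs. (7)–(10) can be evaluated directly from per-class statistics; the sketch below (an illustration, not code from the article, with toy two-class statistics) computes J1:

```python
import numpy as np

def canonical_criterion(means, covs, priors):
    """Evaluate the class-separability criterion J1 = tr(S_W^-1 S_B)
    from Eqs. (7)-(10).

    means  : (k, n) class mean vectors
    covs   : (k, n, n) class covariance matrices
    priors : (k,) prior probabilities P(omega_i)
    """
    mu0 = priors @ means                          # Eq. (9): overall mean
    S_W = np.einsum('i,ijk->jk', priors, covs)    # Eq. (7): within-class scatter
    d = means - mu0
    S_B = np.einsum('i,ij,ik->jk', priors, d, d)  # Eq. (8): between-class scatter
    return np.trace(np.linalg.solve(S_W, S_B))    # Eq. (10)

# Two equally likely toy classes in a two-band space
means = np.array([[0.0, 0.0], [4.0, 4.0]])
covs = np.stack([np.eye(2), np.eye(2)])
J1 = canonical_criterion(means, covs, np.array([0.5, 0.5]))
```

A larger J1 indicates classes that are farther apart relative to their within-class spread; the CCT axes are the directions that maximize this ratio.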

The transformation results in new features that are linear combinations of the original bands. The size of the feature eigenvalues indicates the relative class discrimination value. Thus, the size of the eigenvalues gives some idea as to how many features should be used.

Tasseled Cap Components. The Tasseled Cap Transform is a linear matrix transform, just as the PCT and CCT, but is fixed and independent of the data. It is, however, sensor dependent and must be newly derived for each sensor. The TCT produces a new set of components that are linear combinations of the original bands. The coefficients of the transformation matrix are derived relative to the "tasseled cap," which describes the temporal trajectory of vegetation pixels in the n-dimensional spectral space as the vegetation grows and matures during the growing season. The TCT was originally derived for crops in temperate climates, namely the U.S. Midwest, and is most appropriately applied to that type of data (8–11). For the Landsat MSS (Multispectral Scanner) data, four new axes are defined: soil brightness, greenness, yellow stuff, and non-such. For the Landsat TM (Thematic Mapper) data, six new axes are defined: soil brightness, greenness, wetness, haze, and otherwise unnamed fifth and sixth axes. The transformed data in the tasseled cap space can be compared directly between sensors (e.g., Landsat MSS soil brightness and Landsat TM soil brightness).

Spectral Band Selection

Global satellite sensors must be designed to image a wide range of materials of interest in many different applications. The sensor design is thus a compromise for any particular application (continuous spectral sensing, such as that produced by hyperspectral sensors, is a way to provide data suitable for all applications, at the expense of large data volumes). A multispectral sensor may have bands in the red and NIR suitable for vegetation mapping but lack bands in the SWIR suitable for mineral mapping.
In spectral band selection, an optimal set of spectral bands is selected for analysis. The spectral characteristics of the material classes of interest must be defined before this technique is applied. The spectral characteristics are obtained from training data and may consist of the class mean vectors or the class mean vectors and covariance matrices (second-order statistics), depending on the metric to be used. Various band combinations can be compared to find the combination that best separates (distinguishes) the given classes. Many metrics have been defined to measure separability. Each can be interpreted as a type of distance in spectral space (Table 3). The angular distance metric is particularly interesting because it conforms to the general shape of the scattergram between spectral bands in many cases. Topographic shading introduces a scatter of spectral signatures along a line through the origin of the spectral space. The angular metric directly measures the angular separation of two distributions and is insensitive to the distance of a class distribution from the origin. To select the optimum spectral bands from a sensor band set, an exhaustive calculation is performed to find the average interclass separability for each possible combination of bands. For example, bands 2, 3, and 4 of Landsat TM may show the highest average transformed divergence of any three-band combination of the seven TM bands for a vegetation and soil classification. The full classification can then be performed using only bands 2, 3, and 4.

MULTISPECTRAL IMAGE DATA CLASSIFICATION

Data classification is the process of associating a thematic label with elements of the data set. The data elements so labeled are typically individual pixels, but they may be groups of pixels that have been associated with one another, for example, by having previously segmented the scene into regions (i.e., spectrally homogeneous areas). Mathematically, the process of classification may be described as mapping the data from a vector-valued space (spectral feature space) to a scalar space that contains the list of final classes desired by the user (i.e., mapping from the data to the desired output).
Classification is carried out based upon ancillary information, often in terms of samples labeled by the analyst as being representative of each class of surface cover to be mapped. These samples are often called training samples or design samples. The development of an appropriate list of classes and the process of labeling these samples into these classes is a key step in the analysis process. A valid list of classes for a given data set must be, simultaneously,

1. exhaustive—There must be a logical and appropriate class to which to associate every pixel in the data set.
2. separable—It must be possible to discriminate accurately each class from the others in the list based on the spectral features available.
3. of informational value—The list of classes must contain the classes desired to be identified by the user.

Training Phase

Classification is typically carried out in two phases: the training phase and the analysis phase. During the training phase, ancillary information available to the analyst is used to define the list of classes to be used, and, from it, to determine the appropriate quantitative description of each of the classes. How this is done is situation-dependent, based on the form and type of ancillary information available and the desired classification output. In some cases, the analyst may have partial knowledge of the scene contents based upon observations from the ground and photointerpretation of air photographs of a part of the scene or from generalized knowledge of the area that is to be made more quantitative and specific by the analysis. For example, the data set may be of an urban area with which the analyst is partially familiar and in which the analyst can designate areas used for classes such as high-density housing, low-density housing, commercial, industrial, and recreational. The analyst would use this generalized knowledge to mark areas in the data set that are typical examples of each class. These then become the training areas from which the quantitative description of each class is calculated. Examples of other types of ancillary data from which training samples may be identified are so-called signature banks, which are databases of spectral responses of the materials to be identified that were collected at another time and location with perhaps different instruments. In this case, the additional problem exists of reconciling the differences in data collection circumstances for the database with those of the data set to be analyzed. Examples of these circumstances are the differences in the instruments used to collect the data, the spatial and spectral resolution, the atmospheric conditions, the time of day, the illumination and direction of view variables, and the season.

Table 3. Separability Metrics for Classification (6,12)

City block: L1 = ∑_{b=1}^{n} |m_ib − m_jb|. Results in piecewise linear decision boundaries.

Normalized city block: NL1 = ∑_{b=1}^{n} |m_ib − m_jb| / [(σ_ib + σ_jb)/2]. Normalizes for class variance.

Euclidean: L2 = ‖µi − µj‖ = [(µi − µj)^T (µi − µj)]^{1/2} = [∑_{b=1}^{n} (m_ib − m_jb)²]^{1/2}. Results in linear decision boundaries.

Angular: ANG = acos[µi^T µj / (‖µi‖ ‖µj‖)]. Normalizes for topographic shading.

Mahalanobis: MH = (µi − µj)^T [(Σi + Σj)/2]⁻¹ (µi − µj). Assumes normal distributions; normalizes for class covariance; zero if class means are equal.

Divergence: D = (1/2) tr[(Σi − Σj)(Σj⁻¹ − Σi⁻¹)] + (1/2) tr[(Σi⁻¹ + Σj⁻¹)(µi − µj)(µi − µj)^T]. Zero if class means and covariances are equal; does not converge for large class separation.

Transformed divergence: Dt = 2[1 − e^{−D/8}]. Asymptotically converges for large class separation.

Bhattacharyya: B = (1/8) MH + (1/2) ln[ |(Σi + Σj)/2| / (|Σi| |Σj|)^{1/2} ]. Zero if class means and covariances are equal; does not converge for large class separation.

Jeffries-Matusita: JM = {2[1 − e^{−B}]}^{1/2}. Asymptotically converges for large class separation.

In the formulae, m_ib is the mean value for class i and band b, σ_ib is the standard deviation for class i and band b, µi is the mean vector for class i, Σi is the covariance matrix for class i, and n is the number of spectral bands.
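The exhaustive band-combination search using the divergence and transformed divergence metrics of Table 3 can be sketched as follows; this is an illustrative implementation, not code from the article, and it uses the standard 1/2 factors of the Gaussian divergence:

```python
import itertools
import numpy as np

def divergence(mu_i, cov_i, mu_j, cov_j):
    """Divergence between two Gaussian class models (Table 3)."""
    ci, cj = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    dm = (mu_i - mu_j)[:, None]
    return (0.5 * np.trace((cov_i - cov_j) @ (cj - ci))
            + 0.5 * np.trace((ci + cj) @ dm @ dm.T))

def transformed_divergence(mu_i, cov_i, mu_j, cov_j):
    """Transformed divergence Dt = 2[1 - exp(-D/8)] (Table 3)."""
    return 2.0 * (1.0 - np.exp(-divergence(mu_i, cov_i, mu_j, cov_j) / 8.0))

def best_band_subset(means, covs, n_select):
    """Exhaustively score every n_select-band combination by average
    pairwise transformed divergence; return (best_bands, best_score)."""
    n_bands = means.shape[1]
    best = None
    for bands in itertools.combinations(range(n_bands), n_select):
        idx = np.ix_(bands, bands)
        score = np.mean([transformed_divergence(
                             means[i][list(bands)], covs[i][idx],
                             means[j][list(bands)], covs[j][idx])
                         for i, j in
                         itertools.combinations(range(len(means)), 2)])
        if best is None or score > best[1]:
            best = (bands, score)
    return best

# Two hypothetical classes separable only in band 2 (0-indexed)
means = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 5.0]])
covs = np.stack([np.eye(3), np.eye(3)])
bands, score = best_band_subset(means, covs, 1)
# bands -> (2,)
```

For n bands and k classes this evaluates C(n, m) subsets times C(k, 2) class pairs, which is why the search in the text is described as exhaustive.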

Another example of an ancillary data source that might be used for deriving training data is more fundamental knowledge about the materials to be identified. For example, in a geological mapping problem, it might be known that certain minerals of interest have molecular absorption features as used by chemical spectroscopists to identify specific molecules. If such spectral features can be extracted from the data to be analyzed, they can be used to label training samples for such classes.

Analysis Phase

During the second phase of classification, the analysis phase, the pixel or region features are compared quantitatively to the class descriptions derived during the training phase to accomplish the mapping of each of the data elements to one of the defined classes. Classifiers may be of two types: relative and absolute. A relative classifier is one that assigns a data element to a class after having compared it to the entire list of classes to see to which class it is most similar. An absolute classifier compares the data element to only one class description to see if it is sufficiently similar to it. Generally speaking, in remote sensing, relative classifiers are the more common and more powerful.


Many different algorithms are used for classification in the analysis phase (6,12). A common approach for implementing a relative classifier is through the use of a so-called discriminant function. Designate the data element to be classified as vector X, in which the elements of the vector are the values measured for that pixel in each spectral band. Then, for a k-class situation, assume that we have k functions of X, {g1(X), g2(X), . . ., gk(X)}, such that gi(X) is larger than all others whenever X is from class i. Let ωi denote the ith class. Then the classification rule can be stated as

Decide X is in ωi if and only if gi(X) ≥ gj(X) for all j = 1, 2, . . ., k   (11)

The functions gi(X) are referred to as discriminant functions. An advantage of using this scheme is that it is easy to implement in computer software or hardware. A common scheme for defining discriminant functions is to use the class probability density functions. The classification process then amounts to evaluating the value of each class density function at X. The value of a probability density function at a specific point is called the likelihood of that value. Such a classifier is called a maximum likelihood classifier because it assigns the data element to the most likely class. Another example of a classification rule is the so-called Bayes rule strategy (13). Bayes' theorem from the theory of probability states that

p(ωi|X) = p(X, ωi)/p(X) = p(X|ωi)p(ωi)/p(X)   (12)

where p(ωi|X) is the probability of class ωi given the data element value X, p(X|ωi) is the probability density function for class ωi, p(ωi) is the probability that class ωi occurs, p(X, ωi) is the joint probability density of the value X and the class ωi, and p(X) is the probability density function for the entire data set. Then, to maximize the probability of correct classification, one must select the class that maximizes p(ωi|X). Because p(X) is the same for any i, one may use as the discriminant function just the numerator of Eq. (12), p(X|ωi)p(ωi). Thus, the classification rule becomes

Decide X is in ωi if and only if p(X|ωi)p(ωi) ≥ p(X|ωj)p(ωj) for all j = 1, 2, . . ., k   (13)
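The decision rule of Eq. (13) can be sketched for a one-dimensional, two-class case with Gaussian class densities. The class means, variances, and priors below are hypothetical values chosen for illustration, not taken from the text.

```python
import math

def gaussian_pdf(x, mean, var):
    """Likelihood p(x | class) for a 1-D Gaussian class model."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_classify(x, classes):
    """Decide the class i maximizing p(x | w_i) p(w_i), as in Eq. (13)."""
    return max(classes, key=lambda c: gaussian_pdf(x, c["mean"], c["var"]) * c["prior"])

# Hypothetical single-band class statistics
classes = [
    {"name": "water",      "mean": 20.0, "var": 16.0, "prior": 0.3},
    {"name": "vegetation", "mean": 60.0, "var": 25.0, "prior": 0.7},
]

print(bayes_classify(25.0, classes)["name"])  # value near the water mean
print(bayes_classify(55.0, classes)["name"])  # value near the vegetation mean
```

With equal priors the same code implements the maximum likelihood rule, as noted in the text.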

This classification strategy leads to the minimum error rate. Note that if all the classes are equally likely, the p(ωi) terms may be canceled, and the Bayes rule strategy reduces to the maximum likelihood strategy. Because, in a practical remote sensing problem, the prior probabilities p(ωi) are not known, it is common practice to assume equal priors. Other factors that are significant in the analysis process are the matter of how the class probability density functions are modeled and, related to this, how many training samples are available by which to train the classifier. Parametric models, assuming that each class is modeled by one or a combination of Gaussian distributions, are very common and powerful. Within this framework, one can also make various simplifying assumptions. Some common ones, in parametric form, and the corresponding discriminant functions follow:

• Assume that all classes have the same covariance, in which there is no correlation between bands, and that all bands have unit variance:

gi(X) = (X − µi)T(X − µi)   (14)

The decision boundary that results is linear in spectral feature space and is oriented perpendicular to the line connecting the class mean values at the midpoint of the line. This is the minimum-distance-to-means classifier.

• Assume that all classes have the same covariance Σ but account for correlation between bands and for different variances in each:

gi(X) = (X − µi)TΣ−1(X − µi)   (15)

The resulting decision boundary in spectral feature space is linear, but its orientation and location are dependent upon the common covariance Σ.

• Assume that classes have different covariances Σi:

gi(X) = −(1/2) ln |Σi| − (1/2)(X − µi)TΣi−1(X − µi)   (16)

The resulting decision boundary in spectral space is a second-order hypersurface whose shape and location are dependent upon the individual mean vectors µi and covariance matrices Σi. This is the maximum likelihood classifier.

• Assume that the class densities have a more complex structure, such that a combination of a small number of Gaussian densities is not adequate:

gi(X) = (1/Ni) Σj=1..Ni K[(X − Xji)/λ]   (17)

where Xji is the jth training sample of class i.

The resulting decision boundary in spectral space can be of nearly arbitrary shape. It can be seen that this list of discriminant functions has steadily increasing generality and steadily increasing complexity, such that a rapidly increasing number of training samples is required to adequately estimate the rapidly growing number of parameters in each. The last one, for example, though still parametric in form, is referred to as a nonparametric Parzen density estimator with kernel K. The kernel function K, as well as the number of kernel terms to be used, Ni, is selectable by the analyst. For example, one possible selection is a Gaussian-shaped function, thus making this discriminant function a direct generalization of the previous ones. There are many additional variations to this list of discriminant functions. There are also additional variations to the possible training procedures. For example, one variation that is popular at the present time is the neural network method. This method uses an iterative scheme for determining the location of the decision boundary in spectral feature space. A network is designed, consisting of as many inputs as there are spectral features, as many outputs as there are classes, and threshold devices with weighting functions connecting the inputs to the outputs. Training samples are applied to the input sequentially, and the resulting output for each is observed. If the correct classification is obtained for a given sample, as evidenced by the output port for the correct class being the largest, the weights for the correct output are augmented, and the incorrect class output weights are diminished. The training set is reused as many times as necessary to obtain good classification results. The advantage of this approach is its generality and that it can be essentially automatic. Characteristics generally regarded as disadvantages are that it is nearly entirely heuristic, making analytical calculations and performance predictions difficult; that its generality means very large training sets are required to obtain robust performance; and that a great deal of computation is required in the training process. Because, in practical circumstances, classifiers must be retrained for every new data set, characteristics affecting the training phase are especially significant.

Unsupervised Classification

A second form of classification that finds use in remote sensing is unsupervised classification, also known as clustering. In this case, data elements, usually individual pixels, are assigned to a class without the use of training samples, thus the ‘‘unsupervised’’ name. The purpose of this type of classification is to assign pixels to a group whose members have similar spectral properties (i.e., are near to one another in spectral feature space). There are, again, many algorithms to accomplish this. Generally speaking, three capabilities are needed.

1. A measure of distance between points. Euclidean distance is a common choice.
2. A measure of distance or separability between the sets of points comprising each cluster. Any separability measure, such as those listed in Table 3, could be used, but usually simpler measures are selected.
3. A cluster compactness criterion. An example might be the sum of squared distances from the cluster center for all pixels assigned to a cluster.

The process typically begins when one selects (often arbitrarily) a set of initial cluster centers and then assigns each pixel to the nearest cluster center using step 1.
After assigning all the pixels, one computes the new cluster centers. If any of the cluster centers have moved, all the pixels are reassigned to the new cluster centers. This iterative process continues until the cluster centers do not move or the movement is smaller than a prescribed threshold. Then steps 2 and 3 are used to test if the clusters are sufficiently distinct (separated from one another) and compact. If they are not adequately distinct, the two that are the closest are combined, and the process is repeated. If they are not sufficiently compact, an additional cluster center is created within the least distinct cluster, and the process is repeated. Clustering is ordinarily not useful for final classification as such because it is unlikely that the data would be clustered into classes of specific interest. Rather it is primarily useful as an intermediate processing step. For example, in the training process, it is often used to divide the data into spectrally homogenous areas that might be useful in deciding on supervised classifier classes and subclasses and in selecting training samples for these classes and subclasses.
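The iterative reassignment loop described above is essentially the k-means procedure. A minimal one-dimensional sketch follows; the pixel values and initial centers are hypothetical.

```python
def cluster_1d(pixels, centers, max_iter=100, tol=1e-6):
    """Iteratively assign pixels to the nearest cluster center and recompute
    the centers until they stop moving, as in the loop described above."""
    centers = list(centers)
    groups = [[] for _ in centers]
    for _ in range(max_iter):
        groups = [[] for _ in centers]
        for p in pixels:                           # assign to nearest center
            j = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            groups[j].append(p)
        new = [sum(g) / len(g) if g else c         # recompute cluster means
               for g, c in zip(groups, centers)]
        if all(abs(a - b) < tol for a, b in zip(new, centers)):
            break                                  # centers no longer move
        centers = new
    return centers, groups

# Hypothetical single-band pixel values forming two spectral groups
pixels = [10, 12, 11, 13, 50, 52, 49, 51]
centers, groups = cluster_1d(pixels, [0.0, 60.0])
print(centers)
```

Testing for cluster separability and compactness (steps 2 and 3) would then decide whether to combine clusters or create new ones, as described in the text.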

CLASSIFICATION USING NEURAL NETWORKS

Unlike statistical, parametric classifiers, Artificial Neural Network (ANN) classifiers rely on an iterative error minimization algorithm to achieve a pattern match. A network consists of interconnected input (feature) nodes, hidden layer nodes, and output (class label) nodes. A wide range of network architectures have been proposed (14); here a simple three-layer network is considered to explain the basic operation. The input nodes do no processing but simply provide the paths for the data into the hidden layer. Each input node is connected to each hidden layer node by a weighted link. In the hidden layer, the weighted input features are summed and compared to a thresholding decision function. The decision function is usually ‘‘soft,’’ with a form known as a sigmoid,

output(input) = 1/[1 + exp(−input)]   (18)
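A minimal sketch of the forward pass through such a three-layer network, using the sigmoid of Eq. (18); the weight matrices here are hypothetical, not trained values.

```python
import math

def sigmoid(x):
    """Soft thresholding decision function of Eq. (18)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, w_output):
    """Weighted sums passed through the sigmoid at the hidden layer, then
    again at the output layer; the largest output labels the feature vector."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features))) for row in w_hidden]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_output]
    return output.index(max(output)), output

# Two input features, two hidden nodes, two output classes (hypothetical weights)
w_hidden = [[2.0, -1.0], [-1.0, 2.0]]
w_output = [[3.0, -3.0], [-3.0, 3.0]]
label, scores = forward([1.0, 0.0], w_hidden, w_output)
print(label)
```

Training, by contrast, is the iterative adjustment of w_hidden and w_output, as described next.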

The output from each hidden layer node is then fed through a weighted link to each output layer node. The same processing, summation and comparison to a threshold, is performed in each output node. The output node with the highest resulting value is selected as the label for the input feature vector. The decision information of the ANN is contained in its weights. To adapt the weights to the data, an iterative algorithm is required. The classic example is the Back Propagation (BP) algorithm (15,16). The BP algorithm minimizes the output error over all classes for a given set of training data. It achieves this by measuring the output error and adjusting the ANN's link weights progressively backward through each layer to reduce the error. If local minima in the decision space of the ANN can be avoided, the BP algorithm will converge to a global minimum for the output error [although one is never sure that it is not in reality a local minimum (i.e., the algorithm cannot be proven to result in a global error minimum)]. Other convergence algorithms, such as Radial Basis Functions, have been used and are faster than BP. One parameter that must be set for ANNs is the number of hidden layer nodes. A way to specify this is to relate the total number of Degrees-Of-Freedom (DOF) in the ANN to that of another classifier for comparison (2). For example, in a three-layer ANN, the DOF are

NANN = H(K + L)   (19)

where H is the number of hidden layer nodes, K is the number of input features, and L is the number of output classes. For the same number of features and classes, the ML classifier has the following DOF:

NML = LK(K + 3)/2   (20)

Therefore, to compare the two classifiers, it is logical to set their DOF equal, obtaining

H = LK(K + 3)/[2(K + L)]   (21)

for the number of hidden layer nodes in the ANN. This analysis yields only 20 hidden layer nodes for six bands of nonthermal TM imagery, even for as many as 20 classes. Fewer hidden layer nodes result in faster BP training.

Performance Comparison to Statistical Classifiers

The ANN type of classifier has some unique characteristics that are important in comparing it to other classifiers:

1. Because the weights are initially randomized, the final output results of the ANN are stochastic (i.e., they will vary from run to run on the same training data). It has been estimated that this variation is as much as 5% (17).
2. The decision boundaries move in the feature space to reduce the total output error during the optimization process. The network weights and final classification map that result will depend on when the process is terminated.

The ANN classifier is nonparametric (i.e., it makes no assumptions about an underlying statistical distribution for each class). In contrast, the ML classifier assumes a Gaussian distribution for each class. These facts make the feature space decision boundaries totally different. It appears that the boundaries from a three-layer ANN trained with the BP algorithm are often more similar to those from the minimum-distance-to-means classifier than to those from the ML classifier. Experiments with a land-use/land-cover classification involving heterogeneous class spectral signatures indicate that the nonparametric characteristic of the ANN classifier results in superior classifications (18).
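Equation (21) above is easy to check numerically; the sketch below reproduces the hidden-layer count quoted in the text for six nonthermal TM bands and 20 classes.

```python
def hidden_nodes(K, L):
    """H = LK(K + 3) / (2(K + L)), Eq. (21): the hidden-layer size that
    matches the ANN's degrees of freedom to those of the ML classifier."""
    return L * K * (K + 3) / (2 * (K + L))

# Six nonthermal TM bands, 20 classes
H = hidden_nodes(6, 20)
print(int(H))  # about 20 hidden nodes, as stated in the text
```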

IMAGE SEGMENTATION

Image segmentation is a partitioning of an image into regions based on the similarity or dissimilarity of feature values between neighboring image pixels. It is often used in image analysis to exploit the spatial information content of the image data. Most image segmentation approaches can be placed in one of three categories (19): 1. characteristic feature thresholding or clustering, 2. boundary detection, or 3. region growing. Characteristic feature thresholding or clustering does not exploit spatial information. The unsupervised classification (clustering) approaches discussed previously are a form of this type of image segmentation. Boundary detection exploits spatial information by examining local edges found throughout the image. For simple noise-free images, detection of edges results in straightforward boundary delineation. However, edge detection on noisy, complex images often produces missing and extra edges, so that the detected boundaries do not necessarily form a set of closed, connected curves that surround connected regions. Image segmentation through region growing uses spatial information and guarantees the formation of closed, connected regions. However, it can be a computationally intensive process.


Edge Detection

Edge detection approaches generally examine pixel values in local areas of an image and flag relatively abrupt changes in pixel values as edge pixels. These edge pixels are then extended, if necessary, to form the boundaries of regions in an image segmentation.

Derivative-Based Methods for Edge Detection. The simplest approaches for finding abrupt changes in pixel values compute an approximation of the gradient at each pixel. The mathematical definition of the gradient of the continuous function f(x, y) is

∇f(x, y) = [(∂f/∂x)(x, y), (∂f/∂y)(x, y)]   (22)

where ∇f(x, y) is the gradient at position (x, y), and (∂f/∂x)(x, y) and (∂f/∂y)(x, y) are the first derivatives of the function f(x, y) with respect to the x and y coordinates, respectively. The gradient magnitude is

|∇f(x, y)| = sqrt{[(∂f/∂x)(x, y)]² + [(∂f/∂y)(x, y)]²}   (23)

and the gradient direction (angle) is

φ = arctan[(∂f/∂x)(x, y) / (∂f/∂y)(x, y)]   (24)

In order to apply the concept of a mathematical gradient to image processing, (∂f/∂x)(x, y) and (∂f/∂y)(x, y) must be approximated by values on a discrete lattice corresponding to the image pixel locations. Such a simple discretization is

(∂f/∂x)(x, y) ≅ f(x + 1, y) − f(x, y)   (25)

for edge detection in the x direction and

(∂f/∂y)(x, y) ≅ f(x, y + 1) − f(x, y)   (26)

for edge detection in the y direction. These functions are equivalent to convolving the image with one of the two templates in Fig. 5, where (x, y) is the upper left corner of the window.
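Applied at every pixel, Eqs. (25) and (26) amount to convolving the image with the small templates of Fig. 5. A minimal sketch on a hypothetical 4 × 4 image with one vertical edge:

```python
def diff_x(img):
    """Discrete df/dx of Eq. (25): f(x+1, y) - f(x, y), the difference
    with the right-hand neighbor along each row."""
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)] for row in img]

def diff_y(img):
    """Discrete df/dy of Eq. (26): f(x, y+1) - f(x, y), the difference
    with the neighbor in the next row."""
    return [[img[y + 1][x] - img[y][x] for x in range(len(img[0]))]
            for y in range(len(img) - 1)]

# Hypothetical image (img[y][x]) with a vertical edge between columns 1 and 2
img = [
    [10, 10, 50, 50],
    [10, 10, 50, 50],
    [10, 10, 50, 50],
    [10, 10, 50, 50],
]
print(diff_x(img)[0])  # large response at the vertical edge
print(diff_y(img)[0])  # no response: there is no horizontal edge
```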

∂f/∂x:  [ −1   1 ]        ∂f/∂y:  [ −1   0 ]
        [  0   0 ]                [  1   0 ]

Figure 5. Convolution templates corresponding to the discretized first derivative of the image function f(x, y) in the x and y directions. These templates can be used as image edge detectors. However, their small 2 × 2 window size makes these templates very susceptible to noise.


Figure 6. The Sobel and Prewitt edge detection templates. These 3 × 3 window templates are somewhat less susceptible to noise than the 2 × 2 window templates illustrated in Fig. 5.

Sobel template
∂f/∂x:  [ −1   0   1 ]    ∂f/∂y:  [  1   2   1 ]
        [ −2   0   2 ]            [  0   0   0 ]
        [ −1   0   1 ]            [ −1  −2  −1 ]

Prewitt template
∂f/∂x:  [ −1   0   1 ]    ∂f/∂y:  [  1   1   1 ]
        [ −1   0   1 ]            [  0   0   0 ]
        [ −1   0   1 ]            [ −1  −1  −1 ]

A disadvantage of this and other similar [e.g., Roberts template (20)] approximations of the gradient function is that the small 2 × 2 window size makes them very susceptible to noise. Somewhat less susceptible to noise are the 3 × 3 window templates devised by Sobel [see Duda and Hart (21)] and Prewitt (22), which are illustrated in Fig. 6. The edge detection templates given in Figs. 5 and 6 are approximations of an image gradient, that is, discretizations of the first derivative of the image function. The second derivative of the image function, called the Laplacian operator, can also be used for edge detection. Whereas the first derivative produces positive or negative peaks at an image edge, the second derivative produces a zero value at the image edge, surrounded closely by positive and negative peaks. Edge detection then reduces to detecting these ‘‘zero-crossing’’ values from the Laplacian operator. For a continuous function f(x, y), the Laplacian operator is defined as

∇²f(x, y) = ∂²f(x, y)/∂x² + ∂²f(x, y)/∂y²   (27)

The usual discrete approximation is

∇²f(x, y) = 4f(x, y) − f(x − 1, y) − f(x + 1, y) − f(x, y − 1) − f(x, y + 1)   (28)

This can be represented by convolving a two-dimensional image with the image template shown in Fig. 7. Note that the Laplacian operator is directionally symmetric.

Image Filtering for Edge Detection. All these methods for edge detection are intrinsically noise sensitive (some more than others) because they are based upon differences between pixels in local areas of the image. Marr and Hildreth (23) suggested the use of Gaussian filters with relatively large window sizes to remove noise in images. Combining the Gaussian filter with the Laplacian operator yields the Laplacian of

[  0  −1   0 ]
[ −1   4  −1 ]
[  0  −1   0 ]

Figure 7. The Laplacian edge detection template. This edge detection template is the discretized second derivative of the image function f(x, y). This operator produces a zero value at image edges, which is surrounded closely by positive and negative peaks.

Gaussian (LOG) function

∇²G(x, y) = [1/(2πσ⁴)] · [(x² + y²)/σ² − 2] · exp[−(x² + y²)/(2σ²)]   (29)

where σ controls the amount of smoothing provided by the filter. Edge detection through convolving the image with the LOG function and searching for zero-crossing locations is less sensitive to noise than the previously discussed methods. Even more sophisticated filtering and edge location techniques have been devised. These techniques were unified in a paper by Shen and Castan (24), in which they derive the optimal filter for the multi-edge case.

Region Growing

Region growing is a process by which image pixels are merged with neighboring pixels to form regions, based upon a measure of similarity between pixels and regions. The basic outline of region growing follows (25–27):

1. Initialize by labeling each pixel as a separate region.
2. Merge all spatially adjacent pixels with identical feature values.
3. Calculate a similarity or dissimilarity criterion between each pair of spatially adjacent regions.
4. Merge the most similar pair of regions.
5. Stop if convergence has been achieved; otherwise, return to step 3.

Beaulieu and Goldberg (25) describe a sequential implementation of this algorithm in which step 3 is kept efficient by updating only those regions involved in or adjacent to the merge performed in step 4. Tilton (26) describes a parallel implementation of this algorithm in which multiple merges are allowed in step 4 (best merges are performed in image subregions) and the (dis)similarity criterion in step 3 is calculated in parallel for all regions. Schoenmakers (27) simultaneously merges all region pairs with the minimum dissimilarity criterion value in step 4. The similarity or dissimilarity criterion employed in step 3 should be tailored to the type of image being processed. A simple criterion that has been used effectively with remotely sensed data is the Euclidean spectral distance (27), as in Table 3.
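The five-step outline above can be sketched for a one-dimensional ‘‘image’’ with an absolute-difference (Euclidean) dissimilarity between region means; the scan line and stopping count are hypothetical, and step 2 is subsumed by steps 3 and 4 here since identical neighbors have zero dissimilarity.

```python
def region_grow(pixels, target_regions):
    """Region growing per the outline above: start with one region per pixel
    (step 1), then repeatedly merge the most similar pair of spatially
    adjacent regions (steps 3-4) until only target_regions remain (step 5)."""
    regions = [[p] for p in pixels]                # step 1
    while len(regions) > target_regions:           # step 5: convergence test
        means = [sum(r) / len(r) for r in regions]
        # step 3: dissimilarity of each spatially adjacent pair
        i = min(range(len(regions) - 1),
                key=lambda i: abs(means[i] - means[i + 1]))
        # step 4: merge the most similar adjacent pair
        regions[i] = regions[i] + regions.pop(i + 1)
    return regions

# Hypothetical scan line with two homogeneous segments
line = [10, 11, 10, 40, 41, 42]
print(region_grow(line, 2))
```

Here convergence is simply a target region count, one of the simple criteria discussed below.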
Other criteria that have been employed are the Normalized Vector Distance (28), criteria based on minimizing the mean-square error or change in image entropy (29), and a


criterion based on minimizing a polynomial approximation error (25). Clear-cut convergence criteria have not been developed for region-growing segmentation. Simple criteria that are satisfactory in some applications are the number of regions or the ratio of the number of regions to the total number of image pixels. Direct thresholding on the dissimilarity criterion value (i.e., performing no merges between regions with a dissimilarity criterion value greater than a threshold) has also been used, with mixed results. More satisfactory results have been obtained by defining convergence as the iteration prior to the iteration at which the maximum change in dissimilarity criterion value occurred.

Extraction and Classification of Homogeneous Objects

An image segmentation followed by a maximum likelihood classification is the basic idea behind the Extraction and Classification of Homogeneous Objects (ECHO) classifier (30,31). The segmentation scheme used by ECHO was designed for speed on the computers of the mid-1970s and could be replaced by a segmentation approach of more recent vintage. However, the formalization of the maximum likelihood classification for image regions (objects) is still appropriate. For single pixels, the maximum likelihood decision rule is

Decide X is in ωi if and only if p(X|ωi) ≥ p(X|ωj) for all j = 1, 2, . . ., k   (30)

The rule is just Eq. (13) with p(ωj) = 1. Suppose that an image region consists of m pixels. To apply the maximum likelihood decision rule to this region, X must be redefined to include the entire region, that is, X = {X1, X2, . . ., Xm}. The evaluation of p(X|ωi), where X is redefined as a collection of pixels, is very difficult. However, this collection of pixels belongs to a homogeneous region. In this case, it is reasonable to assume that the pixels are statistically independent. This assumption allows the evaluation of p(X|ωi) as the product

p(X|ωi) = p(X1, X2, . . ., Xm|ωi) = ∏j=1..m p(Xj|ωi)   (31)
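Under the independence assumption of Eq. (31), a region is labeled by the class maximizing the product of per-pixel likelihoods; in practice one sums log-likelihoods, which gives the same decision while avoiding numerical underflow. A minimal one-band Gaussian sketch with hypothetical class statistics:

```python
import math

def log_likelihood(x, mean, var):
    """ln p(x | w_i) for a 1-D Gaussian class density."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def classify_region(region_pixels, classes):
    """Eq. (31): p(X|w_i) = prod_j p(X_j|w_i); summing logarithms over the
    region's pixels gives the same maximum as the product."""
    return max(classes, key=lambda name: sum(
        log_likelihood(x, *classes[name]) for x in region_pixels))

# Hypothetical class (mean, variance) pairs for one spectral band
classes = {"water": (20.0, 16.0), "soil": (45.0, 25.0)}
print(classify_region([22.0, 19.0, 24.0, 21.0], classes))
```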

Split and Merge

Seeking more efficient methods for region-based image segmentation has led to the development of split-and-merge approaches. Here the image is repeatedly subdivided until each resulting region has a minimum homogeneity. After the region-splitting process converges, the regions are grown as previously described. This approach is more efficient when large homogeneous regions are present. However, some segmentation detail may be lost. See Cross et al. (32) for an example of split-and-merge image segmentation.

Hybrids of Edge Detection and Region Growing

A number of approaches have been offered for combining edge detection and region growing. Pavlidis and Liow (33) perform a split-and-merge segmentation such that an oversegmented result is produced and then eliminate or modify region boundaries based on general criteria including the contrast between the regions, region boundary smoothness, and the variation of the image gradient along the boundary. LeMoigne and Tilton (34) use region growing to generate a hierarchical set of image segmentations and make local selections of the best level of segmentation detail based on edges produced by an edge detector.

Hybrids of Spectral Clustering and Region Growing

Tilton (35) has recently demonstrated the potential of a hybrid of spectral clustering and region growing. In this approach, spectral clustering is performed in between each region growing iteration. The spectral clustering is constrained to merge regions that are at least as similar as the last pair of regions merged by region growing, and it is not allowed to merge any spatially adjacent regions. This approach to image segmentation is very computationally intensive. However, practical processing times have been achieved by a recursive implementation on a cluster of 64 Pentium Pro PCs configured as a Beowulf-class parallel computer (35,36).

HYPERSPECTRAL DATA

Hyperspectral Data Normalization

Hyperspectral imagery contains significantly more spectral information than does multispectral imagery such as that from Landsat TM. Imaging spectrometers produce hundreds of spectral band images, with narrow (typically 10 nm or less) contiguous bands across a broad wavelength range (e.g., 400–2400 nm). Also, such new sensor systems are capable of generating more radiometrically precise data, with signal-to-noise ratios justifying 10 or more bit data systems (1024 or more shades of gray per band), as compared to 6 or 8 bit precision in previous systems. This potentially high precision requires concomitant substantially improved calibration for atmospheric, solar, and topographic effects, particularly if comparisons are to be made to laboratory or field reflectance spectra for classification. To convert remote sensing data to reflectance, one must first correct for the additive and multiplicative factors in Eq. (1).
Even though in some circumstances (e.g., multitemporal analysis) this correction may be useful for all spectral data, it is especially critical for hyperspectral imagery when the intention is to use narrow-band spectral absorption features in a deterministic sense, because 1. narrow atmospheric absorption bands have a severe effect on the corresponding sensor bands, and 2. some algorithms for physical constituent estimation, either in the atmosphere or on the Earth's surface, require precise measurements of absorption band locations, widths, and depths. The computational burden for calibration is, of course, much larger for hyperspectral imagery than for multispectral imagery. In one effective calibration technique, the empirical line method (37), the sensor values are linearly correlated to field reflectance measurements. In this single process, all the coefficients in Eq. (1) are determined except for topographic shading. Obtaining field reflectance measurements is difficult and expensive at best, so a number of indirect within-scene approaches have also been used.
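The empirical line method amounts to a per-band least-squares fit from sensor values to field reflectance; a minimal sketch with hypothetical dark and bright calibration targets follows.

```python
def empirical_line(sensor_vals, reflectances):
    """Least-squares fit of reflectance = gain * sensor + offset for one band,
    from field-measured reference targets (the empirical line method)."""
    n = len(sensor_vals)
    mean_s = sum(sensor_vals) / n
    mean_r = sum(reflectances) / n
    cov = sum((s - mean_s) * (r - mean_r)
              for s, r in zip(sensor_vals, reflectances))
    var = sum((s - mean_s) ** 2 for s in sensor_vals)
    gain = cov / var
    offset = mean_r - gain * mean_s
    return gain, offset

# Hypothetical dark and bright calibration targets in one band
gain, offset = empirical_line([30.0, 210.0], [0.05, 0.65])
print(gain * 120.0 + offset)  # estimated reflectance of a pixel with DN 120
```

The fit would be repeated independently for each of the hundreds of bands, which is part of the calibration burden noted above.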


An example of the use of within-scene information to achieve a partial calibration of hyperspectral imagery is termed flat-fielding (38). An object that can be assumed spectrally uniform and with high radiance (‘‘white’’ in a visual sense) must be located within the scene. Its spectrum as seen by the sensor contains the atmospheric transmittance terms of Eq. (1). If the data are first corrected for the haze level, or if it can be ignored (at longer wavelengths such as in the NIR and SWIR), then a division of each pixel's spectrum by the bright reference object's spectrum will tend to cancel the solar irradiance and atmospheric transmittance factors in Eq. (1). An example is shown in Fig. 8. The data are from an Airborne Visible-InfraRed Imaging Spectrometer (AVIRIS) flight over Cuprite, Nevada, in 1990. The mineral kaolinite contains a doublet absorption feature centered at about 2180 nm, which is masked in the at-sensor radiance data by the downward trend of the solar irradiance. An atmospheric carbon dioxide absorption feature can also be seen at 2060 nm. After the flat-field operation, the relative spectral reflectance closely matches a sample reflectance curve in shape, including no atmospheric absorption features. The reflectance magnitude does not agree because the flat-field correction does not correct for topographic shading [the cosine term in Eq. (1)]. After the hyperspectral data are normalized in this way, it is possible to characterize the surface absorption features by such parameters as their location, depth (relative to a continuum, which is a hypothetical curve with no absorption features), and width (Fig. 9). These features can be used to distinguish one mineral (or any other material with narrow absorption features) from another (39). The feature extraction algorithms first detect local minima in the spectral data using

[Figure 8 plots at-sensor radiance (W·m−2·sr−1·µm−1) and relative reflectance against wavelength (2000–2400 nm) for three curves: kaolinite radiance, kaolinite after flat-fielding, and library sample kaolinite CM9.]

Figure 8. AVIRIS radiance data for the mineral kaolinite at Cuprite, Nevada, before and after flat-field normalization, compared to spectral reflectance data from a mineral reflectance library (sample designated CM9). It is evident that the normalization process produces a spectral signal from the image radiance data that more closely matches the shape of the spectral reflectance curve. If classification of image radiance data is performed with library spectral reflectance as the reference signal, either an empirical normalization of this type or a difficult calibration of the sensor radiance data to reflectance would be required.
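The flat-field operation itself is a band-by-band division of each pixel's spectrum by the spectrum of the bright in-scene reference; the spectra below are hypothetical.

```python
def flat_field(pixel_spectrum, reference_spectrum):
    """Divide a pixel's spectrum, band by band, by the spectrum of a bright,
    spectrally flat in-scene object; solar irradiance and atmospheric
    transmittance factors common to both spectra tend to cancel."""
    return [p / r for p, r in zip(pixel_spectrum, reference_spectrum)]

# Hypothetical radiance spectra over four bands: a common illumination and
# transmittance factor multiplies the reflectance of both targets
illumination = [180.0, 170.0, 150.0, 140.0]
reference = [0.9 * v for v in illumination]   # bright, spectrally flat target
pixel = [0.45 * v for v in illumination]      # surface of interest
print(flat_field(pixel, reference))  # flat ratio: the illumination cancels
```

As noted above, the resulting relative reflectance matches the library curve in shape but not in magnitude, since topographic shading is not removed.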

[Figure 9 sketches a reflectance spectrum beneath its continuum, marking the depth, width, and wavelength location of an absorption feature.]

Figure 9. Definition of spectral absorption features in hyperspectral data. After an absorption band is detected in the spectral data, these features can be measured and compared with the same features derived either from labeled training pixels within the image itself or from library spectral reflectance data. For surface materials with characteristic absorption bands, this approach can considerably reduce the amount of computation required for classification of hyperspectral imagery.

operations such as a spectral derivative and then calculate the depth and width of those minima. Zero crossings in second-order derivatives and a spectral scale-space can also be used to detect and measure significant spectral absorption features (40).

Classification and Analysis of Hyperspectral Data

Data in higher-dimensional spaces have substantially different characteristics than data in three-dimensional space, such that the ordinary rules of geometry of three-dimensional space do not apply. For example, two class distributions can lie right on top of one another, in the sense of having the same mean values, and yet be perfectly separable by a well-designed classifier. Examples of these differences of data in high-dimensional space follow (41). As dimensionality increases,

1. The volume of a hypercube concentrates in the corners.
2. The volume of a hypersphere concentrates in an outside shell.
3. The diagonals are nearly orthogonal to all coordinate axes.

When data sets contain a large number of spectral bands or features, more than 10 or so, the ability to discriminate between classes with higher accuracy and to derive greater information detail increases substantially, but some additional aspects become significant in the data analysis process in order to achieve this enhanced potential. For example, as the data dimensionality increases, the number of samples necessary to define the class distributions to adequate precision increases very rapidly. Furthermore, both first- and second-order statistics are significant in achieving optimal separability of classes. The fact that second-order statistics are significant, in addition to first-order statistics, tends to exacerbate the need for a larger number of training samples. For example, if one were to attempt to analyze a 200-dimensional data set at full dimensionality using conventional estimation methods, many thousands of samples may be necessary in order to obtain the full benefit of the 200 bands. Rarely would this number of samples be available.

Hyperspectral Feature Extraction

Quantitative feature extraction methods are especially important because of the large number of spectral bands on the one hand and the significantly enhanced amount and detail of information that is potentially extractable from such data on the other. Given the large number of spectral bands in such data, feature selection, choosing the best subset of bands of size m from a complete set of N bands, quickly becomes intractable or impossible. For example, to choose the best subset of bands of size 10 out of 100 bands, there are more than 1.7 × 10^13 possible subsets of size 10 that must be examined if the optimum set is to be determined. It is possible to avoid working directly at such high dimensionality without a penalty in classifier performance and with a substantial improvement in computational efficiency. This is the case because, as implied by the preceding geometric characteristics, the volume in a hyperdimensional feature space increases very rapidly as the dimensionality increases. A result of this is that, for remote sensing problems, such a space is mostly empty, and the important data structure in any given problem will exist in a subspace. The particular subspace is very problem-dependent and is different for every case. Thus, if one can determine which subspace is needed for the problem at hand, one can have available all the separability that the high-dimensional data can provide, but with a reduced need for training set size and reduced amounts of computation. The problem then becomes focused on finding the correct subspace containing this key data structure. Feature extraction algorithms may be used for this purpose.
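The subset count quoted above is straightforward to verify with the binomial coefficient C(100, 10):

```python
from math import comb

# Number of ways to choose the best 10 bands out of 100 candidates
n_subsets = comb(100, 10)
print(n_subsets)  # more than 1.7 x 10^13 candidate subsets
```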
The CCT introduced earlier is one of several approaches suitable for extracting features from hyperspectral data. Each approach tends to have unique advantages and some disadvantages. The CCT, for example, is a relatively fast calculation. However, it does not perform well when the classes to be discriminated have only a small difference in their mean values, and it produces predictably useful features only up to one less than the number of classes to be separated. It has another disadvantage common to many such algorithms, namely that it depends on parameters (class means and covariances) that must be estimated from the training samples at full dimensionality. Thus, it may produce a suboptimal feature subspace as a result of imprecise estimation of the class parameters. Another possible scheme is the Decision Boundary Feature Extraction algorithm (42). This scheme does not have the disadvantages of the CCT. However, it tends to be a lengthy calculation because it is based directly upon the training samples instead of the class statistics derived from them. A very fortunate characteristic of high-dimensional spaces is that, for most high-dimensional data sets, lower-dimensional linear projections tend to be normally distributed, or distributed as a combination of normal distributions. This tends to add to the credibility of using a Gaussian model for the classification process, reduces the need to consider nonparametric schemes, and reduces the need to consider the use of higher-order statistics.
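The "at most one fewer useful features than classes" limitation noted above follows from the between-class scatter matrix having rank of at most (number of classes − 1). A generic canonical discriminant transform in the same spirit can be sketched as follows; this is our own illustrative sketch, not the article's CCT implementation, and the function name and toy data are invented:

```python
import numpy as np

def canonical_features(X, y, n_features):
    """Project X onto directions that maximize between-class scatter relative
    to within-class scatter. Because the between-class scatter matrix Sb has
    rank <= (n_classes - 1), at most that many eigenvalues are nonzero, which
    is why such transforms yield predictably useful features only up to one
    less than the number of classes."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb v = lambda Sw v (small ridge for safety).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-9 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return X @ evecs.real[:, order[:n_features]]

# Toy data: 3 classes in 6 "bands" -> at most 2 informative canonical features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(50, 6)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 50)
Z = canonical_features(X, y, 2)
print(Z.shape)
```

Note that the transform is estimated entirely from class means and scatter matrices, which is exactly the dependence on full-dimensional parameter estimates the text identifies as a weakness.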


Thus, based upon the algorithms referred to previously, one can expect to do a very effective analysis of high-dimensional multispectral data and, in practical circumstances, achieve a near-optimal extraction of the desired information, with performance substantially enhanced over that possible with more conventional multispectral data.

BIBLIOGRAPHY

1. J. C. Tilton, Image segmentation by iterative parallel region growing with applications to data compression and image analysis, Proc. 2nd Symp. Frontiers Massively Parallel Computat., pp. 357–360, Fairfax, VA, 1988.
2. R. A. Schowengerdt, Remote Sensing—Models and Methods for Image Processing, 2nd ed., Chestnut Hill, MA: Academic Press, 1997.
3. A. Berk, L. S. Bernstein, and D. C. Robertson, MODTRAN: A Moderate Resolution Model for LOWTRAN 7, U.S. Air Force Geophysics Laboratory, No. GL-TR-89-0122, 1989.
4. A. R. Huete, A soil adjusted vegetation index (SAVI), Remote Sens. Environ., 25: 295–309, 1988.
5. J. A. Richards, Remote Sensing Digital Image Analysis: An Introduction, 2nd ed., Berlin: Springer-Verlag, 1993.
6. K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed., Boston: Academic Press, 1990.
7. G. W. Stewart, Introduction to Matrix Computations, New York: Academic Press, 1973.
8. R. J. Kauth and G. S. Thomas, The Tasselled Cap—A graphic description of spectral-temporal development of agricultural crops as seen by Landsat, Proc. 2nd Int. Symp. Remotely Sensed Data, 4B: 41–51, Purdue University, West Lafayette, IN, 1976.
9. D. R. Thompson and O. A. Whemanen, Using Landsat digital data to detect moisture stress in corn–soybean growing regions, Photogrammetric Eng. Remote Sens., 46: 1087–1093, 1980.
10. E. P. Crist and R. C. Cicone, A physically-based transformation of Thematic Mapper data—the TM Tasseled Cap, IEEE Trans. Geosci. Remote Sens., GE-22: 256–263, 1984.
11. E. P. Crist, R. Laurin, and R. C. Cicone, Vegetation and soils information contained in transformed Thematic Mapper data, Proc. 1986 Int. Geosci. Remote Sens. Symp., pp. 1465–1470, Zurich, 1986.
12. P. H. Swain and S. M. Davis, Remote Sensing: The Quantitative Approach, New York: McGraw-Hill, 1978.
13. A. Papoulis, Probability, Random Variables, and Stochastic Processes, Tokyo: McGraw-Hill, 1984.
14. J. D. Paola and R. A. Schowengerdt, A review and analysis of backpropagation neural networks for classification of remotely-sensed multi-spectral imagery, Int. J. Remote Sens., 16: 3033–3058, 1995.
15. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning internal representations by error propagation, in D. E. Rumelhart and J. L. McClelland, eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. I, pp. 318–362, Cambridge, MA: MIT Press, 1986.
16. R. P. Lippmann, An introduction to computing with neural nets, IEEE ASSP Magazine, 4 (2): 4–22, 1987.
17. J. D. Paola and R. A. Schowengerdt, The effect of neural network structure on a multispectral land-use classification, Photogrammetric Eng. Remote Sens., 63: 535–544, 1997.
18. J. D. Paola and R. A. Schowengerdt, A detailed comparison of backpropagation neural network and maximum-likelihood classifiers for urban land use classification, IEEE Trans. Geosci. Remote Sens., 33: 981–996, 1995.

19. K. S. Fu and J. K. Mui, A survey on image segmentation, Pattern Recognition, 13: 3–16, 1981.
20. L. G. Roberts, Machine perception of three dimensional solids, Proc. Symp. Optical Electro-Optical Image Processing Technol., pp. 159–197, Cambridge, MA: MIT Press, 1965.
21. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, New York: Wiley, 1973.
22. J. M. S. Prewitt, Object enhancement and extraction, in B. S. Lipkin and A. Rosenfeld, eds., Picture Processing and Psychopictorics, pp. 75–149, New York: Academic Press, 1970.
23. D. Marr and E. Hildreth, Theory of edge detection, Proc. Roy. Soc. London, B 207: 187–217, 1980.
24. J. Shen and S. Castan, Towards the unification of band-limited derivative operators for edge detection, Signal Process., 31: 103–119, 1993.
25. J.-M. Beaulieu and M. Goldberg, Hierarchy in picture segmentation: A stepwise optimization approach, IEEE Trans. Pattern Anal. Mach. Intell., 11: 150–163, 1989.
26. J. C. Tilton, Image segmentation by iterative parallel region growing and splitting, Proc. 1989 Int. Geosci. Remote Sens. Symp., pp. 2235–2238, Vancouver, Canada, 1989.
27. R. P. H. M. Schoenmakers, Integrated Methodology for Segmentation of Large Optical Satellite Images in Land Applications of Remote Sensing, Agriculture series, Catalogue number CL-NA-16292-EN-C, Luxembourg: Office for Official Publications of the European Communities, 1995.
28. A. Baraldi and F. Parmiggiani, A neural network for unsupervised categorization of multivalued input parameters: An application to satellite image clustering, IEEE Trans. Geosci. Remote Sens., 33: 305–316, 1995.
29. J. C. Tilton, Experiences using TAE-Plus command language for an image segmentation program interface, Proc. TAE Ninth Users' Conf., New Carrollton, MD, pp. 297–312, 1991.
30. R. L. Kettig and D. A. Landgrebe, Computer classification of remotely sensed multispectral image data by Extraction and Classification of Homogeneous Objects, IEEE Trans. Geosci. Electron., GE-14: 19–26, 1976.
31. D. A. Landgrebe, The development of a spectral-spatial classifier for Earth observational data, Pattern Recognition, 12: 165–175, 1980.
32. A. M. Cross, D. C. Mason, and S. J. Dury, Segmentation of remotely sensed images by a split-and-merge process, Int. J. Remote Sens., 9: 1329–1345, 1988.
33. T. Pavlidis and Y.-T. Liow, Integrating region growing and edge detection, IEEE Trans. Pattern Anal. Mach. Intell., 12: 225–233, 1990.
34. J. Le Moigne and J. C. Tilton, Refining image segmentation by integration of edge and region data, IEEE Trans. Geosci. Remote Sens., 33: 605–615, 1995.
35. J. C. Tilton, Image segmentation by region growing and spectral clustering with natural convergence criteria, Proc. 1998 Int. Geosci. Remote Sens. Symp., Seattle, WA, 1998.
36. D. J. Becker et al., Beowulf: A parallel workstation for scientific computation, Proc. Int. Conf. Parallel Process., 1995.
37. F. A. Kruse, K. S. Kierein-Young, and J. W. Boardman, Mineral mapping at Cuprite, Nevada with a 63-channel imaging spectrometer, Photogrammetric Eng. Remote Sens., 56: 83–92, 1990.
38. M. Rast et al., An evaluation of techniques for the extraction of mineral absorption features from high spectral resolution remote sensing data, Photogrammetric Eng. Remote Sens., 57: 1303–1309, 1991.
39. F. A. Kruse, Use of Airborne Imaging Spectrometer data to map minerals associated with hydrothermally altered rocks in the northern Grapevine Mountains, Nevada and California, Remote Sens. Environment, 24: 31–51, 1988.
40. M. A. Piech and K. R. Piech, Symbolic representation of hyperspectral data, Appl. Opt., 26: 4018–4026, 1987.
41. L. Jimenez and D. A. Landgrebe, Supervised classification in high dimensional space: geometrical, statistical, and asymptotical properties of multivariate data, IEEE Trans. Syst. Man Cybern. C, 28: 39–54, 1998.
42. C. Lee and D. A. Landgrebe, Feature extraction based on decision boundaries, IEEE Trans. Pattern Anal. Mach. Intell., 15: 388–400, 1993.

JAMES C. TILTON
NASA's Goddard Space Flight Center

DAVID LANDGREBE
Purdue University

ROBERT A. SCHOWENGERDT
University of Arizona

INFORMATION PROCESSING, OPTICAL. See OPTICAL NEURAL NETS.


Wiley Encyclopedia of Electrical and Electronics Engineering
Meteorological Radar
Standard Article
Richard J. Doviak and Dušan S. Zrnić, National Severe Storms Laboratory and The University of Oklahoma, Norman, OK
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W3608
Article Online Posting Date: December 27, 1999


Abstract: The sections in this article are: Fundamentals of Meteorological Doppler Radar; Reflectivity and Velocity Fields of Precipitation; Rain, Wind, and Observations of Severe Weather; Wind and Temperature Profiles in Clear Air; Trends and Future Technology.


METEOROLOGICAL RADAR

The origins of meteorological radar (RAdio Detection And Ranging) can be traced to the 1920s, when the first echoes from the ionosphere were observed with dekametric (tens of meters) wavelength radars. However, the greatest advances in radar technology were driven by the military's need to detect aircraft and occurred in the years leading to and during WWII. (A brief review of the earliest meteorological radar development is given in Ref. 1; a detailed review of the development during and after WWII is given in Ref. 2.) Precipitation echoes were observed almost immediately with the deployment of the first decimeter (≈0.1 m) wavelength military radars in the early 1940s. Thus, the earliest meteorological radars used to observe weather (i.e., weather radars) were manufactured for military purposes. The military radar's primary mission is to detect, resolve, and track discrete targets such as airplanes coming from a particular direction, and to direct weapons for interception. Although weather radars also detect aircraft, their primary objective is to map the intensity of precipitation (e.g., rain or hail), which can be distributed over the entire hemisphere above the radar. Each hydrometeor's echo is very

weak; nevertheless, the extremely large number of hydrometeors within the radar's beam returns a continuum of strong echoes as the transmitted pulse propagates through the field of precipitation. Thus, the weather radar's objective is to estimate and map the fields of reflectivity and radial velocities of hydrometeors; from these two fields meteorologists need to derive the fall rate and accumulation of precipitation and warnings of storm hazards. The weather radar owes its success to the fact that centimetric waves penetrate extensive regions of precipitation (e.g., hurricanes) and reveal, like an X-ray photograph, the morphology of weather systems. The first U.S. national network of weather radars, designed to map the reflectivity fields of storms and to track them, was built in the mid-1950s and operated at 10 cm wavelengths. 1988 marked the deployment of the first network of Doppler weather radars (i.e., the WSR-88D), which, in addition to mapping reflectivity, have the capability to map radial (Doppler) velocity fields. This latter capability proved to be very helpful in identifying those severe storm cells that harbor tornadoes and damaging winds. If a hydrometeor's diameter is smaller than a tenth of the radar's wavelength, its echo strength is inversely proportional to the fourth power of the wavelength. Thus, shorter wavelength (i.e., millimeter) radars are usually the choice to detect clouds. Cloud particle diameters are less than 100 μm, and attenuation due to cloud particles is not overwhelming. However, if clouds bear rain of moderate intensity, precipitation attenuation can be severe (e.g., at a wavelength of 6.2 mm and a rain rate of 10 mm h⁻¹, attenuation can be as much as 6 dB km⁻¹) (3). Spaceborne meteorological radars also operate in the millimetric band of wavelengths in order to obtain acceptable angular resolution with the reasonable antenna diameters required to resolve clouds at long ranges (4).
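The fourth-power wavelength dependence noted above is easy to quantify. The following sketch normalizes the Rayleigh-regime echo strength of a small particle to the 10 cm band; the wavelengths are the ones mentioned in this section, and the comparison is illustrative only (it ignores attenuation and all other radar-equation factors):

```python
# Relative backscatter strength of a small (Rayleigh-regime) scatterer at
# several radar wavelengths, normalized to a 10 cm weather radar.
# Valid only when the particle diameter is below about a tenth of the wavelength.
wavelengths_cm = [0.62, 3.0, 5.0, 10.0]
ref = 10.0
for lam in wavelengths_cm:
    gain = (ref / lam) ** 4  # echo power scales as wavelength**-4
    print(f"lambda = {lam:5.2f} cm -> echo power x{gain:,.0f} relative to 10 cm")
```

The tens-of-thousands-fold advantage at millimeter wavelengths is why such radars are the usual choice for detecting cloud particles.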
Airborne weather radars operate at short wavelengths of approximately 3 and 5 cm; these are used to avoid severe storms that produce hazardous wind shear and extreme amounts of rainwater (which can extinguish jet engines), and to study weather phenomena (5). Short wavelengths are used to obtain acceptable angular resolution with the small antennas on aircraft; however, short waves are strongly attenuated as they propagate into heavy precipitation. At longer wavelengths (e.g., >10 cm), only hail and heavy rain significantly attenuate the radiation. Weather radars are commonly associated with the mapping of precipitation intensity. Nevertheless, the earliest of what we now call meteorological radars detected echoes from the nonprecipitating troposphere in the late 1930s (1,2). Scientists determined that these echoes are reflected from the dielectric boundaries of different air masses (1,6). The refractive index n of air is a function of temperature and humidity, and spatial irregularities in these parameters, caused by turbulence, were found to be sufficiently strong to cause detectable echoes from refractive index irregularities.
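Attenuation of the kind described above compounds quickly along a two-way propagation path. A small illustrative sketch, using the 6 dB/km one-way figure the text quotes for a 6.2 mm wavelength in 10 mm/h rain (the path lengths are arbitrary choices of ours):

```python
# Cumulative two-way attenuation through rain at 6 dB/km one-way,
# the text's example figure for 6.2 mm wavelength in 10 mm/h rain.
# Illustrative: real specific attenuation depends on rain rate and band.
one_way_db_per_km = 6.0
for path_km in (1, 2, 5, 10):
    two_way_db = 2 * one_way_db_per_km * path_km   # out and back
    power_ratio = 10 ** (-two_way_db / 10)         # dB -> linear power ratio
    print(f"{path_km:2d} km of rain: {two_way_db:5.1f} dB two-way "
          f"(echo reduced to {power_ratio:.1e} of its unattenuated power)")
```

After only 5 km of such rain, the two-way loss is 60 dB (a factor of a million in power), which is why short-wavelength airborne radars cannot see through heavy precipitation.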

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

FUNDAMENTALS OF METEOROLOGICAL DOPPLER RADAR

The basic principles for meteorological radars are the same as for any radar that transmits a periodic train of short-duration microwave pulses (i.e., with period Ts, called the pulse repetition time (PRT), and pulse duration τ) and measures the delay between the time of emission of the transmitted pulse and the time of reception of any of its echoes. The PRT (i.e., Ts) is

typically of the order of milliseconds, and pulsewidths are of the order of microseconds. The radar has Doppler capability if it can measure the change in frequency or wavelength between the backscattered and transmitted signals. The Doppler radar's microwave oscillator (Fig. 1) generates a continuous-wave sinusoidal signal, which is converted into a sequence of microwave pulses by the pulse modulator. Therefore, the sinusoids in each microwave pulse are coherent with those generated by the microwave oscillator; that is, the crests and valleys of the waves in the pulse bear a fixed or known relation to the crests and valleys of the waves emitted by the microwave oscillator. The microwave pulses are then amplified by a high-power amplifier (a klystron is used in the WSR-88D) to produce about a megawatt of peak power. The propagating pulse has a spatial extent of cτ and travels at the speed of light c along the beam (the beamwidth θ1 is the one-way, 3 dB width of the beam and is of the order of 1 degree). The transmit/receive (T/R) switch connects the transmitter to the antenna (an 8.53 m diameter parabolic reflector is used for the WSR-88D) during τ, and the receiver to the antenna during the interval Ts − τ. The echoes are mixed in the synchronous detectors with a pair of phase-quadrature signals (i.e., the sine, 90°, and cosine, 0°, outputs from the oscillator). The pair of synchronous detectors and filter amplifiers shifts the carrier frequency from the microwave band to zero frequency in one step for the homodyne radar and allows measurement of both positive and negative Doppler shifts (most practical radars use a two-step process involving an intermediate frequency). A hydrometeor intercepts the transmitted pulse and scatters a portion of its energy back to the antenna, and the echo voltage

V(r, t) = A e^{j[2πf(t − 2r/c) + ψ]} U(t − 2r/c)   (1)

at the input to the synchronous detectors is a replica of the signal transmitted; A is the echo amplitude, which depends on the hydrometeor's range r and its backscattering cross section σb, and 2πf(t − 2r/c) + ψ is the echo phase. The microwave carrier frequency is f, t is the time after emission of the transmitted pulse, and ψ is the sum of the phase shifts introduced by the radar system and by the scatterer; these shifts are usually independent of time. The function U locates the echo; it is one when its argument is between zero and τ, and zero otherwise. The output of one synchronous detector is called the in-phase (I) voltage and the other is called the quadrature-phase (Q) voltage (Fig. 1); these are the imaginary and real parts of the echo's complex voltage [Eq. (1)] after its carrier frequency f is shifted to zero. Thus

I(t, r) = A cos(ψe) U(t − 2r/c),   Q(r, t) = A sin(ψe) U(t − 2r/c)   (2)

where

ψe = −4πr/λ + ψ   (3)

is the echo phase, and λ = c/f is the wavelength of the transmitted microwave pulse. The time rate of the echo phase change is related to the scatterer's radial (Doppler) velocity ν,

dψe/dt = −(4π/λ) dr/dt = −(4π/λ) ν = ωd   (4)

where ωd is the Doppler shift (in radians per second). For typical transmitted pulse widths (i.e., τ ≈ 10⁻⁶ s) and hydrometeor velocities (tens of m s⁻¹), the changes in phase are extremely small during the time that U(t − 2r/c) is nonzero. Therefore, the echo phase change is measured over the longer PRT period (Ts ≈ 10⁻³ s) and, consequently, the pulse Doppler weather radar is both an amplitude- and phase-sampling system. Samples are taken at τs + mTs, where τs is the time delay between a transmitted pulse and an echo, and m is an integer; τs is a continuous time scale that always lies in the interval 0 ≤ τs ≤ Ts, and mTs is called sample time, which increments in Ts steps. Because the transmissions are periodic, echoes repeat, and thus there is no way to determine which transmitted pulse produced which echo (Fig. 2). That is, because τs is measured with respect to the most recent transmitted pulse and has values <Ts, the apparent range cτs/2 is always less than the unambiguous range ra = cTs/2. However, the true range r can be cτs/2 + (Nt − 1)ra, where Nt is the trip number, and Nt − 1 designates the number of cTs/2 intervals that need to be added to the apparent range to obtain r. There is range ambiguity if r ≥ ra.

Figure 1. Simplified block diagram of a homodyne radar (no intermediate-frequency circuits are used to improve performance) showing the essential components needed to illustrate the basic principles of a meteorological Doppler radar.

Figure 2. Range-ambiguous echoes. The nth transmitted pulse and its echoes are crosshatched. This example assumes that the larger echo at delay τs1 is unambiguous in range, but the smaller echo, at delay τs2, is ambiguous. This smaller second-trip echo, which has a true range time delay Ts + τs2, is due to the (n − 1)th transmitted pulse.

The I and Q components of echoes from stationary and moving scatterers are shown in Fig. 3 for three successive transmitted pulses. The echoes from the moving scatterer clearly exhibit a systematic change, from one mTs period to the next, caused by the scatterers' Doppler velocity, whereas there is no change in the echoes from stationary scatterers. The echo phase ψe = tan⁻¹(Q/I) is measured, and its change over Ts is proportional to the Doppler shift given by Eq. (4). The periodic transmitted pulse sequence also introduces velocity ambiguities. A set of ψe samples cannot be related to one unique Doppler frequency. As Fig. 4 shows, it is not possible to determine whether V(t) rotated clockwise or counterclockwise, nor how many times it circled the origin during the interval Ts. Therefore, any of the frequencies Δψe/Ts + 2πp/Ts (where p is a ± integer, and −π < Δψe ≤ π) could be correct. All such Doppler frequencies are called aliases, and fN = ωN/2π = 1/(2Ts) is the Nyquist frequency (in units of hertz). All Doppler frequencies between ±fN are the principal aliases, and frequencies higher or lower than ±fN are ambiguous with those between ±fN. Thus, hydrometeor radial velocities must lie within the unambiguous velocity limits, νa = ±λ/(4Ts), to avoid ambiguity. Signal design and processing methods have been advanced to deal with range-velocity ambiguities (1).
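The two ambiguity limits just derived, ra = cTs/2 and νa = λ/(4Ts), pull in opposite directions as the PRT is varied. The following sketch tabulates both for a few PRTs; the 10 cm wavelength is typical of WSR-88D-class radars, and the specific PRT values are illustrative choices of ours:

```python
# Unambiguous range r_a = c*Ts/2 and unambiguous (Nyquist) velocity
# v_a = lambda/(4*Ts) for a pulsed Doppler radar, per the relations above.
c = 2.998e8    # speed of light, m/s
lam = 0.10     # wavelength, m (10 cm, typical of WSR-88D-class radars)
for Ts_ms in (0.8, 1.0, 2.0, 3.0):
    Ts = Ts_ms * 1e-3
    r_a = c * Ts / 2 / 1000    # unambiguous range, km
    v_a = lam / (4 * Ts)       # unambiguous velocity, m/s
    print(f"Ts = {Ts_ms} ms: r_a = {r_a:6.1f} km, v_a = {v_a:5.1f} m/s")
# The product r_a * v_a = c*lambda/8 is fixed for a given wavelength:
# extending range coverage necessarily shrinks the unambiguous velocity
# interval, which is one reason the signal-design methods cited above exist.
```

For Ts = 1 ms this gives roughly 150 km of unambiguous range but only ±25 m/s of unambiguous velocity, well below the ≈60 m/s tornado velocities discussed later in the article.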

REFLECTIVITY AND VELOCITY FIELDS OF PRECIPITATION

A weather signal is a composite of echoes from a continuous distribution of hydrometeors. After a delay (the round-trip time of propagation between the radar and the near boundary of the volume of precipitation), echoes are continuously received (Fig. 5) during a time interval equal to twice the time it takes the microwave pulse to propagate across the volume containing the hydrometeors. Because one cannot resolve each of the hydrometeors' echoes, meteorological radar circuits sample the I and Q signals at uniformly spaced intervals along τs and convert the analog values of the I, Q voltages to digital numbers. For each sample there is a resolution volume V6 (i.e., the volume enclosed by the surface on which the angular and range-weighting functions (1) are smaller than 6 dB below their peak value) along the beam within which hydrometeors contribute significantly to the sample. Each scatterer within V6 returns an echo and, depending on its precise position to within a wavelength, its corresponding I or Q can have any value between maximum positive and negative excursions. Echoes from the myriad of hydrometeors constructively or destructively (depending on their phases) interfere with each other to produce the composite weather signal voltage V(mTs, τs) = I(mTs, τs) + jQ(mTs, τs) for the mth Ts interval. The random size and location of hydrometeors cause the I and Q weather signals to be a random function of τs. However, these random signals have a correlation time τc (Fig. 5), dependent on the pulsewidth and the receiver's bandwidth (1). Thus, V(mTs, τs) has noise-like fluctuations along τs even if the scatterers' time-averaged density is spatially uniform. The sequences of M (m = 1 → M) samples at any τs are analyzed to determine the motion and reflectivity of hydrometeors in the corresponding V6. The dashed line in Fig. 5 depicts a possible sample-time (mTs) dependence of I(mTs, τs1) for hydrometeors having a mean motion that produces a slowly changing sample amplitude along mTs. The rate of change of I and Q vs mTs is determined by the radial motion of the scatterers. Because of turbulence, scatterers also move relative to one another and, therefore, the I, Q samples at any τs change randomly with a correlation time along mTs dependent on the relative motion of the scatterers. For example, if turbulence displaces the relative position of scatterers a significant fraction of a wavelength during the Ts interval, the weather signal at τs will be uncorrelated from sample to sample, and Doppler velocity measurements will not be possible; Doppler measurements require a relatively short Ts.

Figure 3. I(τs) and Q(τs) signal traces vs τs for three successive sampling intervals Ts have been superimposed to show the relative change of I, Q for both stationary and moving scatterers.

Figure 4. A phasor diagram used to depict frequency aliasing. The phase of the signal sample V(t) could have changed by Δψe over a period Ts.

Figure 5. Idealized traces for I(τs) of weather signals from a dense distribution of scatterers. A trace represents V(mTs, τs) vs τs for the mth Ts interval. Instantaneous samples are taken at sample times τs1, τs2, etc. The signal correlation time along τs is τc. Samples at fixed τs are acquired at Ts intervals and are used to compute the Doppler spectrum for scatterers located about the range cτs/2.

The random fluctuations in the I, Q samples have a Gaussian probability distribution with zero mean; the signal power I² + Q² is exponentially distributed (i.e., the weakest power is most likely to occur). Using an analysis of the V(mTs, τs) sample sequence along mTs, the meteorological radar's signal processor estimates both the average sample power and the power-weighted velocity of the scatterers accurately. The samples' average power P̄ is

P̄(ro) = ∫ η(r) I(ro, r) dV   (5)

in which the reflectivity η, the sum of the hydrometeor backscattering cross sections σb per unit volume, is

η(r) = ∫₀^∞ σb(D) N(D, r) dD   (6)

The factor I(ro, r) in Eq. (5) is a composite angular and range-weighting function; its center at ro depends on the beam direction as well as τs. The values of I(ro, r) at r depend on the antenna pattern, the transmitted pulse shape and width, and the receiver's frequency or impulse transfer function (1). In general, I(ro, r) has significant values only within V6; N(D), the particle size distribution, determines the expected number density of hydrometeors with equivolume diameters between D and D + dD. The meteorological radar equation,

P̄(ro) = [Pt g² gs λ² η c τ / ((4π)³ ro² l² lr)] [π θ1² / (16 ln 2)]   (7)

is used to determine η from measurements of P̄, wherein Pt is the peak transmitted power, g is the gain of the antenna (a larger antenna directs more power density along the beam and hence has larger gain), and gs is the gain of the receiver (e.g., the net sum of losses and gains in the T/R switch, the synchronous detectors, and the filter/amplifiers in Fig. 1). Here ro ≈ cτs/2, l is the one-way atmospheric transmission loss, and lr is the loss due to the receiver's finite bandwidth (1). Equation (7) is valid for transmitted radiation having a Gaussian function dependence on distance from the beam axis, and for uniform reflectivity fields.

Radar meteorologists have related reflectivity η, which is general radar terminology for the backscattering cross section per unit volume, to a reflectivity factor Z, which has meteorological significance. If hydrometeors are spherical and have diameters much smaller than λ (i.e., the Rayleigh approximation), the reflectivity factor,

Z = ∫₀^∞ N(D, r) D⁶ dD   (8)

is related to η by

η = (π⁵/λ⁴) |Km|² Z   (9)

where Km = (m² − 1)/(m² + 2), m = n(1 − jκ) is the hydrometeor's complex refractive index, and κ is the attenuation index (1). The relation between the radial velocity ν(r) at a point r and the power-weighted Doppler velocity ν̄(ro) is

ν̄(ro) = [∫ ν(r) η(r) I(ro, r) dV] / P̄(ro)   (10)

It can be shown (1) that ν̄(ro) is the first moment of the Doppler spectrum. An example of a Doppler spectrum for echoes

from a tornado is plotted in Fig. 6. This power spectrum is the magnitude squared of the spectral coefficients obtained from the discrete Fourier transform for M = 128 V(mTs, τs) samples at a τs corresponding to a range of 35 km. The obscuration of the maximum velocity (i.e., ≈60 m s⁻¹) of the scatterers in this tornado, and the power of stronger spectral coefficients leaked through the spectral sidelobes of the rectangular window (i.e., uniform weighting function), are evident. The von Hann weighting function reduces this leakage and better defines both the true signal spectrum and the maximum velocity. Where spectral leakage is not significant (e.g., samples weighted with the von Hann window function), the spectral coefficients have an exponential probability distribution and hence there is a large scatter in their power. Thus, a 5-point running average is plotted to show the spectrum more clearly. The spectral density of the receiver's noise is also obscured by the leakage of power from the larger spectral components if voltage samples are uniformly weighted.

Figure 6. The spectral estimates (denoted by ×) of the Doppler spectrum of a small tornado that touched down on 20 May 1977 in Del City, Oklahoma. V6 is located at azimuth 6.1°, elevation 3.1°, altitude 1.9 km. Rect signifies the spectrum for weather signal samples weighted by a rectangular window function (i.e., uniform weight), whereas Hann signifies samples weighted by a von Hann window function (1).

RAIN, WIND, AND OBSERVATIONS OF SEVERE WEATHER

Fields of reflectivity factor Z and the power-weighted mean velocities ν̄(ro) are displayed on color TV monitors to depict the morphology of storms. The Z at low elevation angles is used to estimate rain rates because hydrometeors there are usually rain drops, and vertical air motion can be ignored, so that the drops are falling at their terminal velocity wt, a known function of D. The rainfall rate R is usually measured as depth of water per unit time and is given by

R = (π/6) ∫₀^∞ D³ N(D) wt(D) dD   m s⁻¹   (11)

where mks units are used. To convert to the more commonly used units of millimeters per hour, multiply Eq. (11) by the factor 3.6 × 10⁶. A simple and often observed N(D) is an exponential one, and even in this case we need to measure or specify two parameters of N(D) to use Eq. (11). A real drop-size distribution requires an indefinite number of parameters to characterize it, and thus the radar-determined value of Z alone cannot provide a unique measurement of R. Although radar meteorologists have attempted for many years to find a useful formula that relates R to Z, there is unfortunately no universal relation connecting these parameters. Nonetheless, it is common experience that larger rainfall rates are associated with larger Z. For stratiform rain, the relation

Z = 200 R^1.6   (12)

has often proved quite useful. Although the Doppler radar measures only the hydrometeor motion toward and away from the radar, the spatial distribution of Doppler velocities can reveal meteorological features such as tornadoes, microbursts (i.e., the divergent flow of strong thunderstorm outflow near the ground), buoyancy waves, etc. For example, the observed Doppler velocity pattern for a tornado in a mesocyclone (a larger-scale rotating air mass) is shown in Fig. 7. The strong gradient of Doppler velocities associated with a couplet of closed ± isodops (i.e., contours of constant Doppler velocity) is due to the tornado, which has a diameter of about 700 m, and the larger-scale closed isodop pattern (i.e., the −30 and +20 m s⁻¹ contours) is due to the larger-scale (i.e., 3.8 km diameter) mesocyclone. In practice, the data are shown on color TV displays wherein the regions between isodops are often colored with red and green hues of varying brightness to signify the values of the positive and negative Doppler velocities.

A front is a relatively narrow zone of strong temperature gradients separating air masses. A dry line is simply the boundary between dry and moist air masses. Turbulent mixing along these boundaries creates relatively intense irregularities of refractive index, which return echoes through the Bragg scatter mechanism (1,2,6; also described in the following section). Figure 8 shows the reflectivity fields associated with a cold front and a dry line, as well as the storms that initiated at the intersection of these boundaries. From Doppler velocity fields and observations of the cold front position at subsequent times, it was established that the cold air mass to the northwest of the front is colliding with the ambient air flowing from the SSW. The convergence along the boundary creates a line of persistent vertical motion that can lift scattering particles, normally confined to the layers closer to the ground, making them visible as a reflectivity thin line. Thus, the reflectivity along the two boundaries could be due to these particles as well as to Bragg scatter. The intersection of cold fronts and dry lines is a favored location for the initiation of storms (seen to the northeast of the intersection). As the cold front propagates to the southeast, its intersection with the relatively stationary dry line progresses south-southwestward, and storms are initiated at this moving intersection point.

Figure 7. The isodops for the Binger, Oklahoma tornadic storm on 22 May 1981 (Norman radar, 1909 CST, 0.4° elevation). The center of the mesocyclone is 70.8 km from the Norman Doppler radar at azimuth 284.4°; the data field has been rotated so that the radar is actually below the bottom of the figure.

WIND AND TEMPERATURE PROFILES IN CLEAR AIR


In addition to particles acting as tracers of wind, irregularities of the atmosphere's refractive index can cause sufficient reflectivity to be detected by meteorological radars. Although irregularities have a spectrum of sizes, only those with scales of the order of half the radar wavelength provide echoes that coherently sum to produce a detectable signal (1). This scattering mechanism is called stochastic Bragg scatter because the half-wavelength irregularities are in a constant state of random motion due to turbulence, and thus the echo signal intensity fluctuates exactly like signals scattered from hydrometeors. The reflectivity η is related to the refractive index structure parameter C_n² that characterizes the intensity of the irregularities (1,2,7):

$$\eta = 0.38\,C_n^2\,\lambda^{-1/3} \qquad (13)$$
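Equation (13) is simple to evaluate numerically. The sketch below is illustrative only (the function name is ours; the C_n² value is the near-sea-level figure quoted in the text) and gives the clear-air reflectivity for a 10 cm wavelength radar:

```python
def bragg_reflectivity(cn2, wavelength):
    """Clear-air reflectivity eta = 0.38 Cn^2 lambda^(-1/3), Eq. (13).

    cn2: refractive index structure parameter (m^-2/3)
    wavelength: radar wavelength (m); returns eta in m^-1.
    """
    return 0.38 * cn2 * wavelength ** (-1.0 / 3.0)

eta = bragg_reflectivity(1e-14, 0.10)   # near sea level, 10 cm radar
print(eta)                              # ~8.2e-15 m^-1
```

The λ^(−1/3) dependence is weak, so the detectability of clear-air echoes is governed mainly by C_n², which falls by about three orders of magnitude between sea level and 10 km altitude.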

Mean values of C_n² range from about 10⁻¹⁴ m⁻²/³ near sea level to 10⁻¹⁷ m⁻²/³ at 10 km above sea level.

Meteorological radars that primarily measure the vertical profile of the wind in all weather conditions, and in particular during fair weather, are called profilers. A wind profile is obtained by measuring the Doppler velocity vs. range along beams in at least two directions (about 15° from the vertical) and along the vertical beam, and by assuming that the wind is uniform within the area encompassed by the beams. The vertical profile of the three components of wind can be calculated from these three radial velocity measurements along the range and the assumption of wind uniformity (7). A prototype network of these profilers has been constructed across the central United States to determine potential benefits for weather forecasts (8).

Temperature profiles are measured using a Radio-Acoustic Sounding System (RASS; 1,7). This instrument consists of a vertically pointed Doppler radar (for this application the wind profiling radar is usually time-shared) and a sonic transmitter that generates a vertical beam of acoustic vibrations, which produce a backscattering sinusoidal wave of refractive index propagating at the speed of sound. The echoes from the acoustic waves are strongest under Bragg scatter conditions (i.e., when the acoustic wavelength is one-half the radar wavelength). The backscatter intensity at various acoustic frequencies is used to identify those frequencies that produce the strongest signals. Because the acoustic wave speed (and thus wavelength) is a function of temperature, this identification determines the acoustic velocity and hence the temperature. Allowance must be made for the vertical motion of air, which can be determined by analyzing the backscatter from turbulently mixed irregularities.

TRENDS AND FUTURE TECHNOLOGY

Figure 8. Intersecting reflectivity thin lines in central Oklahoma on 30 April 1991 at 2249 UT. The thin line farthest west is along a NE-SW oriented cold front; the thin line immediately east is along a NNE-SSW oriented dry line. The reflectivity factor (dBZ) categories are indicated by the brightness bar. (Courtesy of Steve Smith, OSF/NWS.)

Networks of radars producing Doppler and reflectivity information in digital form have had a major impact on our capability to provide short-term warnings of impending weather hazards. Still, there are additional improvements that could enhance the information derived from meteorological radars


significantly. Resolution of velocity and range ambiguities, faster coverage of the surveillance volume and better resolution, estimates of cross-beam wind, and better measurements of precipitation type and amounts are some of the outstanding problems. Signal design techniques that encode transmitted pulses or stagger the PRT are candidates to mitigate the effects of ambiguities (1). Faster data acquisition can be achieved with multiple-beam phased-array radars, and better resolution can be obtained at the expense of a larger antenna. The cross-beam wind component can be obtained using a bistatic dual-Doppler radar (i.e., combining the radial component of velocity measured by a Doppler weather radar with the Doppler shift measured at a distant receiver), or by incorporating the reflectivity and Doppler velocity data into the equations of motion and conservation. Vector wind fields in developing storms could be used in numerical weather prediction models to improve short-term forecasts. We anticipate an increase in the application of millimeter-wavelength Doppler radars to the study of nonprecipitating clouds (5). For better precipitation measurements, radar polarimetry offers the greatest promise (1,2). Polarimetry capitalizes on the fact that hydrometeors have nonspherical shapes and a preferential orientation. Therefore, differently shaped hydrometeors interact differently with electromagnetic waves of different polarization. To make such measurements, the radar should have the capability to transmit and receive orthogonally polarized waves, e.g., horizontal and vertical polarization. Both backscatter and propagation effects depend on polarization; measurements of these can be used to classify and quantify precipitation. Large drops are oblately shaped and scatter horizontally polarized waves more strongly; they also cause a larger phase shift of these waves along propagation paths.
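The range-velocity ambiguity trade-off behind these signal-design remedies can be quantified with the standard pulsed-Doppler relations (not stated in the article; the radar numbers below are illustrative): the unambiguous range is r_a = cT_s/2 and the unambiguous velocity is v_a = λ/(4T_s), so their product is fixed at cλ/8.

```python
def ambiguity_limits(prt, wavelength):
    """Unambiguous range and velocity for pulse repetition time prt (s):
    r_a = c * prt / 2,  v_a = wavelength / (4 * prt)."""
    c = 3.0e8
    return c * prt / 2.0, wavelength / (4.0 * prt)

# 10 cm (S band) radar with a 1 ms pulse repetition time
ra, va = ambiguity_limits(1.0e-3, 0.10)
print(ra, va)   # 150 km unambiguous range, 25 m/s unambiguous velocity
```

Shortening the PRT extends the velocity limit only at the cost of range ambiguity, which is why pulse coding and staggered-PRT schemes are attractive.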
The differential phase method has several advantages for measurement of rainfall compared to the reflectivity method [Eq. (12)]. These include independence from receiver or transmitter calibration errors, immunity to partial beam blockage and attenuation, lower sensitivity to variations in the distribution of raindrops, less bias from either ground clutter filtering or hail, and the possibility of making measurements in the presence of ground reflections not filtered by ground clutter cancelers. Polarimetric measurements have already demonstrated a hail detection capability, and discrimination between rain, snow (wet, dry), and hail also seems quite possible. These areas of research and development could lead to improved short-term forecasting and warnings.

BIBLIOGRAPHY

1. R. J. Doviak and D. S. Zrnić, Doppler Radar and Weather Observations, 2nd ed., San Diego: Academic Press, 1993.
2. D. Atlas, Radar in Meteorology, Boston: Amer. Meteorol. Soc., 1990.
3. L. J. Battan, Radar Observations of the Atmosphere, Chicago: Univ. of Chicago Press, 1973.
4. R. Meneghini and T. Kozu, Spaceborne Weather Radar, Norwood, MA: Artech House, 1990.
5. Proc. IEEE, Special Issue on Remote Sensing of the Environment, 82 (12), 1994.
6. E. E. Gossard and R. G. Strauch, Radar Observations of Clear Air and Clouds, Amsterdam: Elsevier, 1983.
7. S. F. Clifford et al., Ground-based remote profiling in atmospheric studies: An overview, Proc. IEEE, 82 (3), 1994.

8. U.S. Department of Commerce, National Oceanic and Atmospheric Administration, Wind profiler assessment report and recommendations for future use 1987–1994, Prepared by the staffs of the National Weather Service and the Office of Oceanic and Atmospheric Research, Silver Spring, MD, 1994.

RICHARD J. DOVIAK
DUŠAN S. ZRNIĆ
National Severe Storms Laboratory
The University of Oklahoma

METER, PHASE. See PHASE METERS.
METERS. See OHMMETERS; POWER SYSTEM MEASUREMENT.
METERS, ELECTRICITY. See WATTHOUR METERS.
METERS, REVENUE. See WATTHOUR METERS.
METERS, VOLT-AMPERE. See VOLT-AMPERE METERS.
METHOD OF MOMENTS SOLUTION. See INTEGRAL EQUATIONS.
METHODS, OBJECT-ORIENTED. See BUSINESS DATA PROCESSING.
METHODS OF RELIABILITY ENGINEERING. See RELIABILITY THEORY.
METRIC CONVERSIONS. See DATA PRESENTATION.
METRICS, SOFTWARE. See SOFTWARE METRICS.


Wiley Encyclopedia of Electrical and Electronics Engineering Microwave Propagation and Scattering for Remote Sensing Standard Article Adrian K. Fung1 1University of Texas at Arlington, Arlington, TX Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3610 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (321K)


Abstract. The sections in this article are: Basic Terms in Scattering and Transmission; Radiative Transfer Formulation; Scattering from Soil Surfaces; Scattering from a Vegetated Area; Scattering from Snow-Covered Ground.


MICROWAVE PROPAGATION AND SCATTERING FOR REMOTE SENSING

The main objective of remote sensing is to learn about the properties of the target (which can be an object, a scene, or a medium) by processing the signal scattered from it. For this reason it is desirable to remove system effects such as the

antenna gain and the range of observation from the received signal. This leads to the introduction of quantities proportional to the scattered power, for example, the scattering cross section for isolated targets and the scattering coefficient for area-extensive targets such as a rough surface or a snowfield. These quantities depend on the exploring frequency, view angle, polarization, and the geometric and electric properties of the target, as well as on its scattering and propagation properties, which are the subjects of interest here. This article treats the scattering and propagation of microwave fields and power in various earth environments. We begin with a discussion of a number of relevant terms, as well as reflection and transmission of polarized electromagnetic waves at a plane boundary between air and a finitely conducting medium. Then a vector radiative transfer formulation (1) for scattering and propagation within an inhomogeneous layer and scattering at its boundaries is discussed to provide a framework for understanding the interrelationships among wave propagation, volume scattering, and surface scattering. A rough surface, or an inhomogeneous half-space, is a special case of this formulation. Three application areas are considered.

The first area of application is scattering and transmission at an irregular soil boundary. Scattering of waves that takes place at a surface boundary between two homogeneous media is called surface scattering. For natural ground surfaces, where the roughness can only be described statistically, the scattered field will vary from location to location. Such a variation in the received signal is called fading, and the associated field amplitude and power distributions are its fading statistics. A meaningful signature of the rough surface is the statistically averaged received power. It follows that this average power must be a function of the statistical parameters of the surface, such as the standard deviation of the surface height (root mean square or rms height) and its height correlation function.

In remote sensing it is the scattered field that is received by the observing antenna. Thus scattering is the key mechanism. However, in the presence of an inhomogeneous medium such as vegetation, sea ice, or a snow layer, the propagating fields within the medium are equally important, since they are part of the sources of the scattered field. For an inhomogeneous layer with irregular boundaries, an incident wave will generate scattering throughout the volume of the layer. Such a scattering mechanism is called volume scattering. In general, there will also be surface scattering at the boundaries and hence surface–volume interaction; multiple volume scattering and surface scattering are also present within the layer. When the phase relationship between scatterers is needed in a volume scattering calculation, the scattering is said to be coherent. Otherwise, the total scattered power can be calculated by adding the scattered power from individual scatterers, and the associated scattering is said to be incoherent or independent. In the radiative transfer formulation a coherent calculation is used to derive the scattering phase function, which describes single scattering by a single scatterer in a sparse medium or a group of scatterers in a dense medium.

Our second application area is scattering by and propagation through a vegetation layer above an irregular ground surface. The volume occupied by the vegetation biomass relative to the total volume of a vegetation layer is generally less


than 1%. For this reason a vegetation medium is taken to be a sparse medium where multiple scattering beyond the second order is assumed to be negligible. A more precise definition of a sparse medium is a situation in which the scatterers are in the far field of one another. The usual condition for far field is that the range between scatterers is greater than 2D²/λ, where D is the largest dimension of the scatterer and λ is the operating wavelength in the medium in which the wave is propagating. Under this condition the phase of the propagating wave is a linear function of the range. In such a medium it is possible to ignore phase relations (or coherency) between scatterers in scattering calculations. A vegetated medium does not really satisfy this far-field condition. However, estimates of scattering using far-field calculations have been shown to give results that compare well with measurements (2).

The third application area we consider is scattering and propagation in a snow medium above a ground surface. The volume fraction of ice particles in snow generally ranges from about 10% to 40%, while the ice particle size is in the range of 0.03 cm to 0.3 cm. Two scenarios are possible:

1. Within the distance of a wavelength there are two or more scatterers. Such a medium is called an electrically dense medium. In this case two or more scatterers scatter as a group, so a scattering phase function for the group is needed in the scattering calculation.

2. The adjacent scatterers are not in the far field of each other, but the average spacing between them is more than a wavelength. In this case the scattering phase function is for a single scatterer, but the far-field approximation is not valid. Such a medium is a spatially dense medium.

A general dense medium may be spatially and electrically dense. Hence the general scattering phase function for snow must include both of these effects (3–5). In a dense medium scattered fields from scatterers interact at all distances.
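The far-field criterion above is easy to evaluate. This sketch is illustrative only (the function name and the scatterer size are assumptions, not from the article):

```python
def far_field_distance(d, wavelength):
    """Far-field range 2 D^2 / lambda for a scatterer of largest dimension D,
    beyond which the propagating phase is linear in range."""
    return 2.0 * d ** 2 / wavelength

# hypothetical leaf-sized scatterer, D = 5 cm, at a 5.6 cm operating wavelength
r_ff = far_field_distance(0.05, 0.056)
print(r_ff)   # ~0.089 m; scatterers spaced farther than this satisfy the criterion
```

For vegetation, typical element spacings are comparable to or smaller than this distance, which is why the text notes that a vegetated medium does not strictly satisfy the far-field condition.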
They are said to interact in the near field when the distance between scatterers is small compared to the operating wavelength.

BASIC TERMS IN SCATTERING AND TRANSMISSION

In radar remote sensing the quantity measured is the radar cross section for an isolated target or the scattering coefficient for an area-extensive target. To measure the radar cross section of a target, the size of the target must be smaller than the coverage of the radar beam; the converse is true in measuring the scattering coefficient. Intuitively, an object can scatter an incident wave into all possible directions with varying strength, and this scattering pattern should vary with the incident direction. To compare the scattering strengths of objects in a given direction, some common reference is needed. For the radar cross section of an object, the common reference is an idealized isotropic scatterer. Thus the radar cross section of an object observed in a given direction is the cross section of an equivalent isotropic scatterer that generates the same scattered power density as the object in the observed direction. Mathematically the radar cross section σ_r of an object observed in a given direction is the ratio of the total power scattered by an equivalent isotropic scatterer to the incident power density on the object:

$$\sigma_r = \frac{4\pi R^2 |E^s|^2}{|E^i|^2} \equiv 4\pi|S|^2 \qquad (1)$$

where R is the range between the target and the radar receiver, E^i is the incident field, E^s is the scattered field along the direction under consideration, and S is the scattering amplitude of the object, defined by |S|² = R²|E^s|²/|E^i|². For an area-extensive target such as a randomly rough soil surface, the scattered field comes from the area illuminated by the radar antenna. To avoid dependence on the size of the illuminated area A₀, we want to define a per-unit-area quantity, the scattering coefficient of the surface σ⁰, which is the statistically averaged radar cross section of A₀ divided by A₀. Let ⟨ ⟩ be the symbol for statistical average. Then σ⁰ can be written as

$$\sigma^0 = \frac{\langle\sigma_r\rangle}{A_0} = \frac{4\pi R^2\langle|E^s|^2\rangle}{A_0|E^i|^2} \equiv \frac{4\pi\langle|S|^2\rangle}{A_0} \qquad (2)$$
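Equations (1) and (2) translate directly into code. The following sketch uses invented field values purely for illustration:

```python
import math

def radar_cross_section(es, ei, r):
    """Eq. (1): sigma_r = 4 pi R^2 |E^s|^2 / |E^i|^2, in m^2."""
    return 4.0 * math.pi * r ** 2 * abs(es) ** 2 / abs(ei) ** 2

def scattering_coefficient(es, ei, r, a0):
    """Eq. (2): sigma^0 = sigma_r / A0, the per-unit-area (dimensionless) form."""
    return radar_cross_section(es, ei, r) / a0

# hypothetical numbers: 1 V/m incident, 1e-4 V/m scattered at 10 km, 100 m^2 cell
print(scattering_coefficient(1e-4, 1.0, 1.0e4, 100.0))   # ~0.126
```

In practice the scattered power in Eq. (2) is a statistical average over many independent looks at the rough surface; the single-sample version above corresponds to one realization.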

When the transmitting and receiving antennas of the radar are co-located, the radar is said to operate in the monostatic mode. If the locations of these antennas are separated, it is said to operate in a bistatic mode. The scattering coefficient corresponding to the bistatic operation is referred to as a bistatic scattering coefficient.

Reflection and Transmission at a Finitely Conducting Boundary

In remote sensing of the earth's environment, we generally encounter plane wave reflection from and transmission through a weakly conducting medium. This problem has been extensively treated in Ref. 6. As shown in Fig. 1, a plane wave incident at a plane boundary is said to be horizontally polarized, or a transverse electric (TE) wave, if its electric field vector is perpendicular to the plane of incidence, which is the plane parallel to the wave propagation direction and the normal vector to the boundary (the xz-plane in Fig. 1). In this case the magnetic field vector is parallel to the plane of incidence. The incident wave is said to be parallel or vertically polarized, or a transverse magnetic (TM) wave, if the direction of the electric field vector is parallel to the plane of incidence.

Figure 1. Reflection and transmission at a plane boundary between a dielectric upper medium (ε₁, μ₁) and a finitely conducting lower medium (ε₂, μ₂, σ₂).

The law of reflection requires the incident and reflected angles to be the same, θ_r = θ, and Snell's law for dielectric media shows that the angle of transmission can be computed from

$$\theta_t = \sin^{-1}\left(\frac{\sqrt{\mu_1\epsilon_1}}{\sqrt{\mu_2\epsilon_2}}\,\sin\theta\right) \qquad (3)$$

where μ and ε denote, respectively, the permeability and permittivity of a medium. When the lower medium is finitely conducting but the conductivity is small, Eq. (3) still gives a good estimate of the transmission angle, and the attenuation of the transmitted field may be estimated by the loss factor

$$\exp\left[-0.5\,\sigma_2\eta_2|z|\right] \qquad (4)$$

where σ₂ is the conductivity, z is the distance into medium 2, and η₂ = √(μ₂/ε₂) is the intrinsic impedance of medium 2. The reader is referred to Ref. 6 for θ_t and attenuation calculations when the loss is not small. For most naturally occurring media μ₁ ≈ μ₂. From Eq. (3) we see that if ε₁ > ε₂ the sine of θ_t may exceed unity for some range of θ. The θ for which sin θ_t = 1 is called the critical angle θ_c. When θ > θ_c, sin θ_t > 1, and there is no real angle of transmission. Physically the incident field is totally reflected. The Fresnel reflection and transmission coefficients for horizontal polarization with μ₁ ≈ μ₂ ≈ μ₀ may be written for the electric fields as (6)

$$R_h = \frac{E^r}{E^i} = \frac{k_1\cos\theta - k_2\cos\theta_t}{k_1\cos\theta + k_2\cos\theta_t} = \frac{\eta_2\cos\theta - \eta_1\cos\theta_t}{\eta_2\cos\theta + \eta_1\cos\theta_t} \qquad (5)$$

and

$$T_h = \frac{E^t}{E^i} = \frac{2k_1\cos\theta}{k_1\cos\theta + k_2\cos\theta_t} = \frac{2\eta_2\cos\theta}{\eta_2\cos\theta + \eta_1\cos\theta_t} = 1 + R_h \qquad (6)$$

where k₁,₂ = ω√(μ₀ε₁,₂) is the wave number of the medium. For a finitely conducting lower medium, k₂ cos θ_t is actually a complex quantity. Its exact representation may be found in Ref. 6. For a medium with small conductivity, it is possible to approximate R_h, T_h by replacing ε₂ by ε₂ − jσ₂/ω and using Eq. (3) to calculate θ_t. For example,

$$k_2\cos\theta_t \approx \omega\sqrt{\mu_2\left(\epsilon_2 - \frac{j\sigma_2}{\omega}\right)}\,\cos\theta_t, \qquad \eta_2 \approx \sqrt{\frac{\mu_2}{\epsilon_2 - j\sigma_2/\omega}} \qquad (7)$$

Analogous relations can be obtained for the Fresnel reflection and transmission coefficients of vertically polarized waves of the magnetic fields by interchanging k₁, η₁ with k₂, η₂ as follows:

$$R_v = \frac{H^r}{H^i} = \frac{k_2\cos\theta - k_1\cos\theta_t}{k_2\cos\theta + k_1\cos\theta_t} = \frac{\eta_1\cos\theta - \eta_2\cos\theta_t}{\eta_1\cos\theta + \eta_2\cos\theta_t} \qquad (8)$$

and

$$T_v = \frac{H^t}{H^i} = \frac{2k_2\cos\theta}{k_2\cos\theta + k_1\cos\theta_t} = \frac{2\eta_1\cos\theta}{\eta_1\cos\theta + \eta_2\cos\theta_t} = 1 + R_v \qquad (9)$$

While it is not possible for R_h to be zero between dielectric media with ε₁ ≠ ε₂ for any incident angle, it is possible for R_v to be zero. This particular incident angle is called the Brewster angle θ_B. At this incident angle T_v = 1 and R_v = 0 for dielectric media. The Brewster angle can be found from

$$\theta_B = \tan^{-1}\sqrt{\frac{\epsilon_2}{\epsilon_1}} \qquad (10)$$

To extend the scattering coefficient σ⁰ to include polarization dependence, note that its dependence on polarization is through the incident and scattered electric fields. Let p denote the incident polarization and q the scattered polarization. The symbols p and q may represent either vertical or horizontal polarization. Then we can add qp as subscripts to the scattering coefficient as σ⁰_qp.

Like and Cross Polarizations

Radar measurements are generally acquired in both like (or co-polarized) and cross polarizations. The polarization of an antenna used for measurement is defined to be the same as that of the wave it transmits, and the polarization of a wave is the direction of its electric field. The term like polarization means transmitting and receiving with matched antennas:

$$|\hat{a}_r\cdot\hat{a}_t| = 1 \qquad (11)$$

where â_t, â_r are the polarizations of the transmitting and receiving antennas, respectively. Cross or orthogonal polarization is defined to be associated with zero reception:

$$|\hat{a}_r\cdot\hat{a}_t| = 0 \qquad (12)$$

To illustrate the meaning of Eq. (11), consider the transmitting antenna with polarization defined by Eq. (13), which represents a left-handed elliptically polarized plane wave from a transmitting antenna. The term left-handed means that when the thumb of the left hand is in the direction of propagation, the fingers are pointing in the direction of rotation of the electric field vector as time increases (Fig. 2):

$$\mathbf{E}^t = \hat{x}\cos\tau_t\cos(\omega t - kz) - \hat{y}\sin\tau_t\sin(\omega t - kz) \qquad (13)$$

The angle τ_t defines the relative magnitudes of the semi-axes of the ellipse and is known as the ellipticity angle. A sign change in τ_t or z would make the wave right handed. If the receiving antenna is chosen to have the same polarization, its radiated field will have the same mathematical form but will be expressed in coordinates for the receiving antenna (the primed coordinates in Fig. 2).

Figure 2. Illustration of the polarizations of transmitting and receiving antenna systems. A left-handed elliptically polarized transmitted field is shown.

As illustrated in Fig. 2, transmitting and receiving antenna systems must point in opposite directions. This means that when the radiated field of the receiving antenna is expressed in the coordinate system of the transmitting antenna, its propagation phase must take the form ωt + kz, and the rectangular coordinate system for the receiving antenna may be related to that of the transmitting antenna as follows:

$$\hat{x}' = \hat{x}, \qquad \hat{y}' = -\hat{y} \qquad (14)$$

Thus the radiating field from the receiving antenna expressed in the coordinates of the transmitting antenna is

$$\mathbf{E}^r = \hat{x}\cos\tau_r\cos(\omega t + kz) + \hat{y}\sin\tau_r\sin(\omega t + kz) \qquad (15)$$

When we convert it to phasor form, it becomes

$$\mathbf{E}^r = (\hat{x}\cos\tau_r - j\hat{y}\sin\tau_r)\exp[j(\omega t + kz)] \equiv \hat{a}_r\,e^{j(\omega t + kz)} \qquad (16)$$

Similarly, from Eq. (13), the polarization unit vector in phasor form for the transmitting antenna is â_t = x̂ cos τ + jŷ sin τ when we set τ_t = τ_r = τ. Clearly it is the complex conjugate of â_r, and |â_r · â_t| = 1. The polarization states of both antennas are left-handed elliptic. Hence this case is referred to as like polarization. The special case where τ is zero yields linear polarization in the x̂ direction for both the transmitting and receiving antennas.

To illustrate cross or orthogonal polarization, consider the receiving antenna defined by Eq. (15), and set τ_r = τ_t − π/2. Thus the polarization vector of the receiving antenna expressed in the transmitting coordinates becomes

$$\hat{a}_r = \hat{x}\sin\tau + j\hat{y}\cos\tau \qquad (17)$$

after we set τ_t = τ. If we take the dot product according to Eq. (12), we find that â_r · â_t = 0. When we check the polarization state of the receiving antenna, it is right-handed elliptic. Thus left-handed elliptic and right-handed elliptic polarizations are mutually orthogonal. The special case where τ is zero yields linear polarization in the x̂ direction for the transmitting antenna and in the ŷ direction for the receiving antenna. These directions are clearly orthogonal.
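The relations in Eqs. (3), (5), (8), and (10) can be sketched numerically. This is a minimal illustration (the function name, frequency, and soil permittivity are assumptions for the example), using the small-conductivity substitution ε₂ → ε₂ − jσ₂/ω of Eq. (7):

```python
import cmath, math

def fresnel(eps1, eps2, theta_i, sigma2=0.0, freq=1.4e9):
    """Fresnel reflection coefficients at a plane boundary.

    eps1, eps2: relative permittivities of media 1 and 2
    theta_i: incidence angle (rad); sigma2: conductivity of medium 2 (S/m).
    Returns (Rh, Rv) per Eqs. (5) and (8).
    """
    eps0 = 8.854e-12
    omega = 2.0 * math.pi * freq
    e2 = eps2 - 1j * sigma2 / (omega * eps0)   # lossy medium 2, Eq. (7) substitution
    n1, n2 = cmath.sqrt(eps1), cmath.sqrt(e2)  # the k and eta ratios reduce to n1, n2
    cos_i = math.cos(theta_i)
    sin_t = n1 * math.sin(theta_i) / n2        # Snell's law, Eq. (3)
    cos_t = cmath.sqrt(1.0 - sin_t ** 2)
    rh = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)   # TE, Eq. (5)
    rv = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)   # TM, Eq. (8)
    return rh, rv

# Brewster angle for a lossless eps2 = 4 half-space, Eq. (10)
theta_b = math.atan(math.sqrt(4.0))
rh, rv = fresnel(1.0, 4.0, theta_b)
print(abs(rh), abs(rv))   # |Rv| vanishes at the Brewster angle; |Rh| = 0.6
```

With a small nonzero σ₂ the null in |R_v| becomes a shallow minimum rather than an exact zero, which is the pseudo-Brewster behavior of finitely conducting media.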

RADIATIVE TRANSFER FORMULATION

In this section we present the basic development of the radiative transfer theory and its formulation for scattering from and propagation through an inhomogeneous layer with irregular boundaries. In the classical formulation of the radiative transfer equation (7) the fundamental quantity used is the specific intensity I_ν. It is defined in terms of the amount of power dP (watts) flowing in the r̂ direction within a solid angle dΩ through an elementary area dS in a frequency interval (ν, ν + dν) as follows:

$$dP = I_\nu\cos\alpha\,dS\,d\Omega\,d\nu \qquad (18)$$

where α is the angle between the outward normal ŝ to dS and the unit vector r̂. The dimension of I_ν is W m⁻² sr⁻¹ Hz⁻¹. In most remote-sensing applications, the radiation at a single frequency is considered. Thus it is more convenient to consider the intensity I at a frequency ν, which is defined as the integral of I_ν over the frequency interval (ν − dν/2, ν + dν/2). In terms of intensity, the amount of power at a single frequency can be written as

$$dP = I\cos\alpha\,dS\,d\Omega \qquad (19)$$

The transfer equation governs the variation of intensities in a medium that absorbs, emits, and scatters radiation. Within the medium, consider a cylindrical volume of unit cross-section and length dl. The change in intensity may be a gain or a loss. The loss in intensity I propagating through the cylindrical volume along the distance dl is due to absorption and scattering away from the direction of propagation, and the gain is from thermal emission and scattering into the direction of propagation:

$$dI = -\kappa_a I\,dl - \kappa_s I\,dl + \kappa_a J_a\,dl + \kappa_s J_s\,dl \qquad (20)$$

where κ_a, κ_s are the volume absorption and volume scattering coefficients. In Eq. (20) J_a and J_s are the absorption source function (or emission source function) and the scattering source function. Equation (20) is the radiative transfer equation, in which the definition of J_s is

$$J_s(\theta_s,\phi_s) = \frac{1}{4\pi}\int_0^{2\pi}\!\!\int_0^{\pi} P(\theta_s,\phi_s;\theta,\phi)\,I(\theta,\phi)\sin\theta\,d\theta\,d\phi \qquad (21)$$

where P(θ_s, φ_s; θ, φ) is the phase function accounting for scattering within the medium, to be defined in the next subsection. It is clear from Eq. (21) that J_s is not an independent source of the medium but is itself a function of the propagating intensity. On the other hand, J_a is an independent source function proportional to the temperature profile of the medium; namely, it is the source function in passive remote-sensing problems. As such, it should be dropped in active remote-sensing problems, in which the source is an incident wave from the radar transmitter outside the scattering medium. For the active sensing problem to be considered in this section, we will treat partially polarized waves by introducing the Stokes parameters. Then we will generalize the scalar radiative transfer equation to a matrix equation. In so doing, it is helpful first to establish the relation between the scattered intensity and the incident intensity and then to relate these intensities to the corresponding electric fields.

Stokes Parameters, Phase Matrices, and Radiative Transfer Equations

For an elliptically polarized monochromatic plane wave, E = (E_ν v̂ + E_h ĥ) exp(jk·r), propagating through a differential solid angle dΩ in a medium with intrinsic impedance η, where v̂ and ĥ are the unit vectors denoting vertical and horizontal polarization, respectively, k is the wave number, and

186

MICROWAVE PROPAGATION AND SCATTERING FOR REMOTE SENSING

r is the propagation distance. The modified Stokes parameters I, Ih, U, and V in the dimension of intensity can be defined in terms of the electric fields as 2

0.5 Re

|Eν | Iν d = 0.5 Re η∗ |Eh |2 Ih d = 0.5 Re η∗ Eν Eh∗ U d = Re η∗ Eν Eh∗ V d = Im η∗

(22) (23)

0.5 Re

(25)

Phase Matrix for Rough Surfaces. To relate the scattered intensity to the incident intensity, consider a plane wave illuminating a rough surface of area A0. The relation between the vertically and horizontally polarized scattered field components Evs, Ehs and those of the incident field components Evi, Ehi is

   [ Evs ]   e^{jkR}  [ Svv  Svh ] [ Evi ]
   [ Ehs ] = -------  [ Shv  Shh ] [ Ehi ]                                        (26)
                R

where Spq (p, q = v or h) is the scattering amplitude in meters, R is the distance from the center of the illuminated area to the point of observation, and k is the wave number. Throughout, * is the symbol for the complex conjugate and η = √(μ/(ε − jσ/ω)) for a finitely conducting medium; the right-hand sides of Eqs. (22) through (25) are average Poynting vectors representing power density in watts per square meter. These four Stokes parameters have the same dimensions and hence are more convenient to use than amplitude and phase, which have different dimensions. It has been shown that the amplitude, phase, and polarization state of any elliptically polarized wave can be completely characterized by these parameters (8).

Consider |Evs|²/η*. From Eq. (26),

   |Evs|²/η* = [ |Svv|² |Evi|²/η* + |Svh|² |Ehi|²/η* + 2 Re(Svv Svh* Evi Ehi*/η*) ] (1/R²)

The cross term expands as

   2 Re(Svv Svh* Evi Ehi*/η*) = 2 Re(Svv Svh*) Re(Evi Ehi*/η*) − 2 Im(Svv Svh*) Im(Evi Ehi*/η*)
                              = Re(Svv Svh*) U − Im(Svv Svh*) V

where U and V are the third and fourth Stokes parameters of the incident wave, so that

   |Evs|²/η* = [ |Svv|² Iv + |Svh|² Ih + Re(Svv Svh*) U − Im(Svv Svh*) V ] (1/R²)        (27)

Recognizing the above relation, we can obtain the following quantities using Eq. (26) and Eqs. (22) through (25):

   |Ehs|²/η* = [ |Shv|² Iv + |Shh|² Ih + Re(Shv Shh*) U − Im(Shv Shh*) V ] (1/R²)        (28)

   2 Re(Evs Ehs*/η*) = [ 2 Re(Svv Shv*) Iv + 2 Re(Svh Shh*) Ih
                         + Re(Svv Shh* + Svh Shv*) U − Im(Svv Shh* − Svh Shv*) V ] (1/R²)   (29)

   2 Im(Evs Ehs*/η*) = [ 2 Im(Svv Shv*) Iv + 2 Im(Svh Shh*) Ih
                         + Im(Svv Shh* + Svh Shv*) U + Re(Svv Shh* − Svh Shv*) V ] (1/R²)   (30)

The left-hand sides of the above equations are in watts per square meter. To convert them to intensity, we need to divide both sides of each equation by the solid angle subtended by the illuminated area A0 at the point of observation, (A0 cos θs)/R², where θs is the angle between the scattered direction and the direction normal to A0. Equation (27) becomes

   Ivs = 0.5 R² Re(|Evs|²/η*) / (A0 cos θs)
       = [ |Svv|² Iv + |Svh|² Ih + Re(Svv Svh*) U − Im(Svv Svh*) V ] / (A0 cos θs)       (31)

The term on the left-hand side of the above equation is the intensity of the scattered field. In view of Eq. (2), we can rewrite the above equation in terms of the scattering coefficients as

   Ivs = ( σ⁰vv Iv + σ⁰vh Ih + σ⁰vvvh U + σ⁰hvhh V ) / (4π cos θs)                       (32)

where the last two coefficients are the scattering coefficients obtained through Eq. (2) from the correlation terms Re(Svv Svh*) and −Im(Svv Svh*) of Eq. (31). Similarly we can convert the left-hand sides of Eqs. (28) through (30) into intensities and rewrite all four resulting equations into a matrix equation. This matrix equation relates the scattered intensities Is to the incident intensities Ii through a dimensionless quantity known as the phase matrix P,

   Is = (1/4π) P Ii dΩ                                                                   (33)

The components of Ii are the Stokes parameters as defined by Eqs. (22) through (25) for the incident plane wave. The components of the scattered intensity Is are also Stokes parameters but are defined for spherical waves. They differ from the plane-wave definition in the normalizing solid angle (A0 cos θs)/R². The element of the phase matrix relating Ivs to Ivi is
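The identity behind Eqs. (26) through (30) can be checked numerically. The sketch below is plain Python with η set to 1 and the spherical-spreading factor exp(jkR)/R suppressed; all amplitudes are arbitrary illustrative values, not values from the article. It builds the Stokes matrix in terms of the scattering amplitudes and verifies that it reproduces the directly computed Stokes parameters of the scattered field:

```python
# Numerical check: the 4x4 Stokes matrix built from the 2x2 scattering
# amplitudes transforms the modified Stokes parameters exactly as the
# field relation of Eq. (26) does. (eta = 1, spreading factor suppressed.)

def stokes(ev, eh):
    """Modified Stokes parameters [Iv, Ih, U, V] of a field (eta = 1)."""
    return [abs(ev) ** 2,
            abs(eh) ** 2,
            2.0 * (ev * eh.conjugate()).real,
            2.0 * (ev * eh.conjugate()).imag]

def stokes_matrix(svv, svh, shv, shh):
    """Stokes matrix M in terms of the scattering amplitudes."""
    def c(a, b):  # a * conj(b)
        return a * b.conjugate()
    return [
        [abs(svv)**2, abs(svh)**2,  c(svv, svh).real, -c(svv, svh).imag],
        [abs(shv)**2, abs(shh)**2,  c(shv, shh).real, -c(shv, shh).imag],
        [2*c(svv, shv).real, 2*c(svh, shh).real,
         (c(svv, shh) + c(svh, shv)).real, -(c(svv, shh) - c(svh, shv)).imag],
        [2*c(svv, shv).imag, 2*c(svh, shh).imag,
         (c(svv, shh) + c(svh, shv)).imag,  (c(svv, shh) - c(svh, shv)).real],
    ]

# Arbitrary complex scattering amplitudes and incident field components.
svv, svh, shv, shh = 0.8 - 0.2j, 0.1 + 0.05j, -0.07 + 0.1j, 0.6 + 0.3j
ev_i, eh_i = 1.0 + 0.5j, 0.3 - 0.8j

# Scattered field components per Eq. (26), with the R factor suppressed.
ev_s = svv * ev_i + svh * eh_i
eh_s = shv * ev_i + shh * eh_i

direct = stokes(ev_s, eh_s)
m = stokes_matrix(svv, svh, shv, shh)
inc = stokes(ev_i, eh_i)
via_m = [sum(m[i][j] * inc[j] for j in range(4)) for i in range(4)]
err = max(abs(a - b) for a, b in zip(direct, via_m))
print(err)  # machine-precision agreement
```

The agreement is exact because the Stokes matrix is precisely the bilinear form of the field transformation; no approximation is involved at this step.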

σ⁰vv/cos θs. To sum up all possible incident intensities from all directions contributing to Is along a given direction, we integrate over all solid angles:

   Is = (1/4π) ∫_{4π} P Ii dΩ                                                            (34)

Equation (34) is the generalized version of Eq. (21) for partially polarized waves. Here Is, Ii are column vectors whose components are the Stokes parameters. The detailed contents of the phase matrix written in terms of scattering amplitudes are summarized below:

   P = 4π M / (A cos θs)                                                                 (35)

where the Stokes matrix M is

       [ |Svv|²           |Svh|²           Re(Svv Svh*)               −Im(Svv Svh*)              ]
   M = [ |Shv|²           |Shh|²           Re(Shv Shh*)               −Im(Shv Shh*)              ]
       [ 2 Re(Svv Shv*)   2 Re(Svh Shh*)   Re(Svv Shh* + Svh Shv*)    −Im(Svv Shh* − Svh Shv*)  ]
       [ 2 Im(Svv Shv*)   2 Im(Svh Shh*)   Im(Svv Shh* + Svh Shv*)    Re(Svv Shh* − Svh Shv*)   ]

MICROWAVE PROPAGATION AND SCATTERING FOR REMOTE SENSING

Phase Matrix for an Inhomogeneous Medium. Consider a homogeneous medium with randomly embedded scatterers. Each scatterer is characterized by a bistatic radar cross section σp due to a p-polarized (p = v or h) incident intensity. The scattering cross section of the scatterer Qsp is defined as the cross section that would produce the total scattered power surrounding the scatterer due to a unit incident Poynting vector of polarization p,

   Qsp(θ, φ) = (1/4π) ∫_{4π} σp dΩs = ∫_{4π} ( |Svp|² + |Shp|² ) dΩs                     (36)

where θ, φ indicate the incident direction and the integration is over the scattered solid angle. The volume-scattering coefficient for the inhomogeneous medium and polarization p is

   κsp = Nv Qsp                                                                          (37)

where Nv is the number of scatterers per unit volume (8). The scattering coefficient κsp represents the scattering loss per unit length and has the units of Np m⁻¹. In the case of a continuous, inhomogeneous medium defined by a spatially varying permittivity function, the scattering amplitudes Svp, Shp are for an effective volume V. The volume-scattering coefficient is defined as (9)

   κsp = (1/V) Qsp                                                                       (38)

Another important parameter for characterizing an inhomogeneous medium is its absorption loss, represented by the volume-absorption coefficient κap. This quantity may be defined in terms of the average relative permittivity εap of the medium, where p denotes the incident polarization. Letting k0 be the free-space wave number, we define the absorption coefficient for p polarization as

   κap = 2 k0 | Im √εap |                                                                (39)

This equation may be used for either a continuous inhomogeneous medium or a discrete inhomogeneous medium. In the latter case the absorption cross section Qap for one particle and p polarization can be defined as

   Qap = κap / Nv                                                                        (40)

From Eq. (37) and Eq. (40) the total cross section, also known as the extinction cross section, for a scatterer is

   Qep = Qap + Qsp                                                                       (41)

and the extinction coefficient is κep = Nv Qep. In Eq. (41), Qep is the effective area that generates the total scattered and absorbed power due to a unit incident Poynting vector of polarization p. The ratio of κsp to κep is the albedo of the random medium. Conceptually either Qep or Qsp may be used in place of A cos θs in Eq. (35) to define the phase matrix of a single scatterer. However, unlike A cos θs, Qep and Qsp have polarization dependence in general (10) and hence are matrices. Let us denote them as Qe and Qs. The choice of the definition for the phase matrix depends on the assumed form of the scattering source term in Eq. (20). When the term is written as κs Js the definition is (11)

   Ps = 4π Qs⁻¹ M                                                                        (42)

If the term is written as κe J′s, then the definition should be (8)

   Pe = 4π Qe⁻¹ M                                                                        (43)

Both definitions have appeared in the literature. Clearly the phase matrix is a term created for convenience; the scattering source term is the fundamental quantity. Thus, while Eq. (42) is not the same as Eq. (43), the source terms are the same in both cases, as they should be:

   κs Js = κe J′s = ∫_{4π} Nv M I dΩ                                                     (44)

In view of Eq. (44) and Eq. (20), the radiative transfer equation for partially polarized waves in a discrete inhomogeneous medium is

   dI/dl = −κe I + (κe/4π) ∫_{4π} Pe I dΩ + κa Ja
         = −κe I + (κs/4π) ∫_{4π} Ps I dΩ + κa Ja                                        (45)

or

   dI/dl = −κe I + ∫_{4π} Nv M I dΩ + κa Ja                                              (46)
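The bookkeeping in Eqs. (36) through (41) can be sketched numerically. In the fragment below the cross sections and number density are assumed round numbers, not values from the article, and the loop integrates the extinction-only limit of Eq. (46) (scattering source and thermal source dropped), which has the familiar exponential-decay solution:

```python
import math

# Single-scatterer cross sections combine into volume coefficients, and
# pure extinction gives exponential decay of intensity along the path.
q_s = 2.0e-6   # scattering cross section Qsp (m^2), assumed
q_a = 1.0e-6   # absorption cross section Qap (m^2), assumed
n_v = 5.0e3    # number of scatterers per unit volume (m^-3), assumed

q_e = q_a + q_s          # extinction cross section, Eq. (41)
kappa_s = n_v * q_s      # volume-scattering coefficient, Eq. (37) (Np/m)
kappa_e = n_v * q_e      # extinction coefficient (Np/m)
albedo = kappa_s / kappa_e

# With sources dropped, Eq. (46) reduces to dI/dl = -kappa_e * I.
i0, path = 1.0, 100.0    # incident intensity and path length (m)
steps = 10000
dl = path / steps
i_num = i0
for _ in range(steps):
    i_num -= kappa_e * i_num * dl   # forward-Euler step of dI/dl = -kappa_e I
i_exact = i0 * math.exp(-kappa_e * path)
print(albedo, i_num, i_exact)
```

The Euler integration is only a check that the differential statement and the exponential solution agree; in practice the closed form exp(−κe l) is used directly.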

In a continuous, inhomogeneous medium, it is more convenient to use the Stokes matrix instead of the phase matrix. Making use of Eq. (38), we have

   dI/dl = −κe I + ∫_{4π} ( ⟨M⟩ / V ) I dΩ + κa Ja                                       (47)

In Eq. (47), V is the effective illuminated volume as given in Eq. (38) and will cancel out upon evaluating ⟨M⟩. It is clear from Eq. (46) and Eq. (47) that the fundamental quantity in scattering is the scattering amplitude, while the phase function is an artificially created quantity. The radiative transfer equation is formulated on the basis of energy balance. The phase changes of the scattered wave and its cross-correlation terms are ignored in the solution of the transfer equation. For a sparsely populated random medium it is not necessary to track the phase change between scatterers. In a dense medium a group of scatterers may scatter coherently, so the phase relation among scatterers must be taken into account in the derivation of the phase function. However, phase effects in multiple-scattering calculations can still be ignored, because the mechanism of multiple scattering tends to destroy phase relations between scatterers. To date, the radiative transfer formulation is still the most practical approach to compute multiple scattering from an inhomogeneous medium. Furthermore, it provides a natural way to combine surface boundary scattering with volume scattering from within an inhomogeneous layer, as we will see next.

Scattering from an Inhomogeneous Layer with Irregular Boundaries

For bounded media, scattering or reflection may occur at the boundary. Both incident and scattered intensities are needed in the boundary conditions. Therefore it is necessary to split the intensity into upward I+ and downward I− components and rewrite Eq. (46) as two equations. For active sensing applications, the thermal source term is not needed. It is also standard practice to express the slant range in terms of the vertical distance, that is, to let l = z/cos θ (Fig. 3). Consider the problem of a plane wave in air incident on an inhomogeneous layer above a ground surface. The geometry of the scattering problem is depicted in Fig. 3.
The inhomogeneous layer is assumed to have such characteristics that the

Figure 3. Scattering geometry for an inhomogeneous layer above a homogeneous half-space.

upward intensity I+ and the downward intensity I− satisfy the radiative transfer equation. On rewriting Eq. (46) in terms of these intensities, we obtain (1)

   μs (d/dz) I+(z, μs, φs) = −κe I+(z, μs, φs)
       + (κs/4π) ∫0^{2π} ∫0^1 Ps(μs, μ, φs − φ) I+(z, μ, φ) dμ dφ
       + (κs/4π) ∫0^{2π} ∫0^1 Ps(μs, −μ, φs − φ) I−(z, μ, φ) dμ dφ                       (48)

   μs (d/dz) I−(z, μs, φs) = κe I−(z, μs, φs)
       − (κs/4π) ∫0^{2π} ∫0^1 Ps(−μs, μ, φs − φ) I+(z, μ, φ) dμ dφ
       − (κs/4π) ∫0^{2π} ∫0^1 Ps(−μs, −μ, φs − φ) I−(z, μ, φ) dμ dφ                      (49)

where μs = cos θs, μ = cos θ, I+ and I− are column vectors containing the four modified Stokes parameters, and Ps is the phase matrix. To find the upward intensity due to an incident intensity Ii, where Ii = I0 δ(μ − μi) δ(φ − φi), δ( ) is the Dirac delta function, and (μi, φi) denotes the direction of propagation of the incident wave, we need to solve Eq. (48) and Eq. (49) subject to the following boundary conditions. At z = −d the upward and downward intensities are related through the ground-scattering phase matrix G as

   I+(−d, μs, φs) = (1/4π) ∫0^{2π} ∫0^1 G(μs, −μ, φs − φ) I−(−d, μ, φ) dμ dφ             (50)

If the ground surface is flat, G may be written in terms of the reflectivity matrix Rg as

   G = 4π Rg δ(μs − μ) δ(φs − φ)                                                         (51)

At the top boundary z = 0, the upward and downward intensities are related through the surface-scattering and transmission phase matrices SR and ST (12):

   I−(0, μs, φs) = (1/4π) ∫0^{2π} ∫0^1 SR(−μs, μ, φs − φ) I+(0, μ, φ) dμ dφ
                 + (1/4π) ∫0^{2π} ∫0^1 ST(−μs, −μ, φs − φ) Ii(0, μ, φ) dμ dφ             (52)

Once I+(0, μs, φs) is determined within the inhomogeneous layer, the upward intensity transmitted from the layer into the air can be found using the transmission scattering matrix of the surface, ST, as

   I+(μs, φs) = (1/4π) ∫0^{2π} ∫0^1 ST(μs, μ, φs − φ) I+(0, μ, φ) dμ dφ                  (53)


The total scattered intensity in air is given by the sum of I+(μs, φs) and Is, where Is is the intensity due to random surface scattering by the top layer boundary:

   Is = (1/4π) ∫0^{2π} ∫0^1 SR(−μs, −μ, φs − φ) Ii(0, μ, φ) dμ dφ                        (54)

The explicit forms of the matrices Rg, SR, and ST are available in Ref. 1. The expressions for SR and ST are for an irregular boundary. The G matrix is assumed to have the same mathematical form as SR. Once the total scattered intensity for a p-polarized component Isp of the intensity matrix is found, the scattering coefficient for this component is defined relative to the incident intensity Iqi = Iq0 δ(μ − μi) δ(φ − φi) of polarization q along the (μi, φi) direction as

   σ⁰pq = 4π Isp cos θs / Iq0                                                            (55)

The transfer equations given by Eq. (48) and Eq. (49) can be solved exactly by using numerical techniques (13,14). Analytic solutions by iteration are available to the first order in albedo (1, ch. 2), which is also known as the Born approximation. It is generally practical to carry the iterative solution process to the second order in albedo. This additional complexity is justified only for cross polarization in the plane of incidence, because its first-order result is zero. For like polarization the difference between the first- and second-order results is generally within experimental error.

First-Order Solution of the Layer Problem. In many practical applications the volume scattering from an inhomogeneous layer can be approximated by using a first-order solution whenever the albedo of the medium is smaller than about 0.3. Furthermore, because volume scattering has a slowly varying angular behavior over incident angles in the range between 0° and 70°, we can approximate the transmission across an irregular boundary by that across a plane boundary. Under these conditions the first-order solution to the radiative transfer equations consists of four major terms: (1) volume scattering by the inhomogeneous layer transmitted across the top boundary, (2) scattering by the bottom layer boundary passing through the layer into the upper medium, (3) scattering between the layer volume and the lower boundary passing through the layer into the upper medium, and (4) surface scattering by the top boundary. Note that only the last term represents pure surface scattering and does not involve propagation through the layer volume. The second and third terms depend on contributions from the lower boundary and can be ignored in dealing with a half-space or a very thick layer. The volume backscattering term for p-polarized scattering has the form

   σ⁰vpp(θ) = 0.5 cos θt Tp(θ, θt) Tp(θt, θ) (κs/κe) Pspp(θt, π; π − θt, 0) [ 1 − exp(−2κe d/cos θt) ]

            = 2π cos θt Tp(θ, θt) Tp(θt, θ) ( ⟨|Spp(θt, π; π − θt, 0)|²⟩ / σe ) [ 1 − exp(−2κe d/cos θt) ]   (56)

where θt is the angle of transmission, Tp(θt, θ) is the Fresnel power transmission coefficient for p polarization, d is the depth of the layer, Spp(θt, π; π − θt, 0) is the scattering amplitude, the ensemble average ⟨ ⟩ is over the distribution of the orientation of the scatterer, κe = Nv σe, where Nv is the number density of scatterers, and the extinction cross section is given by

   σe = −(4π/k) Im[ ⟨Spp(π − θt, 0; π − θt, 0)⟩ ]                                        (57)
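The second form of Eq. (56) is easy to exercise numerically. The sketch below uses assumed round numbers for the transmissivity, cross sections, and number density (none are values from the article) and shows the saturation of the volume term with layer depth as the two-way extinction loss approaches unity:

```python
import math

# Volume backscattering term of Eq. (56), second form:
# sigma0 = 2*pi*cos(theta_t)*Tp^2*(<|Spp|^2>/sigma_e)*(1 - exp(-2*kappa_e*d/cos(theta_t)))
# with kappa_e = Nv*sigma_e.

def sigma0_volume(theta_t, t_p, s_pp_sq, sigma_e, n_v, d):
    kappa_e = n_v * sigma_e                      # extinction coefficient (Np/m)
    mu_t = math.cos(theta_t)
    loss = 1.0 - math.exp(-2.0 * kappa_e * d / mu_t)
    return 2.0 * math.pi * mu_t * t_p * t_p * (s_pp_sq / sigma_e) * loss

theta_t = math.radians(30.0)
t_p = 0.95          # Fresnel power transmissivity, assumed
s_pp_sq = 1.0e-6    # <|Spp|^2> (m^2), assumed
sigma_e = 1.0e-5    # extinction cross section (m^2), assumed
n_v = 1.0e3         # scatterers per m^3, assumed

thin = sigma0_volume(theta_t, t_p, s_pp_sq, sigma_e, n_v, 10.0)
thick = sigma0_volume(theta_t, t_p, s_pp_sq, sigma_e, n_v, 1.0e4)
saturation = 2.0 * math.pi * math.cos(theta_t) * t_p * t_p * s_pp_sq / sigma_e
print(thin, thick, saturation)
```

For a deep layer the bracketed loss factor goes to one and the volume term saturates at the half-space value, which is the "very thick layer" limit mentioned above.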

The extinction coefficient κe is the controlling factor for propagation through the layer. Both the scattering and the extinction coefficients depend on the scattering amplitude of the scatterer. In Eq. (56) we provide two forms of the scattering coefficient, for the reason that for some problems, such as Rayleigh scattering, the phase function Pspp is known, and for others only the scattering amplitude is available. Surface backscattering from the lower boundary is given by the surface scattering coefficient of the lower boundary, σ⁰spp(θ), modified by propagation loss through the layer and transmission across the top boundary as

   σ⁰lpp(θ) = cos θ Tp(θ, θt) [ σ⁰spp(θt) / cos θt ] Tp(θt, θ) exp(−2κe d/cos θt)        (58)
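Equation (58) is a simple product of a surface term and layer factors, which the sketch below makes explicit; the angles, transmissivity, surface coefficient, and extinction values are assumed for illustration only:

```python
import math

# Lower-boundary surface term of Eq. (58): the surface scattering coefficient
# sigma0_spp(theta_t), scaled by two-way transmission across the top boundary
# and two-way extinction through the layer.

def sigma0_lower(theta, theta_t, t_p, sigma0_s, kappa_e, d):
    mu_t = math.cos(theta_t)
    return (math.cos(theta) * t_p * (sigma0_s / mu_t) * t_p
            * math.exp(-2.0 * kappa_e * d / mu_t))

theta, theta_t = math.radians(40.0), math.radians(25.0)
t_p, sigma0_s, kappa_e = 0.9, 0.05, 0.02   # assumed values
no_layer = sigma0_lower(theta, theta_t, t_p, sigma0_s, kappa_e, 0.0)
with_layer = sigma0_lower(theta, theta_t, t_p, sigma0_s, kappa_e, 10.0)
print(no_layer, with_layer)
```

Setting d = 0 removes the extinction factor, so the term reduces to the surface coefficient modified only by the boundary transmissivities, as expected.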

The explicit form of the surface scattering coefficient σ⁰spp(θ) is given in the next section. Finally we give the expression for the volume–surface interaction term resulting from the incident wave transmitted through the layer, reflected by the lower boundary, and then scattered by the layer inhomogeneities back into the direction of the receiver. By reciprocity, the wave that traverses the same path in the reverse direction makes the same contribution to the receiver. This term has the form

   σ⁰lvpp(θ) = cos θ Tp(θ, θt) [ κs d |Rp(θt)|² / cos θt ] Tp(θt, θ) exp(−2κe d/cos θt)
               × { Pspp(π − θt, π; π − θt, 0) + Pspp(θt, π; θt, 0) }

             = cos θ Tp(θ, θt) [ Nv d |Rp(θt)|² / cos θt ] Tp(θt, θ) exp(−2κe d/cos θt)
               × 4π { ⟨|Spp(π − θt, π; π − θt, 0)|²⟩ + ⟨|Spp(θt, π; θt, 0)|²⟩ }          (59)

In Eq. (59), |Rp(θt)|² is the p-polarized Fresnel reflectivity and Nv is the number density. The relative importance of each of the four terms and the actual contents of the phase function or scattering amplitude depend on the specific application. This is illustrated in the subsequent sections.

SCATTERING FROM SOIL SURFACES

When an incident electromagnetic wave impinges on an irregular surface, it induces a current on it. The waves radiated by this current are called scattered waves. To calculate surface scattering, one needs to solve the integral equation that governs this induced current on the surface. In general, there is no closed-form analytic solution for this integral equation. An approximate solution to it is available in Chapter 4 of Ref. 1, and a bistatic scattering coefficient was derived from it. It is shown in Chapter 5 of Ref. 1 that the classical scattering coefficients under high- and low-frequency conditions for rough surfaces, i.e., those based on the Kirchhoff and the small perturbation approximations, are special cases of this bistatic scattering coefficient. Hence, we can examine rough surface scattering properties over the entire frequency band based on this coefficient. Only single-scatter backscattering from a randomly rough soil surface is considered here. Readers interested in bistatic scattering and multiple surface scattering are referred to Refs. 1 and 15, respectively.

Backscattering from a Randomly Rough Soil Surface

To compute backscattering from a soil surface, we need to know both the electric and geometric properties of the surface. In general, the permeability of the soil can be taken to be the same as that of air, and only the complex dielectric constant is needed. An empirical formula for the relative complex dielectric constant of soil is available in Ref. 16. It has the form

   εr = [ 1 + (sbd/2.65)(4.7^{0.65} − 1) + mv^b (εw^{0.65} − 1) ]^{1/0.65}               (60)

where sbd stands for soil bulk density in g cm⁻³, mv is the volumetric soil moisture, and εw is the permittivity of water given by

   εw = 4.9 + (εw0 − 4.9) / (1 + j f Tτ)

   Tτ = 1.1109/10 − 3.824 T/10³ + 6.938 T²/10⁵ − 5.096 T³/10⁷

   εw0 = 88.045 − 0.4147 T + 6.295 T²/10⁴ + 1.075 T³/10⁵

   f = frequency in GHz

   T = temperature in degrees centigrade

and b = 1.09 − 0.11 S + 0.18 C is the parameter accounting for the percent of clay, C, and the percent of sand, S, in the soil. For a randomly rough surface not skewed by natural forces such as the wind, it is sufficient to describe the geometry of the surface by its first- and second-order statistics: in this case the surface root mean square (rms) height, σ, and its autocorrelation function, ρ(ξ), normalized to its height variance, σ². The expression for the single-scatter backscattering coefficient is

   σ⁰pp(θ) = (k²/2) exp(−2k²σ² cos²θ) Σ_{n=1}^{∞} (σ^{2n}/n!) |Inpp|² W^{(n)}(−2k sin θ, 0)     (61)

where k is the wave number, θ is the incident angle, p stands for either vertical (v) or horizontal (h) polarization, and

   Inpp = (2k cos θ)^n fpp exp(−k²σ² cos²θ) + (k cos θ)^n Fpp / 2                        (62)

In Eq. (62) the polarization-dependent coefficients, fpp and Fpp, are

   fvv = 2 Rv / cos θ,    fhh = −2 Rh / cos θ

and

   Fvv = 2 sin²θ (1 + Rv)² (εr − 1)(sin²θ + εr cos²θ) / (εr² cos³θ)

   Fhh = −2 sin²θ (1 + Rh)² (εr − 1) / cos³θ

The surface roughness spectrum W^{(n)}(−2k sin θ, 0) is related to the surface autocorrelation function by

   W^{(n)}(−2k sin θ, 0) = (1/2π) ∫∫ ρ^n(ξ, ζ) e^{−j2kξ sin θ} dξ dζ                     (63)

     = (L²/2n) exp[ −(kL sin θ)²/n ]                        if ρ(ξ) = exp(−ξ²/L²)

     = L² (2kL sin θ)^{xn−1} K_{xn−1}(2kL sin θ) / [ 2^{xn−1} Γ(xn) ]
                                                            if ρ(ξ) = (1 + ξ²/L²)^{−x}, x ≥ 1

     = (L/n)² [ 1 + (2kL sin θ/n)² ]^{−1.5}                 if ρ(ξ) = exp(−ξ/L), ξ ≥ 0   (64)

where Γ( ) is the gamma function and Kν( ) is the modified Bessel function of the second kind and of order ν (note that K_{−ν} = Kν). In the above we have assumed that the local incident angle in the Fresnel reflection coefficients, Rv, Rh, in fpp can be approximated by the incident angle, unless the surface correlation length is larger than a wavelength and the rms slope exceeds 0.4. In the latter case Rp(θ) → Rp(0) in fpp. This assumption leads to a restriction on the applicability of Eq. (61). Such a restriction is a complex function of the surface roughness parameters relative to the incident wavelength, the shape of the correlation function, and the relative permittivity εr of the surface. An examination of this problem can be found in Ref. 1 (ch. 6). Fortunately we rarely have to deal with this complication in applications, because natural soil surfaces have many roughness scales. Almost without exception there is a roughness scale smaller than the incident wavelength under consideration which will dominate scattering. Hence Eq. (61) is generally applicable as long as the surface rms slope is less than about 0.4. Note that when the grain size of the soil is comparable to or larger than the incident wavelength, such as at optical frequencies, the wave no longer sees a surface. It sees a collection of grains, and the scattering phenomenon changes from surface to volume.

Models for Large-Scale Roughness. If kσ is larger than 1.5, the second term in Eq. (62) can be ignored. This is because the first term in Eq. (62) has a large enough growth factor to compensate for the associated exponential decay factor, while the second term does not. Thus, for large kσ, only the first term in Eq. (62) (also known as the Kirchhoff term) remains, and the series in Eq. (61) should be rewritten in exponential form. The final form for the scattering coefficient depends on
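The empirical soil permittivity of Eq. (60) and its Debye-type water model are straightforward to code. In the sketch below the sand and clay fractions S and C are assumed to enter the b formula as fractions rather than percentages (an assumption about units, since the text says "percent"), and all soil parameters are illustrative:

```python
import math

# Empirical relative complex permittivity of moist soil, Eq. (60), with the
# Debye-type water model. Convention: eps = eps' - j eps'' (negative imag).

def water_permittivity(f_ghz, t_c):
    """Relative permittivity of water at frequency f (GHz), temperature t (C)."""
    t_tau = (1.1109 / 10 - 3.824 * t_c / 10**3
             + 6.938 * t_c**2 / 10**5 - 5.096 * t_c**3 / 10**7)  # ~2*pi*tau in ns
    eps_w0 = (88.045 - 0.4147 * t_c + 6.295 * t_c**2 / 10**4
              + 1.075 * t_c**3 / 10**5)                          # static value
    return 4.9 + (eps_w0 - 4.9) / (1 + 1j * f_ghz * t_tau)

def soil_permittivity(f_ghz, t_c, sbd, mv, sand, clay):
    """Relative complex permittivity of soil, Eq. (60); sand/clay as fractions."""
    b = 1.09 - 0.11 * sand + 0.18 * clay        # texture parameter (assumed units)
    eps_w = water_permittivity(f_ghz, t_c)
    inner = (1 + (sbd / 2.65) * (4.7**0.65 - 1)
             + mv**b * (eps_w**0.65 - 1))
    return inner**(1 / 0.65)

er = soil_permittivity(f_ghz=1.5, t_c=20.0, sbd=1.3, mv=0.25, sand=0.3, clay=0.2)
print(er)
```

The real part grows strongly with volumetric moisture mv, which is the physical basis for radar soil-moisture retrieval discussed in this section.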


the assumed surface height density function. Let us consider the Gaussian and a modified exponential height density function

   p(z) = c^{μ+1} |z|^μ Kμ(c|z|) / [ 2^μ √π Γ(μ + 0.5) ]

where c = √(2ν)/σ, μ = ν − 0.5, 0.75 < ν < 1, Γ and Kμ are the gamma and modified Bessel functions, and σ is the surface rms height. In backscattering we have

   σ⁰pp(θ) = [ |Rp(0)|² / (2σs² cos⁴θ) ] exp[ −tan²θ/(2σs²) ]            Gaussian

           = [ √3 |Rp(0)|² / (2σs² cos⁴θ) ] exp( −√3 tan θ/σs )          ν = 0.75

           = [ 2 |Rp(0)|² sin θ / (σs³ cos⁵θ) ] K₋₁( 2 tan θ/σs )        ν = 1.0    (65)

where σs² = σ²|ρ″(0)| is the variance of the surface slope; ρ″(0) is the second derivative of the normalized surface correlation at the origin. Equation (65) is independent of the special form of the surface correlation function. It is useful for surfaces with large-scale roughness that are larger than the exploring wavelength and do not have smaller-scale roughness, so that the geometric optics condition can be realized. This can happen for sea surfaces under low wind conditions.

Theoretical Model Behaviors

Large Roughness or High Frequency Models. We want first to plot the expressions in Eq. (65) to show the differences in the backscattering behaviors of these scattering coefficients. In Fig. 4 we see that there is not much difference between the ν = 0.75 and ν = 1.0 expressions at large angles of incidence. At small angles the ν = 1.0 expression gives the highest values and the Gaussian expression the lowest. Furthermore, the angular trends for the different height density functions are different.

Figure 4. Comparisons of the backscattering coefficients in Eq. (65) when the rms slope is 0.35. Results indicate significant differences in angular trends.

Figure 5. (a) Effects of surface correlation on backscattering, kσ = 0.2, kL = 5, and dielectric constant = 15. The exponential correlation function gives the highest scattering level at large angles of incidence. (b) An enlarged comparison of the effects of the surface correlation function in the small angular region, kσ = 0.2, kL = 5, and εr = 15. Angular shapes of the backscattering curves are shown to be controlled by the surface correlation function when kσ is small.
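The three expressions in Eq. (65) are easy to evaluate directly. In the sketch below, K₁ (the modified Bessel function of the second kind of order 1, equal to K₋₁) is computed from its integral representation so that only the standard library is needed; |Rp(0)|² and the rms slope are assumed illustrative values:

```python
import math

# Geometric-optics backscattering coefficients of Eq. (65).

def k1(x, steps=20000, t_max=20.0):
    """K_1(x) = integral_0^inf exp(-x*cosh t)*cosh t dt, trapezoidal rule."""
    dt = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * dt
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return total * dt

def sigma0_go(theta, r0_sq, slope, kind):
    """Backscattering coefficient of Eq. (65); slope is sigma_s."""
    c4 = math.cos(theta) ** 4
    if kind == "gaussian":
        return (r0_sq / (2 * slope**2 * c4)
                * math.exp(-math.tan(theta)**2 / (2 * slope**2)))
    if kind == "nu075":
        return (math.sqrt(3) * r0_sq / (2 * slope**2 * c4)
                * math.exp(-math.sqrt(3) * math.tan(theta) / slope))
    if kind == "nu10":
        return (2 * r0_sq * math.sin(theta) / (slope**3 * math.cos(theta)**5)
                * k1(2 * math.tan(theta) / slope))
    raise ValueError(kind)

r0_sq, slope = 0.3, 0.35     # assumed |Rp(0)|^2 and rms slope
angles = [math.radians(a) for a in (5, 20, 40)]
gauss = [sigma0_go(a, r0_sq, slope, "gaussian") for a in angles]
nu1 = [sigma0_go(a, r0_sq, slope, "nu10") for a in angles]
print(gauss, nu1)
```

All three forms decay away from nadir, with the Gaussian falling fastest, which is the behavior summarized in the Fig. 4 discussion above.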

Low to Moderate Roughness or Frequency Models. Next we illustrate the dependence of the surface scattering coefficient given by Eq. (61) on the type of correlation function, the roughness scales, and the dielectric constant.

Effects of Surface Correlation. To show the effect of the surface correlation function on surface scattering, we plot in Fig. 5 the backscattering coefficients corresponding to the exponential, x-power, and Gaussian correlation functions in Eq. (63). At large angles of incidence the exponential function gives the highest level, and the Gaussian gives the lowest level among the three functions. The exponential correlation function is commonly used in theoretical models to compare with measurements. In most cases it gives a better agreement than the Gaussian correlation, especially for small incident angles or for surface roughness values that fall into the low- or intermediate-frequency regions. However, it is not differentiable at the origin and does not have a slope distribution; that is, it represents a surface with 90° slopes at some points. Thus it is incompatible with theoretical model development. For the purpose of theoretical analysis, it should be viewed as an approximation to a more complicated correlation function that is differentiable at the origin. At low frequencies it is the functional form of the correlation function, not its property at the origin, that is important. We can view the use of the exponential function as a means to simplify our model and facilitate its application. The curvatures of the three sets of backscattering curves in Fig. 5(a) are different. More details can be seen after we replot them over a smaller angular region (0° to 30°), as shown in Fig. 5(b). Here we see that the Gaussian correlation leads to a bell-shaped curve, as expected, while the 1.5-power function generates a fairly straight line and the exponential correlation function produces an exponentially shaped angular curve. The latter two curves coincide almost exactly at 0° and 30°, and the 1.5-power curve is clearly higher at the angles in between. In applications the 1.5-power function may be a good alternative to the exponential function for some cases.

Effects of kσ and kL. Next we show the effect of the surface parameter kσ. This is illustrated in Fig. 6 and Fig. 7. Here the 1.5-power correlation function is used, and the parameter kL is chosen to be 4. In Fig. 6 the backscattering coefficient is seen to rise in level as kσ increases from 0.1 to 0.5. The angular trends remain almost unchanged except for a gradual narrowing of the spacing between vertical and horizontal polarizations.

Figure 6. Increase in backscattering with kσ occurs when kσ is in the range 0 to 0.5, with kL = 4 and εr = 25.
This means that for kσ smaller than 0.5, its influence is primarily on the level of the backscattering curve and, to a much lesser extent, on slowing down the angular trends for both vertical and horizontal polarization. However, as kσ increases further from 0.5 to 1.2, Fig. 7 shows that the angular trend of the backscattering coefficient begins to level off significantly. The level of the backscattering curve at small incident angles begins to drop after kσ reaches 0.8, while at

Figure 7. Further increase in kσ changes the angular shape of backscattering when kL = 4 and εr = 25. Under this condition backscattering decreases in the range 0° to 10° and increases in the range 30° to 60°.

large angles it continues to rise. The angular region where the backscattering curves with small k values cross over those with large k values is between 15⬚ and 20⬚. In summary, the effects of k on backscattering vary depending on its value. When k is increasing up to around 0.5, it causes a rise in the backscattering curve over all angles of incidence, as shown in Fig. 6. Then, as k increases further to 0.8, it causes a gradual leveling off of the angular trend by actually lowering the backscattering at small angles of incidence. This leveling off becomes significant as k increases beyond 0.8. In the meantime the spacing between vertical and horizontal polarizations decreases with an increase in k. Next we consider the effects of kL variation on the backscattering coefficient in Fig. 8, when k is fixed at 0.5. An increase in kL is seen to cause a narrowing of the spacing between vertical and horizontal polarizations. This means

20 kσ = 0.5

10 Backscattering coefficient (dB)

Backscattering coefficient (dB)

192

0 –10 – 20 VV (kL = 2.5) HH (kL = 2.5) VV (kL = 5) HH (kL = 5) VV (kL = 10)

– 30 – 40 – 50

HH (kL = 10) – 60 –10

0

10

20 30 40 Incident angle (deg)

50

60

70

Figure 8. Faster drop off of backscattering with the incident angle as kL increases with ⫽ (1 ⫹ x2)⫺1.5, k ⫽ 0.5, and ⑀r ⫽ 64.

MICROWAVE PROPAGATION AND SCATTERING FOR REMOTE SENSING

scattering by leaves, we consider only backscattering from a half-space of disk- and needle-shaped leaves in this section. Readers are referred to Ref. 1 (ch. 11) for scattering by combinations of leaves, branches, and trunks. To use Eq. (56) for volume scattering calculation from leaves, we need the scattering amplitude of disk- and needleshaped leaves. A basic approach to this problem is to use the scattered field formulation based on the volume equivalence theorem,

Backscattering coefficient (dB)

–5

–10

–15

– 20 VV ( HH ( VV ( HH ( VV ( HH (

– 25

–30

–35 –10

193

0

r = 3) r = 3) r = 9) r = 9) r = 36) r = 36)

10

20 30 40 Incident angle (deg)

E s (rr ) =

50

60

70

In Fig. 10 we show a comparison between the surface model given by Eq. (61) and measurements from a rough soil surface reported in Ref. 23 where ground truth data were acquired by researchers, so model parameters were fixed. Except for the 50⬚ point at 1.5 GHz, very good agreement is realized in levels and trends between the model predictions and data at both 1.5 and 4.75 GHz, in VV and HH polarizations, and over an angular range from 20⬚ to 70⬚. SCATTERING FROM A VEGETATED AREA A vegetation layer may be viewed as an inhomogeneous layer without a top boundary. In general, the scatterers within the layer are collections of leaves, stems, branches, and trunks. At frequencies around 8.6 GHz or higher, leaves are usually the dominant scatterer and attenuator beyond 20⬚ off the vertical in the backscattering direction. To illustrate volume

V

exp(− jk|rr − r |) E in dV |rr − r |

(66)

20 IEM (vv) Backscattering coefficient (dB)

10

IEM (vv) Data (vv)

0

Data (vv)

–10 – 20 L = 8.4 cm. σ = 1.12 cm

– 30

f = 1.5 GHz,

r

= 15.34 - j3.66

ρ (r) = exp[–(r/L)1.2]

– 40 – 50

0

10

20

30

40

50

60

70

80

(a) 20 IEM(vv) 10 Backscattering coefficient (dB)

Comparisons with Soil Measurements

where k is the wave number in air, ⑀r is the relative permittivity of the scatterer (leaf), Ein is the field inside the scatterer, and integration is over the volume V of the scatterer defined in terms of the prime variables. Clearly the scattered field can be found if the field inside the scatterer is known, and

Figure 9. Effect of change in the surface dielectric constant on backscattering. Calculation uses k ⫽ 0.125, kL ⫽ 1.4, and 1.5 power correlation function. An increase in surface dielectric constant causes backscattering coefficients to increase for both vertical and horizontal polarizations and a faster angular drop-off for horizontal polarization.

that the current model approaches the Kirchhoff model when kL gets large. In general, the backscattering coefficient drops off faster as kL increases. With the choice of 1.5-power correlation function the angular trends are mostly linear. Dependence on Dielectric Constant. Finally we illustrate the dependence of the backscattering coefficient on the dielectric constant of the surface. For simplicity the term dielectric constant will always refer to its value relative to vacuum. It is generally expected that the level of the backscattering coefficient increases with an increase in the dielectric constant and that its effect on the angular trend is negligible. This is true for vertically polarized wave. For a horizontally polarized wave, both the level and the angular trend are affected. As shown in Fig. 9, the horizontally polarized coefficient drops off faster with the incident angle as the surface dielectric constant becomes larger causing the spacing between the VV and HH polarizations to widen.

k2 (r − 1) 4π

IEM(vv) Data(vv)

0

Data(vv)

–10 – 20 L = 8.4 cm. σ = 1.2 cm

– 30

f = 4.75 GHz, – 40 – 50

r

= 15.23 - j2.12

ρ (r) = exp[–(r/L)1.2]

0

10

20

30 40 50 Incident angle (deg)

60

70

80

(b)

Figure 10. Comparison between the surface model and backscattering measurements from a known soil surface. Results indicate good agreements between data and model in frequency, incidence angle, and polarization.


MICROWAVE PROPAGATION AND SCATTERING FOR REMOTE SENSING

to facilitate integration, we need to express the integration variables in the local frame (the principal frame of the scatterer) and then relate the local frame to the reference frame. This setup also allows arbitrary orientation of the scatterer relative to the reference frame, since the angular separations between the two frames can be varied. Furthermore, the leaves of a given species will have an orientation distribution, and we need to average over this distribution in order to find the scattering coefficient.

Scattering Amplitudes of Scatterers

To derive the scattering amplitudes for leaf-type scatterers, we need to allow the leaves to be arbitrarily oriented, and we want an estimate of the field inside the leaf. Three leaf shapes are considered: elliptic disk, circular disk, and needle. Due to its lack of symmetry, the orientation of an elliptic disk is specified by three angles, while the circular- and needle-shaped leaves are specified by only two.

Relation between Reference and Local Frames. To relate a reference frame (x, y, z) to a local frame (x″, y″, z″), which is the principal frame of the scatterer, for a symmetric scatterer such as a needle or a circular disk, we need to specify a polar angle β and an azimuthal angle α of rotation between the coordinates. Let z″ correspond to the normal vector of the disk or to the axial axis of the needle. From Fig. 11 the two angles between the coordinate systems are defined by first rotating around z″ by α and then around y″ by β, yielding

$$\begin{pmatrix}x''\\ y''\\ z''\end{pmatrix}=\begin{pmatrix}\cos\beta\cos\alpha & \cos\beta\sin\alpha & -\sin\beta\\ -\sin\alpha & \cos\alpha & 0\\ \cos\alpha\sin\beta & \sin\alpha\sin\beta & \cos\beta\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix}\equiv\overline{U}\begin{pmatrix}x\\ y\\ z\end{pmatrix} \qquad (67)$$

For an elliptic disk, another rotation about the z″ axis by an angle γ, defined by

$$\begin{pmatrix}x'''\\ y'''\\ z'''\end{pmatrix}=\begin{pmatrix}\cos\gamma & \sin\gamma & 0\\ -\sin\gamma & \cos\gamma & 0\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}x''\\ y''\\ z''\end{pmatrix} \qquad (68)$$

is needed, yielding the final relation after redefining the coordinates as

$$\begin{pmatrix}x''\\ y''\\ z''\end{pmatrix}=\begin{pmatrix}\cos\alpha\cos\beta\cos\gamma-\sin\gamma\sin\alpha & \sin\alpha\cos\beta\cos\gamma+\cos\alpha\sin\gamma & -\sin\beta\cos\gamma\\ -\cos\alpha\cos\beta\sin\gamma-\sin\alpha\cos\gamma & -\sin\alpha\cos\beta\sin\gamma+\cos\alpha\cos\gamma & \sin\beta\sin\gamma\\ \cos\alpha\sin\beta & \sin\alpha\sin\beta & \cos\beta\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix}\equiv\overline{U}_e\begin{pmatrix}x\\ y\\ z\end{pmatrix} \qquad (69)$$

where Ū and Ūe are unitary matrices; their inverses are equal to their transposes. Clearly Ū is the special case of Ūe with γ = 0.

Estimate of the Field Inside a Scatterer. For an elliptic disk, the field inside the scatterer in the local frame is related to the incident field, Ēi = Ē0 exp(−jk̄·r̄), where k̄ = îk, î is the unit vector in the incident direction, and r̄ is the displacement vector in the reference frame (17):

$$\overline{E}_{\rm local}=\begin{pmatrix}1/a_1 & 0 & 0\\ 0 & 1/a_2 & 0\\ 0 & 0 & 1/a_3\end{pmatrix}\overline{U}_e\cdot\overline{E}_i \qquad (70)$$

Converting to the reference frame, we have

$$\overline{E}_{\rm in}=\overline{U}_e^{-1}\cdot\overline{E}_{\rm local}\equiv\overline{A}_e\cdot\overline{E}_i \qquad (71)$$

where Āe is the matrix of transformation relating the incident field in the reference frame to the internal field in the same frame. In Eq. (70) a vector that appears with a matrix is understood to be a column matrix, and

$$a_1=1+(\epsilon_r-1)g_1,\quad a_2=1+(\epsilon_r-1)g_2,\quad a_3=1+(\epsilon_r-1)g_3 \qquad (72)$$

where the gi are the demagnetizing factors, which vary with the shape of the scatterer. For an elliptic disk-shaped leaf, we have (18)

$$g_1=\frac{c}{a}\,\frac{(1-e^2)^{1/2}}{e^2}\left[K\!\left(e,\frac{\pi}{2}\right)-E\!\left(e,\frac{\pi}{2}\right)\right],\quad g_2=\frac{c}{a}\,\frac{E(e,\pi/2)-(1-e^2)K(e,\pi/2)}{e^2(1-e^2)^{1/2}},\quad g_3=1-\frac{c}{a}\,\frac{E(e,\pi/2)}{(1-e^2)^{1/2}} \qquad (73)$$

Figure 11. Illustration of the principal frame (x″, y″, z″) of the scatterer relative to the reference frame (x, y, z), showing the rotation angles α and β.

In the above, a > b ≫ c are the semi-axes of the elliptic disk and

$$e=[1-(b/a)^2]^{1/2}$$
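As a numerical check on the rotation matrices of Eqs. (67)–(69), Ū and Ūe can be constructed directly and verified to be orthogonal (unitary real) and to coincide when γ = 0. A minimal Python sketch — the angle values are arbitrary test inputs, not taken from the text:

```python
import math

def U(alpha, beta):
    """Eq. (67): rotation about z by alpha, then about y by beta."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    return [[cb * ca, cb * sa, -sb],
            [-sa,     ca,      0.0],
            [ca * sb, sa * sb,  cb]]

def Ue(alpha, beta, gamma):
    """Eq. (69): the gamma rotation of Eq. (68) applied after U of Eq. (67)."""
    cg, sg = math.cos(gamma), math.sin(gamma)
    G = [[cg, sg, 0.0], [-sg, cg, 0.0], [0.0, 0.0, 1.0]]
    u = U(alpha, beta)
    # matrix product G @ U
    return [[sum(G[i][k] * u[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def is_orthogonal(M, tol=1e-12):
    # rows of a rotation matrix are orthonormal: M @ M^T = I
    for i in range(3):
        for j in range(3):
            dot = sum(M[i][k] * M[j][k] for k in range(3))
            if abs(dot - (1.0 if i == j else 0.0)) > tol:
                return False
    return True

a, b, g = 0.7, 1.1, 0.4           # arbitrary test angles (radians)
assert is_orthogonal(U(a, b))
assert is_orthogonal(Ue(a, b, g))
# Ue reduces to U when gamma = 0, as stated after Eq. (69)
assert all(abs(Ue(a, b, 0.0)[i][j] - U(a, b)[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

Orthogonality is exactly the "inverse equals transpose" property the text uses when inverting Ūe in Eq. (71).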


and the elliptic integrals of the first and second kinds are given by

$$K\!\left(e,\frac{\pi}{2}\right)=\int_0^{\pi/2}\frac{d\xi}{\sqrt{1-e^2\sin^2\xi}},\qquad E\!\left(e,\frac{\pi}{2}\right)=\int_0^{\pi/2}\sqrt{1-e^2\sin^2\xi}\,d\xi \qquad (74)$$
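Equation (74) can be evaluated by straightforward quadrature. A minimal Python sketch, with the sample moduli chosen arbitrarily for illustration:

```python
import math

def K_E(e, n=4000):
    """Complete elliptic integrals K(e, pi/2) and E(e, pi/2) of Eq. (74),
    by trapezoidal quadrature; the argument e is the modulus, not e**2."""
    h = (math.pi / 2) / n
    K = E = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0          # trapezoid end weights
        root = math.sqrt(1.0 - (e * math.sin(i * h)) ** 2)
        K += w * h / root
        E += w * h * root
    return K, E

K0, E0 = K_E(0.0)
assert abs(K0 - math.pi / 2) < 1e-9 and abs(E0 - math.pi / 2) < 1e-9
assert K_E(0.9)[0] > K_E(0.5)[0] > math.pi / 2   # K grows with eccentricity
assert 1.0 < K_E(0.9)[1] < K_E(0.5)[1] < math.pi / 2  # E shrinks toward 1
```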

For a circular disk-shaped leaf (a = b ≫ c), we should replace Āe by Āc, which is equal to Āe with γ = 0, and use the demagnetizing factors

$$g_1=g_2=\frac{1}{2(m^2-1)}\left[\frac{m^2}{\sqrt{m^2-1}}\sin^{-1}\frac{\sqrt{m^2-1}}{m}-1\right],\quad g_3=\frac{m^2}{m^2-1}\left[1-\frac{1}{\sqrt{m^2-1}}\sin^{-1}\frac{\sqrt{m^2-1}}{m}\right],\quad m=\frac{a}{c} \qquad (75)$$

For a needle-shaped leaf (a = b ≪ c), we should replace Āe by Ān, which is equal to Āe with γ = 0, with another set of demagnetizing factors:

$$g_1=g_2=\frac{m}{2}\left[m+\frac{m^2-1}{2}\ln\frac{m-1}{m+1}\right],\quad g_3=-(m^2-1)\left[\frac{m}{2}\ln\frac{m-1}{m+1}+1\right],\quad m=\left[1-\frac{a^2}{c^2}\right]^{-1/2} \qquad (76)$$

The use of Eq. (72) in Eq. (66), with the phase of the incident field k̄·r̄ replaced by (Ūe·k̄)·r̄″, allows the integration variables to be expressed in the local frame to facilitate integration.

Scattering Amplitude of an Elliptic Disk. In this section we show the expression for the scattering amplitude, with ŝ and î denoting the unit vectors in the scattered and incident directions, respectively. For a p̂s-polarized scattered field, the amplitude portion of p̂s·Ēin is p̂s·Ēin = p̂s·Āe·Ē0. From Eq. (66) we can write the p̂s-polarized scattered field component for an elliptic disk in the far field by setting |r̄ − r̄′| equal to r − ŝ·r̄′ in the phase and equal to |r̄| = r in the amplitude. Then Eq. (66) reduces to

$$\begin{aligned}\hat{p}_s\cdot\overline{E}_s(\overline{r}) &= \frac{k^2(\epsilon_r-1)}{4\pi}\int_V\frac{\exp[-jk(r-\hat{s}\cdot\overline{r}')]}{|\overline{r}|}\,(\hat{p}_s\cdot\overline{E}_{\rm in})\,dV\\ &\approx \frac{k^2(\epsilon_r-1)}{4\pi r}(\hat{p}_s\cdot\overline{A}_e\cdot\overline{E}_0)\exp(-jkr)\int_V\exp[jk(\hat{s}-\hat{i})\cdot\overline{r}']\,dV\\ &= \frac{k^2(\epsilon_r-1)}{4\pi r}(\hat{p}_s\cdot\overline{A}_e\cdot\overline{E}_0)\exp(-jkr)\,I_e\\ &= \hat{p}_s\cdot\left[k^2(\epsilon_r-1)\frac{\overline{A}_e}{4\pi}I_e\right]\cdot\overline{E}_0\,\frac{\exp(-jkr)}{r}\\ &\equiv \hat{p}_s\cdot\overline{f}_e(\overline{k}_s,\overline{k}_i)\cdot\hat{p}_i\,E_0\,\frac{\exp(-jkr)}{r}\end{aligned} \qquad (77)$$

where f̄e(k̄s, k̄i) is the scattering amplitude matrix for an elliptic disk, k̄s = kŝ, and k̄i = kî. The elements of this matrix, defined in accordance with vertical and horizontal polarizations, can be written as

$$[\hat{p}_s\cdot\overline{f}_e(\overline{k}_s,\overline{k}_i)\cdot\hat{p}_i]=\frac{k^2(\epsilon_r-1)I_e}{4\pi}\begin{pmatrix}\hat{\nu}_s\cdot\overline{A}_e\cdot\hat{\nu}_i & \hat{\nu}_s\cdot\overline{A}_e\cdot\hat{h}_i\\ \hat{h}_s\cdot\overline{A}_e\cdot\hat{\nu}_i & \hat{h}_s\cdot\overline{A}_e\cdot\hat{h}_i\end{pmatrix}\equiv\begin{pmatrix}f_{\nu\nu} & f_{\nu h}\\ f_{h\nu} & f_{hh}\end{pmatrix} \qquad (78)$$

where the vertical and horizontal polarization unit vectors in Eq. (78) are chosen to agree with the θ̂ and φ̂ unit vectors of a standard spherical coordinate system and to form an orthogonal set with ŝ:

$$\hat{s}=\hat{x}\sin\theta_s\cos\phi_s+\hat{y}\sin\theta_s\sin\phi_s+\hat{z}\cos\theta_s,\quad \hat{\nu}_s=\hat{x}\cos\theta_s\cos\phi_s+\hat{y}\cos\theta_s\sin\phi_s-\hat{z}\sin\theta_s,\quad \hat{h}_s=-\hat{x}\sin\phi_s+\hat{y}\cos\phi_s \qquad (79)$$

Similarly, the unit polarization vectors associated with the incident direction are

$$\hat{i}=\hat{x}\sin\theta_i\cos\phi_i+\hat{y}\sin\theta_i\sin\phi_i+\hat{z}\cos\theta_i,\quad \hat{\nu}_i=\hat{x}\cos\theta_i\cos\phi_i+\hat{y}\cos\theta_i\sin\phi_i-\hat{z}\sin\theta_i,\quad \hat{h}_i=-\hat{x}\sin\phi_i+\hat{y}\cos\phi_i \qquad (80)$$

and

$$I_e=\int_V\exp[jk(\hat{s}-\hat{i})\cdot\overline{r}']\,dV=4\pi abc\,\frac{J_1(q)}{q} \qquad (81)$$

where

$$q=k\sqrt{\{a[\overline{U}_e\cdot(\hat{s}-\hat{i})]_x\}^2+\{b[\overline{U}_e\cdot(\hat{s}-\hat{i})]_y\}^2}$$

Scattering Amplitude of a Circular Disk. For a circular disk-shaped leaf, the forms of Eqs. (77), (78), and (81) remain valid, but we must replace Āe by Āc, because the definitions of the demagnetizing factors g1, g2, g3 are different, and we must set a = b in Eq. (81).

Scattering Amplitude of a Needle. For a needle-shaped leaf, the forms of Eqs. (77) and (78) remain valid, but we must replace Āe by Ān because the definitions of the demagnetizing factors are different. The corresponding expression for Ie will be called In. Its integral form is the same as Eq. (81) except that it must be evaluated differently. Letting the needle length be L = 2c, we have

$$I_n=\pi a^2\int_{-L/2}^{L/2}\exp(jq_zz)\,dz=\frac{\pi a^2}{jq_z}[\exp(jq_zL/2)-\exp(-jq_zL/2)]=\frac{2\pi a^2}{q_z}\sin\frac{q_zL}{2} \qquad (82)$$
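For any ellipsoid the three demagnetizing factors must sum to unity, which gives a quick numerical sanity check on the circular-disk and needle expressions of Eqs. (75) and (76). A sketch — the aspect ratios are illustrative values, not from the text:

```python
import math

def g_circular_disk(m):
    """Eq. (75): demagnetizing factors of a circular disk, m = a/c > 1."""
    r = math.sqrt(m * m - 1.0)
    s = math.asin(r / m)
    g1 = (m * m * s / r - 1.0) / (2.0 * (m * m - 1.0))
    g3 = m * m / (m * m - 1.0) * (1.0 - s / r)
    return g1, g1, g3

def g_needle(a_over_c):
    """Eq. (76): demagnetizing factors of a needle, m = [1-(a/c)^2]^(-1/2)."""
    m = 1.0 / math.sqrt(1.0 - a_over_c ** 2)
    ln = math.log((m - 1.0) / (m + 1.0))
    g1 = 0.5 * m * (m + 0.5 * (m * m - 1.0) * ln)
    g3 = -(m * m - 1.0) * (0.5 * m * ln + 1.0)
    return g1, g1, g3

for g in (g_circular_disk(100.0), g_needle(0.1)):
    assert abs(sum(g) - 1.0) < 1e-6       # factors always sum to 1
# limiting behavior: a thin disk pushes g3 toward 1 (normal direction),
# a long thin needle pushes g3 toward 0 (axial direction)
assert g_circular_disk(200.0)[2] > 0.98
assert g_needle(0.05)[2] < 0.01
```

The sum rule is what makes Eq. (72) reduce to a1 = a2 = a3 = 1 + (εr − 1)/3 for a sphere, where each factor is 1/3.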

Theoretical Behaviors

We consider the frequency, size, and moisture dependence of the backscattering coefficient σ⁰ of circular- and needle-shaped leaves in this section. To do so, we need an estimate of the dielectric constant of leaves as a function of frequency and moisture content. The empirical formula we use here is from Ref. 19.
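The dual-dispersion model of Ref. 19 is direct to evaluate with complex arithmetic. A sketch using the moisture relations and permittivity formula given in the surrounding text; the Mg and f values are illustrative only:

```python
def vegetation_permittivity(Mg, f):
    """Dual-dispersion vegetation permittivity (Ref. 19).

    Mg: gravimetric moisture content (0..1); f: frequency in GHz.
    Returned with the exp(+j w t) convention, i.e. a negative imaginary
    part for a lossy medium.
    """
    eps_n = 1.7 - 0.74 * Mg + 6.16 * Mg ** 2        # nondispersive residual
    v_ff = Mg * (0.55 * Mg - 0.076)                  # free-water fraction
    v_fb = 4.64 * Mg ** 2 / (1.0 + 7.36 * Mg ** 2)   # bound-water fraction
    free = 4.9 + 75.0 / (1.0 + 1j * f / 18.0) - 1j * 22.86 / f
    bound = 2.9 + 55.0 / (1.0 + (1j * f / 0.18) ** 0.5)
    return eps_n + v_ff * free + v_fb * bound

eps = vegetation_permittivity(0.5, 8.6)
assert 5.0 < eps.real < 30.0 and eps.imag < 0.0   # lossy, vegetation-like
# permittivity grows with moisture, consistent with the trend in Fig. 12
assert eps.real > vegetation_permittivity(0.1, 8.6).real
```

Python's principal-branch complex power gives the (jf/0.18)^0.5 term its conventional 45° phase.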


Permittivity of Vegetation Given GMC. When the gravimetric moisture content (GMC) is given and denoted by Mg, the nondispersive residual component of the dielectric constant is

$$\epsilon_n=1.7-0.74M_g+6.16M_g^2$$

The free-water volume fraction is

$$v_{ff}=M_g(0.55M_g-0.076)$$

while the volume fraction of the bound water is

$$v_{fb}=\frac{4.64M_g^2}{1+7.36M_g^2}$$

With the above quantities known, the permittivity of vegetation is given as a function of the frequency f in GHz by

$$\epsilon(M_g)=\epsilon_n+v_{ff}\left[4.9+\frac{75}{1+jf/18}-j\frac{22.86}{f}\right]+v_{fb}\left[2.9+\frac{55}{1+(jf/0.18)^{0.5}}\right] \qquad (83)$$

Figure 12. Dependence of the backscattering coefficient at 8.6 GHz on the gravimetric moisture content of a circular disk-shaped leaf, for Mg = 0.1, 0.3, and 0.5. Leaf thickness 0.01 cm and radius 1.5 cm. Results indicate a negligible difference between vertical and horizontal polarizations and a monotonic increase of σ⁰ with moisture.

Dependence on Moisture Content, Size, and Frequency. In Fig. 12 we see that σ⁰ increases over all incident angles as the moisture content of the circular leaves increases from 0.1 to 0.5, for both vertical and horizontal polarization. In general, there is very little difference between horizontal and vertical polarization, because we assumed a random orientation distribution for the leaves. Similar trends are expected for needle-shaped leaves. In Fig. 13(a) we show the circular-leaf size (radius) dependence at 8.6 GHz and 40° incidence, at a dielectric constant of 14.9 − j2.5. There is a steep rise over small sizes, reminiscent of the Rayleigh region, followed by an oscillatory behavior indicating the resonant region. Finally, saturation occurs in the high-frequency region, where the length or radius of the scatterer exceeds one wavelength. In the Rayleigh region vertical polarization is smaller than horizontal, while the reverse is true in the larger-size region. A similar plot versus the length of a needle-shaped leaf is shown in Fig. 13(b), where we see a similar trend without oscillations and a higher horizontal than vertical polarization level for lengths exceeding 2 cm. Similar plots of the extinction cross sections show a monotonic increase with the size of the disk in Fig. 14(a) and the length of the needle in Fig. 14(b). The increase is much faster for the disk than for the needle. For the disk-shaped leaf, horizontal polarization experiences more attenuation than vertical, and the roles of the polarizations reverse for the needle-shaped leaf. In Fig. 15 we show the frequency dependence of the backscattering coefficient for the two leaf types. Both horizontal and vertical polarization increase with frequency, and the increase is faster for the needle-shaped leaf.

Comparisons with Measurements

The first volume scattering medium considered is a soybean canopy. It is modeled as a half-space of randomly oriented,

Figure 13. Dependence of the backscattering coefficient at 8.6 GHz and 40° incidence on (a) the radius of a randomly oriented circular disk-shaped leaf of thickness 0.01 cm and εr = 14.9 − j2.5, and (b) the length of a randomly oriented needle of radius 0.12 cm and εr = 8.36 − j3.12. There is a Rayleigh region for small sizes and saturation behavior when the length or radius of the scatterer exceeds a wavelength.


disk-shaped leaves. Figure 16 shows the comparison between Eq. (56) and the data from Ref. 20. The agreement between the model and the data is very good. In Fig. 17 we show another comparison of Eq. (56), with data from deciduous trees. Again very good agreement is obtained. Since the leaves of soybeans and trees are clearly different in shape, it follows that while shape should make a difference in scattering, its effect becomes negligible when we consider random distributions. Thus the volume scattering model is applicable to disk-shaped leaves regardless of their exact shape whenever the leaf orientation distribution is very wide. For needle-shaped vegetation we show in Fig. 18 a comparison with coniferous vegetation. Very good agreement is obtained between Eq. (56) and the data reported in Ref. 21.

Figure 14. Variation of the extinction cross section at 8.6 GHz and 40° incidence with (a) the radius of a randomly oriented circular disk-shaped leaf of thickness 0.01 cm and εr = 14.9 − j2.5, and (b) the length of a randomly oriented needle of radius 0.12 cm and εr = 8.36 − j3.12. For circular disk-shaped leaves the extinction cross section is higher for horizontal polarization; for needle-shaped leaves vertical polarization is higher.

SCATTERING FROM SNOW-COVERED GROUND

A snow medium consists of a dense population of wet or dry ice particles in air, with a volume fraction usually between 0.1 and 0.4. Within a distance of a centimeter there are several needlelike ice particles that are randomly oriented. Because of this random orientation, spherical particles with a radius of about 0.5 mm have been used to model snow. Due to metamorphism, an actual snow layer may have a grain size that increases with depth. It is clear that snow is a dense medium, both spatially and electrically, in the microwave region. Recall that the classical radiative transfer formulation is for sparse media and that the phase function is the product of the average of the magnitude squared of the scattering amplitude ⟨|S|²⟩ of a single scatterer and the number density n0. This definition of the phase function is applicable to sparse media where independent scattering occurs. For snow, the scatterers may scatter as a group in some

Figure 15. Vertically and horizontally polarized backscattering coefficients from a volume of randomly oriented circular disks (VVc, HHc) and needles (VVn, HHn), plotted as a function of frequency at a moisture content of 0.4. Disk thickness 0.01 cm and radius 1.5 cm; needle radius 0.17 cm and length 1.67 cm. The level of backscattering is higher for circular disk-shaped leaves, but backscattering increases faster with frequency for needle-shaped leaves.

Figure 16. Comparison between the volume scattering model and measurements at 4.25 GHz from a soybean canopy. Leaf thickness 0.02 cm, radius 1.5 cm, and εr = 29.1 − j6.1. [From (20).]


correlated manner, and near-field interaction may have to be included. For this reason we need an effective number density for correlated scatterers to account for phase coherence, and a modified scattering amplitude to include near-field interaction (4). The effective number density of scatterers is smaller than the actual number, because several scatterers act coherently as one scatterer. This effect can be very significant under large-volume-fraction conditions, or when only a few scatterers lie within the distance of one wavelength. Because of random orientation, when too many scatterers lie within a wavelength, the coherence among scatterers can be destroyed. Thus the most significant correction for dense media is the replacement of the number density by an effective number density.

Figure 17. Comparison between the volume scattering model given by Eq. (56) and measurements at 8.6 GHz from deciduous trees in Kansas. Leaf thickness 0.01 cm, radius 1.5 cm, and εr = 14.9 − j4.9. Both data and model indicate a negligible difference between horizontal and vertical polarizations in backscattering from trees.

An Effective Number Density

In Ref. 4, an effective number density neff was derived for spherical scatterers, under the assumption that the positions of the scatterers are Gaussian correlated. It has the form

$$n_{\rm eff}=\frac{1-e^{-k_{si}^2\sigma^2}}{d^3}+\frac{e^{-k_{si}^2\sigma^2}}{d^3}\sum_{m=1}^{\infty}\frac{(k_{si}^2\sigma^2)^m}{m!}\,a(k_x)\,a(k_y)\,a(k_z) \qquad (84)$$

where

$$a(k_r)=\frac{\sqrt{\pi}\,L}{2\sqrt{m}\,d}\exp\!\left(-\frac{k_r^2L^2}{4m}\right){\rm Re}\,{\rm erf}\!\left(\frac{md}{L}+\frac{jk_rL}{2\sqrt{m}}\right)$$

$$\overline{k}_s=k(\hat{x}\sin\theta_s\cos\phi_s+\hat{y}\sin\theta_s\sin\phi_s+\hat{z}\cos\theta_s)$$

$$\overline{k}_i=k(\hat{x}\sin\theta_i\cos\phi_i+\hat{y}\sin\theta_i\sin\phi_i+\hat{z}\cos\theta_i)$$

$$\overline{k}_s-\overline{k}_i=\hat{x}k_x+\hat{y}k_y+\hat{z}k_z\equiv\overline{k}_{si}$$

and σ² is the variance of a scatterer's displacement from its mean position, L is the correlation length among scatterer positions, and d is the average spacing between adjacent scatterers. For spherical scatterers, Ref. 4 has extended the Mie phase function to include the near-field interaction by not invoking the far-field approximation. Since the content of the Mie phase function is complex but well documented in the literature, the reader is referred to Refs. 4 and 16.

Figure 18. Comparison of the model given by Eq. (56) with coniferous tree data at f = 9.9 GHz. Length 1.67 cm, radius 0.17 cm, and εr = 8.36 − j3.12. [From (21).] The agreement validates the model for coniferous vegetation.

Figure 19. Comparison between the layer model defined by Eqs. (56), (58), (59), and (61) and snow data at 5.3 GHz reported in Ref. 22. The notations Vvv, Vhh stand for total volume scattering, Svv, Shh for total surface scattering, and VV, HH for total scattering by the layer, for vertical and horizontal polarizations respectively; Dav, Dah denote data. The relative importance of surface and volume scattering is shown.


Figure 20. Comparison between the layer model defined by Eqs. (56), (58), (59), and (61) and snow data at 9.5 GHz reported in Ref. 22. The notations Vvv, Vhh stand for total volume scattering, Svv, Shh for total surface scattering, and VV, HH for total scattering by the layer; Dav, Dah denote data. The relative importance of surface and volume scattering is shown.

Comparison with Measurements

In this section we show an application of the layer model defined by Eqs. (56), (58), (59), and (61), and the relative contributions of the surface and volume scattering terms to the total backscattering from a snow-covered irregular ground surface. Most of the model parameters have been estimated in Ref. 22, so very little selection is needed. The rms heights of the snow–air boundary and the snow–ground boundary are 0.45 cm and 0.32 cm; the correlation lengths of the snow and ground surfaces are chosen to be 0.7 cm and 1.1 cm; and the snow and ground permittivities are 1.97 − j0.007 and 4.7. We use an exponential correlation function for both surfaces. Within the snow medium, the ice-particle radius and permittivity are 0.014 cm and 3.15, the snow density is 0.48 g/cm³, and the snow depth is 60 cm. Using these values, we compute backscattering at 5.3 GHz in Fig. 19 and at 9.5 GHz in Fig. 20. The results are in very good agreement with the dry-snow data, which are available only for vertical polarization. Figure 19 indicates that the surface scattering contribution dominates, but that volume scattering from the snow makes a significant contribution to the total. This is more so at 9.5 GHz, where volume scattering accounts for about a 2 dB difference.

BIBLIOGRAPHY

1. A. K. Fung, Microwave Scattering and Emission Models and Their Applications, Norwood, MA: Artech House, 1994.
2. M. A. Karam et al., Microwave scattering model for layered vegetation, IEEE Trans. Geosci. Remote Sens., 30: 767–784, 1992.
3. A. K. Fung et al., Dense medium phase and amplitude correction theory for spatially and electrically dense media, Proc. IGARSS '95, Vol. 2, 1995, pp. 1336–1338.
4. H. T. Chuah et al., A phase matrix for a dense discrete random medium: Evaluation of volume scattering coefficient, IEEE Trans. Geosci. Remote Sens., 34: 1137–1143, 1996.


5. H. T. Chuah et al., Radar backscatter from a dense discrete random medium, IEEE Trans. Geosci. Remote Sens., 35: 892–900, 1997.
6. F. T. Ulaby, R. K. Moore, and A. K. Fung, Microwave Remote Sensing, Vol. 1, Norwood, MA: Artech House, 1981.
7. S. Chandrasekhar, Radiative Transfer, New York: Dover, 1960.
8. A. Ishimaru, Wave Propagation and Scattering in Random Media, Vol. 1, New York: Academic Press, 1978, pp. 30–33, 157–165.
9. L. Tsang and J. A. Kong, Thermal microwave emission from half space random media, Radio Sci., 11: 599–609, 1976.
10. A. Ishimaru and R. L. Cheung, Multiple scattering effects on wave propagation due to rain, Ann. Telecommun., 35: 373–378, 1980.
11. H. C. Van de Hulst, Light Scattering by Small Particles, New York: Wiley, 1957.
12. A. K. Fung and H. J. Eom, Multiple scattering and depolarization by a randomly rough Kirchhoff surface, IEEE Trans. Antennas Propag., 29: 463–471, 1981.
13. A. K. Fung and M. F. Chen, Scattering from a Rayleigh layer with an irregular interface, Radio Sci., 16: 1337–1347, 1981.
14. A. K. Fung and H. J. Eom, A theory of wave scattering from an inhomogeneous layer with an irregular interface, IEEE Trans. Antennas Propag., 29: 899–910, 1981.
15. C. Y. Hsieh et al., A further study of the IEM surface scattering model, IEEE Trans. Geosci. Remote Sens., 35: 901–909, 1997.
16. F. T. Ulaby, R. K. Moore, and A. K. Fung, Microwave Remote Sensing: Active and Passive, Vol. 3, Reading, MA: Addison-Wesley, 1986, App. E8, p. 2103.
17. J. A. Stratton, Electromagnetic Theory, New York: McGraw-Hill, 1941.
18. M. A. Karam and A. K. Fung, Leaf-shape effects in electromagnetic wave scattering from vegetation, IEEE Trans. Geosci. Remote Sens., 27: 1989.
19. F. T. Ulaby and M. A. El-Rayes, Microwave dielectric spectrum of vegetation, Part II: Dual dispersion model, IEEE Trans. Geosci. Remote Sens., 25: 550–557, 1987.
20. A. K. Fung and H. J. Eom, A scatter model for vegetation up to Ku-band, Remote Sens. Environ., 15: 185–200, 1984.
21. H. Hirosawa et al., Measurement of microwave backscatter from trees, Int. Colloq. on Spectral Signatures of Objects in Remote Sensing, Les Arcs, France, 1985.
22. J. R. Kendra, K. Sarabandi, and F. T. Ulaby, Radar measurements of snow: Experiment and analysis, IEEE Trans. Geosci. Remote Sens., 36: 864–879, 1998.
23. Y. Oh, K. Sarabandi, and F. T. Ulaby, An empirical model and an inversion technique for radar scattering from bare soil surfaces, IEEE Trans. Geosci. Remote Sens., 30: 370–381, 1992.

Adrian K. Fung
University of Texas at Arlington

Wiley Encyclopedia of Electrical and Electronics Engineering

Microwave Remote Sensing

Standard Article. Richard K. Moore, The University of Kansas, Lawrence, KS. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3609. Article Online Posting Date: December 27, 1999. Abstract | Full Text: HTML PDF (436K)

The sections in this article are: Radiometers; Radar Scattering; Radar Scatterometers; Radar Altimeters; Ground-Penetrating Radars; Imaging Radars; Real-Aperture Radars; Synthetic-Aperture Radars.


MICROWAVE REMOTE SENSING THEORY

Microwave remote sensing of the earth has advantages over other remote sensing techniques in that microwaves can penetrate clouds and provide day-and-night coverage. Recent advances in microwave remote sensing measurements include synthetic aperture radar (SAR), imaging radar, interferometric SAR, spotlight SAR, and circular SAR for active remote sensing, and polarimetric radiometry and SAR for passive remote sensing. The emphasis of this article is on how microwaves interact with geophysical terrain such as snow, ice, soils, forests, vegetation, rocky terrain, ocean, and sea surface. The scattering effects of such media contribute to microwave measurements. The scattering effects can be divided into surface scattering and volume scattering. This article describes the analytic and numerical approaches for treating such effects. Microwave remote sensing is a broad subject. We refer the reader to other articles in this encyclopedia for measurement techniques of antennas, radars, and radiometers, signal and image processing techniques, the molecular theory of microwave radiation of gases, etc.

The scattering effects of geophysical terrain can be characterized by random rough surface scattering and by volume scattering from inhomogeneities of the medium. In rough surface scattering, the rough surface has many peaks and valleys, and the height profile can be described by random processes (1–3). In volume scattering, there are many particles, with random positions, that interact with microwaves. Such volume scattering effects are described by random distributions of wave scatterers (2,4–6). This article studies wave scattering by random rough surfaces and random discrete scatterers and their applications to microwave interaction with geophysical media in the context of microwave remote sensing. At microwave frequencies, the size of the scatterers and the rough surface heights in a geophysical terrain are comparable to microwave wavelengths. Thus the use of the wave approach, based on solutions of Maxwell's equations, is essential. First, we review the basic principles of microwave interaction in active and passive remote sensing. Next, we describe vector radiative transfer theory (2,7), which treats volume scattering, and the small perturbation method for treating rough surface scattering. With the advent of modern computers and the development of computational methods, recent research in scattering problems emphasizes Monte Carlo simulations of solutions of Maxwell's equations. These consist of generating samples, or realizations, of rough surfaces and random discrete scatterers and then using numerical methods to solve Maxwell's equations for such boundary value problems. In the final section, we describe the results of such approaches.

BASICS OF MICROWAVE REMOTE SENSING

Active Remote Sensing

We first consider the radar equation for scattering by a conglomeration of scatterers (Fig. 1). Consider a volume V containing a random distribution of particles. The volume is illuminated by a transmitter in the direction k̂i, where k̂i is the unit vector in the direction of incident wave propagation. The scattered wave in the direction k̂s is received by the receiver. Consider a differential volume dV containing N0 = n0 dV number of particles, where n0 is the number of particles per unit volume.

Figure 1. Scattering by a conglomeration of scatterers. The scattering geometry for remote sensing, showing both the transmitter (at distance Ri from dV) and the receiver (at distance Rr).

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

The Poynting vector S̄i incident on dV is

$$\overline{S}_i=\frac{G_t(\hat{k}_i)P_t}{4\pi R_i^2} \qquad (1)$$

where Pt is the power transmitted by the transmitter, Gt(k̂i) is the gain of the transmitter in the direction k̂i, and Ri is the distance between the transmitter and dV. Let σd^(N0)(k̂s, k̂i) denote the differential scattering cross section of the N0 particles in dV. The physical size of dV is chosen so that

$$\sigma_d^{(N_0)}(\hat{k}_s,\hat{k}_i)=p(\hat{k}_s,\hat{k}_i)\,dV \qquad (2)$$

which means that σd^(N0) is proportional to dV and p(k̂s, k̂i) is the differential cross section per unit volume. The measured power at the receiver due to dV is

$$dP=S_i\,\frac{\sigma_d^{(N_0)}(\hat{k}_s,\hat{k}_i)\,A_r(\hat{k}_s)}{R_r^2} \qquad (3)$$

where Rr is the distance between dV and the receiver, and Ar(k̂s) is the effective area of the receiving antenna,

$$A_r(\hat{k}_s)=\frac{G_r(\hat{k}_s)\lambda^2}{4\pi} \qquad (4)$$

where Gr(k̂s) is the gain of the receiver in the direction k̂s. Putting together Eqs. (1) to (4) and integrating over the volume gives the received power Pr as

$$P_r=P_t\int dV\,\frac{\lambda^2G_t(\hat{k}_i)G_r(\hat{k}_s)}{(4\pi)^2R_i^2R_r^2}\,p(\hat{k}_s,\hat{k}_i) \qquad (5)$$

Equation (5) is the radar equation for bistatic scattering by a volume of scatterers. For monostatic radar, the scattered direction is opposite to the incident direction, k̂s = −k̂i, and for Rr = Ri = R we have

$$P_r=P_t\int dV\,\frac{\lambda^2[G_t(\hat{k}_i)]^2}{(4\pi)^2R^4}\,p(-\hat{k}_i,\hat{k}_i) \qquad (6)$$

Equations (5) and (6) are the radar equation for a conglomeration of scatterers. In independent scattering, we assume that the scattering cross sections of the particles are additive. For the case that the particles scatter independently, and assuming that the N0 particles are identical,

$$\sigma_d^{(N_0)}(\hat{k}_s,\hat{k}_i)=N_0\,\sigma_d(\hat{k}_s,\hat{k}_i) \qquad (7)$$

where σd(k̂s, k̂i) is the differential cross section of one particle. From Eqs. (2) and (7) and from N0 = n0 dV, we have

$$p(\hat{k}_s,\hat{k}_i)=n_0\,\sigma_d(\hat{k}_s,\hat{k}_i) \qquad (8)$$

For scattering by a small sphere of relative permittivity εr and radius a, the differential cross section is

$$\sigma_d=k^4a^6\left|\frac{\epsilon_r-1}{\epsilon_r+2}\right|^2$$

Let f = (4π/3)a³n0 be the fractional volume occupied by the particles; then, for a volume V,

$$N_0=\frac{Vf}{(4\pi/3)a^3}\quad\text{and}\quad N_0\,\sigma_d=\frac{3}{4\pi}(Vfk)(ka)^3\left|\frac{\epsilon_r-1}{\epsilon_r+2}\right|^2$$

Similarly, we define

κs = scattering cross section per unit volume (9)
κa = absorption cross section per unit volume (10)
κe = extinction cross section per unit volume (11)

Then

$$\kappa_s=\int d\Omega_s\,p(\hat{k}_s,\hat{k}_i) \qquad (12)$$

where the integration is over the 4π scattered directions. Using independent scattering, we have

$$\kappa_s=n_0\int_{4\pi}d\Omega_s\,\sigma_d(\hat{k}_s,\hat{k}_i)=n_0\sigma_s \qquad (13)$$

$$\kappa_a=n_0\sigma_a \qquad (14)$$

For a spherical particle with radius a and relative permittivity εr,

$$\sigma_a=v_0\,k\,\epsilon_r''\left|\frac{3}{\epsilon_r+2}\right|^2$$

where εr″ is the imaginary part of εr and v0 is the particle volume. The extinction is the sum of scattering and absorption:

$$\kappa_e=\kappa_s+\kappa_a=n_0(\sigma_s+\sigma_a) \qquad (15)$$

The parameters κs, κa, and κe are also known, respectively, as the scattering coefficient, absorption coefficient, and extinction coefficient. Consider an intensity I, with dimension of power per unit area, incident on a slab of thickness Δz and cross-sectional area A. Then the power extinguished by the scatterers is

ΔP = −(intensity)(extinction cross section per unit volume)(volume) = −I κe A Δz = ΔI · A

Hence ΔI/Δz = −κe I, giving the solution I = I0 e^(−κe s), where s is the distance traveled by the wave. Thus κe represents attenuation per unit distance due to absorption and scattering. If the attenuation is inhomogeneous, we have an attenuation factor of

$$\gamma=\int\kappa_e\,ds$$

where ds is the differential distance the wave travels. Attenuation can be included in the radar equation so that

$$P_r=P_t\int dV\,\frac{\lambda^2G_t(\hat{k}_i)G_r(\hat{k}_s)}{(4\pi)^2R_i^2R_r^2}\,p(\hat{k}_s,\hat{k}_i)\exp(-\gamma_i-\gamma_r) \qquad (16)$$

where

$$\gamma_i=\int\kappa_e\,ds \qquad (17)$$

is the attenuation from the transmitting antenna to dV and

$$\gamma_r=\int\kappa_e\,ds \qquad (18)$$

is the attenuation from dV to the receiving antenna.

Particle Size Distribution. In many cases, the particles obey a size distribution n(a), so that the number of particles per unit volume with size between a and a + da is n(a) da. Thus

$$n_0=\int_0^\infty n(a)\,da \qquad (19)$$

Within the approximation of independent scattering,

$$\kappa_s=\int_0^\infty n(a)\sigma_s(a)\,da \qquad (20)$$

where σs(a) is the scattering cross section for a particle of radius a. Also

$$\kappa_a=\int_0^\infty n(a)\sigma_a(a)\,da \qquad (21)$$

$$p(\hat{k}_s,\hat{k}_i)=\int_0^\infty n(a)\sigma_d(\hat{k}_s,\hat{k}_i;a)\,da \qquad (22)$$

For Rayleigh scattering by spheres,

$$p(\hat{k}_s,\hat{k}_i)=k^4\left|\frac{\epsilon_r-1}{\epsilon_r+2}\right|^2|\hat{k}_s\times(\hat{k}_s\times\hat{e}_i)|^2\int_0^\infty n(a)a^6\,da \qquad (23)$$

$$\kappa_s=\frac{8\pi}{3}k^4\left|\frac{\epsilon_r-1}{\epsilon_r+2}\right|^2\int_0^\infty n(a)a^6\,da \qquad (24)$$

$$\kappa_a=\frac{4\pi}{3}k\,\epsilon_p''\left|\frac{3}{\epsilon_p+2}\right|^2\int_0^\infty n(a)a^3\,da \qquad (25)$$

where εp is the relative permittivity of a particle and εp″ is its imaginary part.

Bistatic Scattering Coefficients

For active remote sensing of the surface of the earth, the radar equation is

$$\frac{P_r}{P_t}=\frac{G_t}{4\pi r_t^2}\exp(-\gamma_t)\,\sigma_A\,\frac{G_r}{4\pi r^2}\,\frac{\lambda^2}{4\pi}\exp(-\gamma_r)$$

Thus, in terms of the scattering characteristics of the surface of the earth, the quantity of interest to be calculated is σA. For the case of terrain and sea returns, the cross section is often normalized with respect to the area A that is illuminated by the radar. The bistatic scattering coefficient is defined as

$$\gamma_{\beta\alpha}(\theta_s,\varphi_s;\theta_i,\varphi_i)=\lim_{r\to\infty}\frac{4\pi r^2|E^s_\beta|^2}{|E^i_\alpha|^2\,A\cos\theta_i} \qquad (26)$$

where E^s_β denotes the β-polarized component of the scattered electric field and E^i_α is the incident field with α polarization. Given the bistatic scattering coefficient γβα, the scattering cross section is

$$\sigma_A=A\,\gamma_{\beta\alpha} \qquad (27)$$

Notice that A cos θi is the illuminated area projected onto the plane normal to the incident direction. From Fig. 2, the incident and scattered directions k̂i and k̂s can be written as follows:

$$\hat{k}_i=\sin\theta_i\cos\varphi_i\,\hat{x}+\sin\theta_i\sin\varphi_i\,\hat{y}-\cos\theta_i\,\hat{z} \qquad (28a)$$

$$\hat{k}_s=\sin\theta_s\cos\varphi_s\,\hat{x}+\sin\theta_s\sin\varphi_s\,\hat{y}+\cos\theta_s\,\hat{z} \qquad (28b)$$

[Figure 2: scattering geometry for a surface of illuminated area A, showing the incident and scattered directions k̂i and k̂s with polar angles θi, θs and azimuth φi about the ẑ axis.]

In the backscattering direction, θs = θi and φs = π + φi, and the monostatic (backscattering) coefficient is defined as

$$\sigma_{\beta\alpha}(\theta_i,\varphi_i)=\cos\theta_i\,\gamma_{\beta\alpha}(\theta_s=\theta_i,\varphi_s=\pi+\varphi_i;\theta_i,\varphi_i) \qquad (29)$$

Stokes Parameters

Consider a time-harmonic, elliptically polarized radiation field with time dependence exp(−iωt) propagating in the k̂ direction, with complex electric field given by

$$\overline{E}=E_v\hat{v}+E_h\hat{h} \qquad (30)$$

where v̂ and ĥ denote the two orthogonal polarizations, with k̂, v̂, and ĥ following the right-hand rule. Thus

$$\mathcal{E}_v(t)={\rm Re}(E_ve^{-i\omega t}) \qquad (31a)$$

$$\mathcal{E}_h(t)={\rm Re}(E_he^{-i\omega t}) \qquad (31b)$$

A typical example is rainfall where the raindrops are described by a size distribution known as the Marshall–Palmer size distribution of exponential dependence n(a) ⫽ nDe⫺움a, with nD ⫽ 8 ⫻ 106 m⫺4 and 움 ⫽ (8200/P0.21) m⫺1, and P is the precipitation rate in millimeters per hour.

x

y

φs

Figure 2. Incident and scattered directions in calculating bistatic scattering coefficients. The incident wave in the direction of kˆi impinges on the target and is scattered in the direction of kˆs.
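As an illustration of Eqs. (20) and (24), the Rayleigh scattering coefficient can be evaluated for the Marshall–Palmer distribution: for n(a) = n_D e^{−αa}, the moment ∫₀^∞ n(a)a⁶ da has the closed form n_D · 720/α⁷ (since Γ(7) = 720). The rain rate, wavelength, and dielectric factor below are assumed example values:

```python
import math

# Marshall-Palmer drop-size distribution n(a) = nD * exp(-alpha * a),
# nD = 8e6 m^-4, alpha = 8200 / P**0.21 m^-1 (P in mm/h).
nD = 8e6
P_rate = 5.0                        # precipitation rate, mm/h (example)
alpha = 8200.0 / P_rate**0.21

k = 2 * math.pi / 0.03              # wavenumber at lambda = 3 cm (example)
Ky = 0.93                           # |(er-1)/(er+2)|^2, typical for water (assumed)

# Closed-form moment of the exponential distribution ...
integral_exact = nD * 720.0 / alpha**7
# ... and a crude numerical check of the same integral.
da = 1e-6
integral_num = sum(nD * math.exp(-alpha * i * da) * (i * da)**6
                   for i in range(1, 20000)) * da

# Rayleigh scattering coefficient, Eq. (24)
kappa_s = (8 * math.pi / 3) * k**4 * Ky * integral_exact
```

The closed form makes the dependence on rain rate explicit: κ_s grows as α⁻⁷, i.e., strongly with P.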

E_v = E_v0 e^{iδ_v}   (32a)

E_h = E_h0 e^{iδ_h}   (32b)

There are three independent parameters that describe the elliptical polarization. We shall describe three ways of describing polarization.

(A) The three parameters are E_v0, E_h0, and the phase difference δ = δ_v − δ_h.

(B) For a general elliptically polarized wave, the ellipse may not be upright; it is tilted at an angle ψ with respect to v̂ and ĥ (Fig. 3). The second description uses E₀, χ, and ψ, where χ is the ellipticity angle and ψ is the orientation angle. Let a and b be the lengths of the semimajor and semiminor axes, respectively, and let LH and RH stand for left-hand and right-hand polarized, respectively. Let χ be defined so that

tan χ = b/a if LH   (33a)

tan χ = −b/a if RH   (33b)

Let

√(a² + b²) = E₀   (34)

Then for LH,

b = E₀ sin χ   (35a)
a = E₀ cos χ   (35b)

and for RH,

b = −E₀ sin χ   (36a)
a = E₀ cos χ   (36b)

By performing a rotation of axes, it follows that

E_v = iE₀ cos χ cos ψ − E₀ sin χ sin ψ   (37a)

E_h = iE₀ cos χ sin ψ + E₀ sin χ cos ψ   (37b)

Equations (37a) and (37b) give the relation between the polarization descriptions (A) and (B).

Figure 3. Elliptical polarization ellipse with major and minor axes. The trace of the tip of the E vector at a given point in space as a function of time for an elliptically polarized wave.

(C) The third polarization description is that of the Stokes parameters. There are four Stokes parameters:

I_v = |E_v|²/η   (38a)
I_h = |E_h|²/η   (38b)
U = (2/η) Re(E_v E_h*)   (38c)
V = (2/η) Im(E_v E_h*)   (38d)

Instead of I_v and I_h, one can also define two alternative Stokes parameters,

I = I_v + I_h   (39a)
Q = I_v − I_h   (39b)

By substituting Eqs. (32a) and (32b) into Eqs. (38a)–(38d), we have

I_v = E_v0²/η   (40a)
I_h = E_h0²/η   (40b)
U = (2/η) E_v0 E_h0 cos δ   (40c)
V = (2/η) E_v0 E_h0 sin δ   (40d)

From Eqs. (39) and (40),

I² = Q² + U² + V²   (41)

The relation in Eq. (41) means that, for elliptically polarized waves, only three of the four Stokes parameters are independent. Also,

I = E₀²/η   (42)

Q = I cos 2χ cos 2ψ   (43a)
U = I cos 2χ sin 2ψ   (43b)
V = I sin 2χ   (43c)
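Equations (43a)–(43c) and the identity of Eq. (41) can be checked directly; a minimal sketch with example values:

```python
import math

def stokes_from_ellipse(I, chi, psi):
    """Map total intensity I, ellipticity angle chi, and orientation
    angle psi to the Stokes parameters Q, U, V of Eqs. (43a)-(43c)."""
    Q = I * math.cos(2 * chi) * math.cos(2 * psi)
    U = I * math.cos(2 * chi) * math.sin(2 * psi)
    V = I * math.sin(2 * chi)
    return Q, U, V

I0 = 2.5
Q, U, V = stokes_from_ellipse(I0, chi=0.3, psi=1.1)
```

Since cos²2χ(cos²2ψ + sin²2ψ) + sin²2χ = 1, the point (Q, U, V) always lies on the sphere of radius I — the Poincaré-sphere picture discussed next.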

Equations (43a) to (43c) can be conveniently expressed using the Poincaré sphere (Fig. 4), with I as the radius of the sphere and Q, U, and V representing, respectively, the Cartesian axes x, y, and z. From Eqs. (43a) to (43c), it follows that 2χ is the latitude coordinate and 2ψ is the longitude coordinate; Eq. (41) also follows readily. Thus, in elliptical polarization, the polarization is represented by a point on the surface of the Poincaré sphere.

Figure 4. Poincaré sphere. The north and south poles represent left-handed and right-handed circular polarization, respectively. The spherical surface represents elliptically polarized waves, and the points inside the sphere represent partially polarized waves.

Partial Polarization. For fluctuating fields, the complex phasors E_v and E_h fluctuate. For random media scattering, E_v and E_h are measured for many pixels, and their values fluctuate from pixel to pixel. In these cases, the Stokes parameters are defined with averages taken:

I_v = ⟨|E_v|²⟩/η   (44a)
I_h = ⟨|E_h|²⟩/η   (44b)
U = (2/η) Re⟨E_v E_h*⟩   (44c)
V = (2/η) Im⟨E_v E_h*⟩   (44d)

Thus

Q² + U² + V² ≤ I²   (45)

and the polarization corresponds to a point inside the Poincaré sphere.

Passive Remote Sensing

Planck's Radiation Law. All substances at a finite temperature radiate electromagnetic energy, and this electromagnetic radiation is measured in passive remote sensing. According to quantum theory, radiation corresponds to the transition from one energy level to another. There are different kinds of transitions, including electronic, vibrational, and rotational transitions. For complicated systems of molecules with an enormous number of degrees of freedom, the spectral lines are so closely spaced that the radiation spectrum becomes effectively continuous, emitting photons of all frequencies. To derive the relation between temperature and radiated power, consider an enclosure in thermodynamic equilibrium with the radiation field it contains. The appropriate model for the radiation in the enclosure is an ideal gas of photons, which are governed by Bose–Einstein statistics. The procedure for finding the energy density spectrum of the radiation field consists of (a) finding the allowed modes of the enclosure, (b) finding the mean energy in each mode, and (c) finding the energy in a volume V and frequency interval dν.

We consider a dielectric medium of permittivity ε and dimensions a, b, and d, large enough that we may assume the fields are zero at the boundaries. We next count the modes of the medium. The mode condition is

ν² = ν_x² + ν_y² + ν_z² = (1/με) [(l/2a)² + (m/2b)² + (n/2d)²]   (46)

where l, m, and n = 0, 1, 2, .... The number of modes in a frequency interval dν can be determined using Eq. (46). Each set of l, m, and n corresponds to a specific cavity mode; thus the volume of one mode in ν space is 1/[8abd(με)^{3/2}] = 1/[8V(με)^{3/2}], with V being the physical volume of the resonator. If a quarter-hemispherical (one-octant) shell has a thickness dν and radius ν, then the number of modes contained in the shell is

N(ν) dν = (4πν² dν/8) × 8V(με)^{3/2} × 2 = 8πν² V(με)^{3/2} dν   (47)

where the factor of 2 accounts for the existence of transverse electric (TE) and transverse magnetic (TM) modes. If there are n photons in a mode with frequency ν, then the energy is E = nhν. Using the Boltzmann probability distribution, the probability of a state with energy E is

P(E) = B e^{−E/KT}   (48)

where B is a normalization constant, K is Boltzmann's constant (1.38 × 10⁻²³ J/K), and T is the temperature in kelvin. Thus the average energy ⟨E⟩ in a mode with frequency ν is

⟨E⟩ = Σ_{n=0}^∞ E P(E) / Σ_{n=0}^∞ P(E) = Σ_{n=0}^∞ nhν e^{−nhν/KT} / Σ_{n=0}^∞ e^{−nhν/KT} = hν/(e^{hν/KT} − 1)   (49)

The amount of radiation energy per unit frequency interval and per unit volume is w(ν) = N(ν)⟨E⟩/V. Hence

w(ν) = 8πhν³(με)^{3/2}/(e^{hν/KT} − 1)   (50)

To compute the radiation intensity, consider a slab of area A and infinitesimal thickness dl. Such a volume contains radiation energy

W = 8πA dl (με)^{3/2} hν³/(e^{hν/KT} − 1)   (51)

per unit frequency interval. The radiation power emerging in direction θ within solid angle dΩ is 2I cos θ A dΩ, where I is the specific intensity per polarization, and the radiation pulse lasts for a time interval dl √(με)/cos θ. Thus

W = ∫ dΩ 2AI cos θ (dl √(με)/cos θ) = 8πA I dl √(με)   (52)

Equating Eqs. (51) and (52),

I = με hν³/(e^{hν/KT} − 1)   (53)
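The average mode energy of Eq. (49) makes the Rayleigh–Jeans regime used below concrete: at microwave frequencies hν/KT ≪ 1 and ⟨E⟩ ≈ KT. A short numerical check (temperature and frequency are example values):

```python
import math

h = 6.626e-34      # Planck constant, J s
K = 1.38e-23       # Boltzmann constant, J/K

def mean_mode_energy(nu, T):
    """Average energy per mode, Eq. (49): h*nu / (exp(h*nu/KT) - 1)."""
    x = h * nu / (K * T)
    return h * nu / math.expm1(x)

T = 300.0
E_microwave = mean_mode_energy(10e9, T)   # 10 GHz: h*nu/KT ~ 1.6e-3
```

At 10 GHz and 300 K the exact mean energy is within a fraction of a percent of KT, which is why the Rayleigh–Jeans form of the next paragraph is adequate for microwave radiometry.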

In the Rayleigh–Jeans approximation, hν/KT ≪ 1. This gives, for a medium with permeability μ and permittivity ε,

I = (KT/λ²)(με/μ₀ε₀)   (54)

where λ = c/ν is the free-space wavelength. In free space,

I = KT/λ²   (55)

for each polarization. The specific intensity given by Eq. (55) has dimensions of power per unit area per unit frequency interval per unit solid angle (W·m⁻²·Hz⁻¹·sr⁻¹). The Rayleigh–Jeans law can be used at microwave frequencies.

Brightness Temperatures. Consider thermal emission from a half-space medium of permittivity ε₁. In passive remote sensing, the radiometer acts as a receiver of the specific intensity I_β emitted by the medium under observation. The specific intensity is I_β(θ₀, φ₀), where β denotes the polarization and (θ₀, φ₀) denotes the angular dependence. From Eq. (54), the specific intensity inside the medium, which is at temperature T, is

I = (KT/λ²)(ε₁/ε₀)   (56)

The specific intensity has to be transmitted through the boundary. Based on energy conservation, the received emission is

I_β(θ₀, φ₀) = (KT/λ²)[1 − r₁₀β(θ₁)]   (57)

In Eq. (57), r₁₀β(θ₁) denotes the reflectivity when the wave is incident from medium 1 onto medium 0, and θ₁ and θ₀ are related by Snell's law. The measured specific intensities are often normalized to obtain the brightness temperatures

T_Bβ(θ₀, φ₀) = I_β(θ₀, φ₀) λ²/K   (58)

From Eqs. (57) and (58), we obtain for a half-space medium

T_Bβ(θ₀, φ₀) = T[1 − r₁₀β(θ₁)]   (59)

It is convenient to define the emissivity e_β(θ₀, φ₀) as

e_β(θ₀, φ₀) = T_Bβ(θ₀, φ₀)/T   (60)

so that

e_β(θ₀, φ₀) = 1 − r₁₀β(θ₁)   (61)
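A sketch of the half-space brightness temperature of Eq. (59), with the reflectivity computed from the standard Fresnel formulas for a lossless, nonmagnetic medium (the permittivity, temperature, and angle below are assumed example values):

```python
import math, cmath

def brightness_temperature(T, eps_r, theta0, pol):
    """T_B = T * (1 - r) for a half-space of relative permittivity eps_r
    observed from free space at angle theta0; r is the power Fresnel
    reflectivity for 'h' or 'v' polarization."""
    cos0 = math.cos(theta0)
    sin0 = math.sin(theta0)
    cos1 = cmath.sqrt(eps_r - sin0**2)   # = sqrt(eps_r)*cos(theta1), Snell folded in
    if pol == 'h':
        R = (cos0 - cos1) / (cos0 + cos1)
    else:
        R = (eps_r * cos0 - cos1) / (eps_r * cos0 + cos1)
    return T * (1 - abs(R)**2)

TB_v = brightness_temperature(290.0, 3.2, math.radians(40), 'v')
TB_h = brightness_temperature(290.0, 3.2, math.radians(40), 'h')
TB_v0 = brightness_temperature(290.0, 3.2, 0.0, 'v')
TB_h0 = brightness_temperature(290.0, 3.2, 0.0, 'h')
```

At normal incidence the two polarizations coincide; at oblique incidence the vertical polarization is less reflective (its Brewster effect), so its brightness temperature is higher — the familiar polarization signature of radiometry.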

219

That the emissivity is equal to one minus the reflectivity is a result of energy conservation and reciprocity. The reflectivity r₁₀β obeys a symmetry relation that is a result of reciprocity and energy conservation:

r₁₀β(θ₁) = r₀₁β(θ₀)   (62)

where r₀₁β(θ₀) denotes the reflectivity for a wave incident from region 0 onto region 1. Thus, from Eqs. (61) and (62), the emissivity is

e_β(θ₀, φ₀) = 1 − r₀₁β(θ₀)   (63)

Kirchhoff's Law. Kirchhoff's law generalizes the concept of emissivity equal to one minus reflectivity to the case where there is bistatic scattering from rough surfaces and volume inhomogeneities:

e_β(θ_i, φ_i) = 1 − (1/4π) Σ_α ∫₀^{2π} dφ ∫₀^{π/2} dθ sin θ γ_αβ(θ, φ; θ_i, φ_i)   (64)

The equation above is a formula that calculates the emissivity from the bistatic scattering coefficients γ. It also relates active and passive remote-sensing measurements.

Emissivity of Four Stokes Parameters. In the following, we express the emissivities in terms of bistatic scattering coefficients for all four Stokes parameters. The derivation is based on the fluctuation–dissipation theorem (8):

T_Bv(ŝ₀) = T [1 − (1/4π) Σ_{β=v,h} ∫₀^{2π} dφ_s ∫₀^{π/2} dθ_s sin θ_s γ_βv(ŝ, ŝ₀b)]   (65a)

T_Bh(ŝ₀) = T [1 − (1/4π) Σ_{β=v,h} ∫₀^{2π} dφ_s ∫₀^{π/2} dθ_s sin θ_s γ_βh(ŝ, ŝ₀b)]   (65b)

U_B(ŝ₀) = T_Bv(ŝ₀) + T_Bh(ŝ₀) − 2T [1 − (1/4π) Σ_{β=v,h} ∫₀^{2π} dφ_s ∫₀^{π/2} dθ_s sin θ_s γ_βp(ŝ, ŝ₀b)]
  = (T/4π) Σ_{β=v,h} ∫₀^{2π} dφ_s ∫₀^{π/2} dθ_s sin θ_s [2γ_βp(ŝ, ŝ₀b) − γ_βv(ŝ, ŝ₀b) − γ_βh(ŝ, ŝ₀b)]   (65c)

V_B(ŝ₀) = T_Bv(ŝ₀) + T_Bh(ŝ₀) − 2T [1 − (1/4π) Σ_{β=v,h} ∫₀^{2π} dφ_s ∫₀^{π/2} dθ_s sin θ_s γ_βR(ŝ, ŝ₀b)]
  = (T/4π) Σ_{β=v,h} ∫₀^{2π} dφ_s ∫₀^{π/2} dθ_s sin θ_s [2γ_βR(ŝ, ŝ₀b) − γ_βv(ŝ, ŝ₀b) − γ_βh(ŝ, ŝ₀b)]   (65d)

In Eqs. (65), γ_βp is the bistatic scattering coefficient for an incident linearly polarized wave with polarization at an angle of 45° with respect to the vertical and horizontal polarizations, and γ_βR is the bistatic scattering coefficient for an incident wave that is right-hand circularly polarized. The measurement of the third and fourth Stokes parameters in microwave thermal emission from the ocean can be used to determine the ocean wind (29,30).
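Kirchhoff's law, Eq. (64), can be exercised numerically. The bistatic-coefficient model below is a hypothetical, direction-independent γ chosen only so that the hemispheric integral has a closed form (2π·g₀, giving e = 1 − g₀/2):

```python
import math

def emissivity(gamma, n_theta=400, n_phi=400):
    """Eq. (64): e = 1 - (1/4pi) * hemispheric integral of
    sin(theta) * gamma(theta, phi), where gamma returns the bistatic
    scattering coefficient already summed over polarizations alpha."""
    dt = (math.pi / 2) / n_theta
    dp = (2 * math.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dt       # midpoint rule in theta
        for j in range(n_phi):
            total += math.sin(theta) * gamma(theta, (j + 0.5) * dp) * dt * dp
    return 1.0 - total / (4 * math.pi)

g0 = 0.4
e = emissivity(lambda th, ph: g0)    # constant gamma: e = 1 - g0/2 = 0.8
```

With a measured (or modeled) angular γ in place of the constant, the same quadrature converts active bistatic data into a passive emissivity.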

VOLUME SCATTERING AND SURFACE SCATTERING APPROACHES

The subject of radiative transfer (7) is the analysis of radiation intensity in a medium that is able to absorb, emit, and scatter radiation. Radiative transfer theory was first initiated by Schuster in 1905 in an attempt to explain the appearance of absorption and emission lines in stellar spectra. Our interest in radiative transfer theory lies in its application to the problem of remote sensing from scattering media. In the active and passive remote sensing of low-absorption media such as snow and ice, the effects of scattering due to medium inhomogeneities play a dominant role. Two distinct theories are used to deal with the problem of incorporating scattering effects: wave theory and radiative transfer theory. In wave theory, one starts out with Maxwell's equations, introduces the scattering and absorption characteristics of the medium, and tries to find solutions for the quantities of interest, such as brightness temperatures or backscattering cross sections. We take such an approach later. Radiative transfer theory, on the other hand, starts with the radiative transfer equations that govern the propagation of energy through the scattering medium. Its advantages are that it is simple and, more importantly, that it includes multiple-scattering effects. Although transfer theory was developed on the basis of radiation intensities, it contains information about the correlation of fields (9); the mutual coherence function is related to the Fourier transform of the specific intensity. In this section, we focus on the vector radiative transfer equations, including the polarization characteristics of electromagnetic propagation. The extinction matrix, phase matrix, and emission vector for these two types of scattering media are obtained. The goal is to study the scattering and propagation characteristics of the Stokes parameters in radiative transfer theory and to use the theory to calculate bistatic scattering cross sections and brightness temperatures.

Radiative Transfer Theory

Scalar Radiative Transfer Theory

Radiative Transfer Equation. Consider a medium consisting of a large number of particles (Fig. 5). Because of scattering, we have I(r, ŝ) at all r and for all ŝ. We consider a small volume element dV = dA dl, centered at r, with dl along the direction ŝ, and we consider the differential change in the specific intensity I(ŝ) as it passes through dV. The differential change of power in direction ŝ is

dP = −I(r, ŝ) dA dΩ + I(r + dl ŝ, ŝ) dA dΩ   (66)

The volume dV contains many particles that are randomly positioned. The volume dV is much bigger than λ³, so that random phase prevails and the input–output relation of dV can be expressed in terms of intensities instead of fields. There are three kinds of changes that will occur to I(r, ŝ) in the small volume element:

1. Extinction, which contributes a negative change
2. Emission by the particles inside the volume dV, which contributes a positive change
3. Bistatic scattering from direction ŝ′ into direction ŝ, which contributes a positive change

Figure 5. Specific intensity I(ŝ) in the direction ŝ in and out of an elemental volume. Many particles are inside the elemental volume. Each particle absorbs power and scatters power, which leads to a decrease of the specific intensity in direction ŝ. At the same time, the specific intensity is enhanced by the emission of particles as well as the energy scattered into the direction ŝ from the other directions ŝ′.

For extinction, where κ_e is the extinction cross section per unit volume of space, the differential change of power from extinction is

dP^(i) = −κ_e dV I(r, ŝ) dΩ   (67)

Let ε(r, ŝ) be the emission power per unit volume of space per unit solid angle per unit frequency; then

dP^(ii) = ε(r, ŝ) dV dΩ   (68)

To derive the change due to bistatic scattering, we note that p(ŝ, ŝ′) is the bistatic scattering cross section per unit volume of space. Then, if I(r, ŝ′) is the specific intensity in direction ŝ′, and since I(r, ŝ′) exists in all directions ŝ′,

dP^(iii) = ∫_{4π} dΩ′ p(ŝ, ŝ′) I(r, ŝ′) dV dΩ   (69)

where the integration is over 4π directions. Equating the sum of Eqs. (67) to (69) to Eq. (66) gives

−I(r, ŝ) dA dΩ + I(r + dl ŝ, ŝ) dA dΩ = (dI/ds) dl dA dΩ = −κ_e dV I(r, ŝ) dΩ + ε(r, ŝ) dV dΩ + ∫_{4π} dΩ′ p(ŝ, ŝ′) I(r, ŝ′) dV dΩ   (70)

where dI/ds is the rate of change of I(r, ŝ) per unit distance in direction ŝ. Thus the radiative transfer equation with a thermal emission term at microwave frequencies is

dI(r, ŝ)/ds = −κ_e I(r, ŝ) + κ_a KT/λ² + ∫_{4π} dΩ′ p(ŝ, ŝ′) I(r, ŝ′)   (71)

If one uses independent scattering, then, as indicated previously, we have κ_e = n₀σ_t, κ_a = n₀σ_a, and p(ŝ, ŝ′) = n₀|f(ŝ, ŝ′)|².

Passive Microwave Remote Sensing of a Layered Nonscattering Medium. Passive microwave remote sensing of the earth measures the thermal emission from the atmosphere and the earth with a receiving antenna known as a radiometer (Fig. 6). Let the atmosphere and the earth be at temperatures T and T₂, respectively, and let κ_a be the absorption coefficient of the atmosphere. We make the following assumptions and observations:

1. The particles are absorptive, and absorption dominates over scattering. We thus set p(ŝ, ŝ′) = 0, so that κ_e = κ_a.
2. I(r, ŝ) = I(x, y, z, θ, φ). However, by symmetry, I is independent of x, y, and φ. Thus we let I(z, θ) be the unknown.
3. Note that I(z, θ) is defined for 0 ≤ θ ≤ 180°. We divide it into upward- and downward-going specific intensities. For 0 < θ < π/2,

I_u(z, θ) = I(z, θ)   (72a)
I_d(z, θ) = I(z, π − θ)   (72b)

Since ŝ = sin θ cos φ x̂ + sin θ sin φ ŷ + cos θ ẑ,

ŝ · ∇I(r, ŝ) = cos θ dI_u/dz for I_u,   −cos θ dI_d/dz for I_d   (73)

Thus the radiative transfer equations become

cos θ dI_u/dz = −κ_a I_u + κ_a KT/λ²   (74a)
−cos θ dI_d/dz = −κ_a I_d + κ_a KT/λ²   (74b)

The boundary conditions for the radiative transfer equations are as follows. At z = 0, the top of the atmosphere,

I_d(z = 0) = 0   (75)

At z = −d, the boundary separating the atmosphere and the earth surface,

I_u(z = −d) = r I_d(z = −d) + (KT₂/λ²)(1 − r)   (76)

where KT₂/λ² is the blackbody specific intensity from the earth. The Fresnel reflectivity r depends on θ as well as polarization, as noted previously. The solution of Eqs. (74a) and (74b) can be expressed as the sum of particular and homogeneous solutions,

I_u = KT/λ² + A e^{−κ_a z sec θ}   (77a)
I_d = KT/λ² + B e^{κ_a z sec θ}   (77b)

where A and B are constants to be determined. Imposing the boundary condition of Eq. (75) on Eq. (77b) gives

B = −KT/λ²

and, from Eq. (76),

A = −(KT/λ²) e^{−κ_a d sec θ} + r (KT/λ²) e^{−κ_a d sec θ}(1 − e^{−κ_a d sec θ}) + (KT₂/λ²)(1 − r) e^{−κ_a d sec θ}   (78)

Figure 6. Thermal emission from the atmosphere and the earth. The emission is received by the receiving antenna of the radiometer.


Putting Eq. (78) in Eq. (77) gives I_u. The specific intensity measured by the radiometer is I_u(z = 0), and

I_u(z = 0) = (KT/λ²)(1 − e^{−κ_a d sec θ}) + (KT/λ²) r e^{−κ_a d sec θ}(1 − e^{−κ_a d sec θ}) + (KT₂/λ²)(1 − r) e^{−κ_a d sec θ}   (79)

The first term is the emission of a layer of thickness d with absorption coefficient κ_a. The second term is the downward emission of the layer that is reflected by the earth; it is further attenuated as it travels upward to the radiometer. The last term is the upward emission of the earth, attenuated by the atmosphere. It is convenient to normalize the measured I to a quantity with units of temperature. The brightness temperature T_B is defined by

T_B = (measured I)/(K/λ²) = T(1 − e^{−κ_a d sec θ}) + T r e^{−κ_a d sec θ}(1 − e^{−κ_a d sec θ}) + T₂(1 − r) e^{−κ_a d sec θ}   (80)

+ T2 (1 − r)e−κ a dsecθ Vector Radiative Transfer Equation Phase Matrix of Independent Scattering. For vector electromagnetic wave scattering, the vector radiative transfer equation has to be developed for the Stokes parameters. We first treat scattering of waves by a single particle (e.g., raindrop, ice grain, leaf, etc.). The scattering property of the particle depends on its size, shape, orientation, and dielectric properties. Consider a plane wave E = (vˆ i Evi + hˆ i Ehi )eik i ·r = eˆ i E0 eik i ·r

(81)

impinging upon the particle. In spherical coordinates kˆ i = sin θi cos ϕixˆ + sin θi sin ϕi yˆ + cos θizˆ

(82)

vˆ i = cos θi cos ϕi xˆ + cos θi sin ϕi yˆ − sin θizˆ

(83)

hˆ i = − sin ϕi xˆ + cos ϕi yˆ

(84)

In the direction kˆs, the far-field scattered wave Es will be a spherical wave and is denoted by E s = (vˆ s Evs + hˆ s Ehs )eik s ·r

(85)

kˆ s = sin θs cos ϕsxˆ + sin θs sin ϕs yˆ + cos θs zˆ

(86)

vˆ s = cos θs cos ϕs xˆ + cos θs sin ϕs yˆ − sin θs zˆ

(87)

hˆ s = − sin ϕs xˆ + cos ϕs yˆ

(88)

with

The scattered field will assume the form Es =

eikr F(θs , ϕs ; θi , ϕi ) · eˆ i E0 r

(89)

where F(s, s; i, i) is the scattering function matrix. Hence Evs Evi eikr f vv (θs , ϕs ; θi , ϕi ) f vh (θs , ϕs ; θi , ϕi ) = · r f hv (θs , ϕs ; θi , ϕi ) fhh (θs , ϕs ; θi , ϕi ) Ehs Ehi (90) with f ab (θs , ϕs ; θi , ϕi ) = aˆ s · F (θs , ϕs ; θi , ϕi ) · bˆ i

(91)

and a,b ⫽ v,h. To relate the scattered Stokes parameters to the incident Stokes parameters, we define Is =

1 L(θs , ϕs ; θi , ϕi ) · I i r2

(92)

where Is and Ii are column matrices containing the scattering and incident Stokes parameters, respectively. Iv s I Is = hs (93) Us Vs Iv i I Ii = hi (94) Ui Vi and L(s, s; i, i) is the Stokes matrix. | f vh |2 | f vv |2 | f |2 | f hh |2 hv L(θs , ϕs ; θi , ϕi ) = ∗ ∗ 2Re( f vv f hv ) 2Re( f vh f hh ) ∗ ∗ 2Im( f vv f hv ) 3Im( f vh f hh ) ∗ Re( f vh f vv ) ∗ Re( f hh f hv ) ∗ ∗ + f vh f hv ) Re( f vv f hh ∗ ∗ Im( f vv f hh + f vh f hv )

∗ −Im( f vh f vv ) ∗ −Im( f hv f hh ) ∗ ∗ −Im( f vv f hh − f vh f hv ) ∗ ∗ Re( f vv f hh − f vh f hv ) (95)
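The Stokes matrix of Eq. (95) can be verified numerically: forming the Stokes parameters of the directly scattered fields must give the same result as applying L to the incident Stokes vector. The scattering amplitudes and fields below are arbitrary illustrative numbers (η = 1, and the 1/r² factor is dropped):

```python
def stokes(Ev, Eh):
    # Stokes parameters Iv, Ih, U, V as in Eqs. (38a)-(38d), eta = 1
    return [abs(Ev)**2, abs(Eh)**2,
            2 * (Ev * Eh.conjugate()).real,
            2 * (Ev * Eh.conjugate()).imag]

def stokes_matrix(fvv, fvh, fhv, fhh):
    # 4x4 matrix L of Eq. (95), built from the scattering amplitudes
    c = lambda z: z.conjugate()
    return [
        [abs(fvv)**2, abs(fvh)**2, (fvv*c(fvh)).real, -(fvv*c(fvh)).imag],
        [abs(fhv)**2, abs(fhh)**2, (fhv*c(fhh)).real, -(fhv*c(fhh)).imag],
        [2*(fvv*c(fhv)).real, 2*(fvh*c(fhh)).real,
         (fvv*c(fhh) + fvh*c(fhv)).real, -(fvv*c(fhh) - fvh*c(fhv)).imag],
        [2*(fvv*c(fhv)).imag, 2*(fvh*c(fhh)).imag,
         (fvv*c(fhh) + fvh*c(fhv)).imag, (fvv*c(fhh) - fvh*c(fhv)).real],
    ]

fvv, fvh, fhv, fhh = 0.8+0.1j, 0.05-0.2j, -0.1+0.04j, 0.7-0.3j
Ev, Eh = 1.0+0.5j, 0.3-0.8j
Is_direct = stokes(fvv*Ev + fvh*Eh, fhv*Ev + fhh*Eh)   # scatter fields, then Stokes
Ii = stokes(Ev, Eh)
L = stokes_matrix(fvv, fvh, fhv, fhh)
Is_matrix = [sum(L[i][j] * Ii[j] for j in range(4)) for i in range(4)]
```

The two routes agree for any complex amplitudes, which is exactly the statement of Eq. (92).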

Because of the incoherent addition of Stokes parameters, the phase matrix is equal to the average of the Stokes matrix over the distribution of particles in terms of size, shape, and orientation.

Extinction Matrix. For nonspherical particles, the extinction matrix is generally nondiagonal. The extinction coefficients can be identified with the attenuation of the coherent wave, which can be calculated by using Foldy's approximation (10). Let E_v and E_h be, respectively, the vertically and horizontally polarized components of the coherent wave. Then the following coupled equations hold for the coherent field along the propagation direction (θ, φ), denoted by ŝ(θ, φ) = sin θ cos φ x̂ + sin θ sin φ ŷ + cos θ ẑ:

dE_v/ds = (ik + M_vv) E_v + M_vh E_h   (96)
dE_h/ds = M_hv E_v + (ik + M_hh) E_h   (97)

where s is the distance along the direction of propagation. Solving Eqs. (96) and (97) yields two characteristic waves with defined polarizations and attenuation rates; thus, for propagation along any particular direction (θ, φ), there are only two attenuation rates. In Eqs. (96) and (97),

M_jl = (i2πn₀/k) ⟨f_jl(θ, φ; θ, φ)⟩,   j, l = v, h   (98)

where the angular brackets denote the average to be taken over the orientation and size distribution of the particles. Using the definition of the Stokes parameters I_v, I_h, U, and V as well as Eqs. (96) and (97), the differential equations can be derived for dI_v/ds, dI_h/ds, dU/ds, and dV/ds:

dI_v/ds = 2Re(M_vv) I_v + Re(M_vh) U + Im(M_vh) V   (99)
dI_h/ds = 2Re(M_hh) I_h + Re(M_hv) U − Im(M_hv) V   (100)
dU/ds = 2Re(M_hv) I_v + 2Re(M_vh) I_h + [Re(M_vv) + Re(M_hh)] U − [Im(M_vv) − Im(M_hh)] V   (101)
dV/ds = −2Im(M_hv) I_v + 2Im(M_vh) I_h + [Im(M_vv) − Im(M_hh)] U + [Re(M_vv) + Re(M_hh)] V   (102)

Identifying the extinction coefficients in radiative transfer theory as the attenuation rates in coherent wave propagation, we have the following general extinction matrix for nonspherical particles:

κ_e = [ −2Re M_vv    0            −Re M_vh                −Im M_vh
        0            −2Re M_hh    −Re M_hv                Im M_hv
        −2Re M_hv    −2Re M_vh    −(Re M_vv + Re M_hh)    Im M_vv − Im M_hh
        2Im M_hv     −2Im M_vh    −(Im M_vv − Im M_hh)    −(Re M_vv + Re M_hh) ]   (103)

Figure 7. Waves reflected by a flat surface. The reflected waves have the same phase in the specular direction.
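Equations (99)–(102) follow algebraically from the field equations (96) and (97); this can be checked exactly, without any integration, by differentiating the Stokes definitions. The wavenumber and the Foldy coefficients M_jl below are hypothetical example values (their real parts chosen negative, as for attenuation):

```python
def stokes(Ev, Eh):
    return [abs(Ev)**2, abs(Eh)**2,
            2*(Ev*Eh.conjugate()).real, 2*(Ev*Eh.conjugate()).imag]

# Coherent-field derivatives from Eqs. (96)-(97)
k = 200.0
Mvv, Mvh, Mhv, Mhh = -0.5+0.2j, 0.03-0.01j, -0.02+0.05j, -0.4-0.1j
Ev, Eh = 0.9+0.2j, -0.4+0.7j
dEv = (1j*k + Mvv)*Ev + Mvh*Eh
dEh = Mhv*Ev + (1j*k + Mhh)*Eh

# Exact Stokes derivatives implied by the field equations
dIv = 2*(Ev.conjugate()*dEv).real
dIh = 2*(Eh.conjugate()*dEh).real
dU = 2*(dEv*Eh.conjugate() + Ev*dEh.conjugate()).real
dV = 2*(dEv*Eh.conjugate() + Ev*dEh.conjugate()).imag

# The same derivatives from Eqs. (99)-(102)
Iv, Ih, U, V = stokes(Ev, Eh)
dIv_rt = 2*Mvv.real*Iv + Mvh.real*U + Mvh.imag*V
dIh_rt = 2*Mhh.real*Ih + Mhv.real*U - Mhv.imag*V
dU_rt = (2*Mhv.real*Iv + 2*Mvh.real*Ih
         + (Mvv.real + Mhh.real)*U - (Mvv.imag - Mhh.imag)*V)
dV_rt = (-2*Mhv.imag*Iv + 2*Mvh.imag*Ih
         + (Mvv.imag - Mhh.imag)*U + (Mvv.real + Mhh.real)*V)
```

Note that the large ik phase terms cancel identically in the Stokes derivatives — only the M_jl survive, which is why κ_e in Eq. (103) contains the M_jl alone.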

Emission Vector. In this section, we list the emission vector for passive remote sensing of nonspherical particles. The fluctuation–dissipation theorem is used to calculate the emission of a single nonspherical particle. Generally, all four Stokes parameters in the vector source term are nonzero and are proportional to the absorption coefficients in the backward direction (11). The emission term can be inserted into the vector radiative transfer equations, which assume the following form:

dI(r, ŝ)/ds = −κ_e(ŝ) · I(r, ŝ) + r(ŝ) C T(r) + ∫ dΩ′ P(ŝ, ŝ′) · I(r, ŝ′)   (104)

where κ_e(ŝ) is the extinction matrix and P(ŝ, ŝ′) is the phase matrix. The emission vector is

r(ŝ) = (κ_a1(ŝ), κ_a2(ŝ), −κ_a3(ŝ), −κ_a4(ŝ))^T   (105)

with

κ_a1(ŝ) = κ_e11(ŝ) − ∫ dΩ′ [P₁₁(ŝ′, ŝ) + P₂₁(ŝ′, ŝ)]   (106a)
κ_a2(ŝ) = κ_e22(ŝ) − ∫ dΩ′ [P₁₂(ŝ′, ŝ) + P₂₂(ŝ′, ŝ)]   (106b)
κ_a3(ŝ) = 2κ_e13(ŝ) + 2κ_e23(ŝ) − 2 ∫ dΩ′ [P₁₃(ŝ′, ŝ) + P₂₃(ŝ′, ŝ)]   (106c)
κ_a4(ŝ) = −2κ_e14(ŝ) − 2κ_e24(ŝ) + 2 ∫ dΩ′ [P₁₄(ŝ′, ŝ) + P₂₄(ŝ′, ŝ)]   (106d)

where κ_eij and P_ij are, respectively, the ij elements of κ_e and P, with i, j = 1, 2, 3, or 4. The vector radiative transfer theory has been applied extensively to microwave remote sensing problems (2,5,12).

Random Rough Surface Scattering

Consider a plane wave incident on a flat surface (Fig. 7). We note that the wave is specularly reflected because the specularly reflected waves are in phase with each other; the reflected wave exists only in the specular direction. Imagine now that the surface is rough. It is clear from Fig. 8 that the two reflected waves have a pathlength difference of 2h cos θ_i. This gives a phase difference of

Δφ = 2kh cos θ_i   (107)

If h is small compared with a wavelength, then the phase difference is insignificant. However, if the phase difference is significant, the specular reflection will be reduced by interference of the reflected waves, which can partially cancel each other, and the scattered wave is diffracted into other directions.

Figure 8. Waves scattered by a rough surface. Two reflected waves have a pathlength difference of 2h cos θ_i that leads to a phase difference of Δφ = 2kh cos θ_i.

A Rayleigh condition is defined such that the phase difference is 90°. Thus, for

h < λ/(8 cos θ_i)   (108)

the surface is regarded as smooth, and if

h > λ/(8 cos θ_i)   (109)
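The Rayleigh criterion of Eqs. (108)–(109) is a one-line test; the wavelength, angle, and heights below are example values:

```python
import math

def is_smooth(h, lam, theta_i):
    """Rayleigh criterion: the surface is 'smooth' if the rms height
    satisfies h < lambda / (8 cos(theta_i)), i.e. the two-path phase
    difference 2*k*h*cos(theta_i) of Eq. (107) stays below pi/2."""
    return h < lam / (8 * math.cos(theta_i))

lam = 0.23                    # L-band wavelength, m (example)
theta = math.radians(30)
smooth = is_smooth(0.01, lam, theta)   # 1 cm rms height
rough = is_smooth(0.10, lam, theta)    # 10 cm rms height
```

Note the angular dependence: any surface looks smoother at grazing incidence, since the threshold λ/(8 cos θ_i) grows as θ_i → 90°.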

the surface is regarded as rough. For a random rough surface, h is regarded as the root-mean-square (rms) height.

Consider an incident wave ψ_inc(r) impinging upon a rough surface. The wave function obeys the wave equation

(∇² + k²)ψ = 0   (110)

The rough surface is described by a height function z = f(x, y). Two common boundary conditions are those of the Dirichlet problem and the Neumann problem. For the Dirichlet problem, at z = f(x, y),

ψ = 0   (111)

For the Neumann problem, the boundary condition at z = f(x, y) is

∂ψ/∂n = 0   (112)

where ∂/∂n is the normal derivative. In this section, we illustrate the analytic techniques for scattering by such surfaces. For electromagnetic wave scattering by a one-dimensional rough surface z = f(x), the Dirichlet problem corresponds to that of a TE wave impinging upon a perfect conductor, where ψ is the electric field. The Neumann problem corresponds to that of a TM wave impinging upon a perfect electric conductor, where ψ is the magnetic field. In the section on numerical simulations, the cases of dielectric surfaces are simulated. The simplified perfect conductor has been used in studying active remote sensing of the ocean, which is reflective to electromagnetic waves.

Statistics, Correlation Function, and Spectral Density. For a one-dimensional random rough surface, we let z = f(x), where f(x) is a random function of x with zero mean,

⟨f(x)⟩ = 0   (113)

We define the Fourier transform

F(k_x) = (1/2π) ∫_{−∞}^{∞} dx e^{−ik_x x} f(x)   (114)

Strictly speaking, if the surface is infinite, the Fourier transform does not exist. To circumvent the difficulty, one can use the Fourier–Stieltjes integral (1), or one can define the truncated function

f_L(x) = f(x) for |x| ≤ L/2,   0 for |x| > L/2   (115)

The Fourier transform then becomes

F_L(k_x) = (1/2π) ∫_{−∞}^{∞} dx e^{−ik_x x} f_L(x)   (116)

The two [Eqs. (114) and (116)] should agree for large L. Physically, the domain of the rough surface is always limited by the antenna beamwidth. For a stationary random process,

⟨f(x₁) f(x₂)⟩ = h² C(x₁ − x₂)   (117)

where h is the rms height and C is the correlation function. Some commonly used correlation functions are the Gaussian correlation function

C(x) = exp(−x²/l²)   (118)

and the exponential correlation function

C(x) = exp(−|x|/l)   (119)

In Eqs. (118) and (119), l is known as the correlation length. In the spectral domain,

⟨F(k_x)⟩ = 0   (120)

and

h² C(x₁ − x₂) = ∫_{−∞}^{∞} dk₁ₓ ∫_{−∞}^{∞} dk₂ₓ e^{ik₁ₓx₁ − ik₂ₓx₂} ⟨F(k₁ₓ) F*(k₂ₓ)⟩   (121)

Since the left-hand side depends only on x₁ − x₂, we have

⟨F(k₁ₓ) F*(k₂ₓ)⟩ = ⟨F(k₁ₓ) F(−k₂ₓ)⟩ = δ(k₁ₓ − k₂ₓ) W(k₁ₓ)   (122)

and

h² C(x) = ∫_{−∞}^{∞} dk_x e^{ik_x x} W(k_x)   (123)

where W(k_x) is known as the spectral density. Since f(x) is real, we have used the relation

F*(k_x) = F(−k_x)   (124)

Since

C(x) = C(−x)   (125)

and C(x) is real, it follows that W(k_x) is real and is an even function of k_x. Instead of using a correlation function to describe the Gaussian random process, one can also use the spectral density. For the Gaussian correlation function of Eq. (118),

W(k_x) = [h² l/(2√π)] exp(−k_x² l²/4)   (126)

and for the exponential correlation of Eq. (119),

W(k_x) = h² l/[π(1 + k_x² l²)]   (127)
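Equation (123) at x = 0 says the spectral density must integrate to h²; a quick numerical check of the Gaussian form, Eq. (126), with example roughness parameters:

```python
import math

h_rms = 0.02    # rms height, m (example)
l_corr = 0.15   # correlation length, m (example)

def W_gauss(kx):
    # Gaussian spectral density, Eq. (126)
    return (h_rms**2 * l_corr / (2 * math.sqrt(math.pi))
            * math.exp(-kx**2 * l_corr**2 / 4))

# Integral of W over all kx should equal h^2 * C(0) = h^2.
dk = 0.01
integral = sum(W_gauss(i * dk) for i in range(-20000, 20001)) * dk
```

The same normalization holds for the exponential spectrum of Eq. (127), since both are Fourier pairs of correlation functions with C(0) = 1.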


Small Perturbation Method. The scattering of electromagnetic waves from a slightly rough surface can be studied using a perturbation method (9). It is assumed that the surface variations are much smaller than the incident wavelength and that the slopes of the rough surface are relatively small. The small perturbation method (SPM) makes use of the Rayleigh hypothesis to express the reflected and transmitted fields as upward- and downward-going waves, respectively. The field amplitudes are then determined from the boundary conditions. An investigation of the validity of the Rayleigh hypothesis can be found in Ref. 13. A renormalization method can also be used to make corrections to the small perturbation method (14,15). In this section, we use the small perturbation method to carry out the scattering up to second order and show that energy conservation is exactly obeyed. The incoherent wave is calculated up to first order to obtain the bistatic scattering coefficients.

Figure 9. The wave scattering from a slightly rough surface. The incident and scattered directions and the rough surface profile are shown.

Dirichlet Problem for One-Dimensional Surface. We first illustrate the method for a simple one-dimensional random rough surface with height profile $z = f(x)$ and $\langle f(x)\rangle = 0$. The scattering is a two-dimensional problem in $x$-$z$ without $y$ variation. Consider an incident wave impinging on such a surface with the Dirichlet boundary condition (Fig. 9). Let

$$E_{inc} = e^{ik_{ix}x - ik_{iz}z} \qquad (128)$$

where $k_{ix} = k\sin\theta_i$ and $k_{iz} = k\cos\theta_i$. In the perturbation method, one uses the height of the random rough surface as a small parameter. We assume that $kh \ll 1$, where $h$ is the rms height. The scattered wave is written as a perturbation series,

$$E_s = E_s^{(0)} + E_s^{(1)} + E_s^{(2)} + \cdots \qquad (129)$$

The boundary condition at $z = f(x)$ is

$$E_{inc} + E_s = 0 \qquad (130)$$

The zeroth-order scattered wave is the reflection from a flat surface at $z = 0$. Thus

$$E_s^{(0)} = -e^{ik_{ix}x + ik_{iz}z} \qquad (131)$$

We let the scattered field be represented by

$$E_s(\mathbf{r}) = \int_{-\infty}^{\infty} dk_x\, e^{ik_x x + ik_z z}\, E_s(k_x) \qquad (132)$$

where $k_z = (k^2 - k_x^2)^{1/2}$. The Rayleigh hypothesis (13) has been invoked in Eq. (132), as the scattered wave is expressed in terms of a spectrum of upward-going plane waves only. To calculate $E_s(k_x)$, we match the boundary condition of Eq. (130),

$$e^{ik_{ix}x - ik_{iz}f(x)} + \int_{-\infty}^{\infty} dk_x\, e^{ik_x x + ik_z f(x)}\, E_s(k_x) = 0 \qquad (133)$$

Perturbation theory consists of expanding the exponential functions $\exp[-ik_{iz}f(x)]$ and $\exp[ik_z f(x)]$ in power series. In the spectral domain, we also let

$$E_s(k_x) = E_s^{(0)}(k_x) + E_s^{(1)}(k_x) + E_s^{(2)}(k_x) + \cdots \qquad (134)$$

Thus, assuming that $|k_z f(x)| \ll 1$,

$$e^{ik_{ix}x}\left[1 - ik_{iz}f(x) - \frac{k_{iz}^2 f^2(x)}{2} + \cdots\right] + \int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\left[1 + ik_z f(x) - \frac{k_z^2 f^2(x)}{2} + \cdots\right]\left[E_s^{(0)}(k_x) + E_s^{(1)}(k_x) + E_s^{(2)}(k_x) + \cdots\right] = 0 \qquad (135)$$

Balancing Eq. (135) to zeroth order gives

$$e^{ik_{ix}x} + \int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\, E_s^{(0)}(k_x) = 0 \qquad (136)$$

so that

$$E_s^{(0)}(k_x) = -\delta(k_x - k_{ix}) \qquad (137)$$

If we substitute Eq. (137) into Eq. (132), we recover the zeroth-order solution in the space domain as given by Eq. (131). Balancing Eq. (135) to first order gives

$$-e^{ik_{ix}x}\,[ik_{iz}f(x)] + \int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\,[ik_z f(x)]\, E_s^{(0)}(k_x) = -\int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\, E_s^{(1)}(k_x) \qquad (138)$$

From Eq. (138), it follows that the first-order solution can be expressed in terms of the zeroth-order solution. Substituting Eq. (137) into Eq. (138) gives

$$\int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\, E_s^{(1)}(k_x) = 2ik_{iz}f(x)\,e^{ik_{ix}x} = 2ik_{iz}\int_{-\infty}^{\infty} dk_x\, F(k_x)\, e^{ik_x x + ik_{ix}x} \qquad (139)$$

The second equality is a result of using the Fourier transform of $f(x)$ from Eq. (114). Thus

$$E_s^{(1)}(k_x) = 2ik_{iz}\, F(k_x - k_{ix}) \qquad (140)$$

The result of Eq. (140) can be interpreted as follows. For the wave to be scattered from the incident direction $k_{ix}$ to the scattered direction $k_x$, the surface has to provide the spectral component $k_x - k_{ix}$. This is characteristic of Bragg scattering. Balancing Eq. (135) to second order gives

$$e^{ik_{ix}x}\,\frac{k_{iz}^2 f^2(x)}{2} + \int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\,\frac{k_z^2 f^2(x)}{2}\, E_s^{(0)}(k_x) - \int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\, ik_z f(x)\, E_s^{(1)}(k_x) = \int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\, E_s^{(2)}(k_x) \qquad (141)$$

Using Eq. (137), the first two terms in Eq. (141) cancel each other. Thus the second-order solution can be expressed in terms of the zeroth-order and first-order solutions. Substituting Eqs. (140) and (137) into Eq. (141) gives the second-order solution

$$E_s^{(2)}(k_x) = 2\int_{-\infty}^{\infty} dk_x'\, k_z'\, k_{iz}\, F(k_x - k_x')\, F(k_x' - k_{ix}) \qquad (142)$$

where $k_z' = (k^2 - k_x'^2)^{1/2}$. Equation (142) has the following simple physical interpretation. Second-order scattering consists of scattering from the incident direction $k_{ix}$ into an intermediate direction $k_x'$, as provided by the spectral component $k_x' - k_{ix}$ of the rough surface. This is followed by another scattering from $k_x'$ to the direction $k_x$, as provided by the spectral component $k_x - k_x'$ of the rough surface. Since $k_x'$ is an arbitrary direction, an integration over all possible $k_x'$ is needed in Eq. (142).

Coherent Wave Reflection. The coherent wave is obtained by calculating the stochastic average. We note that

$$\langle E_s^{(1)}(k_x)\rangle = 0 \qquad (143)$$

$$\langle E_s^{(2)}(k_x)\rangle = 2\,\delta(k_x - k_{ix})\, k_{iz}\int_{-\infty}^{\infty} dk_x'\, k_z'\, W(k_{ix} - k_x') \qquad (144)$$

The Dirac delta function $\delta(k_x - k_{ix})$ in Eqs. (137) and (144) indicates that the coherent wave is in the specular direction. Substituting Eqs. (137), (143), and (144) into Eq. (132) gives the coherent field, to second order,

$$\langle E_s(\mathbf{r})\rangle = \langle E_s^{(0)}(\mathbf{r})\rangle + \langle E_s^{(2)}(\mathbf{r})\rangle \qquad (145)$$

$$\langle E_s^{(0)}(\mathbf{r})\rangle = -e^{ik_{ix}x + ik_{iz}z} \qquad (146)$$

$$\langle E_s^{(2)}(\mathbf{r})\rangle = e^{ik_{ix}x + ik_{iz}z}\, 2k_{iz}\int_{-\infty}^{\infty} dk_x'\, k_z'\, W(k_{ix} - k_x') \qquad (147)$$

Because the two terms in Eq. (145) are opposite in sign, the result of Eq. (145) indicates that the coherent reflection is less than that of the flat-surface case.

Bistatic Scattering. To study energy transfer in the scattering, we note that the incident wave has power per unit area

$$\mathbf{S}_{inc}\cdot\hat{z} = -\frac{\cos\theta_i}{2\eta} \qquad (148)$$

flowing into the rough surface, where $\eta$ is the wave impedance. The negative sign in Eq. (148) indicates that the Poynting vector has a negative $\hat{z}$ component. The power per unit area outflowing from the rough surface is

$$\mathbf{S}_s\cdot\hat{z} = \mathrm{Re}\left[\frac{i}{2\omega\mu}\, E_s\,\frac{\partial E_s^*}{\partial z}\right] \qquad (149)$$

Suppose we include up to $E_s^{(0)}E_s^{(0)*} + \langle E_s^{(1)}E_s^{(1)*}\rangle + E_s^{(0)}\langle E_s^{(2)}\rangle^* + \langle E_s^{(2)}\rangle E_s^{(0)*}$ in Eq. (149). That is, we include the intensity due to the product of the first-order fields and the product of the zeroth-order field and the second-order field. Thus the power per unit area outflowing from the rough surface that is associated with the coherent field, $\langle \mathbf{S}_s\cdot\hat{z}\rangle_C$, is

$$\langle \mathbf{S}_s\cdot\hat{z}\rangle_C = \mathrm{Re}\left\{\frac{i}{2\omega\mu}\left[E_s^{(0)}(\mathbf{r})\,\frac{\partial E_s^{(0)*}(\mathbf{r})}{\partial z} + E_s^{(0)}(\mathbf{r})\,\frac{\partial \langle E_s^{(2)}(\mathbf{r})\rangle^*}{\partial z} + \langle E_s^{(2)}(\mathbf{r})\rangle\,\frac{\partial E_s^{(0)*}(\mathbf{r})}{\partial z}\right]\right\} \qquad (150)$$

Putting Eqs. (145) to (147) into Eq. (150) gives

$$\langle \mathbf{S}_s\cdot\hat{z}\rangle_C = \frac{k_{iz}}{2\omega\mu}\left[1 - 4k_{iz}\int_{-\infty}^{\infty} dk_x\,(\mathrm{Re}\,k_z)\, W(k_{ix} - k_x)\right] \qquad (151)$$

Note that $W$, the spectral density, is real. For $|k_x| > k$, $k_z$ is imaginary, so the corresponding evanescent waves do not contribute to the average power flow; the integration limits of Eq. (151) can therefore be replaced by $-k$ to $k$:

$$\langle \mathbf{S}_s\cdot\hat{z}\rangle_C = \frac{\cos\theta_i}{2\eta}\left[1 - 4k_{iz}\int_{-k}^{k} dk_x\, k_z\, W(k_{ix} - k_x)\right] \qquad (152)$$

For the incoherent wave power flow, we use the first-order scattered fields. In the spectral domain, we have

$$\langle \mathbf{S}_s\cdot\hat{z}\rangle_{IC} = \mathrm{Re}\left\{\frac{1}{2\omega\mu}\int_{-\infty}^{\infty} dk_x\int_{-\infty}^{\infty} dk_x'\, e^{i(k_x - k_x')x}\, k_z'^{*}\, e^{i(k_z - k_z'^{*})z}\,\langle E_s^{(1)}(k_x)\, E_s^{(1)*}(k_x')\rangle\right\} \qquad (153)$$

Since

$$\langle E_s^{(1)}(k_x)\, E_s^{(1)*}(k_x')\rangle = 4k_{iz}^2\, W(k_x - k_{ix})\,\delta(k_x - k_x') \qquad (154)$$

it then follows that

$$\langle \mathbf{S}_s\cdot\hat{z}\rangle_{IC} = \frac{2\cos\theta_i\, k_{iz}}{\eta}\int_{-k}^{k} dk_x\, k_z\, W(k_x - k_{ix}) \qquad (155)$$

Comparing Eq. (155) with Eq. (152) shows that the incoherent power flow exactly cancels the second term of Eq. (152), giving the relation

$$\langle \mathbf{S}_s\cdot\hat{z}\rangle = \frac{\cos\theta_i}{2\eta} \qquad (156)$$

that exactly obeys energy conservation. Thus if we define the incoherent wave

$$\varepsilon_s = E_s - \langle E_s\rangle \qquad (157)$$

$$\langle \varepsilon_s(k_x)\,\varepsilon_s^*(k_x')\rangle = I(k_x)\,\delta(k_x - k_x') \qquad (158)$$

we define the power flow per unit area of the incoherent wave as

$$\langle \mathbf{S}_s\cdot\hat{z}\rangle = \frac{1}{2\omega\mu}\int_{-k}^{k} dk_x\, k_z\, I(k_x) \qquad (159)$$

Casting this in terms of an angular integration, we let

$$k_x = k\sin\theta_s \qquad (160)$$

$$k_z = k\cos\theta_s \qquad (161)$$

$$\langle \mathbf{S}_s\cdot\hat{z}\rangle = \frac{k}{2\eta}\int_{-\pi/2}^{\pi/2} d\theta_s\,\cos^2\theta_s\, I(k_x = k\sin\theta_s) \qquad (162)$$

Thus, if we divide Eq. (162) by the incident power per unit area of Eq. (148), we can define the incoherent bistatic scattering coefficient

$$\sigma(\theta_s) = \frac{k\cos^2\theta_s}{\cos\theta_i}\, I(k_x = k\sin\theta_s) \qquad (163)$$

Note that the integration of $\sigma(\theta_s)$ over $\theta_s$ will combine with the reflected power of the coherent wave to give an answer that obeys energy conservation. For first-order scattering, $\varepsilon_s^{(1)}(k_x) = E_s^{(1)}(k_x)$, so that from Eqs. (154) and (158)

$$I(k_x) = 4k_{iz}^2\, W(k_x - k_{ix}) \qquad (164)$$

and Eq. (163) assumes the form

$$\sigma(\theta_s) = 4k^3\cos^2\theta_s\,\cos\theta_i\, W(k\sin\theta_s - k\sin\theta_i) \qquad (165)$$

The backscattering coefficient is for $\theta_s = -\theta_i$:

$$\sigma(-\theta_i) = 4k^3\cos^3\theta_i\, W(-2k\sin\theta_i) \qquad (166)$$

The small perturbation method has been used for the three-dimensional scattering problem (2,4) and also for dielectric surfaces. It has been used extensively for rough-surface scattering problems in soils and ocean scattering (2,12).

MONTE-CARLO SIMULATIONS OF WAVE SCATTERING FROM DENSE MEDIA AND ROUGH SURFACES

Scattering of Electromagnetic Waves from Dense Distributions of Nonspherical Particles Based on Monte Carlo Simulations

For wave propagation in a medium consisting of randomly distributed scatterers, the classical assumption is that of independent scattering, which states that the extinction rate is equal to $n_0\sigma_e$, where $n_0$ is the number of particles per unit volume and $\sigma_e$ is the extinction cross section of one particle. This classical assumption is not valid for a dense medium that contains particles occupying an appreciable fractional volume. This has been demonstrated in controlled laboratory experiments (16). Well-known analytical approximations in multiple-scattering theory include Foldy's approximation, the quasicrystalline approximation (QCA), and the quasicrystalline approximation with coherent potential (QCA-CP) (2). The last two approximations depend on the pair distribution function of particle positions, in which the Percus–Yevick (PY) approximation is used to describe the correlation of positions of particles of finite sizes (2,17). Because of the recent advent of computers and computational methods, the study of scattering by dense media has recently relied on exact solutions of Maxwell's equations through Monte Carlo simulations. Such simulations can be done by packing several thousand particles randomly in a box and then solving Maxwell's equations. The simulations are performed over many samples (realizations), and the scattering results are averaged over these realizations. The results give information about the collective scattering effects of many particles closely packed together.

Formulation. Let an incident electric field $\mathbf{E}_{inc}(\mathbf{r})$ impinge upon $N$ randomly positioned small spheroids (Fig. 10). Spheroid $j$ is centered at $\mathbf{r}_j$ and has permittivity $\epsilon_j$, $j = 1, 2, \ldots, N$. The discrete scatterers are embedded in a background with permittivity $\epsilon$. Particle $j$ occupies the region $V_j$.

Figure 10. An incident field $\mathbf{E}_{inc}(\mathbf{r})$ is incident upon $N$ nonoverlapping small spheroids that are randomly positioned and oriented in a volume $V$.

Let $\epsilon_p(\mathbf{r})$ be the permittivity as a function of $\mathbf{r}$,

$$\epsilon_p(\mathbf{r}) = \begin{cases} \epsilon_j & \text{for } \mathbf{r} \text{ in } V_j \\ \epsilon & \text{for } \mathbf{r} \text{ in the background} \end{cases} \qquad (167)$$

Thus the induced polarization is

$$\mathbf{P}(\mathbf{r}) = \chi(\mathbf{r})\,\mathbf{E}(\mathbf{r}) \qquad (168)$$

where

$$\chi(\mathbf{r}) = \frac{\epsilon_p(\mathbf{r})}{\epsilon} - 1 \qquad (169)$$

is the electric susceptibility. The total electric field can be expressed in terms of the volume integral equation as

$$\mathbf{E}(\mathbf{r}) = \mathbf{E}_{inc}(\mathbf{r}) + k^2\int d\mathbf{r}'\, g(\mathbf{r},\mathbf{r}')\,\mathbf{P}(\mathbf{r}') - \nabla\int d\mathbf{r}'\, \nabla' g(\mathbf{r},\mathbf{r}')\cdot\mathbf{P}(\mathbf{r}') \qquad (170)$$

where

$$g(\mathbf{r},\mathbf{r}') = \frac{e^{ik|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|} \qquad (171)$$

is the scalar Green's function with the wavenumber of the background medium, $k = \omega\sqrt{\mu\epsilon}$. The induced polarization $\mathbf{P}(\mathbf{r})$ is nonzero over the particles only. Let $\mathbf{P}_j(\mathbf{r})$ be the polarization density inside particle $j$. Then the volume integral equation [Eq. (170)] becomes

$$\mathbf{E}(\mathbf{r}) = \mathbf{E}_{inc}(\mathbf{r}) + k^2\sum_{j=1}^{N}\int_{V_j} d\mathbf{r}'\, g(\mathbf{r},\mathbf{r}')\,\mathbf{P}_j(\mathbf{r}') - \nabla\sum_{j=1}^{N}\int_{V_j} d\mathbf{r}'\, \nabla' g(\mathbf{r},\mathbf{r}')\cdot\mathbf{P}_j(\mathbf{r}') \qquad (172)$$

To solve Eq. (172) by the method of moments (18), we expand the electric field $\mathbf{E}_j(\mathbf{r})$ inside the $j$th spheroid in a set of $N_b$ basis functions

$$\mathbf{E}_j(\mathbf{r}) = \sum_{\alpha=1}^{N_b} a_{j\alpha}\,\mathbf{f}_{j\alpha}(\mathbf{r}) \qquad (173)$$

Here the spheroid is assumed to be small, and we choose the basis functions in Eq. (173) to be the electrostatic solution for a spheroid. Let the $j$th spheroid be centered at

$$\mathbf{r}_j = x_j\hat{x} + y_j\hat{y} + z_j\hat{z} \qquad (174)$$

and let the symmetry axes of the spheroid be $\hat{x}_{bj}$, $\hat{y}_{bj}$, and $\hat{z}_{bj}$, with respective semiaxis lengths $a_j$, $b_j$, and $c_j$. The orientation of the symmetry axis $\hat{z}_{bj}$ is

$$\hat{z}_{bj} = \sin\beta_j\cos\alpha_j\,\hat{x} + \sin\beta_j\sin\alpha_j\,\hat{y} + \cos\beta_j\,\hat{z} \qquad (175)$$

The first three normalized basis functions for the electric fields are

$$\mathbf{f}_{j1} = \hat{z}_{bj}\,\frac{1}{\sqrt{v_{0j}}} \qquad (176)$$

$$\mathbf{f}_{j2} = \hat{x}_{bj}\,\frac{1}{\sqrt{v_{0j}}} \qquad (177)$$

$$\mathbf{f}_{j3} = \hat{y}_{bj}\,\frac{1}{\sqrt{v_{0j}}} \qquad (178)$$

The three basis functions are those of the dipole solutions, which are constant vectors. In Eqs. (176) to (178), $v_{0j} = 4\pi a_j^2 c_j/3$. If the particles are closely packed, the near-field interactions have large spatial variations over the size of a spheroid and may induce quadrupole fields inside the spheroid. However, the non-near-field interactions have small spatial variations over the size of a spheroid and induce only dipole fields inside the spheroid. Substituting Eq. (173) into Eq. (172), we get

$$\mathbf{E}(\mathbf{r}) = \mathbf{E}_{inc}(\mathbf{r}) + \sum_{j=1}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}\,\mathbf{q}_{j\alpha}(\mathbf{r}) \qquad (179)$$

where $j$ is the particle index, $\alpha$ is the basis function index (19), and

$$\mathbf{q}_{j\alpha}(\mathbf{r}) = k^2\int_{V_j} d\mathbf{r}'\, g(\mathbf{r},\mathbf{r}')\,\mathbf{f}_{j\alpha}(\mathbf{r}')\,(\epsilon_{rj} - 1) - \nabla\int_{V_j} d\mathbf{r}'\, \nabla' g(\mathbf{r},\mathbf{r}')\cdot\mathbf{f}_{j\alpha}(\mathbf{r}')\,(\epsilon_{rj} - 1) \qquad (180)$$

is the electric field induced by the polarization $\mathbf{P}_j(\mathbf{r})$ of spheroid $j$. Of particular importance is the internal field created by $\mathbf{P}_j(\mathbf{r})$ on itself. Because of the smallness of the spheroid, an electrostatic solution can be sought, and we have, for $\mathbf{r}$ in $V_j$,

$$\mathbf{q}_{j\alpha}(\mathbf{r}) \cong C_{j\alpha}\,\mathbf{f}_{j\alpha}(\mathbf{r}) \qquad (181)$$

The coefficients $C_{j\alpha}$, $\alpha = 1, 2, 3$, are constants depending on particle size, shape, and permittivity; $C_{j1}$ and $C_{j2} = C_{j3}$ are given by the standard closed-form electrostatic (depolarization-factor) expressions for a prolate spheroid, involving the spheroidal parameter $\xi_0$ and the logarithm $\ln[(\xi_0 + 1)/(\xi_0 - 1)]$ (19). An approximation sign is used in Eq. (181) to indicate the low-frequency approximation. Next, we apply Galerkin's method (18) to Eq. (179):

$$\int_{V_l} d\mathbf{r}\, \mathbf{f}_{l\beta}(\mathbf{r})\cdot\mathbf{E}(\mathbf{r}) = \int_{V_l} d\mathbf{r}\, \mathbf{f}_{l\beta}(\mathbf{r})\cdot\mathbf{E}_{inc}(\mathbf{r}) + \sum_{\substack{j=1 \\ j\neq l}}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}\int_{V_l} d\mathbf{r}\, \mathbf{f}_{l\beta}(\mathbf{r})\cdot\mathbf{q}_{j\alpha}(\mathbf{r}) + \sum_{\alpha=1}^{N_b} a_{l\alpha}\int_{V_l} d\mathbf{r}\, \mathbf{f}_{l\beta}(\mathbf{r})\cdot\mathbf{q}_{l\alpha}(\mathbf{r}) \qquad (182)$$

With the expansion of Eq. (173), the normalization of the basis functions, and the self-term approximation of Eq. (181), the left-hand side is $a_{l\beta}$ and the last term is $a_{l\beta}C_{l\beta}$. This gives

$$a_{l\beta} = \frac{1}{1 - C_{l\beta}}\left[\int_{V_l} d\mathbf{r}\, \mathbf{f}_{l\beta}(\mathbf{r})\cdot\mathbf{E}_{inc}(\mathbf{r}) + \sum_{\substack{j=1 \\ j\neq l}}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}\int_{V_l} d\mathbf{r}\, \mathbf{f}_{l\beta}(\mathbf{r})\cdot\mathbf{q}_{j\alpha}(\mathbf{r})\right] \qquad (183)$$

Equation (183) contains the full multiple-scattering effects among the $N$ spheroids under the small-spheroid assumption. Because of the small-spheroid assumption, only

the dipole term contributes to the first term in Eq. (183), which is the polarization induced by the incident field. Thus

$$\int_{V_l} d\mathbf{r}\, \mathbf{f}_{l\beta}(\mathbf{r})\cdot\mathbf{E}_{inc}(\mathbf{r}) = v_{0l}\,\mathbf{f}_{l\beta}\cdot\mathbf{E}_{inc}(\mathbf{r}_l) \qquad (184)$$

After the coefficients $a_{l\beta}$, $l = 1, 2, \ldots, N$ and $\beta = 1, 2, 3$, are solved, the far-field scattered field in the direction $(\theta_s, \phi_s)$ is expressed as

$$\mathbf{E}_s(\mathbf{r}) = k^2\,\frac{e^{ikr}}{4\pi r}\,(\hat{v}_s\hat{v}_s + \hat{h}_s\hat{h}_s)\cdot\sum_{j=1}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}\,(\epsilon_{rj} - 1)\int_{V_j} d\mathbf{r}'\, e^{-i\mathbf{k}_s\cdot\mathbf{r}'}\,\mathbf{f}_{j\alpha}(\mathbf{r}') \qquad (185)$$

where $\epsilon_{rj} = \epsilon_j/\epsilon$; $\hat{k}_s = \sin\theta_s\cos\phi_s\,\hat{x} + \sin\theta_s\sin\phi_s\,\hat{y} + \cos\theta_s\,\hat{z}$ is the scattered direction; and $\hat{v}_s = \cos\theta_s\cos\phi_s\,\hat{x} + \cos\theta_s\sin\phi_s\,\hat{y} - \sin\theta_s\,\hat{z}$ and $\hat{h}_s = -\sin\phi_s\,\hat{x} + \cos\phi_s\,\hat{y}$ are, respectively, the vertical and horizontal polarizations. Under the small-spheroid assumption, only the dipole fields contribute to the far-field radiation in Eq. (185). Thus we have

$$\mathbf{E}_s(\mathbf{r}) \cong k^2\,\frac{e^{ikr}}{4\pi r}\,(\hat{v}_s\hat{v}_s + \hat{h}_s\hat{h}_s)\cdot\sum_{j=1}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}\,(\epsilon_{rj} - 1)\, v_{0j}\,\mathbf{f}_{j\alpha}\, e^{-i\mathbf{k}_s\cdot\mathbf{r}_j} \qquad (186)$$

Numerical Simulation. In this section, we show the results of numerical simulations using $N = 2000$ spheroids and up to $f = 30\%$ volume fraction. The relative permittivity of the spheroids is 3.2, and the size parameter is $ka = 0.2$. For dipole interactions, we replace the integral in the last term of Eq. (183) as follows:

$$\int_{V_l} d\mathbf{r}\, \mathbf{f}_{l\beta}(\mathbf{r})\cdot\mathbf{q}_{j\alpha}(\mathbf{r}) = (\epsilon_{rj} - 1)\, v_{0j}\, v_{0l}\, k^2\,\mathbf{f}_{l\beta}\cdot\overline{\overline{G}}(\mathbf{r}_l,\mathbf{r}_j)\cdot\mathbf{f}_{j\alpha} \qquad (187)$$

where

$$\overline{\overline{G}}(\mathbf{r}_l,\mathbf{r}_j) = \left(\overline{\overline{I}} + \frac{\nabla\nabla}{k^2}\right)g(\mathbf{r},\mathbf{r}')\bigg|_{\mathbf{r}=\mathbf{r}_l,\;\mathbf{r}'=\mathbf{r}_j} \qquad (188)$$

is the dyadic Green's function. In the simulations, all the spheroids are prolate and identical in size, with $c = ea$, where $e$ is the elongation ratio of the prolate spheroid. The size of the box in which the spheroids are placed is

$$V = Nv/f \qquad (189)$$

where $f$ is the fractional volume and $v = 4\pi a^2 c/3$ is the volume of one spheroid. To create the situation of random phase, it is important that the size of the box be larger than a wavelength. An incident electric field

$$\mathbf{E}_{inc}(\mathbf{r}) = \hat{y}\, e^{ikz} \qquad (190)$$

is launched onto the box containing the $N$ spheroids. The matrix equation of Eq. (183) is solved by iteration. After the matrix equation is solved, the scattered field is calculated by Eq. (186). The scattered field is decomposed into vertical and horizontal polarizations,

$$\mathbf{E}_s = E_{vs}\,\hat{v}_s + E_{hs}\,\hat{h}_s \qquad (191)$$

In the simulations of scattering by random discrete scatterers, there is a strong component of coherent scattered field that depends on the shape of the box. To calculate the extinction rates and the scattering phase matrices, we need to subtract out the coherent field to obtain the incoherent fields; the incoherent fields contribute to the extinction rates and the scattering phase matrices. The simulations are performed for $N_r$ realizations; we performed $N_r = 50$ realizations for this article. Let $\sigma$ be the realization index. Then the coherent scattered field is

$$\langle \mathbf{E}_s\rangle = \frac{1}{N_r}\sum_{\sigma=1}^{N_r}\mathbf{E}_s^{\sigma} \qquad (192)$$

and the incoherent field is

$$\tilde{\mathbf{E}}_s^{\sigma} = \mathbf{E}_s^{\sigma} - \langle \mathbf{E}_s\rangle \qquad (193)$$

which can also be decomposed into vertical and horizontal polarizations,

$$\tilde{\mathbf{E}}_s^{\sigma} = \tilde{E}_{vs}^{\sigma}\,\hat{v}_s + \tilde{E}_{hs}^{\sigma}\,\hat{h}_s \qquad (194)$$

The averaged $N$-particle bistatic scattering cross sections are

$$\sigma_{vsN} = \frac{1}{N_r}\sum_{\sigma=1}^{N_r}|\tilde{E}_{vs}^{\sigma}|^2 \qquad (195)$$

$$\sigma_{hsN} = \frac{1}{N_r}\sum_{\sigma=1}^{N_r}|\tilde{E}_{hs}^{\sigma}|^2 \qquad (196)$$

The $N$-particle bistatic scattering cross sections of Eqs. (195) and (196) contain the collective scattering effects of the particles. For the simulations, the particles are not absorptive; thus the extinction rate is the same as the scattering rate. The extinction rate is

$$\kappa_e = \frac{1}{V}\int_{0}^{\pi} d\theta_s\,\sin\theta_s\int_{0}^{2\pi} d\varphi_s\,(\sigma_{vsN} + \sigma_{hsN}) \qquad (197)$$

The $1/V$ factor in Eq. (197) is due to the fact that the extinction rate for a random medium is the extinction cross section per unit volume of space, and $V$ in this case is the size of the box. The random positions of the spheroids are generated by a shuffling method facilitated by a contact function (20,21).

In Fig. 11, we illustrate the extinction coefficients, normalized by the wavenumber $k$, as a function of fractional volume. We consider the case of aligned prolate spheroids with $ka = 0.2$ and $e = 1.8$. In such a medium, a vertically polarized incident wave with the incident polarization aligned with the symmetry axis of the prolate spheroids has a higher extinction rate than a horizontally polarized incident wave; the extinction rate is polarization dependent. In the same figure, we also show the extinction rate for the case when the spheroids are randomly oriented.

Figure 11. Extinction rate as a function of fractional volume of particles. Relative permittivity of particles $\epsilon_r = 3.2$. For spheroids, $ka = 0.2$ and $e = 1.8$; for spheres, $ka = 0.2$. The dotted curve is for the medium with spheres; $+$, the medium with randomly oriented spheroids; o and x, the medium with aligned spheroids with the incident wave vertically and horizontally polarized, respectively.

For random orientation, the probability density function of orientation $p(\beta, \alpha)$ is

$$p(\beta, \alpha) = \frac{\sin\beta}{4\pi}$$

for $0 \le \beta \le \pi$ and $0 \le \alpha \le 2\pi$. The $\sin\beta$ is a result of the smaller solid angle at small $\beta$. The normalization of the probability density function is such that

$$\int_{0}^{2\pi} d\alpha\int_{0}^{\pi} d\beta\; p(\beta, \alpha) = 1$$

The attenuation for the randomly oriented case lies between those of vertically and horizontally polarized incidence for the aligned case. The extinction rates are also compared with those of a medium with spherical particles of $ka = 0.2$ and $e = 1$. The spherical case predicts a much lower attenuation than the spheroidal case, even though the medium has the same fractional volume.

Next we illustrate the scattering phase matrices. The phase matrices are bistatic scattering cross sections per unit volume of a conglomeration of particles. We consider the incident wave and polarization given by Eq. (190). The spheroids are randomly oriented in the following illustrations. We also compare with the results of independent scattering. The independent-scattering results are obtained by including only the first term inside the bracket of Eq. (183). That is,

$$a_{l\beta} = \frac{1}{1 - C_{l\beta}}\int_{V_l} d\mathbf{r}\, \mathbf{f}_{l\beta}(\mathbf{r})\cdot\mathbf{E}_{inc}(\mathbf{r}) \qquad (198)$$

CASE 1. $\phi_s = 0°$ and $\phi_s = 180°$. For this case, we define the phase matrix elements

$$P_{11}(\theta_s) = \frac{\sigma_{vsN}}{V} \qquad (199)$$

$$P_{21}(\theta_s) = \frac{\sigma_{hsN}}{V} \qquad (200)$$

In this case, the incident polarization is perpendicular to the scattering plane formed by the incident and scattered directions. The quantities $P_{11}$ and $P_{21}$ correspond to copolarization and cross-polarization, respectively. In Figs. 12(a) and 12(b), we plot $P_{11}$ and $P_{21}$, respectively, as a function of $\theta_s$, giving the results for $\phi_s = 0°$ and $\phi_s = 180°$ in the same figure. The following definition is used: for $\phi_s = 0°$, the scattering angle is between 0° and 180° and is equal to $\theta_s$; for $\phi_s = 180°$, the scattering angle is equal to $360° - \theta_s$, covering the range of scattering angles between 180° and 360°.

CASE 2. $\phi_s = 90°$ and $\phi_s = 270°$. For this case, we define the phase matrix elements

$$P_{12}(\theta_s) = \frac{\sigma_{hsN}}{V} \qquad (201)$$

$$P_{22}(\theta_s) = \frac{\sigma_{vsN}}{V} \qquad (202)$$

In this case, the incident polarization is in the scattering plane formed by the incident and scattered directions. The quantities $P_{22}$ and $P_{12}$ correspond to copolarization and cross-polarization, respectively. In Figs. 12(c) and 12(d), we plot $P_{12}$ and $P_{22}$, respectively, as a function of $\theta_s$, with the same convention: for $\phi_s = 90°$, the scattering angle is between 0° and 180° and is equal to $\theta_s$; for $\phi_s = 270°$, the scattering angle is equal to $360° - \theta_s$, covering the range between 180° and 360°.

In Fig. 12, we show the results of $P_{11}$, $P_{21}$, $P_{12}$, and $P_{22}$ for a fractional volume of 30%. The results of independent scattering are also shown for comparison. The dimension of the phase matrix is bistatic cross section per unit volume, which is inverse distance; the unit is such that the wavelength is equal to unity. We note that the copolarizations, $P_{11}$ and $P_{22}$, are smaller than those of independent scattering, whereas the cross-polarizations, $P_{21}$ and $P_{12}$, are higher than those of independent scattering. We also note that the simulation results fluctuate because of the random-phase situation, whereas the independent-scattering results are smooth curves. The fluctuations are characteristic of random scattering, as the bistatic scattering cross section per unit volume fluctuates from sample to sample. We also note that $P_{22}$ has an angular dependence that is characteristic of Rayleigh scattering.

Simulations of Scattering by Random Rough Surface

The classical analytic approaches to solving random rough surface scattering problems, based on the Kirchhoff approximation and the small perturbation method, are restricted in their domains of validity (1,2). Recently, there has been increasing interest in Monte Carlo simulations of random rough surface scattering. One method is the integral equation method, in which an integral equation is then converted to a matrix

equation by the method of moments, and the resulting equation is solved with a full matrix inversion. Many practical problems, such as near-grazing incidence or two-dimensional rough surfaces, are considered large-scale rough surface problems, for which an efficient numerical method is needed. The simulation of large-scale rough surface problems has been a subject of intensive research in recent years (22,23), and fast numerical methods have been developed to solve such problems (24–28). In this section, we shall use a standard method of simulation and illustrate the results. The numerical results yield many interesting features that are beyond the validity of the classical methods.

Figure 12. Phase matrix as a function of scattering angle for $ka = 0.2$, fractional volume $f = 30\%$, elongation ratio $e = 1.8$, and relative permittivity of particles $\epsilon_r = 3.2$. The spheroids are randomly oriented. In the simulations, $N = 2000$ particles are used and the results are averaged over $N_r = 50$ realizations. (a) $P_{11}$, (b) $P_{21}$, (c) $P_{12}$, and (d) $P_{22}$. o, the dense-medium results; x, the independent-scattering results.

Integral Equation. We confine our attention to numerical simulations of scattering by a one-dimensional random rough surface. Consider an incident wave $\psi_{inc}(\mathbf{r})$ impinging upon a one-dimensional random rough surface with height profile $z = f(x)$. The wave function $\psi(\mathbf{r})$ above the surface is

$$\psi(\mathbf{r}) = \psi_{inc}(\mathbf{r}) + \psi_s(\mathbf{r}) \qquad (203)$$

where $\psi_s(\mathbf{r})$ is the scattered wave. The wave function obeys the following surface integral equation:

$$\psi_{inc}(\mathbf{r}') + \int_S ds\;\hat{n}\cdot[\psi(\mathbf{r})\nabla g(\mathbf{r},\mathbf{r}') - g(\mathbf{r},\mathbf{r}')\nabla\psi(\mathbf{r})] = \begin{cases} \psi(\mathbf{r}') & \mathbf{r}'\in V_0 & (204\mathrm{a}) \\ \tfrac{1}{2}\psi(\mathbf{r}') & \mathbf{r}'\in S & (204\mathrm{b}) \\ 0 & \mathbf{r}'\in V_1 & (204\mathrm{c}) \end{cases}$$

where

$$g(\mathbf{r},\mathbf{r}') = \frac{i}{4}\,H_0^{(1)}(k|\mathbf{r}-\mathbf{r}'|) \qquad (205)$$

is the two-dimensional Green's function. The zero in Eq. (204c) corresponds to the extinction theorem. Note that in Eqs. (204a) and (204b), $\mathbf{r}$ is on the surface. The transmitted wave $\psi_1(\mathbf{r})$ in the lower medium satisfies

$$\int_S ds\;\hat{n}\cdot[\psi_1(\mathbf{r})\nabla g_1(\mathbf{r},\mathbf{r}') - g_1(\mathbf{r},\mathbf{r}')\nabla\psi_1(\mathbf{r})] = \begin{cases} 0 & \mathbf{r}'\in V_0 & (206\mathrm{a}) \\ -\tfrac{1}{2}\psi_1(\mathbf{r}') & \mathbf{r}'\in S & (206\mathrm{b}) \\ -\psi_1(\mathbf{r}') & \mathbf{r}'\in V_1 & (206\mathrm{c}) \end{cases}$$

where

$$g_1(\mathbf{r},\mathbf{r}') = \frac{i}{4}\,H_0^{(1)}(k_1|\mathbf{r}-\mathbf{r}'|) \qquad (207)$$

is the Green's function of the lower medium. The wave functions $\psi(\mathbf{r})$ and $\psi_1(\mathbf{r})$ are related by the boundary conditions on the surface $S$. For the TE wave, the boundary conditions are

$$\psi(\mathbf{r}) = \psi_1(\mathbf{r}) \qquad (208\mathrm{a})$$

$$\hat{n}\cdot\nabla\psi(\mathbf{r}) = \hat{n}\cdot\nabla\psi_1(\mathbf{r}) \qquad (208\mathrm{b})$$

For the TM wave, the boundary conditions are

$$\psi(\mathbf{r}) = \psi_1(\mathbf{r}) \qquad (209\mathrm{a})$$

$$\frac{1}{\epsilon}\,\hat{n}\cdot\nabla\psi(\mathbf{r}) = \frac{1}{\epsilon_1}\,\hat{n}\cdot\nabla\psi_1(\mathbf{r}) \qquad (209\mathrm{b})$$
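The two-dimensional Green's function of Eq. (205) admits the large-argument (far-field) form $H_0^{(1)}(x) \sim \sqrt{2/(\pi x)}\,e^{i(x-\pi/4)}$, which is used later for the scattered far field. A small sketch (our own check, with an assumed wavenumber and observation distance) compares the exact and asymptotic forms:

```python
import numpy as np
from scipy.special import hankel1

# 2-D Green's function of Eq. (205) versus its large-argument (far-field)
# form H0^(1)(x) ~ sqrt(2/(pi x)) * exp[i(x - pi/4)].
# Wavenumber and distance are illustrative (wavelength = 1).
k = 2 * np.pi
r = 50.0  # observation distance of 50 wavelengths

exact = 0.25j * hankel1(0, k * r)
asymptotic = 0.25j * np.sqrt(2 / (np.pi * k * r)) * np.exp(1j * (k * r - np.pi / 4))

rel_err = abs(exact - asymptotic) / abs(exact)
print(f"|exact| = {abs(exact):.4e}, |asymptotic| = {abs(asymptotic):.4e}, "
      f"relative error = {rel_err:.2e}")
```

The relative error of the asymptotic form falls off like $1/(kr)$, so a few tens of wavelengths already suffice for far-field evaluations.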

By rewriting Eq. (204b) and applying the boundary conditions to Eq. (206b), we have

$$\psi_{inc}(\mathbf{r}') + \int_S ds\,[\psi(\mathbf{r})\,\hat{n}\cdot\nabla g(\mathbf{r},\mathbf{r}') - g(\mathbf{r},\mathbf{r}')\,\hat{n}\cdot\nabla\psi(\mathbf{r})] = \frac{1}{2}\,\psi(\mathbf{r}') \qquad (210\mathrm{a})$$

$$\int_S ds\,[\psi(\mathbf{r})\,\hat{n}\cdot\nabla g_1(\mathbf{r},\mathbf{r}') - g_1(\mathbf{r},\mathbf{r}')\,\rho\,\hat{n}\cdot\nabla\psi(\mathbf{r})] = -\frac{1}{2}\,\psi(\mathbf{r}') \qquad (210\mathrm{b})$$

where

$$\rho = \begin{cases} 1 & \text{for the TE wave} \\ \epsilon_1/\epsilon & \text{for the TM wave} \end{cases} \qquad (211)$$

and $ds = [1 + (df/dx)^2]^{1/2}\,dx$. Let

$$u(\mathbf{r}) = \sqrt{1 + (df/dx)^2}\;\hat{n}\cdot\nabla\psi(\mathbf{r})$$

where $\mathbf{r} = [x, f(x)]$ and $\mathbf{r}' = [x', f(x')]$. Using the method of moments (18), we can discretize the above equations as

$$\sum_{n=1}^{N} a_{mn}\,u(x_n) + \sum_{n=1}^{N} b_{mn}\,\psi(x_n) = \psi_{inc}(x_m) \qquad (212\mathrm{a})$$

$$\sum_{n=1}^{N} a_{mn}^{(1)}\,\rho\,u(x_n) + \sum_{n=1}^{N} b_{mn}^{(1)}\,\psi(x_n) = 0 \qquad (212\mathrm{b})$$

where $x_m = (m - 0.5)\,\Delta x - L/2$, $m = 1, 2, \ldots, N$. The matrix elements $a_{mn}$, $b_{mn}$, $a_{mn}^{(1)}$, and $b_{mn}^{(1)}$ are given by

$$a_{mn} = \begin{cases} \dfrac{i}{4}\,\Delta x\, H_0^{(1)}(k r_{mn}) & m\neq n \\[6pt] \dfrac{i}{4}\,\Delta x\, H_0^{(1)}\!\left[\dfrac{k\,\Delta x\,\gamma_m}{2e}\right] & m = n \end{cases} \qquad (213)$$

$$b_{mn} = \begin{cases} -\dfrac{ik}{4}\,\Delta x\,\dfrac{f'(x_n)(x_n - x_m) - [f(x_n) - f(x_m)]}{r_{mn}}\, H_1^{(1)}(k r_{mn}) & m\neq n \\[6pt] \dfrac{1}{2} - \Delta x\,\dfrac{f''(x_m)}{4\pi\gamma_m^2} & m = n \end{cases} \qquad (214)$$

$$a_{mn}^{(1)} = \begin{cases} \dfrac{i}{4}\,\Delta x\, H_0^{(1)}(k_1 r_{mn}) & m\neq n \\[6pt] \dfrac{i}{4}\,\Delta x\, H_0^{(1)}\!\left[\dfrac{k_1\,\Delta x\,\gamma_m}{2e}\right] & m = n \end{cases} \qquad (215)$$

$$b_{mn}^{(1)} = \begin{cases} -\dfrac{ik_1}{4}\,\Delta x\,\dfrac{f'(x_n)(x_n - x_m) - [f(x_n) - f(x_m)]}{r_{mn}}\, H_1^{(1)}(k_1 r_{mn}) & m\neq n \\[6pt] -\dfrac{1}{2} - \Delta x\,\dfrac{f''(x_m)}{4\pi\gamma_m^2} & m = n \end{cases} \qquad (216)$$

where $r_{mn} = \{(x_n - x_m)^2 + [f(x_n) - f(x_m)]^2\}^{1/2}$, $\gamma_m = \{1 + [f'(x_m)]^2\}^{1/2}$, $e = 2.71828183$, $H_1^{(1)}$ is the first-order Hankel function of the first kind, and $f'(x_m)$ and $f''(x_m)$ represent the first and second derivatives of $f(x)$ evaluated at $x_m$, respectively.

Incident Waves and Scattered Waves. In numerical simulations, the rough surface is truncated at $x = \pm L/2$. This means that the surface current is truncated at $x = \pm L/2$, so that the surface current is forced to zero for $|x| > L/2$. If this is an abrupt change, artificial reflections from the two endpoints will occur. To avoid these artificial reflections, one common way is to taper the incident wave so that it decays to zero gradually and is exponentially small outside the domain. A way to taper the incident wave is in the spectral domain. Let

$$\psi_{inc}[x, f(x)] = \frac{g}{2\sqrt{\pi}}\int_{-\infty}^{\infty} dk_x\, e^{i[k_x x - k_z f(x)]}\, e^{-(k_x - k_{ix})^2 g^2/4} \qquad (217)$$

where $k_{ix} = k\sin\theta_i$, $k_z^2 = k^2 - k_x^2$, $k$ is the wavenumber of free space, and $g$ is the parameter that controls the tapering of the incident wave. The advantage of using Eq. (217) is that it obeys the wave equation exactly, because it is a spectrum of plane waves. To calculate the power impinging upon the surface, we have

$$\mathbf{S}_{inc}\cdot\hat{z} = -\frac{1}{2\eta k}\,\mathrm{Im}\left[\psi_{inc}\,\frac{\partial\psi_{inc}^*}{\partial z}\right] \qquad (218)$$

The power received is

$$P_{inc} = -\int_{-\infty}^{\infty} dx\;\mathbf{S}_{inc}\cdot\hat{z} \qquad (219)$$

By substituting Eq. (218) into Eq. (219) and integrating over $dx$, it follows readily that only the propagating waves contribute to the power. Thus

$$P_{inc} = \frac{g^2}{4\eta k}\int_{-k}^{k} dk_x\, k_z\,\exp\!\left[-\frac{(k_x - k_{ix})^2 g^2}{2}\right] \qquad (220)$$

Scattered Waves. After the surface fields $\psi(\mathbf{r})$ and $\hat{n}\cdot\nabla\psi(\mathbf{r})$ are calculated by numerical methods, we can calculate the scattered and transmitted waves using Eqs. (204a) and (206c), respectively. From Eq. (204a), the scattered wave is

$$\psi_s(\mathbf{r}') = \int_S ds\,[\psi(\mathbf{r})\,\hat{n}\cdot\nabla g(\mathbf{r},\mathbf{r}') - g(\mathbf{r},\mathbf{r}')\,\hat{n}\cdot\nabla\psi(\mathbf{r})] \qquad (221)$$

In the far field,

$$g(\mathbf{r},\mathbf{r}') = \frac{i}{4}\sqrt{\frac{2}{\pi k r'}}\,\exp\!\left(-i\frac{\pi}{4}\right)\exp(ikr')\,\exp[-ik(\sin\theta_s\, x + \cos\theta_s\, z)] \qquad (222)$$

Putting Eq. (222) into Eq. (221), we have

$$\psi_s(\mathbf{r}') = \frac{i}{4}\sqrt{\frac{2}{\pi k r'}}\,\exp\!\left(-i\frac{\pi}{4}\right)\exp(ikr')\,\psi_s^{(N)}(\theta_s) \qquad (223)$$

where

$$\psi_s^{(N)}(\theta_s) = \int_{-\infty}^{\infty} dx\left\{-u(x) + \psi(x)\, ik\left[\frac{df}{dx}\sin\theta_s - \cos\theta_s\right]\right\}\exp[-ik(x\sin\theta_s + f(x)\cos\theta_s)] \qquad (224)$$
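The tapered wave of Eq. (217) can be evaluated by direct numerical integration over the propagating spectrum. The sketch below (our own illustration; the taper parameter, incidence angle, and unit wavelength are assumed) confirms that the field on the mean surface decays to a negligible level a few taper widths away from the beam center:

```python
import numpy as np

# Tapered incident wave of Eq. (217), evaluated on the mean surface z = 0.
# Parameters are illustrative (wavelength = 1), not taken from the text.
k = 2 * np.pi
theta_i = np.deg2rad(30.0)
kix = k * np.sin(theta_i)
g = 8.0                                  # taper width parameter

kx = np.linspace(-k, k, 4001)            # propagating part of the spectrum
spectrum = (g / (2 * np.sqrt(np.pi))) * np.exp(-(kx - kix) ** 2 * g**2 / 4)

# psi_inc(x, z=0): superposition of plane waves, Eq. (217) with f(x) = 0
x = np.array([0.0, 4 * g])               # beam center, and far outside the taper
psi = np.trapz(spectrum[None, :] * np.exp(1j * kx[None, :] * x[:, None]),
               kx, axis=1)
print(abs(psi))  # about 1 at the beam center, negligibly small at |x| = 4g
```

On the mean surface the integral reduces to a Gaussian beam envelope $e^{-x^2/g^2}$, which is why the surface currents can be safely truncated at $|x| = L/2$ when $L$ is several times $g$.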

Figure 13. Numerical results of wave scattering from a dielectric slightly rough surface and comparison with the small perturbation method (SPM) for the case of rms height of 0.05 wavelength, correlation length of 0.35 wavelength, and relative dielectric constant of $5.6 + i0.6$ at an incidence angle of 30°. The numerical results are averaged over 200 realizations. The results show good agreement between the numerical method and SPM for the case of small rms height for TE wave incidence.

Figure 15. The convergence test with respect to the number of realizations for the TE case and comparison with the small perturbation method for the case of rms height of 0.3 wavelength, correlation length of 1 wavelength, and relative dielectric constant of $5.6 + i0.6$ at an incidence angle of 30°. The numerical results show that many realizations are needed for the convergence of the averaged bistatic scattering coefficients. Also, SPM cannot give accurate results for the moderate rms height.

The Poynting vector in the direction $\hat{k}_s$ is

$$\mathbf{S}_s(\mathbf{r}') = -\frac{1}{2\eta k}\,\mathrm{Im}[\psi_s(\mathbf{r}')\nabla\psi_s^*(\mathbf{r}')] \qquad (225)$$

The total power scattered above the surface is

$$P_s = \int_{-\pi/2}^{\pi/2} d\theta_s\; r'\, S_s(\mathbf{r}') = \int_{-\pi/2}^{\pi/2} d\theta_s\;\frac{1}{16\pi k\eta}\,|\psi_s^{(N)}(\theta_s)|^2 \qquad (226)$$

To define the bistatic scattering coefficient $\sigma(\theta_s)$, we write

$$P_s = P_{inc}\int_{-\pi/2}^{\pi/2} d\theta_s\;\sigma(\theta_s) \qquad (227)$$

Thus

$$\sigma(\theta_s, \theta_i) = \frac{|\psi_s^{(N)}(\theta_s)|^2}{4\pi g^2\displaystyle\int_{-k}^{k} dk_x\, k_z\,\exp[-(k_x - k\sin\theta_i)^2 g^2/2]} \qquad (228)$$

Results of Numerical Simulations. In this section, the results of the numerical simulations are illustrated. First, we show the bistatic scattering coefficients of a rough surface with small rms height and slope and compare them with those from the small perturbation method. Next, we illustrate the convergence with respect to the number of realizations. After that, wave

Figure 14. Numerical results of wave scattering from a dielectric slightly rough surface and comparison with the small perturbation method (SPM) for the case of rms height of 0.05 wavelength, correlation length of 0.35 wavelength, and relative dielectric constant of $5.6 + i0.6$ at an incidence angle of 30°. The numerical results are averaged over 200 realizations. The results show good agreement between the numerical method and SPM for the case of small rms height for TM wave incidence.
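The first-order SPM reference curves plotted against such simulations can be generated directly from Eq. (165). The sketch below evaluates it for the Gaussian spectrum of Eq. (126); note that Eq. (165) was derived above for the Dirichlet problem, whereas the dielectric-surface SPM used in Figs. 13 to 16 carries additional polarization-dependent factors, so this is only an illustration of the angular shape (parameters assumed, wavelength = 1):

```python
import numpy as np

# First-order SPM bistatic scattering coefficient, Eq. (165), for a
# 1-D surface with the Gaussian spectrum of Eq. (126).
k = 2 * np.pi
h, l = 0.05, 0.35                 # rms height, correlation length (wavelengths)
theta_i = np.deg2rad(30.0)

def W(kx):
    # Gaussian spectrum, Eq. (126)
    return h**2 * l / (2 * np.sqrt(np.pi)) * np.exp(-kx**2 * l**2 / 4)

theta_s = np.linspace(-np.pi / 2, np.pi / 2, 361)
sigma = 4 * k**3 * np.cos(theta_s)**2 * np.cos(theta_i) \
        * W(k * np.sin(theta_s) - k * np.sin(theta_i))

# Backscattering coefficient, Eq. (166): theta_s = -theta_i
sigma_back = 4 * k**3 * np.cos(theta_i)**3 * W(-2 * k * np.sin(theta_i))
print(f"sigma near specular direction = {sigma.max():.4e}")
print(f"backscattering coefficient    = {sigma_back:.4e}")
```

The Bragg selection is explicit: the coefficient at angle $\theta_s$ samples the surface spectrum at $k\sin\theta_s - k\sin\theta_i$, which is why the curve is concentrated around the specular direction for a small-slope surface.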

Figure 16. Numerical results of wave scattering from the dielectric rough surface with a moderate rms height and comparison with the small perturbation method. TM case.

Figure 17. Numerical results of wave scattering from a dielectric rough surface with a large rms height. Backscattering enhancement is shown for both TE and TM waves. This result indicates the importance of multiple scattering effects.

scattering from a very rough surface is calculated so that backscattering enhancement (27) can be observed. The variations of emissivities with incidence angles and permittivities are plotted as well. Finally, backscattering coefficients at low grazing angles are shown. In Figs. 13 and 14, we plot the numerical results averaged over 200 realizations for TE and TM waves, respectively, for the case of rms height $h = 0.05$, correlation length 0.35, incidence angle 30°, and relative dielectric constant $5.6 + i0.6$. We also show the results of using the small perturbation method (SPM). We see that the two results are in very good agreement. Because of the small height, we see a distinct angular peak in the specular direction $\theta_s = \theta_i = 30°$. The peak is due to specular reflection of the coherent wave. Because of the small slope, $\sigma(\theta_s)$ decreases rapidly away from the specular direction. In Fig. 15, we test the convergence with respect to the number of realizations for the TE case. We show the results averaged over 1, 20, and 200 realizations, respectively.
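The smoothing with an increasing number of realizations can be illustrated by a toy random-phasor (speckle) model, not the full surface simulation: the single-realization intensity of a sum of randomly phased contributions fluctuates with std/mean near unity, and averaging over $N_r$ realizations reduces the relative fluctuation roughly as $1/\sqrt{N_r}$ (all parameters below are illustrative):

```python
import numpy as np

# Toy model of realization averaging: each realization's "bistatic intensity"
# at a given angle is the squared magnitude of a sum of randomly phased
# unit-amplitude contributions (fully developed speckle).
rng = np.random.default_rng(0)

def one_realization(n_scatterers=200):
    phases = rng.uniform(0.0, 2.0 * np.pi, n_scatterers)
    return abs(np.sum(np.exp(1j * phases))) ** 2

for n_real in (1, 20, 200):
    averages = np.array([np.mean([one_realization() for _ in range(n_real)])
                         for _ in range(100)])
    rel_fluct = averages.std() / averages.mean()
    print(f"Nr = {n_real:4d}: relative fluctuation of the average = {rel_fluct:.3f}")
```

The decreasing relative fluctuation with $N_r$ mirrors the smoothing of the averaged bistatic scattering curves from 1 to 20 to 200 realizations described above.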


Figure 18. The variation of emissivities with the incidence angle.

Figure 19. The variation of backscattering coefficients as a function of incidence angle from 80° to 85° and comparison with the small perturbation method.

The rms height of the rough surface is 0.3 wavelength, the correlation length is 1.0 wavelength, the relative dielectric constant of the lower medium is 5.6 + i0.6, and the incidence angle is 30°. For one realization, there are angular fluctuations of the bistatic scattering coefficients, which are a result of constructive and destructive interference as a function of θs. As the number of realizations increases, the curve becomes smoother. SPM results are also shown in the figure. In this case, because of the larger rms height, the two results are different; hence SPM cannot be used for larger rms heights. In Fig. 16, we plot the results with the same parameters as in Fig. 15 for the TM case. The results indicate that there are large differences between the numerical simulation and SPM. In Fig. 17, the bistatic scattering coefficients are shown for the case with large rms height and slope for both TE and TM waves. The case has rms height h = 0.5 wavelength, correlation length 1.0 wavelength, and relative dielectric constant 5.6 + i0.6 at an incidence angle of 10°. Backscattering enhancement is observed for both TE and TM waves.

In passive remote sensing, the emissivity is an important parameter: it relates the brightness temperature to the physical temperature. The emissivity can be calculated by integrating the bistatic scattering coefficients over the scattering angles and subtracting the result from unity. In Fig. 18, the variation of emissivities with incidence angle is illustrated for the case of rms height h = 0.3, correlation length 1.0, and dielectric constant 5.6 + i0.6 for the TE and TM cases, respectively. We can see that the emissivity of the TE wave decreases as the incidence angle increases, whereas for the TM wave the emissivity increases as the incidence angle increases. In Fig. 19, we study the case of close-to-grazing incidence by plotting the TE and TM backscattering coefficients as a function of incidence angle from 80° to 85°. The results of the SPM are also shown. Both TE and TM backscattering coefficients decrease as a function of incidence angle, and there are large differences between the numerical simulations and the SPM.
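The emissivity calculation described above (integrate the bistatic scattering coefficients over the scattering angles, then subtract from unity) can be sketched numerically. The bistatic profile used here is an illustrative Gaussian lobe about the specular direction, not output of the actual surface-scattering solver:

```python
import numpy as np

# Illustrative bistatic scattering coefficient gamma(theta_s) for one
# incidence angle: a Gaussian lobe centered on the specular direction
# at 30 degrees (a stand-in for the numerically computed coefficients).
theta_s = np.linspace(-np.pi / 2.0, np.pi / 2.0, 721)
theta_spec = np.deg2rad(30.0)
gamma = 0.4 * np.exp(-((theta_s - theta_spec) / 0.3) ** 2)

# Reflectivity = integral of the bistatic coefficient over scattering
# angles (trapezoidal rule); emissivity = 1 - reflectivity by energy
# conservation.
dtheta = np.diff(theta_s)
reflectivity = np.sum(0.5 * (gamma[:-1] + gamma[1:]) * dtheta)
emissivity = 1.0 - reflectivity
print(round(float(emissivity), 3))
```

In the article's computations the integration is performed over the actual simulated coefficients for each incidence angle, giving the emissivity curves of Fig. 18.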


BIBLIOGRAPHY

1. A. Ishimaru, Wave Propagation and Scattering in Random Media, Vols. 1 and 2, New York: Academic Press, 1978.
2. L. Tsang, J. A. Kong, and R. T. Shin, Theory of Microwave Remote Sensing, New York: Wiley-Interscience, 1985.
3. A. G. Voronovich, Wave Scattering from Rough Surfaces, New York: Springer-Verlag, 1994.
4. F. T. Ulaby, R. K. Moore, and A. K. Fung, Microwave Remote Sensing: Active and Passive, Vols. 1 and 2, Reading, MA: Addison-Wesley, 1981.
5. F. T. Ulaby and C. Elachi, Radar Polarimetry for Geoscience Applications, Norwood, MA: Artech House, 1990.
6. P. Sheng (ed.), Scattering and Localization of Classical Waves in Random Media, Singapore: World Scientific, 1990.
7. S. Chandrasekhar, Radiative Transfer, New York: Dover, 1960.
8. L. Landau and E. Lifshitz, Electrodynamics of Continuous Media, Oxford, UK: Pergamon, 1960.
9. S. O. Rice, Reflection of EM waves by slightly rough surfaces, in M. Kline (ed.), The Theory of Electromagnetic Waves, New York: Interscience, 1963.
10. L. L. Foldy, The multiple scattering of waves, Phys. Rev., 67: 107–119, 1945.
11. L. Tsang, Thermal emission of nonspherical particles, Radio Sci., 19: 966–974, 1984.
12. A. K. Fung, Microwave Scattering and Emission Models and Their Applications, Boston: Artech House, 1994.
13. R. F. Millar, The Rayleigh hypothesis and a related least-squares solution to scattering problems for periodic surfaces and other scatterers, Radio Sci., 8: 785–796, 1973.
14. A. A. Maradudin, R. E. Luna, and E. R. Mendez, The Brewster effect for a one-dimensional random surface, Waves Random Media, 3 (1): 51–60, 1993.
15. J. J. Greffet, Theoretical model of the shift of the Brewster angle on a rough surface, Opt. Lett., 17: 238–240, 1992.
16. A. Ishimaru and Y. Kuga, Attenuation constant of coherent field in a dense distribution of particles, J. Opt. Soc. Am., 72: 1317–1320, 1982.
17. K. H. Ding et al., Monte Carlo simulations of pair distribution functions of dense discrete random media with multiple sizes of particles, J. Electro. Waves Appl., 6: 1015–1030, 1992.
18. R. F. Harrington, Field Computation by Moment Methods, New York: Macmillan, 1968.
19. J. Stratton, Electromagnetic Theory, New York: McGraw-Hill, 1941.
20. J. W. Perram et al., Monte Carlo simulations of hard spheroids, Chem. Phys. Lett., 105: 277–280, 1984.
21. J. W. Perram and M. S. Wertheim, Statistical mechanics of hard ellipsoids. I. Overlap algorithm and the contact function, J. Comp. Phys., 58: 409–416, 1985.
22. E. Thorsos and D. Jackson, Studies of scattering theory using numerical methods, Waves Random Media, 1 (3): 165–190, 1991.
23. M. Nieto-Vesperinas and J. M. Soto-Crespo, Monte-Carlo simulations for scattering of electromagnetic waves from perfectly conducting random rough surfaces, Opt. Lett., 12: 979–981, 1987.
24. L. Tsang et al., Monte Carlo simulations of large-scale problems of random rough surface scattering and applications to grazing incidence with the BMIA/canonical grid method, IEEE Trans. Antennas Propag., AP-43: 851–859, 1995.
25. K. Pak et al., Backscattering enhancement of vector electromagnetic waves from two-dimensional perfectly conducting random rough surfaces based on Monte Carlo simulations, J. Opt. Soc. Amer. A, 12 (11): 2491–2499, 1995.


26. K. Pak, L. Tsang, and J. T. Johnson, Numerical simulations and backscattering enhancement of electromagnetic waves from two-dimensional dielectric random rough surfaces with the sparse-matrix canonical grid method, J. Opt. Soc. Amer. A, 14 (7): 1515–1529, 1997.
27. J. T. Johnson et al., Backscattering enhancement of electromagnetic waves from two-dimensional perfectly conducting random rough surfaces: A comparison of Monte Carlo simulations with experimental data, IEEE Trans. Antennas Propag., AP-44: 748–756, 1996.
28. V. Jandhyala et al., A combined steepest descent-fast multipole algorithm for the fast analysis of three-dimensional scattering by rough surfaces, IEEE Trans. Geosci. Remote Sens., 36: 738–748, 1998.
29. L. Tsang, Polarimetric passive microwave remote sensing of random discrete scatterers and rough surfaces, J. Electro. Waves Appl., 5 (1): 41–57, 1991.
30. S. H. Yueh et al., Polarimetric measurements of sea surface brightness temperature using an aircraft K-band radiometer, IEEE Trans. Geosci. Remote Sens., 33 (1): 85–92, 1995.

Leung Tsang
Qin Li
University of Washington

MICROWAVE RESONATORS. See CAVITY RESONATORS.

Microwave Remote Sensing Theory
Standard Article, Wiley Encyclopedia of Electrical and Electronics Engineering
Leung Tsang and Qin Li, University of Washington, Seattle, WA. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3615. Article Online Posting Date: December 27, 1999.

Abstract. The sections in this article are: Basics of Microwave Remote Sensing; Volume Scattering and Surface Scattering Approaches; Monte-Carlo Simulations of Wave Scattering From Dense Media and Rough Surfaces.



Oceanic Remote Sensing
Standard Article, Wiley Encyclopedia of Electrical and Electronics Engineering
William J. Emery, University of Colorado, Boulder, CO. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3614. Article Online Posting Date: December 27, 1999.


Abstract. The sections in this article are: The Electromagnetic Spectrum; Visible Wavelengths; Sensing Ocean Color; Imaging Sea Ice and Ice Motion; The Thermal Infrared; Estimating Sea Surface Temperature; Split Window Methods; Global Maps and Data Availability; Relationship to Ocean Currents: SST Motion; Skin SST Versus Bulk SST and Heat Exchange; Presently Available Sensors and Future Plans;


Filtering out Clouds; Various Methods and Their Accuracies; Passive Microwave; Wind Speed; Atmospheric Water Vapor; Rainfall; Merging the Passive Microwave with the Optical Data; Active Microwave; Radar Altimeters; Scatterometer Winds; Synthetic Aperture Radar (SAR); Summary.


OCEANIC REMOTE SENSING


The field of satellite remote sensing is a scientific discipline that has matured significantly since its birth in the 1960s. The mission of the early earth-orbiting satellites was to provide information for weather forecasting, and many of the ocean applications of remote sensing grew out of the use of weather satellite data. This continues to be true today, but there have been a number of satellite missions dedicated to the study of the ocean. Unfortunately, one of these dedicated missions, SEASAT, stopped transmitting after only 90 days of operation (reportedly due to a major power failure) instead of providing data for the 2 to 3 years planned. During this brief period, however, SEASAT was able to prove the utility of many first-of-their-kind microwave satellite sensors. Only very recently have similar sensors been deployed on a variety of spacecraft.

We have chosen to organize the material in this article by sensor wavelength, pointing out the specific applications of each wavelength band. Since many of the applications can be separated by wavelength, this division allows us to address the different applications as we treat the individual bands. We will also discuss situations where more than one band is needed for a retrieval; in most cases the additional channel information is used to correct the parameter estimated in the other channel.

THE ELECTROMAGNETIC SPECTRUM

The electromagnetic (EM) spectrum describes electromagnetic energy as a function of wavenumber or wavelength (Fig. 1). The diagram gives the names of the spectral bands, the physical mechanisms leading to the creation of the EM signal, the transmission of this energy through a standard atmosphere, and the principal techniques used for remote sensing at these wavelengths. Looking at the visible, near-infrared, thermal infrared, and microwave wavelengths, it is interesting that the atmospheric transmission reaches a maximum in parts of these bands. Known as "windows," these are the frequencies used for sensing the thermal energy emitted from the earth's surface. From this spectrum we can see where certain parameters should be sensed.


Figure 1. Electromagnetic spectrum and remote sensing measurements.

VISIBLE WAVELENGTHS

The visible bands on operational satellites have been designed to depict the weather through the distribution and motion of clouds. Most weather-satellite visible channels are broadband and average over the visible band. The first operational narrowband visible color sensor was the multispectral scanner (MSS) that flew on the Landsat series of satellites, with visible channels at 0.5 to 0.6 µm, 0.6 to 0.7 µm, 0.7 to 0.8 µm, and 0.8 to 1.1 µm. Unfortunately, the lack of frequent repeat coverage by the MSS made it impossible to monitor changes in ocean color that would reflect biological activity. The first dedicated sensor for monitoring and mapping ocean color was the coastal zone color scanner (CZCS) that flew on NIMBUS 7. With a much larger swath width (~1636 km) than the MSS, the CZCS overlapped at the equator on consecutive orbits, thus providing global coverage. The CZCS operated from 1978 until 1986. It was hoped that the United States would soon fly a new ocean color instrument called SeaWiFS, which has unfortunately been delayed for many years. There is a new Japanese instrument flying, the ocean color and thermal sensor (OCTS), with ocean color channels.

SENSING OCEAN COLOR

Considerable data processing is required to convert ocean color radiances into relevant biological properties. One of the biggest tasks is to correct for the atmosphere. The possible terms are depicted in Fig. 2: (a) light rays that upwell from below the sea surface and refract at the surface toward the sensor within the sensor's instantaneous field of view (IFOV), contributing to Lw, the water-leaving radiance; (b) only a portion of the rays at (a) that contribute to Lw actually reach the sensor; (c) the rays of Lw that are scattered by the atmosphere; (d) the sun's rays that reflect into the sensor, called "sun glitter"; (e) rays scattered in the atmosphere before reflecting at the sea surface, called "sky glitter"; (f) "glitter" rays that are scattered out of the sensor IFOV; (g) "glitter" rays that reach the sensor; (h) sun rays scattered by the atmosphere into the sensor; (i) rays scattered toward the sensor after earlier atmospheric scattering; (j) upwelling radiation that emerges from the water outside the sensor IFOV and is scattered into the sensor IFOV; and (k) rays scattered to the sensor having first been reflected at the sea surface. Since only the rays in (a) are desired, corrections must be developed for the rest of the components (1,2).

Figure 2. Terms in the calculation of ocean color.

The next step is to relate the ocean color measurements to water quality measurements. Thus we need to know how these parameters affect the optical properties of the ocean and what the spectral characteristics of the different constituents are. To accomplish the latter, we need a reference for pure seawater. Water containing phytoplankton has much more complex spectral characteristics. The dissolved organic matter associated with decayed vegetation is known as "yellow substance" or "gelbstoff," a name that refers to the fact that its absorption spectrum shows a minimum in the yellow wavelengths. The complexity of computing all of the terms related to biological activity from only visible color channels is reduced by dividing all waters into two categories: (1) Case 1 waters are seas whose optical properties are dominated by phytoplankton and their degradation products only, and (2) Case 2 waters have non-chlorophyll-related sediments or yellow substance instead of, or in addition to, phytoplankton.

IMAGING SEA ICE AND ICE MOTION

Satellite imagery has provided the polar research community with important information on the space/time changes of the sea ice that dominates the polar world. Prior to the advent of polar-orbiting satellites there was no source of global weather information that could be used by polar climatologists. Most people believe the polar regions to be cloud-covered 90% of the time. If this were truly the case, we would not need to discuss the use of optical satellite data in mapping sea ice near the poles. The true cloud cover is actually about 75% to 78%, but it must be remembered that clouds move and do not persistently cover the same region. Thus we can expect any one polar area to be free of clouds at least 20% of the time. This being the case, it is possible to produce a clear image using temporal compositing. The composite is taken over some period of time (10 days, 2 weeks, etc.) and is computed using the maximum of the various channel representations. These values can be converted to sea ice concentrations (3).

An important application is the computation of sea ice motion from successive visible/infrared imagery from the advanced very high resolution radiometer (AVHRR) and special sensor microwave imager (SSM/I) data. The basic idea is that features in the first image move to another location in the second image, which can be found from the maximum cross-correlation (MCC) when the second image is shifted relative to the first. The difference in the locations allows one to compute the ice movements (4,5). Using passive microwave imagery from the 85.5 GHz channel on the SSM/I, we have 12.5 km spatial resolution data and can compute comprehensive, all-weather ice motions. An example for the Southern Ocean is shown in Fig. 3. A full 7 years of these motion fields can be found at http://polarbear.colorado.edu.

THE THERMAL INFRARED

At wavelengths above 10 µm we are sensing radiation emitted by the surface, depending on temperature as described by Planck's law. Thermal infrared radiation is used to sense meteorological phenomena at nighttime when there is no sunlight. This application is so important that in the AVHRR the infrared channel data are reversed so that cold clouds (which would appear dark due to their low temperatures) appear white, similar to their representation in the visible channel. Since clouds block the infrared radiation emitted from the surface, it is very important that each image be corrected for cloud cover. This is done in two ways: (a) the clouds are identified and subtracted from the image, and (b) the infrared sea surface temperatures (SSTs) are composited using the maximum temperature (lower temperatures are discarded) over a period of time. Since partial clouds in an image pixel lower the temperature sensed, compositing on the maximum temperature minimizes the effects of the clouds. Thus, satellite SST maps are computed over a period of time, usually 10 days or two weeks. This compositing on the maximum temperature also helps to reduce the effect of atmospheric water vapor, which attenuates the infrared signal from the ocean's surface.

ESTIMATING SEA SURFACE TEMPERATURE

The earliest SST maps computed from satellite data used the 8 km spatial resolution scanning radiometer (SR) that flew on the early National Oceanic and Atmospheric Administration (NOAA) polar-orbiting satellites. Global data from this instrument were used to compute both global and regional maps of SST using optimum interpolation. These analog temperature measurements were later replaced with a fully digital system of much higher spatial resolution (1 km) in the present-day AVHRR, and the SST algorithm was changed to take advantage of the channels available with the AVHRR. In the almost two decades that have passed since the first launch of an AVHRR, it has remained the main source of SST information on a global basis. The first step in computing SST is an algorithm for converting the infrared pixel values to temperature. Unlike the visible channels, the infrared channels of the AVHRR are equipped with a system to "calibrate" the measurements during their collection by the sensor. To compensate for sensor drift, on each scan the sensor views two separate "blackbodies" with measured temperatures as well as "deep space." These three temperatures are used to calibrate the pixel values and turn them into temperatures.
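The two-point calibration just described can be sketched as a linear interpolation between the cold-space and blackbody reference views. This is a simplification (operationally the interpolation is done in radiance and then converted to temperature via the Planck function), and all numbers below are illustrative, not actual AVHRR constants:

```python
import numpy as np

def calibrate(counts, c_space, t_space, c_bb, t_bb):
    """Map raw sensor counts to temperature (K) by linear interpolation
    between the deep-space view and the onboard blackbody view."""
    slope = (t_bb - t_space) / (c_bb - c_space)
    return t_space + slope * (counts - c_space)

# Hypothetical scan: the space view reads 50 counts at 3 K, the blackbody
# reads 900 counts at 290 K; three ocean pixels are then converted.
pixels = np.array([200.0, 500.0, 800.0])
temps = calibrate(pixels, c_space=50.0, t_space=3.0, c_bb=900.0, t_bb=290.0)
print(np.round(temps, 2))
```

Because the references are viewed on every scan, the mapping is refreshed continuously and sensor drift between scans is removed.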


Figure 3. Mean (1988–1994) sea ice motion for the Southern Ocean.
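The maximum cross-correlation (MCC) tracking used to derive ice motions such as those in Fig. 3 can be sketched as follows. The imagery here is synthetic (random texture shifted by a known drift), standing in for sequential AVHRR or 85.5 GHz SSM/I brightness fields:

```python
import numpy as np

rng = np.random.default_rng(1)

def mcc_displacement(img1, img2, y0, x0, tsize, search):
    """Track a template from img1 over a search window in img2 and return
    the (dy, dx) offset with the highest normalized cross-correlation."""
    tmpl = img1[y0:y0 + tsize, x0:x0 + tsize]
    tmpl = (tmpl - tmpl.mean()) / tmpl.std()
    best, best_dyx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = img2[y0 + dy:y0 + dy + tsize, x0 + dx:x0 + dx + tsize]
            w = (win - win.mean()) / win.std()
            corr = (tmpl * w).mean()
            if corr > best:
                best, best_dyx = corr, (dy, dx)
    return best_dyx

img1 = rng.normal(size=(64, 64))
img2 = np.roll(img1, shift=(3, -2), axis=(0, 1))  # impose a known drift
shift = mcc_displacement(img1, img2, y0=20, x0=20, tsize=16, search=5)
print(shift)  # recovers the imposed drift of (3, -2)
```

Dividing the recovered pixel displacement by the time between images and multiplying by the pixel spacing (12.5 km for the 85.5 GHz SSM/I data) gives the ice velocity.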


SPLIT WINDOW METHODS

Dual-channel methods take advantage of the fact that the atmosphere attenuates infrared energy differently in two adjacent channels. This approach is generally referred to as the "split window," since it divides the infrared water-vapor window into two parts. The channels most frequently used are the 11 µm and 12 µm bands. Our spectrum (Fig. 1) shows that the 12 µm channel experiences greater atmospheric attenuation than the 11 µm channel. The SST formulated from both channels is

SST = a T4 + b (T4 − T5)    (1)

where T4 and T5 are the 11 µm and 12 µm channels. In some cases a third channel is used as well; for the AVHRR this is channel 3 (~3.7 µm), which must first be corrected for reflected radiation. The coefficients are usually found by comparison with SSTs measured by coincident drifting and moored buoys. The most widely known split-window algorithm is the multichannel SST (MCSST, 6), which uses Eq. (1) with a = 1.01345 and b = 2.659762 plus an additional term to account for solar zenith angle. Other algorithms have been developed as improvements on the MCSST (7), but many comparisons have shown the MCSST to produce values as accurate and reliable as any of these newer algorithms.
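Equation (1) with the MCSST coefficients quoted above can be written directly. Note that the operational MCSST also includes a solar zenith-angle term and a bias constant, both omitted here, so this sketch only illustrates the differential water-vapor correction:

```python
def mcsst(t4, t5, a=1.01345, b=2.659762):
    """Split-window SST, Eq. (1): t4 and t5 are brightness temperatures (K)
    in the 11 um and 12 um AVHRR channels.  The b*(t4 - t5) term adds back
    the signal lost to water-vapor absorption, which is stronger in the
    12 um channel."""
    return a * t4 + b * (t4 - t5)

# A moist atmosphere depresses T5 more than T4; the channel difference
# drives the correction.  Illustrative brightness temperatures in kelvin.
sst = mcsst(292.0, 290.5)
print(round(sst, 2))
```

The larger the 11 µm minus 12 µm difference, the moister the atmosphere and the larger the correction applied.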

GLOBAL MAPS AND DATA AVAILABILITY

The MCSST has been processed into global maps of SST. These maps are available at the Distributed Active Archive Center (DAAC) of the Jet Propulsion Laboratory. They can be found, along with other information on oceanographic data, at http://podaac.www.jpl.nasa.gov/.

RELATIONSHIP TO OCEAN CURRENTS: SST MOTION

One important application of satellite SST mapping is the computation of surface currents. The assumption must be made that feature displacements by surface currents dominate all other changes in SST. Under this assumption, two sequential images of the same general area are used to find the displacement (see sea ice motion) that makes the features match up over time (8,9). By overlapping these areas, maps of ocean surface currents can be made in much the same way that sea ice motion was tracked.

SKIN SST VERSUS BULK SST AND HEAT EXCHANGE

Infrared satellite sensors can only "see" thermal radiation which, due to the high emissivity of seawater, is emitted from the sub-millimeter-thick "skin" of the ocean. This temperature cannot be measured in situ by ships or buoys, since touching the ocean's skin layer destroys it. The skin SST can be measured using radiometers from ships, and it is hoped that in the future there will be a change to computing skin SST from the infrared satellite data. It is the temperature difference between the skin SST and the subsurface or "bulk" SST that controls the exchange of heat between the ocean and the atmosphere.

PRESENTLY AVAILABLE SENSORS AND FUTURE PLANS

At present the instrument used to compute SST is the AVHRR. This sensor has been flying in one of two forms since 1978; a change in 1982 added another channel to the AVHRR, making it possible to compute split-window SSTs. In 1998 a new sensor called MODIS (moderate resolution imaging spectroradiometer) will be launched with a number of new channels in the thermal infrared, which should afford new opportunities for computing an even more accurate skin SST. It is clear, however, that the thermal infrared channels will remain the primary ones for the computation of SST from space.

FILTERING OUT CLOUDS

One of the most important processing steps in using either infrared or visible data is the detection and removal of cloud cover. This is usually done in two ways: (1) some method is used to sense clouds and remove them from the image, and (2) a temporal compositing technique is used to suppress any residual clouds in the image. Even with both of these techniques there are usually some cloud-contaminated pixels in the composite images.

VARIOUS METHODS AND THEIR ACCURACIES

There is a wide variety of techniques for the detection and removal of clouds. Most common are the threshold methods, where clouds are considered to have high or low values relative to the targets of interest. Fixed thresholds based on the scene histograms are the most common of these methods. Dynamic thresholds are also used, where the threshold value is computed from the histogram during the cloud removal process. Another procedure is to use the "spatial coherence" of the noncloud pixels. Statistical classification methods such as maximum likelihood are often used. Frequently, methods are combined, with one procedure used to remove most of the clouds and a second method used to remove the remaining clouds.
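A dynamic histogram threshold of the kind described can be sketched on a synthetic bimodal scene (warm ocean near 290 K, cold cloud near 250 K). The valley-search window of 255 K to 285 K is an assumption of this toy example, not an operational setting:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic brightness-temperature scene: ~30% cold cloud, rest warm ocean.
cloudy = rng.random((128, 128)) < 0.3
scene = np.where(cloudy,
                 rng.normal(250.0, 3.0, (128, 128)),
                 rng.normal(290.0, 2.0, (128, 128)))

# Dynamic threshold: locate the valley between the two histogram modes
# and flag everything colder than it as cloud.
hist, edges = np.histogram(scene, bins=64)
centers = 0.5 * (edges[:-1] + edges[1:])
valley_zone = (centers > 255.0) & (centers < 285.0)
threshold = centers[valley_zone][np.argmin(hist[valley_zone])]
cloud_mask = scene < threshold
print(round(float(cloud_mask.mean()), 2))  # close to the true 0.3 cloud fraction
```

Because the threshold is recomputed from each scene's histogram, the same procedure adapts to scenes with different surface temperatures or cloud-top heights, which is the advantage over a fixed threshold.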


PASSIVE MICROWAVE

One of the great changes in remote sensing since 1980 has been the increased application of passive microwave imagery to many fields of geoscience. This was motivated by a series of operational passive microwave instruments. The first was the scanning multifrequency microwave radiometer (SMMR), which was followed by the special sensor microwave/imager (SSM/I). The SMMR was first carried by SEASAT, but fortunately, after the demise of SEASAT, another SMMR was deployed on NIMBUS 7, which operated until 1986. Many of the geophysical algorithms were developed with SMMR data. These algorithms have been further explored with the SSM/I, first launched in June 1987; subsequent instruments continue to operate today.

WIND SPEED

The emitted microwave radiation at the ocean's surface is affected by the roughness of the sea surface, which is correlated with the near-surface wind speed. Atmospheric attenuation of the 37 GHz radiation propagating from the sea surface is very small except when a significant amount of rain in the atmosphere scatters the 37 GHz signal. The Wentz wind speed algorithm relates the wind speed at the 19.5 m height to the 37 GHz brightness temperatures, which are computed from the SSM/I 37 GHz horizontally and vertically polarized radiance measurements. Corrections are made in the 37 GHz data for the emission of the sea surface and for atmospheric scattering conditions. The wind speeds computed by Wentz (10) are referenced to a 10 m height, as is traditional in meteorology. Comparisons between SSM/I-inferred wind speeds and wind speeds measured at moored buoys indicate that the SSM/I wind speeds are accurate to 2 m/s.

ATMOSPHERIC WATER VAPOR

The SSM/I has a number of channels that can be used to sense atmospheric moisture (11). Of these, the 22 GHz channel is the most sensitive to water vapor. Retrievals of atmospheric moisture using this channel alone are accurate to 0.145 g/cm2 to 0.17 g/cm2. Global water vapor fields computed from SSM/I data demonstrate how this capability can be used to map global patterns of atmospheric moisture, something that has not been possible with radiosonde measurements.

RAINFALL

Precipitation is the primary contributor to atmospheric attenuation at microwave wavelengths. This attenuation results from both absorption and scattering by hydrometeors. The magnitude of these processes depends upon wavelength, drop size distribution, and precipitation layer thickness. For light rain we can neglect the effect of multiple scattering, and the attenuation can be computed from statistical considerations. For heavier rainfall we must include multiple scattering by considering the full drop size distribution.

MERGING THE PASSIVE MICROWAVE WITH THE OPTICAL DATA

One of the most useful applications of the passive microwave data is as corrections for the optical-wavelength parameter retrievals. For example, one of the biggest problems in the infrared estimation of SST is the correction for atmospheric moisture. Since the SSM/I senses water vapor, it can be used as a correction for SST estimates (3), which improves the estimation of SST to an accuracy of 0.25°C.


ACTIVE MICROWAVE

One of the greatest benefits of the short-lived SEASAT satellite was the proof that both active and passive microwave sensors could measure quantities of real interest to oceanographers. The brief 90 days of data clearly demonstrated how the all-weather microwave instruments could observe the earth's surface. The three most important instruments were the radar altimeter, the synthetic aperture radar (SAR), and the scatterometer (SCAT). The altimeter measures the height of the sea surface above a reference level, while the scatterometer measures the wind stress over the ocean. The SAR images the ocean's surface but also has very important applications in polar regions (mapping sea ice) and over land, where its signal is related to the vegetation.

RADAR ALTIMETERS

After SEASAT there were no altimeters in space until 1985, when the US Navy launched GEOSAT, designed to map the earth's gravity field for naval operations. After it completed this mission, the Navy was persuaded to place the satellite in the SEASAT orbit to collect data useful for oceanographic studies. Two years of very useful data were collected and formed the basis for a number of studies. Subsequently, a joint altimeter mission between France and the US National Aeronautics and Space Administration (NASA), called TOPEX/Poseidon (TP), was launched in 1992 and became the most successful altimeter ever. Also in operation during this period was the European Remote Sensing (ERS) satellite altimeter, of which there are now two (ERS1, ERS2). It is common to merge the TP data, with its 10-day repeat cycle, with the ERS data, with their longer repeat cycles. The altimeter also accurately measures wind speed and significant wave height from the radar backscatter.

SCATTEROMETER WINDS

Another capability demonstrated by SEASAT was the mapping of wind stress over the ocean with the scatterometer. Since it is very difficult to get information on winds over the ocean, this measurement capability is extremely important. Also, scatterometer winds are all-weather retrievals, making it possible to map ocean winds regardless of weather. After SEASAT the first scatterometers were again those on ERS1 and ERS2. More recently a NASA scatterometer (NSCAT) was launched on the Japanese ADEOS platform. There are also plans to launch future scatterometers on non-US spacecraft.

SYNTHETIC APERTURE RADAR (SAR)

The high-spatial-resolution images produced by the SEASAT SAR were very tantalizing to oceanographers, who hoped to use SAR for a variety of applications. The data gap caused by the loss of SEASAT delayed many of those expectations. There continues to be no US SAR operating, and these data are now available from the European remote sensing satellites ERS1 and ERS2 as well as the RADARSAT satellite launched and operated by Canada. Early in the SEASAT period it was demonstrated that SAR imagery could be used to image directional wave spectra and sense internal waves. SAR images also contain expressions of oceanographic fronts (thermal or saline), but the primary application remained the mapping of sea ice in all weather conditions. Microwaves are not blocked by clouds, making it possible to sense the polar ice surface regardless of weather. The high spatial resolution of SAR made it possible to resolve details not possible with optical systems.

SUMMARY

Oceanography requires sampling over large geographic regions, which is time-consuming and expensive for in situ measurement platforms. Satellite remote sensing offers a cost-effective method of sampling these large oceanic regions. The only difficulty is in establishing exactly what the satellites are sensing and how accurately the quantities are being sensed. We continue to develop better instruments with better signal-to-noise ratios that can better resolve the parameters of interest. It is certain that satellite data will continue to be important for future oceanographic studies.

WILLIAM J. EMERY
University of Colorado


Wiley Encyclopedia of Electrical and Electronics Engineering
Remote Sensing by Radar, Standard Article
Jakob J. van Zyl and Yunjin Kim, California Institute of Technology, Pasadena, CA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W3612
Article Online Posting Date: December 27, 1999


Abstract. The sections in this article are: Radar Principles; Real Aperture Radar; Synthetic Aperture Radar; Advanced SAR Techniques; Nonimaging Radars.


REMOTE SENSING BY RADAR

Radar remote sensing instruments acquire data useful for geophysical investigations by measuring electromagnetic interactions with natural objects. Examples of radar remote sensing instruments include synthetic aperture radars (SARs), scatterometers, altimeters, radar sounders, and meteorological radars such as cloud and rain radars. The main advantage of radar instruments is their ability to penetrate clouds, rain, tree canopies, and even dry soil surfaces, depending upon the operating frequencies. In addition, since a remote sensing radar is an active instrument, it can operate day and night by providing its own illumination.

Imaging remote sensing radars such as SAR produce high-resolution (from submeter to a few tens of meters) images of surfaces. The geophysical information can be derived from these high-resolution images by using proper postprocessing techniques. Scatterometers measure the backscattering cross section accurately in order to characterize surface properties such as roughness. Altimeters are used to obtain accurate surface height maps by measuring the round-trip time delay from a radar sensor to the surface. Radar sounders can image underground material variations by penetrating deeply into the ground. Unlike surveillance radars, remote sensing radars require accurate calibration in order for the data to be useful for scientific applications.

In this article, we start with the basic principles of remote sensing radars. Then, we discuss the details of imaging radars and their applications. In order to complete the remote sensing radar discussion, we briefly examine nonimaging radars such as scatterometers, altimeters, radar sounders, and meteorological radars. For more information on these types of radars, the interested reader is referred to other articles in this encyclopedia. We also provide extensive references for each radar for readers who need an in-depth description of a particular radar.
RADAR PRINCIPLES

We start our discussion with the principles necessary to understand the radar remote sensing instruments that will be described in the later part of this article. For more detailed discussions, readers are referred to Refs. 1–3.

Figure 1. The basic components of a radar system. A pulse of energy is transmitted from the radar system antenna, and after a time delay an echo is received and recorded. The recorded radar echoes are later processed into images. The flight electronics are carried on the radar platform, either an aircraft or a spacecraft. Image processing is usually done in a ground facility.

Radar Operation

A radar transmits an electromagnetic signal and receives and records the echo reflected from the illuminated terrain. Hence, a radar is an active remote sensing instrument, since it provides its own illumination. The basic radar operation is illustrated in Fig. 1. A desired signal waveform, commonly a modulated pulse, is generated by a waveform generator. After proper frequency up-conversion and high-power amplification, the radar signal is transmitted from an antenna. The reflected echo is received by the antenna, amplified, and down-converted to video frequencies for digitization. The digitized data are either stored in a data recorder for later ground data processing or processed by an on-board data processor. Since remote sensing radars usually image large areas, they are commonly operated from either an airborne or a spaceborne platform.
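The delay-to-range conversion just described can be sketched numerically. The following minimal Python example is not part of the original article, and the 4 ms delay is an assumed illustrative value; it simply converts a measured round-trip echo delay into slant range.

```python
# Minimal sketch (not from the article): converting a measured round-trip
# echo delay into slant range, R = c * dt / 2. The 4 ms delay is an
# assumed illustrative value.
C = 3.0e8  # speed of light (m/s)

def slant_range(delay_s):
    """Slant range (m) for a round-trip echo delay (s)."""
    return C * delay_s / 2.0

# A 4 ms round-trip delay corresponds to a 600 km slant range,
# representative of a low-orbiting spaceborne radar.
print(slant_range(4.0e-3) / 1e3)  # slant range in km
```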

Basic Principles of Radar Imaging

Imaging radars generate surface images very similar to visible and infrared images. However, the principle behind the image generation is fundamentally different in the two cases. Visible and infrared sensors use a lens or mirror system to project the radiation from the scene onto a two-dimensional array of detectors, which could be an electronic array or a film using chemical processes. The two-dimensionality can also be achieved by using scanning systems. This imaging approach conserves the angular relationships between two targets and their images, as shown in Fig. 2.

Figure 2. Optical imaging systems preserve the angular relationship between objects in the image.

Imaging radars use the time delay between the echoes that are backscattered from different surface elements to separate them in the range (cross-track) dimension, and they use the angular size (in the case of the real-aperture radar) or the Doppler history (in the case of the synthetic-aperture radar) to separate surface pixels in the azimuth (along-track) dimension.

The imaging radar sensor uses an antenna which illuminates the surface to one side of the flight track. Usually, the antenna has a fan beam which illuminates a highly elongated elliptical-shaped area on the surface, as shown in Fig. 3. The illuminated area across track defines the image swath. Within the illumination beam, the radar sensor transmits a very short effective pulse of electromagnetic energy. Echoes from surface points farther away along the cross-track coordinate are received at proportionally later times (Fig. 3). Thus, by dividing the receive time into increments of equal time bins, the surface can be subdivided into a series of range bins. The width in the along-track direction of each range bin is equal to the antenna footprint along the track, x_a. As the platform moves, the sets of range bins are covered sequentially, thus allowing strip mapping of the surface line by line. This is comparable to strip mapping with a pushbroom imaging system using a line array in the visible and infrared part of the electromagnetic spectrum. The brightness associated with each image pixel in the radar image is proportional to the echo power contained within the corresponding time bin. As we will see later, the different types of imaging radars really differ in the way in which the azimuth resolution is achieved.

Figure 3. Radar imaging geometry and definition of terms.

The look angle is defined as the angle between the vertical direction and the radar beam at the radar platform, while the incidence angle is defined as the angle between the vertical direction and the illuminating radar beam at the surface. When surface curvature effects are neglected, the look angle is equal to the incidence angle when the surface is flat. In the case of spaceborne systems, surface curvature must be taken into account, which leads to an incidence angle that is always larger than the look angle (3), even for flat surfaces. If topography is present (i.e., the surface is not flat), the local incidence angle may vary from radar image pixel to pixel.

Resolution

The resolution is defined as the surface separation between the two closest features that can still be resolved in the final image. First, consider two point targets that are separated in the range direction by x_r. The corresponding echoes will be separated by a time difference Δt equal to

Δt = 2 x_r sin θ / c   (1)

where c is the speed of light and the factor 2 is included to account for the signal round-trip propagation. The angle θ in Eq. (1) is the incidence angle. The two features can be discriminated if the leading edge of the pulse returned from the second object is received later than the trailing edge of the pulse received from the first feature. Therefore, the smallest discriminable time difference in the radar receiver is equal to the effective pulse length τ. Thus,

2 x_r sin θ / c = τ  ⇒  x_r = c τ / (2 sin θ)   (2)

In other words, the range resolution is equal to half the footprint of the radar pulse on the surface. Sometimes the effective pulse length is described in terms of the system bandwidth B. To a good approximation, we have

τ = 1 / B   (3)

The sin θ term in the denominator of Eq. (2) means that the ground range resolution of an imaging radar is a strong function of the look angle at which the radar is operated. To illustrate, a signal with a bandwidth B = 20 MHz (i.e., effective pulse length τ = 50 ns) provides a range resolution of 22 m for θ = 20°, while a signal bandwidth B = 50 MHz (τ = 20 ns) provides a range resolution of 4.3 m at θ = 45°.

In the azimuth direction, without further data processing, the resolution x_a is equal to the beam footprint on the surface, which is defined by the azimuth beamwidth θ_a of the radar antenna:

θ_a = λ / L   (4)

from which it follows that

x_a = h θ_a / cos θ = λ h / (L cos θ)   (5)

where L is the azimuth antenna length and h is the altitude of the radar above the surface being imaged. To illustrate, for h = 800 km, λ = 23 cm, L = 12 m, and θ = 20°, we obtain x_a = 16 km. Even if λ is as short as 2 cm and h is as low as 200 km, x_a will still be equal to about 360 m, which is considered a relatively low resolution, even for remote sensing. This has led to very limited use of the real-aperture technique for surface imaging, especially from space. Equation (5) is also directly applicable to optical imagers. However, because of the small value of λ (about 1 μm), resolutions of a few meters can be achieved from orbital altitudes with an aperture only a few tens of centimeters in size.

When the radar antenna travels along the line of flight, two point targets at different angles from the flight track have different Doppler frequencies. Using this Doppler frequency spread, one can obtain a higher resolution in the along-track direction. As shown in the synthetic aperture radar section, the along-track resolution can be as small as half the antenna length in the along-track direction. This method is often called Doppler beam sharpening.
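Equations (2) and (5) are easy to evaluate numerically. The following Python sketch is not part of the original article and the function names are ours; it reproduces the numerical examples quoted above.

```python
import math

# Sketch (not from the article): evaluating the range resolution of Eq. (2)
# with tau = 1/B [Eq. (3)] and the real-aperture azimuth resolution of
# Eq. (5). Function names are ours.
C = 3.0e8  # speed of light (m/s)

def ground_range_resolution(bandwidth_hz, incidence_deg):
    """x_r = c / (2 B sin(theta)), combining Eqs. (2) and (3)."""
    return C / (2.0 * bandwidth_hz * math.sin(math.radians(incidence_deg)))

def real_aperture_azimuth_resolution(wavelength_m, antenna_len_m, altitude_m, look_deg):
    """x_a = lambda * h / (L * cos(theta)), Eq. (5)."""
    return wavelength_m * altitude_m / (antenna_len_m * math.cos(math.radians(look_deg)))

# The numerical examples quoted in the text:
print(ground_range_resolution(20e6, 20.0))                        # about 22 m
print(ground_range_resolution(50e6, 45.0))                        # about 4.3 m
print(real_aperture_azimuth_resolution(0.23, 12.0, 800e3, 20.0))  # about 16 km
```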

Radar Equation

One of the key factors that determine the quality of the radar imagery is the corresponding signal-to-noise ratio (SNR). This is the equivalent of the brightness of a scene being photographed with a camera versus the sensitivity of the film or detector. Here, we consider the effect of thermal noise on the sensitivity of radar imaging systems.

Let P_t be the sensor-generated peak power transmitted out of the antenna. One function of the antenna is to focus the radiated energy into a small solid angle directed toward the area being imaged. This focusing effect is described by the antenna gain G, which is equal to the ratio of the total solid angle over the solid angle formed by the antenna beam:

G = 4π / (θ_r θ_a) = 4π L W / λ² = 4π A / λ²   (6)

where L is the antenna length in the flight-track direction, W is the antenna length in the cross-track direction, and A is the antenna area. The radiated wave propagates spherically away from the antenna toward the surface. Thus the power density P_i per unit area incident on the illuminated surface is

P_i = P_t G / (4πR²)   (7)

The backscattered power P_s from an illuminated surface area s is given by

P_s = P_i s σ_0   (8)

where σ_0 is the surface normalized backscattering cross section, which represents the efficiency of the surface in re-emitting back toward the sensor some of the energy incident on it. It is similar to the surface albedo at visible wavelengths. The backscattered energy propagates spherically back toward the sensor. The power density P_c at the antenna is then

P_c = P_s / (4πR²)   (9)

and the total received power is equal to the power intercepted by the antenna:

P_r = P_c A   (10)

or

P_r = [P_t G_t / (4πR²)] [λ² G_r / (4πR)²] s σ_0   (11)

In Eq. (11) we explicitly show that the transmit and receive antennas may have different gains. This is important for the more advanced SAR techniques like polarimetry, where antennas with different polarizations may be used during transmission and reception.

In addition to the target echo, the received signal also contains noise, which results from the fact that all objects at temperatures higher than absolute zero emit radiation across the whole electromagnetic spectrum. The noise component that is within the spectral bandwidth B of the sensor is passed through with the signal. The thermal noise power is given by

P_N = kTB   (12)

where k is Boltzmann's constant (k = 1.38 × 10⁻²³ J/K) and T is the total equivalent noise temperature. The resulting SNR is then

SNR = P_r / P_N   (13)

One common way of characterizing an imaging radar sensor is to determine the surface backscatter cross section σ_N which gives SNR = 1. This is called the noise equivalent backscatter cross section. It defines the weakest surface return that can be detected, and therefore the range of surface units that can be imaged.

Backscattering Cross Section and Calibration Devices

The normalized backscattering cross section represents the reflectivity of an illuminated area in the backscattering direction. A higher backscattering cross section means that the area more strongly reflects the incident radar signal. It is mathematically defined as

σ_0 = lim_{R, A_i → ∞} (4πR² / A_i) (E_s E_s*) / (E_i E_i*)   (14)

where A_i is the illuminated area and E_s and E_i are the scattered and incident electric fields, respectively.

In order to calibrate the radar data, active and/or passive calibration devices are commonly used. By far the most commonly used passive calibration device is a trihedral corner reflector, which consists of three triangular panels bolted together to form right angles with respect to each other. The maximum radar cross section (RCS) of a trihedral corner reflector is given by

RCS = 4π a⁴ / (3λ²)   (15)

where a is the long-side triangle length of the trihedral corner reflector. This reflector has about a 40° half-power beamwidth, which makes the corner reflector response relatively insensitive to position errors. In addition, these devices are easily deployed in the field; and since they require no power to operate, they can be used unattended in remote locations under most weather conditions.

Signal Modulation

A pulsed radar determines the range by measuring the round-trip time of a transmitted pulse signal. In designing the signal pattern for a radar sensor, there is usually a strong requirement to have as much energy as possible in each pulse in order to enhance the SNR. This can be done by increasing the peak power or by using a longer pulse. However, particularly in the case of spaceborne sensors, the peak power is usually strongly limited by the available power devices. On the other hand, an increased pulse length (i.e., smaller bandwidth) leads to a worse range resolution [see Eq. (2)]. This dilemma is usually resolved by using modulated pulses which have the property of a wide bandwidth even when the pulse is very long. One such modulation scheme is the linear frequency modulation, or chirp. In a chirp, the signal frequency within the pulse is changed linearly as a function of time. If the frequency is linearly changed from f_0 to f_0 + Δf, the effective bandwidth would be equal to

B = |(f_0 + Δf) − f_0| = |Δf|   (16)

which is independent of the pulse length. Thus a pulse with long duration (i.e., high energy) and wide bandwidth (i.e., high range resolution) can be constructed. The instantaneous frequency for such a signal is given by

f(t) = f_0 + (B/τ′) t   for −τ′/2 ≤ t ≤ τ′/2   (17)

and the corresponding signal amplitude is

A(t) ∝ cos[2π ∫ f(t) dt] = cos[2π (f_0 t + (B/2τ′) t²)]   (18)

Note that the instantaneous frequency is the derivative of the instantaneous phase. A pulse signal such as Eq. (18) has a physical pulse length τ′ and a bandwidth B. The product τ′B is known as the time-bandwidth product of the radar system. In typical radar systems, time-bandwidth products of several hundred are used.

At first glance it may seem that a pulse of the form of Eq. (18) cannot be used to separate targets that are closer than the projected physical length of the pulse. It is indeed true that the echoes from two neighboring targets which are separated in the range direction by much less than the physical length of the signal pulse will overlap in time. If the modulated pulse, and therefore the echoes, had a constant frequency, it would not be possible to resolve the two targets. However, if the frequency is modulated as described in Eq. (18), the echoes from the two targets will have different frequencies at any instant of time and therefore can be separated by frequency filtering. In actual radar systems, a matched filter is used to compress the returns from the different targets. It can be shown (3) that the effective pulse length of the compressed pulse is given by Eq. (3). Therefore, the achievable range resolution using a modulated pulse of the kind given by Eq. (18) is a function of the chirp bandwidth, and not the physical pulse length. In typical spaceborne and airborne SAR systems, physical pulse lengths of several tens of microseconds are used, while bandwidths of several tens of megahertz are no longer uncommon for spaceborne systems, and several hundreds of megahertz are common in airborne systems.

REAL APERTURE RADAR

The real aperture imaging radar sensor uses an antenna which illuminates the surface to one side of the flight track. As mentioned before, the antenna usually has a fan beam which illuminates a highly elongated elliptical-shaped area on the surface, as shown in Fig. 3, and the illuminated area across track defines the image swath. For an antenna of width W operating at a wavelength λ, the beam angular width in the range plane is given by

θ_r ≈ λ / W   (19)

and the resulting surface footprint or swath S is given by

S ≈ h θ_r / cos² θ = λ h / (W cos² θ)   (20)

where h is the sensor height above the surface, θ is the angle from the center of the illumination beam to the vertical (known as the look angle at the center of the swath), and θ_r is assumed to be very small. To illustrate, for λ = 23 cm, h = 800 km, θ = 20°, and W = 2.1 m, the resulting swath width is 100 km.

As shown before, the main disadvantage of the real aperture radar technique is the relatively poor azimuth resolution that can be achieved from space. From aircraft altitudes, however, reasonable azimuth resolutions can be achieved if higher frequencies (typically X band or higher) are used. For this reason, real aperture radars are not commonly used any more.

SYNTHETIC APERTURE RADAR

Synthetic aperture radar refers to a particular implementation of an imaging radar system that utilizes the movement of the radar platform and specialized signal processing to generate high-resolution images. Prior to the discovery of the synthetic aperture radar principle, imaging radars operated using the real aperture principle and were known as side-looking airborne radars (SLARs). Carl Wiley of the Goodyear Aircraft Corp. is generally credited as the first person to describe the use of Doppler frequency analysis of signals from a moving coherent radar to improve along-track resolution. He noted that two targets at different along-track positions will be at different angles relative to the aircraft velocity vector, resulting in different Doppler frequencies. Therefore, targets can be separated in the along-track direction on the basis of their different Doppler frequencies. This technique was originally known as Doppler beam sharpening but later became known as synthetic aperture radar (SAR). The reader interested in the history of SAR, both airborne and spaceborne, is referred to an excellent discussion in Chap. 1 of Ref. 3.

In this section we discuss the principles of radar imaging using synthetic-aperture techniques, the resulting image projections, distortions, tonal properties, and environmental effects on the images. We have attempted to give simple explanations of the different imaging radar concepts. For more detailed mathematical analysis the reader is referred to specialized texts such as Refs. 1–3.

Synthetic Aperture Radar Principle

As discussed in the previous section, a real-aperture radar cannot achieve high azimuth resolution from an orbital platform. In order to achieve a high resolution from any altitude, the synthetic-aperture technique is used. This technique uses successive radar echoes acquired at neighboring locations along the flight line to synthesize an equivalent very long antenna which provides a very narrow beamwidth and thus a high image resolution. In this section we explain SAR using two different approaches, namely, the synthetic array approach and the Doppler synthesis approach, which lead to the same results. We will also discuss the use of linear frequency modulation (chirp) as well as some limitations and degradations that are inherent to the SAR technique. Our discussion closely follows that of Ref. 1. The range resolution and radar equation derived previously for a real aperture radar are still valid here. The main difference between real and synthetic aperture radars is in the way in which the azimuth resolution is achieved.

Synthetic Array Approach. The synthetic array approach explains the SAR technique by comparing it to a long array of antennas and the resulting increase in the resolution relative to the resolution achieved with one of the array elements. Let us consider a linear array of antennas consisting of N elements (Fig. 4). The contribution of the nth element to the total far-field electric field E in the direction β is proportional to

a_n e^{iφ_n} e^{−ik d_n sin β}   (21)

where a_n and φ_n are the amplitude and phase of the signal radiated from the nth element, d_n is the distance of the nth element from the array center, and k = 2π/λ. The total electric field is given by

E(β) ∝ Σ_n a_n e^{iφ_n} e^{−ik d_n sin β}   (22)

If all the radiators are identical in amplitude and phase and are equally spaced with a separation d, then

E(β) ∝ a e^{iφ} Σ_n e^{−inkd sin β}   (23)

This is the vector sum of N equal vectors separated by a phase Ψ = kd sin β. The resulting vector shown in Eq. (23) has the following properties:

• For β = 0, Ψ = 0 and all vectors add together, leading to a maximum for E.
• As β increases, the elementary vectors spread and lead to a decrease in the magnitude of E.
• For β such that NΨ = 2π, the vectors are spread all around a circle, leading to a sum equal to zero.
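These properties are easy to verify numerically. The following Python sketch is not part of the original article, and the element count, spacing, and wavelength are assumed values; it evaluates the array factor of Eq. (23) and confirms the maximum at β = 0 and the null where NΨ = 2π, i.e., sin β = λ/(Nd).

```python
import numpy as np

# Sketch (not from the article; N, d, and lambda are assumed values):
# array factor of Eq. (23), |sum_n exp(-i n k d sin(beta))|, for N equally
# spaced, identically fed elements.
lam = 0.23  # wavelength (m)
d = 0.10    # element spacing (m)
N = 64      # number of elements
k = 2.0 * np.pi / lam
n = np.arange(N)

def array_factor(beta):
    return np.abs(np.exp(-1j * n * k * d * np.sin(beta)).sum())

beta_null = np.arcsin(lam / (N * d))  # first null: N * Psi = 2*pi
print(array_factor(0.0))              # equals N: all vectors add in phase
print(array_factor(beta_null))        # essentially zero at the predicted null
```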


Figure 5. The width of the antenna beam in the azimuth direction defines the length of the synthetic aperture.

Thus, the radiation pattern has a null for

Nkd sin β = 2π  ⇒  β = sin⁻¹[2π / (Nkd)] = sin⁻¹(λ/D)   (24)

where D = Nd is the total physical length of the array. From the above it is seen that an array of total length D = Nd has a beamwidth equal to the one for a continuous antenna of physical size D. This is achieved by adding the signals from each element in the array coherently, that is, in amplitude and phase. The fine structure of the antenna pattern depends on the exact spacing of the array. Close spacing of the array elements is required to avoid grating effects.

In a conventional array, the signals from the different elements are combined together with a network of waveguides or cables leading to a single transmitter and receiver. Another approach is to connect each element to its own transmitter and receiver. The signals are coherently recorded and added later using a separate processor. A third approach could be used if the scene is quasi-static: a single transmitter/receiver/antenna element can be moved from one array position to the next. At each location a signal is transmitted and the echo recorded coherently. The echoes are then added in a separate processor or computer. A stable oscillator is used as a reference to ensure coherency as the single element moves along the array line. This last configuration is used in a SAR, where a single antenna element serves to synthesize a large aperture.

Referring to Fig. 5, it is clear that if the antenna beamwidth is equal to θ_a = λ/L, the maximum possible synthetic aperture length that would allow us to observe a point is given by

L′ = R θ_a   (25)

This synthetic array will have a beamwidth θ_s equal to

θ_s = λ / (2L′) = λ / (2Rθ_a) = L / (2R)   (26)

Figure 4. Geometry of a linear array of antenna elements.

The factor 2 is included to account for the fact that the 3 dB (half-power) beamwidth is narrower in a radar configuration, where the antenna pattern is involved twice, once each at transmission and reception. The corresponding surface along-track resolution of the synthetic array is

x_a = R θ_s = L/2   (27)
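The cancellation of range in this chain of relations can be made explicit in code. The following Python sketch is not part of the original article and the function name is ours; it chains Eqs. (4), (25), (26), and (27) and shows that the focused azimuth resolution stays at L/2 when the range is doubled.

```python
# Sketch (not from the article; function name is ours): chaining
# Eqs. (4), (25), (26), and (27). The range dependence cancels, leaving a
# focused azimuth resolution of L/2 at any distance.
def sar_azimuth_resolution(wavelength_m, antenna_len_m, slant_range_m):
    theta_a = wavelength_m / antenna_len_m  # real-antenna beamwidth, Eq. (4)
    l_syn = slant_range_m * theta_a         # synthetic aperture length, Eq. (25)
    theta_s = wavelength_m / (2.0 * l_syn)  # synthetic beamwidth, Eq. (26)
    return slant_range_m * theta_s          # x_a = R * theta_s, Eq. (27)

# A 12 m antenna gives about 6 m resolution whether R is 800 km or 1600 km.
print(sar_azimuth_resolution(0.03, 12.0, 800e3))
print(sar_azimuth_resolution(0.03, 12.0, 1600e3))
```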

472

REMOTE SENSING BY RADAR

S

AR

fli

t gh

pa

For large ranges, this reduces to

th

L ≤

r λR

(30)

2

and the achievable azimuth resolution is Ro

xa ≥

√ 2λR

(31)

Ri

Figure 6. For large synthetic arrays, one has to compensate during ground processing for the change in geometry between the antenna elements and the point being imaged.

This result shows that the azimuth (or along-track) surface resolution is equal to half the size of the physical antenna and is independent of the distance between the sensor and the surface. At first glance, this result seems most unusual. It shows that a smaller antenna gives better resolution. This can be explained in the following way. The smaller the physical antenna is, the larger its footprint. This allows a longer observation time for each point on the surface; that is, a longer array can be synthesized. This longer synthetic array allows a finer synthetic beam and surface resolution. Similarly, if the range between the sensor and surface increases, the physical footprint increases, leading to a longer synthetic array and finer angular resolution which counterbalances the increase in the range. As the synthetic array gets larger, it becomes necessary to compensate for the slight changes in geometry when a point is observed (Fig. 6). It should also be taken into account that the distance between the sensor and the target is variable depending on the position in the array. Thus, an additional phase shift needs to be added in the processor to the echo received at location xi equal to 4π (R0 − Ri ) φi = 2k(R0 − Ri ) = λ

(28)

where R0 is the range at closest approach to the point being imaged. In order to focus at a different point, a different set of phase shift corrections needs to be used. However, because this is done at a later time in the processor, optimum focusing can be achieved for each and every point in the scene. SAR imaging systems that fully apply these corrections are called focused. In order to keep the processing simple, one can shorten the synthetic array length and use only a fraction of the maximum possible length. In addition, the same phase shift correction can be applied to all the echoes. This would lead to constructive addition if the partial array length L is such that φi ≤ π/4, or

2k(√(R² + L²/4) − R) ≤ π/4   (29)

This is called an unfocused SAR configuration, where the azimuth resolution is somewhat degraded relative to the fully focused one but still better than that of a real-aperture radar. The advantage of the unfocused SAR is that the processing is fairly simple compared to that of a fully focused SAR. To illustrate, for a 12 m antenna, a wavelength of 3 cm, and a platform altitude of 800 km, the azimuth resolutions will be 6 m, 220 m, and 2000 m for the cases of a fully focused SAR, an unfocused SAR, and a real-aperture radar, respectively.

Doppler Synthesis Approach. Another way to explain the synthetic aperture technique is to examine the Doppler shifts of the radar signals. As the radar sensor moves relative to the target being illuminated, the backscattered echo is shifted in frequency due to the Doppler effect. This Doppler frequency is equal to

fd = 2 f0 (v/c) cos Ψ = 2 (v/λ) sin θt   (32)

where f0 is the transmitted signal frequency, Ψ is the angle between the velocity vector v = v v̂ and the sensor–target line, and θt = π/2 − Ψ. As the target moves through the beam, the angle θt varies from +θa/2 to −θa/2, where θa is the antenna azimuth beamwidth. Thus, a well-defined Doppler history is associated with every target. Figure 7 shows such a history

Figure 7. Doppler history for two targets separated in the azimuth direction.
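The three azimuth resolutions compared above can be checked with a short script. This is an illustrative sketch, not from the article: the focused and real-aperture formulas follow Eq. (27) and the beam-limited footprint, while the unfocused expression √(2λR) is one of several conventions in the literature (they differ by factors of √2) and is used here because it reproduces the 220 m example quoted in the text.

```python
import math

def azimuth_resolutions(antenna_len_m, wavelength_m, range_m):
    """Approximate azimuth (along-track) resolutions for the three
    processing options discussed in the text.  The unfocused formula is
    one common convention; others differ by factors of sqrt(2)."""
    focused = antenna_len_m / 2.0                            # fully focused SAR: L/2
    unfocused = math.sqrt(2.0 * wavelength_m * range_m)      # quarter-wave phase-error limit
    real_aperture = range_m * wavelength_m / antenna_len_m   # beam-limited footprint
    return focused, unfocused, real_aperture

# The example from the text: 12 m antenna, 3 cm wavelength, 800 km altitude
f, u, r = azimuth_resolutions(12.0, 0.03, 800e3)
print(f"focused: {f:.0f} m, unfocused: {u:.0f} m, real aperture: {r:.0f} m")
```

For these inputs the script prints 6 m, 219 m, and 2000 m, consistent with the rounded values quoted in the text.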

REMOTE SENSING BY RADAR

for two neighboring targets P and P′ located at the same range but at different azimuth positions. The Doppler shift varies from +fD to −fD, where

fD = 2(v/λ) sin(θa/2)   (33)

When θa ≪ 1, Eq. (33) can be written as

fD = 2(v/λ)(θa/2) = 2(v/λ)(λ/2L) = v/L   (34)

The instantaneous Doppler shift is given by

fD(t) = 2(v/λ)θt ≈ 2v²t cos θ/(λh)   (35)

where t = 0 corresponds to the time when the target is exactly at 90° to the flight track. Thus, the Doppler histories for the two points P and P′ will be identical except for a time displacement equal to

Δt = PP′/v   (36)
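Equations (35) and (36) can be verified numerically: two targets at the same range but displaced along track have identical linear Doppler histories, shifted in time by Δt = PP′/v. The platform values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative (assumed) parameters: speed, wavelength, altitude, look angle
v, lam, h, theta = 7000.0, 0.056, 800e3, np.radians(30.0)

def doppler_history(t, t0=0.0):
    """Linear Doppler history of Eq. (35): fD(t) = 2 v^2 (t - t0) cos(theta)/(lam h),
    with t0 the zero-Doppler crossing time of the target."""
    return 2.0 * v**2 * (t - t0) * np.cos(theta) / (lam * h)

dx = 50.0          # along-track separation PP' between targets P and P' (m)
dt = dx / v        # Eq. (36): time displacement between the two histories
t = np.linspace(-0.5, 0.5, 101)

# P' has the same history as P, delayed by dt
assert np.allclose(doppler_history(t, t0=dt), doppler_history(t - dt))
print(f"time displacement: {dt * 1e3:.2f} ms")
```

For these numbers the displacement is about 7.1 ms, which is what the processor must resolve to separate the two targets.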

It is this time displacement that allows the separation of the echoes from each of the targets. The resolution along track (azimuth resolution) xa is equal to the smallest separation PP′ that leads to a time separation Δt that is measurable with the imaging sensor. It can be shown that this time separation is equal to the inverse of the total Doppler bandwidth BD = 2fD. In a qualitative way, it can be stated that a large BD gives a longer Doppler history that can be better matched to a template. This would allow a better determination of the zero Doppler crossing time. Thus, the azimuth resolution is given by

xa = (PP′)min = vΔtmin = v · 1/(2fD) = v · L/(2v) = L/2   (37)

which is the same as the result derived using the synthetic array approach [see Eq. (27)]. As mentioned earlier, the imaging radar transmits a series of pulsed electromagnetic waves. Thus, the Doppler history from a point P is not measured continuously but sampled on a repetitive basis. In order to get an accurate record of the Doppler history, the Nyquist sampling criterion requires that sampling occur at least at twice the highest frequency in the Doppler shift. Thus, the pulse repetition frequency (PRF) must be larger than

PRF ≥ 2fD = 2v/L   (38)

In other terms, the above equation means that at least one sample (i.e., one pulse) should be taken every time the sensor moves by half an antenna length. The corresponding aspect in the synthetic array approach is that the array elements should be close enough to each other to have a reasonably "filled" total aperture in order to avoid significant grating effects. To illustrate, for a spaceborne imaging system moving at a speed of 7 km/s and using an antenna 10 m in length, the corresponding minimum PRF is 1.4 kHz.

Signal Fading and Speckle

A close examination of a synthetic-aperture radar image shows that the brightness variation is not smooth but has a granular texture which is called speckle (Fig. 8). Even for an imaged scene which has a constant backscatter property, the image will have statistical variations of the brightness on a pixel-by-pixel basis but will have a constant mean over many pixels. This effect is identical to what is observed when a scene is viewed optically under laser illumination. It is a result of the coherent nature (or very narrow spectral width) of the illuminating signal. To explain this effect in a simple way, let us consider a scene which is completely "black" except for two identical bright targets separated by a distance d. The received signal V at the radar is given by

V = V0 e^(−i2kr1) + V0 e^(−i2kr2)   (39)

and assuming that d ≪ r0 (for spaceborne radars, the pixel size is typically on the order of tens of meters, while the range is typically several hundred kilometers), we obtain

V = V0 e^(−i2kr0)(e^(−ikd sin θ) + e^(+ikd sin θ)) ⇒ |V| = 2|V0 cos(kd sin θ)|   (40)

which shows that, depending on the exact location of the sensor, a significantly different signal value would be measured. If we now consider an image pixel which consists of a very large number of point targets, the resulting coherent superposition of all the patterns will lead to a "noise-like" signal. Rigorous mathematical analysis shows that the resulting signal has well-defined statistical properties (1–3). The measured

Figure 8. The granular texture shown in this image acquired by the NASA/JPL AIRSAR system is known as speckle. Speckle is a consequence of the coherent manner in which a synthetic aperture radar acquires images.


signal amplitude has a Rayleigh distribution, and the signal power has an exponential distribution (2). In order to narrow the width of these distributions (i.e., reduce the brightness fluctuations), successive signals or neighboring pixels can be averaged incoherently. This would lead to a more accurate radiometric measurement (and a more pleasing image) at the expense of degradation in the image resolution. Another approach to reduce speckle is to combine images acquired at neighboring frequencies. In this case the exact interference patterns lead to independent signals but with the same statistical properties. Incoherent averaging would then result in a smoothing effect. In fact, this is the reason why a scene illuminated with white light does not show speckled image behavior. In most imaging SARs, the smoothing is done by averaging the brightness of neighboring pixels in azimuth, or range, or both. The number of pixels averaged is called the number of looks N. It can be shown (1) that the signal standard deviation SN is related to the mean signal power P by

SN = P/√N   (41)
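Equation (41) is easy to verify with a small Monte Carlo experiment: single-look speckle power is exponentially distributed, and averaging N looks shrinks the standard deviation by √N. The sample size and random seed below are arbitrary choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def multilook_std(n_looks, mean_power=1.0, n_pixels=200_000):
    """Empirical standard deviation of N-look-averaged speckle power.
    Single-look power is exponentially distributed; Eq. (41) predicts
    a standard deviation of mean_power / sqrt(N)."""
    looks = rng.exponential(mean_power, size=(n_pixels, n_looks))
    return looks.mean(axis=1).std()

for n in (1, 4, 16):
    print(f"N={n:2d}: measured {multilook_std(n):.3f}, "
          f"Eq. (41) predicts {1.0 / np.sqrt(n):.3f}")
```

The measured values track the 1/√N prediction closely, illustrating the radiometric gain that multilooking buys at the cost of resolution.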

The larger the number of looks N, the better the quality of the image from the radiometric point of view. However, this degrades the spatial resolution of the image. It should be noted that for N larger than about 25, a large increase in N leads to only a small decrease in the signal fluctuation. This small improvement in the radiometric resolution should be traded off against the corresponding loss in spatial resolution. For example, if one were to average 10 resolution cells in a four-look image, the speckle noise will be reduced to about 0.5 dB. At the same time, however, the image resolution will be reduced by an order of magnitude. Whether this loss in resolution is worth the reduction in speckle noise depends on both the aim of the investigation and the kind of scene imaged. Figure 9 shows the effect of multilook averaging. The same image as Fig. 8, acquired by the NASA/JPL AIRSAR system, is shown displayed at 1, 4, 16, and 32 looks, respectively. This figure clearly illustrates the smoothing effect, as well as the decrease in resolution resulting from the multilook process. In one early survey of geologists done by Ford (4), the results showed that even though the optimum number of looks depended on the scene type and resolution, the majority of the responses preferred two-look images. However, this survey dealt with images that had rather poor resolution to begin with, and one may well find that with today's higher-resolution systems, analysts may ask for a larger number of looks.

Ambiguities and Anomalies

Radar images can contain a number of anomalies which result from the way imaging radars generate the image. Some of these are similar to what is encountered in optical systems, such as blurring due to defocusing or scene motion, and some, such as range ambiguities, are unique to radar systems. This section addresses the anomalies which are most commonly encountered in radar images. As mentioned earlier (see Fig. 3), a radar images a surface by recording the echoes line by line with successive pulses. The leading edge of each echo corresponds to the near edge of

Figure 9. The effects of speckle can be reduced by incoherently averaging pixels in a radar image, a process known as multilooking. Shown in this figure is the same image, processed as a single look (the basic radar image), 4 looks, 16 looks, and 32 looks. Note the reduction in granular texture as the number of looks increases. Also note that as the number of looks increases, the resolution of the images decreases. Some features, such as those in the largest dark patch, may be completely masked by the speckle noise.

the image scene, and the tail end of the echo corresponds to the far edge of the scene. The length of the echo (i.e., the swath width of the scene covered) is determined by the antenna beamwidth and the size of the data window. The exact timing of the echo reception depends on the range between the sensor and the surface being imaged. If the timing of the pulses or the extent of the echo is such that the leading edge of one echo overlaps with the tail end of the previous one, then the far edge of the scene is folded over the near edge of the scene.


This is called range ambiguity. Referring to Fig. 10, the temporal extent of the echo is equal to

Te ≈ 2(Rθr/c) tan θ = 2hλ sin θ/(cW cos²θ)   (42)

This time extent should be shorter than the time separating two pulses (i.e., 1/PRF). Thus, we must have

PRF < cW cos²θ/(2hλ sin θ)   (43)

In addition, the sensor parameters, specifically the PRF, should be selected such that the echo is completely within an interpulse period; that is, no echoes should be received during the time that a pulse is being transmitted. The above equation gives an upper limit for the PRF. Another kind of ambiguity present in SAR imagery also results from the fact that the target's return in the azimuth direction is sampled at the PRF. This means that the azimuth spectrum of the target return repeats itself in the frequency domain at multiples of the PRF. In general, the azimuth spectrum is not a band-limited signal; instead, the spectrum is weighted by the antenna pattern in the azimuth direction. This means that parts of the azimuth spectrum may be aliased, and high-frequency data will actually appear in the low-frequency part of the spectrum. In actual images, these azimuth ambiguities appear as ghost images of a target repeated at some distance in the azimuth direction, as shown in Fig.


11. To reduce the azimuth ambiguities, the PRF of a SAR has to exceed the lower limit given by Eq. (38). In order to reduce both range and azimuth ambiguities, the PRF must satisfy the conditions expressed by both Eqs. (38) and (43). Therefore, we must insist that


Figure 11. Azimuth ambiguities result when the radar pulse repetition frequency is too low to sample the azimuth spectrum of the data adequately. In that case, the edges of the azimuth spectrum fold over themselves, creating ghost images as shown in this figure. The top image was adequately sampled and processed, while the bottom one clearly shows the ghost images due to the azimuth ambiguities. The data were acquired with the NASA/JPL AIRSAR system, and a portion of Death Valley in California is shown.

cW cos²θ/(2hλ sin θ) > 2v/L   (44)

from which we derive a lower limit for the antenna size as

LW > 4vhλ sin θ/(c cos²θ)   (45)

Figure 10. Temporal extent of radar echoes. If the timing of the pulses or the temporal extent of the echoes is such that the leading edge of one echo overlaps the trailing edge of the previous one, the far edge of the scene will be folded over the near edge, a phenomenon known as range ambiguities.
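The PRF window of Eqs. (38) and (43) and the antenna-size bound of Eq. (45) can be combined in a short helper. The geometry below is an assumed example rather than a system described in the text, and the speed of light is approximated as 3 × 10⁸ m/s.

```python
import math

C = 3.0e8  # speed of light, m/s (approximate)

def prf_window(v, L, h, lam, theta_deg, W):
    """PRF bounds from Eqs. (38) and (43) plus the minimum antenna area
    of Eq. (45).  SI units; theta_deg is the look angle in degrees."""
    th = math.radians(theta_deg)
    prf_min = 2.0 * v / L                                                  # Eq. (38)
    prf_max = C * W * math.cos(th) ** 2 / (2.0 * h * lam * math.sin(th))   # Eq. (43)
    area_min = 4.0 * v * h * lam * math.sin(th) / (C * math.cos(th) ** 2)  # Eq. (45)
    return prf_min, prf_max, area_min

# Assumed example: 7 km/s platform, 10 m x 2 m antenna, 800 km altitude,
# 3 cm wavelength, 30 degree look angle
lo, hi, a_min = prf_window(7000.0, 10.0, 800e3, 0.03, 30.0, 2.0)
print(f"PRF window: {lo:.0f} Hz to {hi:.0f} Hz")
print(f"minimum antenna area: {a_min:.2f} m^2 (this antenna: 20 m^2)")
```

For these numbers the window runs from 1400 Hz to 18750 Hz, and the 20 m² antenna comfortably exceeds the roughly 1.5 m² minimum, so an ambiguity-free PRF exists.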

Another type of artifact in radar images results when a very bright surface target is surrounded by a dark area. As the image is being formed, some spillover from the bright target, called sidelobes, although weak, could exceed the background and become visible, as shown in Fig. 12. It should be pointed out that this type of artifact is not unique to radar systems. Similar artifacts are common in optical systems, where they are known as the sidelobes of the point spread function. The difference is that in optical systems the sidelobe characteristics are determined by the characteristics of the imaging optics (i.e., the

Figure 12. Sidelobes from the bright target, indicated by arrows in this image, mask out the return from the dark area surrounding the target. The characteristics of the sidelobes are determined mainly by the characteristics of the radar processing filters.

Figure 13. Radar images are cylindrical projections of the scene onto the image plane, leading to characteristic distortions: b′ appears closer than a′ in the radar image (layover); d′ and e′ are closer together in the radar image (foreshortening); and the slope from h to i is not illuminated by the radar (radar shadow). Refer to the text for more detailed discussions.

hardware), whereas in the case of a SAR the sidelobe characteristics are determined by the characteristics of the processing filters. In the radar case, the sidelobes may therefore be reduced by suitable weighting of the signal spectra during matched filter compression. The equivalent procedure in optical systems is apodization of the telescope aperture. The vast majority of these artifacts and ambiguities can be avoided with proper selection of the sensor's and processor's parameters. However, the interpreter should be aware of their occurrence, because in some situations they might be difficult, if not impossible, to suppress.

Geometric Effects and Projections

The time delay/Doppler history basis of SAR image generation leads to an image projection different from that of optical sensors. Even though at first glance radar images seem very similar to optical images, close examination quickly shows that geometric shapes and patterns are projected in a different fashion by the two sensors. This difference is particularly acute in rugged terrain. If the topography is known, a radar image can be reprojected into a format identical to an optical image, thus allowing image pixel registration. In extremely rugged terrain, however, the nature of the radar image projection leads to distortions which sometimes cannot be corrected. In the radar image, two neighboring pixels in the range dimension correspond to two areas in the scene with slightly different range to the sensor. This has the effect of projecting the scene in a cylindrical geometry on the image plane, which leads to distortions as shown in Fig. 13. Areas that slope toward the sensor look shorter in the image, while areas that slope away from the sensor look longer in the image than horizontal areas. This effect is called foreshortening. In the extreme case where the slope is larger than the incidence angle, layover occurs.
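The slope rules of this subsection (foreshortening, layover, and, for slopes facing away from the radar, the shadowing discussed below) can be condensed into a toy classifier. This is a rule-of-thumb sketch for a planar facet; the function name and the "lengthened" label are illustrative choices, not terminology from the text.

```python
def radar_distortion(slope_deg, faces_radar, incidence_deg):
    """Classify the dominant geometric distortion for a planar terrain
    facet, given its slope, whether it faces the radar, and the local
    incidence angle (all angles in degrees).  A rule-of-thumb sketch."""
    if faces_radar:
        # Fore slopes are compressed; once the slope exceeds the incidence
        # angle, the top of the hill is imaged before its base: layover.
        return "layover" if slope_deg > incidence_deg else "foreshortening"
    # Back slopes are stretched; once steeper than the complement of the
    # incidence angle they are not illuminated at all: radar shadow.
    return "shadow" if slope_deg > 90.0 - incidence_deg else "lengthened"

for slope, toward in [(10, True), (40, True), (10, False), (70, False)]:
    print(slope, "toward" if toward else "away", "->",
          radar_distortion(slope, toward, incidence_deg=30.0))
```

With a 30° incidence angle, a 40° fore slope lays over while a 10° fore slope is merely foreshortened, and a 70° back slope falls into radar shadow.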
In this case, a hill would look as if it is projected over the region in front of it. Layover cannot be corrected and can only be avoided by having an incidence angle at the surface larger than any expected surface slopes. When the slope facing away from the radar is steep enough that the radar waves do not illuminate it, shadowing occurs and the area on that slope is not imaged. Note that in radar images, shadowing is always away from the sensor flight line and does not depend on the time of data acquisition or the sun angle in the sky. Shadowing can be beneficial for highlighting surface morphologic patterns. Figure 14 contains some examples of foreshortening and shadowing.

ADVANCED SAR TECHNIQUES

The field of synthetic aperture radar has changed dramatically over the years, especially over the past decade with the operational introduction of advanced radar techniques such as polarimetry and interferometry. While both of these techniques had been demonstrated much earlier, radar polarimetry only became an operational research tool with the introduction of the NASA/JPL AIRSAR system in the early 1980s, and it reached a climax with the two SIR-C/X-SAR flights on board

Figure 14. This NASA/JPL AIRSAR image shows examples of foreshortening and shadowing. Note that since the radar provides its own illumination, radar shadowing is a function of the radar look direction and does not depend on the sun angle. This image was illuminated from the left.

REMOTE SENSING BY RADAR

the space shuttle Endeavour in April and October 1994. Radar interferometry received a tremendous boost when the airborne TOPSAR system was introduced in 1991 by NASA/JPL, and it progressed even further when data from the European Space Agency ERS-1 radar satellite became routinely available in 1991.

SAR Polarimetry

Radar polarimetry is covered in detail in a different article in this encyclopedia. We therefore only summarize this technique here for completeness; the reader is referred to the appropriate article in this encyclopedia for the mathematical details. Electromagnetic wave propagation is a vector phenomenon; that is, all electromagnetic waves can be expressed as complex vectors. Plane electromagnetic waves can be represented by two-dimensional complex vectors. This is also the case for spherical waves when the observation point is sufficiently far removed from the source of the spherical wave. Therefore, if one observes a wave transmitted by a radar antenna when the wave is a large distance from the antenna (in the far field of the antenna), the radiated electromagnetic wave can be adequately described by a two-dimensional complex vector. If this radiated wave is now scattered by an object, and one observes this wave in the far field of the scatterer, the scattered wave can again be adequately described by a two-dimensional vector. In this abstract way, one can consider the scatterer as a mathematical operator which takes one two-dimensional complex vector (the wave impinging upon the object) and changes it into another two-dimensional vector (the scattered wave). Mathematically, therefore, a scatterer can be characterized by a complex 2 × 2 scattering matrix. This matrix is, however, a function of the radar frequency and the viewing geometry. Once the complete scattering matrix is known and calibrated, one can synthesize the radar cross section for any arbitrary combination of transmit and receive polarizations.
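The polarization-synthesis idea can be sketched in a few lines: given a calibrated 2 × 2 scattering matrix S, the received voltage for transmit and receive Jones vectors is p_rx·S·p_tx, and the synthesized power is its squared magnitude. The dihedral matrix and the circular-polarization convention below are illustrative assumptions (sign and basis conventions vary in the literature), and the absolute calibration constant is omitted.

```python
import numpy as np

def synthesized_power(S, p_tx, p_rx):
    """Relative power received for the given transmit/receive polarization
    states (Jones vectors), for a 2x2 complex scattering matrix S.
    Monostatic backscatter convention assumed; calibration constant omitted."""
    return abs(p_rx @ S @ p_tx) ** 2

# Idealized double-bounce (dihedral) scatterer: HH and VV of opposite sign
S_dihedral = np.array([[1.0, 0.0],
                       [0.0, -1.0]], dtype=complex)

h = np.array([1.0, 0.0])                 # horizontal linear
v = np.array([0.0, 1.0])                 # vertical linear
c = np.array([1.0, 1j]) / np.sqrt(2.0)   # one circular-polarization convention

print("HH:", synthesized_power(S_dihedral, h, h))  # co-polarized return
print("HV:", synthesized_power(S_dihedral, h, v))  # no cross-pol for this S
print("CC:", synthesized_power(S_dihedral, c, c))  # strong circular co-pol
```

Repeating the last line with the identity matrix (an idealized odd-bounce scatterer) gives zero, the classic co-polarized circular null that separates odd- from even-bounce scattering.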
Figure 15 shows a number of such synthesized images for the San Francisco Bay area in California. The data were acquired with the NASA/JPL AIRSAR system. The typical implementation of a radar polarimeter involves transmitting a wave of one polarization and receiving echoes in two orthogonal polarizations simultaneously. This is followed by transmitting a wave with a second polarization, and again receiving echoes with both polarizations simultaneously. In this way, all four elements of the scattering matrix are measured. This implementation means that the transmitter is in slightly different positions when measuring the two columns of the scattering matrix, but this distance is typically small compared to a synthetic aperture and therefore does not lead to a significant decorrelation of the signals. The NASA/JPL AIRSAR system pioneered this implementation for SAR systems (5), and the same implementation was used in the SIR-C part of the SIR-C/X-SAR radars (6). The past few years have seen relatively little advance in the development of hardware for polarimetric SAR systems; newer implementations are simply using more advanced technology to implement the same basic hardware configurations as the initial systems. Significant advances were made, however, in the field of analysis and application of polarimetric SAR data.


Polarimetric SAR Calibration. Many of the advances made in analyzing polarimetric SAR data result directly from the greater availability of calibrated data. Unlike the case of single-channel radars, where only the radar cross section needs to be calibrated, polarimetric calibration usually involves four steps: cross-talk removal, phase calibration, channel imbalance compensation, and absolute radiometric calibration (7). Cross-talk removal refers to correcting mostly the cross-polarized elements of the scattering matrix for the effects of system cross-talk that couples part of the copolarized returns into the cross-polarized channel. Phase calibration refers to correcting the copolarized phase difference for uncompensated path length differences in the transmit and receive chains, while channel imbalance refers to balancing the copolarized and cross-polarized returns for uncompensated gain differences in the two transmit and receive chains. Finally, absolute radiometric calibration involves using some kind of a reference calibration source to determine the overall system gain to relate received power levels to normalized radar cross section. While most of the polarimetric calibration algorithms currently in use were published several years ago (7–11), several groups are still actively pursuing the study of improved calibration techniques and algorithms. The earlier algorithms are reviewed in Refs. 12 and 13, while Ref. 14 provides a comprehensive review of SAR calibration in general. Some of these earlier algorithms are now routinely used to calibrate polarimetric SAR data operationally, as for example in the NASA/JPL AIRSAR and SIR-C processors (15).

Example Applications of Polarimetric SAR Data. The availability of calibrated polarimetric SAR data allowed research to move from the qualitative interpretation of SAR images to the quantitative analysis of the data.
This sparked significant progress in the classification of polarimetric SAR images, led to improved models of scattering by different types of terrain, and allowed the development of some algorithms to invert polarimetric SAR data for geophysical parameters, such as forest biomass, surface roughness, and soil moisture. Classification of Earth Terrain. Many earth science studies require information about the spatial distribution of land cover types, as well as the change in land cover and land use with time. In addition, it is increasingly recognized that the inversion of SAR data for geophysical parameters involves an initial step of segmenting the image into different terrain classes, followed by inversion using the algorithm appropriate for the particular terrain class. Polarimetric SAR systems, capable of providing high-resolution images under all weather conditions as well as during day or night, provide a valuable data source for classification of earth terrain into different land cover types. Two main approaches are used to classify images into land cover types: (1) maximum likelihood classifiers based on Bayesian statistical analysis and (2) knowledge-based techniques designed to identify dominant scattering. Some of the earlier studies in Bayesian classification focused on quantifying the increased accuracy gained from using all the polarimetric information. References 16 and 17 showed that the classification accuracy is significantly increased when the complete polarimetric information is used compared to that achieved with single-channel SAR data. These earlier classifiers assumed equal a priori probabilities for all classes, and modeled the SAR amplitudes as circular


Figure 15. Radar polarimetry allows one to synthesize images at any polarization combination. This set of images of San Francisco, California, was synthesized from a single set of measurements acquired by the NASA/JPL AIRSAR system. Note the differential change in brightness between the city (the bright area) and Golden Gate Park, the dark rectangular area in the middle of the images. This differential change is due to a difference in scattering mechanism. The city area is dominated by a double reflection from the streets to the buildings and back to the radar, while the park area exhibits much more diffuse scattering.

Gaussian distributions, which means that the textural variations in radar backscatter are not considered to be significant enough to be included in the classification scheme. Reference 18 extended the Bayesian classification to allow different a priori probabilities for different classes. Their method first classifies the image into classes assuming equal a priori probabilities, and then it iteratively changes the a priori probabilities for subsequent classifications based on the local results of previous classification runs. Significant improvement in classification accuracy is obtained with only a few iterations. More accurate results are obtained using a more rigorous maximum a posteriori (MAP) classifier where the a priori distribution of image classes is modeled as a Markov random field and the optimization of the image classes is done over the whole image instead of on a pixel-by-pixel basis (19). In a

subsequent work, the MAP classifier is extended to include the case of multifrequency polarimetric radar data (20). The MAP classifier was used in Ref. 21 to map forest types in the Alaskan boreal forest. In this study, five vegetation types (white spruce, balsam poplar, black spruce, alder/willow shrubs, and bog/fen/nonforest) were separated with accuracies ranging from 62% to 90%, depending on which frequencies and polarizations are used. Knowledge-based classifiers are implemented based upon determination of dominant scattering mechanisms through an understanding of the physics of the scattering process as well as experience gained from extensive experimental measurements (22). One of the earliest examples of such a knowledge-based classifier was published in Ref. 23. In this unsupervised classification, knowledge of the physics of the


scattering process was used to classify images into three classes: odd numbers of reflections, even numbers of reflections, and diffuse scattering. The odd and even numbers of reflection classes are separated based on the copolarized phase difference, while the diffuse scattering class is identified based on high cross-polarized return and low correlation between the copolarized channels. While no direct attempt was made to identify each class with a particular terrain type, it was noted that in most cases the odd numbers of reflection class corresponded to bare surfaces or open water, even numbers of reflections usually indicated urban areas or sparse forests, sometimes with understory flooding present, and diffuse scattering is usually identified with vegetated areas. As such, all vegetated areas are lumped into one class, restricting the application of the results. Reference 22 extended this idea and developed a level 1 classifier that segments images into four classes: tall vegetation (trees), short vegetation, urban surfaces, and bare surfaces. First the urban areas are separated from the rest by using the L-band copolarized phase difference and the image texture at C-band. Then areas containing tall vegetation are identified using the L-band cross-polarized return. Finally, the C-band cross-polarized return and the L-band texture are used to separate the areas containing short vegetation from those with bare surfaces. Accuracies better than 90% are reported for this classification scheme when applied to two different images acquired in Michigan. Another example of a knowledge-based classification is reported in Ref. 24. In this study, a decision-tree classifier is used to classify images of the Amazonian floodplain near Manaus, Brazil into five classes: water, clearing, macrophyte, nonflooded forest, and flooded forest, based on polarimetric scattering properties. Accuracies better than 90% are reported.

Geophysical Parameter Estimation.
One of the most active areas of research in polarimetric SAR involves estimating geophysical parameters directly from the radar data through model inversion. Space does not permit a full discussion of recent work; in this section only a brief summary will be provided, with the emphasis on vegetated areas. Many electromagnetic models exist to predict scattering from vegetated areas (25–34), and this remains an area of active research. Much of the work is aimed at estimating forest biomass (35–39). Earlier works correlated polarimetric SAR backscatter with total above-ground biomass (35,36) and suggested that the backscatter saturates at a biomass level that scales with frequency, a result also predicted by theoretical models. This led some investigators to conclude that these saturation levels define the upper limits for accurate estimation of biomass (40), arguing for the use of low-frequency radars for monitoring forest biomass (41). More recent work suggests that some spectral gradients and polarization ratios do not saturate as quickly and may therefore be used to extend the range of biomass levels for which accurate inversions could be obtained (37). Reference 41 showed that inversion results are most accurate for monospecies forests, and it also showed that accuracies decrease for less homogeneous forests. They conclude that the accuracies of the radar estimates of biomass are likely to increase if structural differences between forest types are accounted for during the inversion of the radar data. Such an integrated approach to retrieval of forest biophysical characteristics is reported in Refs. 42 and 43. These studies first segment images into different forest structural types, and then they use algorithms appropriate for each structural type in the inversion. Furthermore, Ref. 43 estimates the total biomass by first using the radar data to estimate tree basal area and height and crown biomass. The tree basal area and height are then used in allometric equations to estimate the trunk biomass. The total biomass, which is the sum of the trunk and crown biomass values, is shown to be accurately related to allometric total biomass levels up to 25 kg/m2, while Ref. 44 estimates that biomass levels as high as 34 kg/m2 to 40 kg/m2 could be estimated with an accuracy of 15% to 25% using multipolarization C-, L-, and P-band SAR data. Research in retrieving geophysical parameters from nonvegetated areas is also an active research area, although not as many groups are involved. One of the earliest algorithms to infer soil moisture and surface roughness for bare surfaces was published in Ref. 45. This algorithm uses polarization ratios to separate the effects of surface roughness and soil moisture on the radar backscatter, and an accuracy of 4% for soil moisture is reported. More recently, Dubois et al. (46) reported a slightly different algorithm, based only on the copolarized backscatters measured at the L-band. Their results, using data from scatterometers, airborne SARs, and spaceborne SARs (SIR-C), show an accuracy of 4.2% when inferring soil moisture over bare surfaces. Reference 47 reported an algorithm to measure snow wetness, and it demonstrated accuracies of 2.5%.

SAR Interferometry

SAR interferometry refers to a class of techniques where additional information is extracted from SAR images that are acquired from different vantage points, or at different times. Various implementations allow different types of information to be extracted.
For example, if two SAR images are acquired from slightly different viewing geometries, information about the topography of the surface can be inferred. On the other hand, if images are taken at slightly different times, a map of surface velocities can be produced. Finally, if sets of interferometric images are combined, subtle changes in the scene can be measured with extremely high accuracy. In this section we shall first discuss so-called cross-track interferometers used for the measurement of surface topography. This will be followed by a discussion of along-track interferometers used to measure surface velocity. The section ends with a discussion of differential interferometry used to measure surface changes and deformation.

Radar Interferometry for Measuring Topography. SAR interferometry was first demonstrated by Graham (48), who demonstrated a pattern of nulls or interference fringes by vectorially adding the signals received from two SAR antennas, one physically situated above the other. Later, Zebker and Goldstein (49) demonstrated that these interference fringes can be formed after SAR processing of the individual images if both the amplitude and the phase of the radar images are preserved during the processing. The basic principles of interferometry can be explained using the geometry shown in Fig. 16. Using the law of cosines on the triangle formed by the two antennas and the point being imaged, it follows that

(R + δR)² = R² + B² − 2BR cos(π/2 − θ + α)   (46)


REMOTE SENSING BY RADAR

Figure 16. Basic interferometric radar geometry. The path length difference between the signals measured at each of the two antennas is a function of the elevation of the scatterer.

where R is the slant range to the point being imaged from the reference antenna, δR is the path length difference between the two antennas, B is the physical interferometric baseline length, θ is the look angle to the point being imaged, and α is the baseline tilt angle with respect to the horizontal. From Eq. (46) it follows that we can solve for the path length difference δR. If we assume that R ≫ B (a very good assumption for most interferometers), one finds that

δR ≈ −B sin(θ − α)   (47)
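The quality of the far-field approximation in Eq. (47) is easy to check numerically. The sketch below compares the exact law-of-cosines path difference of Eq. (46) with the approximation; the range, baseline, and angles are illustrative values only, not the parameters of any particular system:

```python
import math

def delta_r_exact(R, B, theta, alpha):
    # Solve Eq. (46) for the path length difference delta_R
    return math.sqrt(R**2 + B**2 - 2.0*B*R*math.cos(math.pi/2 - theta + alpha)) - R

def delta_r_approx(B, theta, alpha):
    # Far-field approximation, Eq. (47), valid when R >> B
    return -B * math.sin(theta - alpha)

# Assumed spaceborne-like geometry: 800 km slant range, 62 m baseline
R, B = 800e3, 62.0
theta, alpha = math.radians(35.0), math.radians(0.0)
exact = delta_r_exact(R, B, theta, alpha)
approx = delta_r_approx(B, theta, alpha)
# At this range the two differ by only a few millimeters
```

The residual term is of order B²/(2R), which is why the approximation is excellent for any realistic interferometer geometry.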

The radar system does not measure the path length difference explicitly, however. Instead, what is measured is an interferometric phase difference that is related to the path length difference through

φ = (a2π/λ) δR = −(a2π/λ) B sin(θ − α)   (48)

where a = 1 for the case where signals are transmitted out of one antenna and received through both at the same time, and a = 2 for the case where the signal is alternately transmitted and received through one of the two antennas only. The radar wavelength is denoted by λ. From Fig. 16, it also follows that the elevation of the point being imaged is given by

z(y) = h − R cos θ   (49)

with h denoting the height of the imaging reference antenna above the reference plane with respect to which elevations are quoted. From Eq. (48) one can infer the actual radar look angle from the measured interferometric phase as

θ = α − sin⁻¹[λφ/(a2πB)]   (50)

Using Eqs. (50) and (49), one can now express the inferred elevation in terms of system parameters and measurables as

z(y) = h − R cos{α − sin⁻¹[λφ/(a2πB)]}   (51)

This expression is the fundamental IFSAR equation for the broadside imaging geometry. SAR interferometers for the measurement of topography can be implemented in one of two ways. In the case of single-pass interferometry, the system is configured to measure the two images at the same time through two different antennas, usually arranged one above the other. The physical separation of the antennas is referred to as the baseline of the interferometer. In the case of repeat-track interferometry, the two images are acquired by physically imaging the scene at two different times using two different viewing geometries. So far all single-pass interferometers have been implemented using airborne SARs (49–51). The Shuttle Radar Topography Mission (SRTM), a joint project between the United States National Imagery and Mapping Agency (NIMA) and the National Aeronautics and Space Administration (NASA), will be the first spaceborne implementation of a single-pass interferometer (52). Scheduled for launch in 1999, SRTM will use modified hardware from the C-band radar of the SIR-C system, with a 62-m-long boom and a second antenna to form a single-pass interferometer. The SRTM mission will acquire digital topographic data of the globe between 60° north and south latitudes during one 11-day shuttle mission. The SRTM mission will also acquire interferometric data using modified hardware from the X-band part of the SIR-C/X-SAR system. The swaths of the X-band system, however, are not wide enough to provide global coverage during the mission. Most of the SAR interferometry research has gone into understanding the various error sources and how to correct their effects during and after processing. As a first step, careful motion compensation must be performed during processing to correct for the actual deviation of the aircraft platform from a straight trajectory (53). As mentioned before, the single-look SAR processor must preserve both the amplitude and the phase of the images.
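Equations (48)–(51) can be chained into a small height-reconstruction routine. The following is only a sketch of the geometry: the numbers are assumed, generic parameters rather than those of any real instrument, and the phase is taken to be already unwrapped and absolute:

```python
import math

def elevation_from_phase(phi, R, B, alpha, h, wavelength, a=1):
    """Invert an (unwrapped, absolute) interferometric phase to elevation.
    a = 1: one antenna transmits, both receive; a = 2: ping-pong mode."""
    theta = alpha - math.asin(wavelength * phi / (a * 2.0 * math.pi * B))  # Eq. (50)
    return h - R * math.cos(theta)                                          # Eq. (51)

# Round trip: simulate the phase for a known scatterer, then invert it.
wavelength, B, alpha, a = 0.056, 2.0, math.radians(45.0), 1
h, R, theta = 8000.0, 10000.0, math.radians(40.0)
z_true = h - R * math.cos(theta)                                       # Eq. (49)
phi = -(a * 2.0 * math.pi / wavelength) * B * math.sin(theta - alpha)  # Eq. (48)
z = elevation_from_phase(phi, R, B, alpha, h, wavelength, a)
# z recovers z_true exactly, since the forward and inverse models match
```

In practice the measured phase is wrapped and relative, which is exactly why the phase unwrapping and absolute phase determination steps discussed below are needed.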
After single-look processing, the images are carefully co-registered to maximize the correlation between the images. The so-called interferogram is formed by subtracting the phase in one image from that in the other on a pixel-by-pixel basis. The interferometric SAR technique is better understood by briefly reviewing the difference between traditional and interferometric SAR processing. In traditional (noninterferometric) SAR processing, it is assumed that the imaged pixel is located at the intersection of the Doppler cone (centered on the velocity vector), the range sphere (centered at the antenna), and an assumed reference plane, as shown in Fig. 17. Since the Doppler cone has its apex at the center of the range sphere, and its axis of symmetry is aligned with the velocity vector, it follows that all points on the intersection of the Doppler cone and the range sphere lie in a plane orthogonal to the velocity vector. The additional information provided by cross-track interferometry is that the imaged point also has to lie on the cone described by a constant phase, which means that one no longer has to assume an arbitrary reference plane. This cone of equal phase has its axis of symmetry aligned with the interferometer baseline and also has its apex at the center of the range sphere. It then follows that the imaged point lies at the intersection of the Doppler cone, the range sphere, and the equal phase cone, as shown in Fig. 18. It should be pointed out that in actual interferometric SAR processors, the two images acquired by the two interferometric antennas are


Figure 17. In traditional (noninterferometric) SAR processing, the scatterer is assumed to be located at the intersection of the Doppler cone, the range sphere, and some assumed reference plane.

actually processed individually using the traditional SAR processing assumptions. The resulting interferometric phase then represents the elevation with respect to the reference plane assumed during the SAR processing. This phase is then used to find the actual intersection of the range sphere, the Doppler cone, and the phase cone in three dimensions. Once the images are processed and combined, the measured phase must be unwrapped. During this procedure, the measured phase, which only varies between 0° and 360°, must be

Figure 18. Interferometric radars acquire all the information required to reconstruct the position of a scatterer in three dimensions. The scatterer is located at the intersection of the Doppler cone, the range sphere, and the interferometric phase cone.


unwrapped to retrieve the original phase by adding or subtracting multiples of 360°. The earliest phase unwrapping routine was published by Goldstein et al. (54). In this algorithm, areas where the phase will be discontinuous due to layover or poor SNRs are identified by branch cuts, and the phase unwrapping routine is implemented such that branch cuts are not crossed when unwrapping the phases. Phase unwrapping remains one of the most active areas of research, and many algorithms remain under development. Even after the phases have been unwrapped, the absolute phase is still not known. This absolute phase is required to produce a height map that is calibrated in the absolute sense. One way to estimate this absolute phase is to use ground control points with known elevations in the scene. However, this human intervention severely limits the ease with which interferometry can be used operationally. Madsen et al. (53) reported a method by which the radar data are used to estimate this absolute phase. The method breaks the radar bandwidth into upper and lower halves, and then uses the differential interferogram formed by subtracting the upper-half-spectrum interferogram from the lower-half-spectrum interferogram as an equivalent low-frequency interferometer to estimate the absolute phase. Unfortunately, this algorithm is not robust enough in practice to fully automate interferometric processing. This is one area where significant research is needed if the full potential of automated SAR interferometry is to be realized. Absolute phase determination is followed by height reconstruction. Once the elevations in the scene are known, the entire digital elevation map can be geometrically rectified. Reference 53 reported accuracies ranging between 2.2 m root mean square (rms) for flat terrain and 5.5 m rms for terrain with significant relief for the NASA/JPL TOPSAR interferometer.
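In one dimension the unwrapping idea can be sketched as follows. This is a deliberately naive illustration that assumes the true phase never changes by more than 180° between neighboring samples; practical two-dimensional interferograms require branch-cut or comparable algorithms, as discussed above:

```python
import math

def unwrap_1d(wrapped):
    """Add or subtract multiples of 2*pi so that successive samples
    never differ by more than pi (naive 1-D unwrapping sketch)."""
    out = [wrapped[0]]
    for p in wrapped[1:]:
        d = p - out[-1]
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))  # wrap step into (-pi, pi]
        out.append(out[-1] + d)
    return out

# A steadily increasing true phase, observed only modulo 2*pi
true_phase = [0.12 * i for i in range(200)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap_1d(wrapped)
# recovered matches true_phase, because adjacent steps are < pi
```

Note that even a perfect unwrapping only recovers the phase up to a global constant; the absolute phase determination step described above is still required.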
An alternative way to form the interferometric baseline is to use a single-channel radar to image the same scene from slightly different viewing geometries. This technique, known as repeat-track interferometry, has mostly been applied to spaceborne data, starting with data collected by the L-band SEASAT SAR (54–59). Other investigators used data from the L-band SIR-B (60), the C-band ERS-1 radar (61,62), and more recently the L-band SIR-C (63) and the X-band X-SAR (64). Repeat-track interferometry has also been demonstrated using airborne SAR systems (65). Two main problems limit the usefulness of repeat-track interferometry. The first is that, unlike the case of single-pass interferometry, the baseline of the repeat-track interferometer is not known accurately enough to infer accurate elevation information from the interferogram. Reference 62 shows how the baseline can be estimated using ground control points in the image. The second problem is due to differences in scattering and propagation that result from the fact that the two images forming the interferogram are acquired at different times. One result is temporal decorrelation, which is worst at the higher frequencies (58). For example, C-band images of most vegetated areas decorrelate significantly over as short a time as 1 day. This problem, more than any other, limits the use of the current operational spaceborne single-channel SARs for topographic mapping, and it has led to proposals for dedicated interferometric SAR missions to map the entire globe (66,67).



Along-Track Interferometry. In some cases, the temporal change between interferometric images contains much information. One such case is the mapping of ocean surface movement. In this case, the interferometer is implemented in such a way that one antenna images the scene a short time before the second antenna, preferably using the same viewing geometry. Reference 68 described such an implementation in which one antenna is mounted forward of the other on the body of the NASA DC-8 aircraft. In a later work, Ref. 69 measured ocean currents with a velocity resolution of 5 m/s to 10 m/s. Along-track interferometry was used in Refs. 70 and 71 to estimate ocean surface current velocity and wavenumber spectra. This technique was also applied to the measurement of ship-generated internal wave velocities in Ref. 72. In addition to measuring ocean surface velocities, Carande (73) reported a dual-baseline implementation, realized by alternately transmitting out of the front and aft antennas, to measure the ocean coherence time. He estimated typical ocean coherence times at the L-band to be about 0.1 s. Shemer and Marom (74) proposed a method to measure the ocean coherence time using only a model for the coherence time and one interferometric SAR observation.

Differential Interferometry. One of the most exciting applications of radar interferometry is implemented by subtracting two interferometric pairs separated in time from each other to form a so-called differential interferogram. In this way, surface deformation can be measured with unprecedented accuracy. This technique was first demonstrated by Gabriel et al. (75), who used SEASAT data to measure millimeter-scale ground motion in agricultural fields. Since then this technique has been applied to measure centimeter- to meter-scale co-seismic displacements (76–81) and to measure centimeter-scale volcanic deflation (82).
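At its core, forming a differential interferogram is a pixel-wise double difference of phases. A minimal sketch with synthetic numbers follows; real processing must additionally co-register the images, remove the topographic phase, and unwrap, as described elsewhere in this section:

```python
import cmath, math

def interferogram(img1, img2):
    """Per-pixel phase difference of two complex (single-look) SAR images."""
    return [cmath.phase(a * b.conjugate()) for a, b in zip(img1, img2)]

def differential(igram_before, igram_after):
    """Phase change between two interferograms: the deformation signature,
    wrapped back into (-pi, pi]."""
    return [(b - a + math.pi) % (2.0 * math.pi) - math.pi
            for a, b in zip(igram_before, igram_after)]

# Example: identical scenes except 0.2 rad of extra phase at the second pixel
before = interferogram([1 + 0j, 1 + 0j], [1 + 0j, 1 + 0j])
after = interferogram([cmath.exp(0.0j), cmath.exp(0.2j)], [1 + 0j, 1 + 0j])
deformation = differential(before, after)  # ~[0.0, 0.2]
```

Each radian of differential phase corresponds to a line-of-sight displacement of λ/(4π) for a repeat-pass geometry, which is what gives the technique its millimeter-level sensitivity.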
The added information provided by high-spatial-resolution co-seismic deformation maps was shown to provide insight into the slip mechanism that would not be attainable from the seismic record (79,80). Differential SAR interferometry has also led to spectacular applications in polar ice sheet research by providing information on ice deformation and surface topography at an unprecedented level of spatial detail. Goldstein et al. (83) observed ice stream motion and tidal flexure of the Rutford Glacier in Antarctica with a precision of 1 mm per day and summarized the key advantages of using SAR interferometry for glacier studies. Joughin (84) studied the separability of ice motion and surface topography in Greenland and compared the results with both radar and laser altimetry. Rignot et al. (85) estimated the precision of the SAR-derived velocities using a network of in situ velocities and demonstrated, along with Joughin et al. (86), the practicality of using SAR interferometry across all the different melting regimes of the Greenland Ice Sheet. Large-scale application of these techniques is expected to yield significant improvements in our knowledge of the dynamics, mass balance, and stability of the world's major ice masses. One confounding factor in the identification of surface deformation in differential interferograms is changing atmospheric conditions. In observing the earth, radar signals propagate through the atmosphere, which introduces additional phase shifts that are not accounted for in the standard geometrical equations describing radar interferometry. Spatially varying patterns of atmospheric water vapor change the local index of refraction, which, in turn, introduces spatially varying phase shifts in the individual interferograms. Since the two (or more) interferograms are acquired at different times, the temporal change in water vapor introduces a signal that can be on the same order of magnitude as that expected from surface deformation, as discussed by Goldstein (87). Another limitation of the technique is temporal decorrelation. Changes in the surface properties may lead to complete decorrelation of the images and no detectable deformation signature (78). Current research is only beginning to realize the full potential of radar interferometry. Even though some significant problems still have to be solved before this technique becomes fully operational, the next few years will undoubtedly see an explosion in the interest in and use of radar interferometry data.

NONIMAGING RADARS

Scatterometers

Scatterometers measure the surface backscattering cross section precisely in order to relate the measurement to geophysical quantities such as ocean wind speed and soil moisture (88). Since spatial resolution is not a very important parameter, it is usually sacrificed for better accuracy in the backscattering cross-section measurement. The typical resolution of a scatterometer is several tens of kilometers. Since the backscattering cross section depends strongly on the surface roughness and the dielectric constant, scatterometer measurements are used to determine these surface properties. For example, since ocean surface roughness is related to the ocean wind speed, scatterometers can be used to measure the ocean wind speed indirectly. Over land, backscattering cross-section measurements can be used to estimate the surface moisture content on a global scale. However, since the scatterometer resolution cell is on the order of several kilometers, the retrieved information is not as useful as that derived from a high-resolution imaging radar. As wind blows over the ocean surface, the first waves generated by the coupling between the wind and the ocean surface are 1.7 cm waves. As the wind continues to blow, energy is transferred to other parts of the surface spectrum and the surface waves start to grow. Since the ocean surface has a large dielectric constant in the microwave spectrum, the backscattering is mainly related to the surface roughness. Therefore, it is reasonable to believe that the backscattering cross section is sensitive to the wind speed via the surface roughness (89). Since surface roughness at the scale of the radar wavelength strongly influences the backscattering cross section, the scatterometer wavelength should be at the centimeter scale in order to derive the surface wind speed. Specifically, the surface roughness size (Λ) responsible for backscattering is related to the radar wavelength as

Λ = λ/(2 sin θ)   (52)

where θ is the incidence angle. This wavelength Λ is also known as the Bragg wavelength, which represents a resonance scale in the scattering surface.
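Equation (52) makes it clear why scatterometers use centimeter wavelengths. A minimal helper follows; the example numbers are generic, not tied to any specific instrument:

```python
import math

def bragg_wavelength(radar_wavelength, incidence_deg):
    """Resonant surface roughness scale responsible for backscatter, Eq. (52)."""
    return radar_wavelength / (2.0 * math.sin(math.radians(incidence_deg)))

# A ~2.1 cm (Ku-band) radar at 30 degrees incidence resonates with
# ~2.1 cm surface waves, comparable to the 1.7 cm wind-generated waves.
resonant_scale = bragg_wavelength(0.021, 30.0)
```

At steeper incidence the resonant scale shrinks, so the choice of radar wavelength and incidence angle together fixes which part of the ocean wave spectrum the instrument senses.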

As discussed in the section on signal fading and speckle, radar return measurements are contaminated by speckle noise. In order to measure the backscattering cross section accurately, a large number of independent observations must be averaged (90). This can be done in the frequency domain or the time domain. In scatterometry, a commonly adopted parameter for the backscattering cross-section measurement accuracy is Kp, defined as

Kp = √(var{σ⁰meas}) / σ⁰   (53)

which is the normalized standard deviation of the measured backscattering cross section (91). To obtain an accurate measurement, Kp must be minimized. Once the received power is measured, the backscattering cross section can be determined by using the radar equation. In this process, the noise power is also estimated and subtracted from the received power. Then, this backscattering cross section is related to the wind vector via a geophysical model function (92). In general, a model function can be written as

σ⁰ = F(U, f, θ, α, p, . . .)   (54)

where U is the wind speed, f is the radar frequency, θ is the incidence angle, α is the azimuth angle, and p denotes the radar signal polarization. Due to a lack of rigorous theoretical models, empirical models have been used for scatterometry applications. The precise form of the model function is still debated and is currently the subject of intense study. Figure 19 shows a schematic of the model functions. As an example of a model function, Wentz et al. (93,94) have used SEASAT data to derive a Ku-band geophysical model function known as SASS-2. From a geophysical model function, one can observe that σ⁰ is a function of the radar azimuth angle relative to the

Figure 20. Backscattering cross section in terms of the radar azimuth angle relative to the wind direction. Note that σ⁰ in the upwind direction is slightly higher than σ⁰ in the downwind direction.

wind direction. Figure 20 shows the double sinusoidal relationship (92). That is, σ⁰ is maximum in the upwind (α = 0°) and downwind (α = 180°) directions, while it is minimum near the crosswind directions (α = 90° and 270°). As can be seen from Fig. 20, σ⁰ in the upwind direction is slightly higher than σ⁰ in the downwind direction. In principle, a unique wind vector can be determined due to this small asymmetry. However, extremely accurate measurements are required to detect this small difference. It is clear that more than one σ⁰ measurement must be made at different azimuth angles to determine the wind direction. In order to explain the wind direction determination technique, we use a simple model given by

σ⁰ = AU^γ (1 + a cos α + b cos 2α)   (55)

where A, a, b, and γ are empirically determined for the wind speed U measured at a reference altitude (usually 19.5 m above the ocean surface). As can be seen from Eq. (55), two measurements provide the wind speed U and the wind direction with a fourfold ambiguity; therefore, additional measurements are needed to remove the ambiguity. Otherwise, auxiliary meteorological information is required to select the correct wind direction from the ambiguous vectors (95). Spaceborne scatterometers are capable of measuring global wind vectors over the oceans for use in studying upper ocean circulation, tropospheric dynamics, and air–sea interaction. Examples of spaceborne scatterometers are SASS (the SEASAT scatterometer), ERS-1 (96,97), and NSCAT. Their radar parameters are shown in Table 1. To estimate a wind vector, multiple colocated σ⁰ measurements from different azimuth angles are required. Hence, the antenna subsystem is the most important component in the

Table 1. Spaceborne Scatterometer Parameters

Figure 19. Schematic scatterometer model function. Using this geophysical model function, backscattering measurements are related to wind speed.

                      SASS        ERS-1      NSCAT
Frequency             14.6 GHz    5.3 GHz    14 GHz
Spatial resolution    50 km       50 km      25, 50 km
Swath width           500 km      500 km     600 km
Number of antennas    4           3          6
Polarization          VV, HH      VV         VV, HH
Orbit altitude        800 km      785 km     820 km



scatterometer design. Multiple fan-beam and scanning spot-beam antennas are widely used configurations. The next-generation scatterometer, known as SeaWinds, implements a scanning pencil-beam instrument in order to avoid the difficulties of accommodating a traditional fan-beam space scatterometer. The RF and digital subsystems of a scatterometer are similar to those of other radars, except for a noise source for onboard calibration. Both internal and external calibration devices are required for a scatterometer to provide accurate backscattering cross-section measurements. The resolution along the flight track can be improved by applying the Doppler filtering technique to the received echo. That is, if the echo is passed through a bandpass filter with a center frequency f_D and a bandwidth Δf, then the surface resolution Δx can be improved to

Δx = (hλΔf/2v)[1 − (λf_D/2v)²]^(−3/2)   (56)

where h is the platform altitude and v is the platform velocity (88).

Altimeters

A radar altimeter (88) measures the distance between the sensor and the surface in the nadir direction to derive a topographic map of the surface. The ranging accuracy of a spaceborne radar altimeter is a few tens of centimeters. Even though an altimeter can measure the land surface topography, the resulting topographic map is not very useful, since the resolution of a radar altimeter is on the order of a few kilometers. However, this is satisfactory for oceanographic applications, since high-resolution measurements are not required. The geoid is the equipotential surface that corresponds to the mean sea level. Since the geoid is the static component of the ocean surface topography, it can be derived by averaging repetitive measurements over a long time period. The spatial variation in the geoid provides information on different geoscientific parameters. For example, large-scale variations (~1000 km) in the geoid are related to processes occurring deep in the Earth's interior. A radar altimeter transmits a short pulse in the nadir direction and measures the round-trip time (T) accurately. Hence, the distance (H) from the sensor to the surface can be calculated from

H = vT/2   (57)

Here, v is the speed of the radar wave in the propagating medium. The height error (δH) can be written in terms of the velocity error (δv) and the timing error (δT) as

δH = Tδv/2 + vδT/2   (58)

The velocity error results from refractive index variations in the ionosphere and atmosphere. The timing error is mainly related to the finite signal bandwidth and the clock accuracy on the spacecraft. In addition, small-scale roughness variation due to surface elevation causes an electromagnetic bias (98). This bias is about 1% of the significant wave height (SWH). These errors must be estimated and corrected to achieve the required height accuracy.

Figure 21. Beam-limited and pulse-limited altimeter footprints.

The altimeter resolution can be determined by either the radar beamwidth or the pulse length. If the beam footprint is smaller than the pulse footprint, the altimeter is called beam-limited. Otherwise, it is pulse-limited (see Fig. 21). The beam-limited footprint is given by hλ/L, while the pulse-limited footprint is 2√(chτ), where h is the altitude, L is the antenna length, and τ is the pulse length. The altimeter mean return waveform W(t) (99,100) can be written as

W(t) = F(t) ∗ q(t) ∗ p(t)   (59)

where F(t) is the flat surface impulse response, including radar antenna beamwidth and pointing angle effects, q(t) is the surface height probability density function, and p(t) is the radar system impulse response. Here, the symbol ∗ denotes the convolution operator. As illustrated in Fig. 22, the return pulse shape changes for different surface roughnesses. As an example of radar altimeters, we briefly describe the TOPEX radar altimeter (101), which has been used to measure sea level precisely. The resulting rms height accuracy of a single-pass measurement is 4.7 cm (102). This information is used to study the circulation of the world's oceans. The TOPEX altimeter is a dual-frequency (5.3 GHz and 13.6 GHz) radar, in order to retrieve the ionospheric delay of the radar

Figure 22. Altimeter return pulse shape for different surface roughnesses.


signal, since the ionosphere is a dispersive medium. The TOPEX microwave radiometer measures the sea surface emissivity at three frequencies (18 GHz, 21 GHz, and 37 GHz) to estimate the total water vapor content. In addition, the satellite carries a global positioning system (GPS) receiver for precise satellite tracking. All these measurements are used to produce high-accuracy altimeter data. Recent research topics related to radar altimetry can be found in Ref. 102.

Radar Sounders

Radar sounders are used to image subsurface features by measuring reflections from dielectric constant variations. For example, an ice-sounding radar can measure the ice thickness by detecting the ice–ocean boundary (103). In order to penetrate into the subsurface, a long-wavelength radar is desired. Various radar sounding techniques are well summarized in Ref. 104. In order to image subsurface features, a radar signal must penetrate to the target depth with a satisfactory SNR. Like other radars, a subsurface radar should have adequate bandwidth for sufficient resolution to detect buried objects or other dielectric discontinuities. For a ground-penetrating radar (105), the probing antenna must be designed for efficient coupling of electromagnetic radiation into the ground. The depth resolution can be obtained by using techniques similar to those described in the previous sections. However, the physical distance must be estimated from the slant range information and the real part of the medium refractive index. In order to enhance the horizontal resolution, one can use the synthetic aperture technique. However, the matched filter construction is very difficult, since the medium dielectric constant is usually inhomogeneous and unknown. The most important quantity in designing a subsurface radar is the medium loss, which determines the penetration depth. For a ground subsurface radar, it is advantageous to average many samples or to increase the effective pulse length to enhance the SNR.
Polarimetric information is also important when buried objects are long and thin, since strong backscattering is produced by a linearly polarized signal parallel to the long axis. For an airborne (106) or a spaceborne (107) radar sounder, subsurface returns must be separated from unwanted surface returns. Since a surface return is usually much stronger than a subsurface return, the radar must be designed for extremely low sidelobes. A surface return can be an ambiguous signal if it is at the same range as a subsurface return (surface ambiguity). This problem becomes more serious as the altitude of the radar becomes higher or the antenna gain becomes lower. Clearly, future research activities are required to overcome this difficulty. In addition, when the medium loss is large, the radar must have a large dynamic range to detect the small subsurface return. As an example of orbiting radar sounders, we briefly describe the Apollo 17 lunar sounder radar (107). The objectives of the sounder experiment were to detect subsurface geological structures and to generate a lunar surface profile. Since lunar soil and rock exhibit less attenuation due to the absence of free water, one may expect deeper subsurface penetration compared with observations on Earth. This sounder, operating at three frequencies (5 MHz, 15 MHz, and 150 MHz), was also used to generate a lunar surface profile using the strong surface return.
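The interplay between medium loss and dynamic range can be illustrated with a back-of-the-envelope link budget. This is a hypothetical sketch that counts only two-way attenuation in the medium; a real design must also account for surface transmission loss, geometric spreading, and clutter:

```python
def max_penetration_depth(dynamic_range_db, one_way_loss_db_per_m):
    """Depth at which two-way attenuation alone consumes the link margin."""
    return dynamic_range_db / (2.0 * one_way_loss_db_per_m)

# Hypothetical example: 60 dB of usable margin in moderately lossy ground
# at 1 dB/m one-way loss -> subsurface returns fade beyond ~30 m depth.
depth = max_penetration_depth(60.0, 1.0)
```

Doubling the medium loss halves the reachable depth, which is why low-loss targets such as dry lunar regolith or cold ice allow much deeper sounding than moist terrestrial soils.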


Cloud Radar

Most meteorological radars (108) operate at centimeter wavelengths in order to avoid significant attenuation by precipitation. However, cloud radars operate at millimeter wavelengths, since clouds cannot be observed easily using conventional centimeter-wavelength radars (109). In order to minimize absorption, the radar frequency must lie in one of the spectral windows centered near 35 GHz, 100 GHz, and 150 GHz. The first millimeter-wave radar observations of clouds were done in the 35 GHz window (110). Benefiting from the technology development at 94 GHz, a 94 GHz spaceborne cloud radar has been proposed, since even higher radar reflectivity is expected at a shorter wavelength. For a cloud radar, high dynamic range and low sidelobes are required to detect the weak signal scattered from a cloud in the presence of a large surface return. For a pulsed radar, the received power (Pr) can be written as

Pr = PtG²λ²Vη e^(−2α) / [(4π)³r⁴]   (60)

where Pt is the peak transmit power, G is the antenna gain, λ is the wavelength, V is the resolution volume, η is the volumetric radar cross section, and α is the one-way loss (111). If the cloud particles are much smaller than the radar wavelength, using the Rayleigh scattering approximation, the radar reflectivity (Z) can be written as

Z = ηλ⁴/(π⁵|K|²)   (61)

where K = (n² − 1)/(n² + 2) and n is the complex refractive index of a particle. The radial velocity (vr) is measured by using the Doppler frequency (fd) as

vr = fdλ/2   (62)
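Equations (61) and (62) translate directly into code. The sketch below uses illustrative values; the refractive index (and hence K) depends on the hydrometeor phase and the radar frequency, so the number chosen here is an assumption for demonstration only:

```python
import math

def reflectivity(eta, wavelength, n):
    """Radar reflectivity Z from the volumetric cross section eta, Eq. (61)."""
    K = (n**2 - 1.0) / (n**2 + 2.0)
    return eta * wavelength**4 / (math.pi**5 * abs(K)**2)

def radial_velocity(f_d, wavelength):
    """Radial velocity from the measured Doppler shift f_d, Eq. (62)."""
    return f_d * wavelength / 2.0

# 94 GHz radar (wavelength ~3.2 mm); illustrative complex refractive index
wavelength = 0.0032
n = complex(3.0, 1.5)                     # assumed value, for illustration
Z = reflectivity(1e-5, wavelength, n)
v = radial_velocity(1000.0, wavelength)   # 1 kHz Doppler -> 1.6 m/s
```

The λ⁴ factor in Eq. (61) is what makes short millimeter wavelengths attractive for small cloud droplets: the same particle population produces a far stronger return than at centimeter wavelengths.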

Cloud radar measurements provide radar reflectivity and radial velocity profiles as functions of altitude. Recent airborne cloud radars (112,113) can measure the polarimetric reflectivity, which provides additional information such as the linear depolarization ratio (LDR). Using these parameters, a cloud region classification (ice, cloud droplets, mixed-phase hydrometeors, rain, and insects) can be achieved (111). If multiwavelength measurements are made, it may be possible to estimate the drop size distribution.

Rain Radar

The accurate measurement of rainfall is an important factor in understanding the global water and energy cycle. Rain radars measure the rain reflectivity, which can be used to estimate parameters related to rainfall using inversion algorithms (114). As an example of rain radars, one of the Tropical Rainfall Measuring Mission (TRMM) (115) instruments is a single-frequency (13.8 GHz), cross-track scanning radar for precipitation measurement. The satellite altitude is 350 km and the scanning swath is 220 km. The range and surface horizontal resolutions are 250 m and 4 km, respectively. Using TRMM data, rain profiles can be estimated (114).


REMOTE SENSING BY RADAR

The operating frequency (13.8 GHz) is selected by considering both antenna size and attenuation. At this frequency, the antenna size does not have to be too large, and the attenuation is small enough to measure rainfall near the surface. As an airborne rain radar, the National Aeronautics and Space Administration (NASA) and the Jet Propulsion Laboratory developed an airborne rain-mapping radar (ARMAR) that flies on the NASA DC-8 aircraft (116). ARMAR operates with the TRMM frequency and geometry in order to study issues related to the TRMM rain radar. Due to the downward-looking geometry, the surface clutter return may obscure the return from precipitation. Even for an antenna looking off-nadir, the nadir return can arrive at the same range as the precipitation. To overcome these difficulties, both the antenna sidelobes and the pulse compression sidelobes must be low enough (about −60 dB relative to the peak return) to detect precipitation. In order to measure rain reflectivity accurately, it is necessary to calibrate the radar precisely. Radar measurements such as the reflectivity, the H and V polarization phase difference, and the differential reflectivity (HH and VV) can be used to estimate rain rate and rain profile.
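The −60 dB figure is demanding: a plain linear-FM chirp compressed by its own matched filter has peak range sidelobes of only about −13 dB, which is why heavy amplitude weighting (together with low antenna sidelobes) is required. A sketch with illustrative waveform parameters, not ARMAR's actual waveform:

```python
import numpy as np

B, T, fs = 5e6, 20e-6, 50e6          # 5 MHz bandwidth, 20 us pulse (BT = 100), 50 MHz sampling
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)   # linear-FM (chirp) pulse

comp = np.abs(np.correlate(chirp, chirp, mode="full"))    # matched-filter (autocorrelation) output
comp /= comp.max()

peak = int(np.argmax(comp))
null = int(fs / B) + 2               # first null of the compressed pulse lies ~1/B from the peak
sidelobes = np.concatenate([comp[:peak - null], comp[peak + null + 1:]])
psl_db = 20 * np.log10(sidelobes.max())                   # peak sidelobe level, dB

print(round(psl_db, 1))
```

Reaching the −60 dB regime requires strong amplitude tapering of the compressed spectrum, at the cost of a broadened main lobe (i.e., coarser range resolution).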

BIBLIOGRAPHY

1. C. Elachi, Introduction to the Physics and Techniques of Remote Sensing, New York: Wiley, 1987. 2. F. T. Ulaby, R. K. Moore, and A. K. Fung, Microwave Remote Sensing: Active and Passive, Vol. 2: Radar Remote Sensing and Surface Scattering and Emission Theory, Dedham, MA: Artech House, 1982. 3. J. C. Curlander and R. N. McDonough, Synthetic Aperture Radar Systems and Signal Processing, New York: Wiley, 1991. 4. J. P. Ford, Resolution versus speckle relative to geologic interpretability of spaceborne radar images: A survey of user preferences, IEEE Trans. Geosci. Remote Sens., GRS-20: 434–444, 1982. 5. H. A. Zebker, J. J. van Zyl, and D. N. Held, Imaging radar polarimetry from wave synthesis, J. Geophys. Res., 92: 683–701, 1987. 6. R. L. Jordan, B. L. Huneycutt, and M. Werner, The SIR-C/X-SAR synthetic aperture radar system, IEEE Trans. Geosci. Remote Sens., GRS-33: 829–839, 1995. 7. J. J. van Zyl, A technique to calibrate polarimetric radar images using only image parameters and trihedral corner reflectors, IEEE Trans. Geosci. Remote Sens., GRS-28: 337–348, 1990. 8. H. A. Zebker and Y. L. Lou, Phase calibration of imaging radar polarimeter Stokes matrices, IEEE Trans. Geosci. Remote Sens., GRS-28: 246–252, 1990. 9. A. L. Gray et al., Synthetic aperture radar calibration using reference reflectors, IEEE Trans. Geosci. Remote Sens., GRS-28: 374–383, 1990. 10. A. Freeman, Y. Shen, and C. L. Werner, Polarimetric SAR calibration experiment using active radar calibrators, IEEE Trans. Geosci. Remote Sens., GRS-28: 224–240, 1990. 11. J. D. Klein and A. Freeman, Quadpolarisation SAR calibration using target reciprocity, J. Electromagn. Waves Appl., 5: 735–751, 1991. 12. H. A. Zebker et al., Calibrated imaging radar polarimetry: Technique, examples, and applications, IEEE Trans. Geosci. Remote Sens., GRS-29: 942–961, 1991.

13. A. Freeman et al., Calibration of Stokes and scattering matrix format polarimetric SAR data, IEEE Trans. Geosci. Remote Sens., GRS-30: 531–539, 1992. 14. A. Freeman, SAR calibration: An overview, IEEE Trans. Geosci. Remote Sens., GRS-30: 1107–1121, 1992. 15. A. Freeman et al., SIR-C data quality and calibration results, IEEE Trans. Geosci. Remote Sens., GRS-33: 848–857, 1995. 16. J. A. Kong et al., Identification of earth terrain cover using the optimum polarimetric classifier, J. Electromagn. Waves Appl., 2: 171–194, 1988. 17. H. H. Lim et al., Classification of earth terrain using polarimetric synthetic aperture radar images, J. Geophys. Res., 94: 7049– 7057, 1989. 18. J. J. van Zyl and C. F. Burnette, Bayesian classification of polarimetric sar images using adaptive a priori probabilities, Int. J. Remote Sens., 13: 835–840, 1992. 19. E. Rignot and R. Chellappa, Segmentation of polarimetric synthetic aperture radar data, IEEE Trans. Image Process., 1: 281– 300, 1992. 20. E. Rignot and R. Chellappa, Maximum a-posteriori classification of multifrequency, multilook, synthetic aperture radar intensity data, J. Opt. Soc. Amer. A, 10: 573–582, 1993. 21. E. J. M. Rignot et al., Mapping of forest types in Alaskan boreal forests using SAR imagery, IEEE Trans. Geosci. Remote Sens., GRS-32: 1051–1059, 1994. 22. L. E. Pierce et al., Knowledge-based classification of polarimetric SAR images, IEEE Trans. Geosci. Remote Sens., GRS-32: 1081–1086, 1994. 23. J. J. van Zyl, Unsupervised classification of scattering behavior using radar polarimetry data, IEEE Trans. Geosci. Remote Sens., GRS-27: 36–45, 1989. 24. L. L. Hess et al., Delineation of inundated area and vegetation along the amazon floodplain with the SIR-C synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., GRS-33: 896–904, 1995. 25. R. H. Lang and J. S. Sidhu, Electromagnetic backscattering from a layer of vegetation, IEEE Trans. Geosci. Remote Sens., GRS-21: 62–71, 1983. 26. J. A. Richards, G. Sun, and D. S. 
Simonett, L-Band radar backscatter modeling of forest stands, IEEE Trans. Geosci. Remote Sens., GRS-25: 487–498, 1987. 27. S. L. Durden, J. J. van Zyl, and H. A. Zebker, Modeling and observation of the radar polarization signature of forested areas, IEEE Trans. Geosci. Remote Sens., GRS-27: 290–301, 1989. 28. F. T. Ulaby et al., Michigan microwave canopy scattering model, Int. J. Remote Sens., 11: 1223–1253, 1990. 29. N. Chauhan and R. Lang, Radar modeling of a boreal forest, IEEE Trans. Geosci. Remote Sens., GRS-29: 627–638, 1991. 30. G. Sun, D. S. Simonett, and A. H. Strahler, A radar backscatter model for discontinuous coniferous forest canopies, IEEE Trans. Geosci. Remote Sens., GRS-29: 639–650, 1991. 31. S. H. Yueh et al., Branching model for vegetation, IEEE Trans. Geosci. Remote Sens., GRS-30: 390–402, 1992. 32. Y. Wang, J. Day, and G. Sun, Santa Barbara microwave backscattering model for woodlands, Int. J. Remote Sens., 14: 1146– 1154, 1993. 33. C. C. Hsu et al., Radiative transfer theory for polarimetric remote sensing of pine forest at P-Band, Int. J. Remote Sens., 14: 2943–2954, 1994. 34. R. H. Lang et al., Modeling P-Band SAR returns from a red pine stand, Remote Sens. Environ., 47: 132–141, 1994. 35. M. C. Dobson et al., Dependence of radar backscatter on conifer forest biomass, IEEE Trans. Geosci. Remote Sens., GRS-30: 412– 415, 1992.

REMOTE SENSING BY RADAR 36. T. LeToan et al., Relating forest biomass to SAR data, IEEE Trans. Geosci. Remote Sens., GRS-30: 403–411, 1992. 37. K. J. Ranson and G. Sun, Mapping biomass of a northern forest using multifrequency SAR data, IEEE Trans. Geosci. Remote Sens., GRS-32: 388–396, 1994. 38. A. Beaudoin et al., Retrieval of forest biomass from SAR data, Int. J. Remote Sens., 15: 2777–2796, 1994. 39. E. Rignot et al., Radar estimates of aboveground biomass in boreal forests of interior Alaska, IEEE Trans. Geosci. Remote Sens., GRS-32: 1117–1124, 1994. 40. M. L. Imhoff, Radar backscatter and biomass saturation: Ramifications for global biomass inventory, IEEE Trans. Geosci. Remote Sens., GRS-33: 511–518, 1995. 41. E. J. Rignot, R. Zimmerman, and J. J. van Zyl, Spaceborne applications of P-Band imaging radars for measuring forest biomass, IEEE Trans. Geosci. Remote Sens., GRS-33: 1162–1169, 1995. 42. K. J. Ranson, S. Saatchi, and G. Sun, Boreal forest ecosystem characterization with SIR-C/XSAR, IEEE Trans. Geosci. Remote Sens., GRS-33: 867–876, 1995. 43. M. C. Dobson et al., Estimation of forest biophysical characteristics in northern Michigan with SIR-C/X-SAR, IEEE Trans. Geosci. Remote Sens., GRS-33: 877–895, 1995. 44. E. S. Kasischke, N. L. Christensen, and L. L. Bourgeau-Chavez, Correlating radar backscatter with components of biomass in loblolly pine forests, IEEE Trans. Geosci. Remote Sens., GRS33: 643–659, 1995. 45. Y. Oh, K. Sarabandi, and F. T. Ulaby, An empirical model and an inversion technique for radar scattering from bare soil surfaces, IEEE Trans. Geosci. Remote Sens., GRS-30: 370–381, 1992. 46. P. C. Dubois, J. J. van Zyl, and T. Engman, Measuring soil moisture with imaging radars, IEEE Trans. Geosci. Remote Sens., GRS-33: 915–926, 1995. 47. J. Shi and J. Dozier, Inferring snow wetness using C-Band data from SIR-C’s polarimetric synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., GRS-33: 905–914, 1995. 48. L. C. 
Graham, Synthetic interferometer radar for topographic mapping, Proc. IEEE, 62: 763–768, 1974. 49. H. Zebker and R. Goldstein, Topographic mapping from interferometric SAR observations, J. Geophys. Res., 91: 4993–4999, 1986. 50. H. A. Zebker et al., The TOPSAR interferometric radar topographic mapping instrument, IEEE Trans. Geosci. Remote Sens., GRS-30: 933–940, 1992. 51. N. P. Faller and E. H. Meier, First results with the airborne single-pass DO-SAR interferometer, IEEE Trans. Geosci. Remote Sens., GRS-33: 1230–1237, 1995. 52. J. E. Hilland et al., Future NASA spaceborne missions, Proc. 16th Digital Avionics Syst. Conf., Irvine, CA, 1997. 53. S. N. Madsen, H. A. Zebker, and J. Martin, Topographic mapping using radar interferometry: Processing techniques, IEEE Trans. Geosci. Remote Sens., GRS-31: 246–256, 1993. 54. R. M. Goldstein, H. A. Zebker, and C. Werner, Satellite radar interferometry: Two-dimensional phase unwrapping, Radio Sci., 23: 713–720, 1988. 55. F. K. Li and R. M. Goldstein, Studies of multibaseline spaceborne interferometric synthetic aperture radars, IEEE Trans. Geosci. Remote Sens., GRS-28: 88–97, 1990. 56. C. Prati and F. Rocca, Limits to the resolution of elevation maps from stereo sar images, Int. J. Remote Sens., 11: 2215–2235, 1990.


57. C. Prati et al., Seismic migration for SAR focusing: Interferometrical applications, IEEE Trans. Geosci. Remote Sens., GRS28: 627–640, 1990. 58. H. A. Zebker and J. Villasenor, Decorrelation in interferometric radar echoes, IEEE Trans. Geosci. Remote Sens., GRS-30: 950– 959, 1992. 59. C. Prati and F. Rocca, Improving slant range resolution with multiple SAR surveys, IEEE Trans. Aerosp. Electron. Syst., 29: 135–144, 1993. 60. A. K. Gabriel and R. M. Goldstein, Crossed orbit interferometry: Theory and experimental results from SIR-B, Int. J. Remote Sens., 9: 857–872, 1988. 61. F. Gatelli et al., The wavenumber shift in SAR interferometry, IEEE Trans. Geosci. Remote Sens., GRS-32: 855–865, 1994. 62. H. A. Zebker et al., Accuracy of topographic maps derived from ERS-1 interferometric radar, IEEE Trans. Geosci. Remote Sens., GRS-32: 823–836, 1994. 63. E. R. Stofan et al., Overview of results of spaceborne imaging radar-C, X-band synthetic aperture radar (SIR-C/X-SAR), IEEE Trans. Geosci. Remote Sens., GRS-23: 817–828, 1995. 64. J. Moreira et al., X-SAR interferometry: First results, IEEE Trans. Geosci. Remote Sens., GRS-33: 950–956, 1995. 65. A. L. Gray and P. J. Farris-Manning, Repeat-pass interferometry with airborne synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., GRS-31: 180–191, 1993. 66. A. Moccia and S. Vetrella, A tethered interferometric synthetic aperture radar (SAR) for a topographic mission, IEEE Trans. Geosci. Remote Sens., GRS-31: 103–109, 1992. 67. H. A. Zebker et al., Mapping the world’s topography using radar interferometry: The TOPSAT mission, Proc. IEEE, 82: 1774– 1786, 1994. 68. R. M. Goldstein and H. A. Zebker, Interferometric radar measurements of ocean surface currents, Nature, 328: 707–709, 1987. 69. R. M. Goldstein, T. P. Barnett, and H. A. Zebker, Remote sensing of ocean currents, Science, 246: 1282–1285, 1989. 70. M. 
Marom et al., Remote sensing of ocean wave spectra by interferometric synthetic aperture radar, Nature, 345: 793–795, 1990. 71. M. Marom, L. Shemer, and E. B. Thronton, Energy density directional spectra of nearshore wave field measured by interferometric synthetic aperture radar, J. Geophys. Res., 96: 22125– 22134, 1991. 72. D. R. Thompson and J. R. Jensen, Synthetic aperture radar interferometry applied to ship-generated internal waves in the 1989 Loch Linnhe experiment, J. Geophys. Res., 98: 10259– 10269, 1993. 73. R. E. Carande, Estimating ocean coherence time using dualbaseline interferometric synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., GRS-32: 846–854, 1994. 74. L. Shemer and M. Marom, Estimates of ocean coherence time by interferometric SAR, Int. J. Remote Sens., 14: 3021–3029, 1993. 75. A. K. Gabriel, R. M. Goldstein, and H. A. Zebker, Mapping small elevation changes over large areas: Differential radar interferometry, J. Geophys. Res., 94: 9183–9191, 1989. 76. D. Massonnet et al., The displacement field of the Landers earthquake mapped by radar interferometry, Nature, 364: 138– 142, 1993. 77. D. Massonnet et al., Radar interferometric mapping of deformation in the year after the Landers earthquake, Nature, 369: 227–230, 1994. 78. H. A. Zebker et al., On the derivation of coseismic displacement fields using differential radar interferometry: The Landers earthquake, J. Geophys. Res., 99: 19617–19634, 1994.


79. G. Peltzer, K. Hudnut, and K. Feigl, Analysis of coseismic displacement gradients using radar interferometry: New insights into the Landers earthquake, J. Geophys. Res., 99: 21971–21981, 1994. 80. G. Peltzer and P. Rosen, Surface displacement of the 17 May 1993 Eureka Valley, California, earthquake observed by SAR interferometry, Science, 268: 1333–1336, 1995. 81. D. Massonnet, P. Briole, and A. Arnaud, Deflation of Mount Etna monitored by spaceborne radar interferometry, Nature, 375: 567–570, 1995. 82. D. Massonnet and K. Feigl, Satellite radar interferometric map of the coseismic deformation field of the M 6.1 Eureka Valley, California earthquake of May 17, 1993, Geophys. Res. Lett., 22: 1541–1544, 1995. 83. R. M. Goldstein et al., Satellite radar interferometry for monitoring ice sheet motion: Application to an Antarctic ice stream, Science, 262: 1525–1530, 1993. 84. I. R. Joughin, Estimation of ice-sheet topography and motion using interferometric synthetic aperture radar, Ph.D. thesis, Univ. of Washington, Seattle, 1995. 85. E. Rignot, K. Jezek, and H. G. Sohn, Ice flow dynamics of the Greenland ice sheet from SAR interferometry, Geophys. Res. Lett., 22: 575–578, 1995. 86. I. R. Joughin, D. P. Winebrenner, and M. A. Fahnestock, Observations of ice-sheet motion in Greenland using satellite radar interferometry, Geophys. Res. Lett., 22: 571–574, 1995. 87. R. M. Goldstein, Atmospheric limitations to repeat-track radar interferometry, Geophys. Res. Lett., 22: 2517–2520, 1995. 88. C. Elachi, Spaceborne Radar Remote Sensing: Applications and Techniques, New York: IEEE Press, 1988. 89. M. A. Donelan and W. J. Pierson, Radar scattering and equilibrium ranges in wind-generated waves with application to scatterometry, J. Geophys. Res., 92: 4971–5029, 1987. 90. W. J. Plant, The variance of the normalized radar cross section of the sea, J. Geophys. Res., 96: 20643–20654, 1991. 91. C. Y. Chi and F. K.
Li, A comparative study of several wind estimation algorithms for spaceborne scatterometers, IEEE Trans. Geosci. Remote Sens., 26: 115–121, 1988. 92. F. M. Naderi, M. H. Freilich, and D. G. Long, Spaceborne radar measurement of wind velocity over the ocean—an overview of the NSCAT scatterometer system, Proc. IEEE, 79: 850–866, 1991. 93. F. J. Wentz, S. Peteherych, and L. A. Thomas, A model function for ocean radar cross sections at 14.6 GHz, J. Geophys. Res., 89: 3689–3704, 1984. 94. F. J. Wentz, L. A. Mattox, and S. Peteherych, New algorithms for microwave measurements of ocean winds: Applications to SEASAT and the special sensor microwave imager, J. Geophys. Res., 91: 2289–2307, 1986. 95. M. G. Wurtele et al., Wind direction alias removal studies of SEASAT scatterometer-derived wind fields, J. Geophys. Res., 87: 3365–3377, 1982. 96. C. L. Rufenach, J. J. Bates, and S. Tosini, ERS-1 scatterometer measurements—Part I: The relationship between radar cross section and buoy wind in two oceanic regions, IEEE Trans. Geosci. Remote Sens., 36: 603–622, 1998. 97. C. L. Rufenach, ERS-1 scatterometer measurements—Part II: An algorithm for ocean-surface wind retrieval including light winds, IEEE Trans. Geosci. Remote Sens., 36: 623–635, 1998. 98. E. Rodriguez, Altimetry for non-Gaussian oceans: Height biases and estimation of parameters, J. Geophys. Res., 93: 14107–14120, 1988. 99. G. S. Brown, The average impulse response of a rough surface and its applications, IEEE Trans. Antennas Propag., AP-25: 67–74, 1977.

100. D. B. Chelton, E. J. Walsh, and J. L. MacArthur, Pulse compression and sea level tracking in satellite altimetry, J. Atmos. Oceanic Technol., 6: 407–438, 1989. 101. A. R. Zieger et al., NASA radar altimeter for the TOPEX/POSEIDON project, Proc. IEEE, 79: 810–826, 1991. 102. TOPEX/POSEIDON: Geophysical evaluation, J. Geophys. Res., 99: 1994. 103. R. Bindschadler et al., Surface velocity and mass balance of ice streams D and E, West Antarctica, J. Glaciol., 42: 461–475, 1996. 104. D. J. Daniels, D. J. Gunton, and H. F. Scott, Introduction to subsurface radar, IEE Proc., Part F, 135: 278–320, 1988. 105. Special issue on ground penetrating radar, J. Appl. Geophys., 33: 1995. 106. T. S. Chuah, Design and development of a coherent radar depth sounder for measurement of Greenland ice sheet thickness, Radar Systems and Remote Sensing Laboratory RSL Technical Report 10470-5, Univ. of Kansas Center for Res., 1997. 107. L. J. Porcello et al., The Apollo lunar sounder radar system, Proc. IEEE, 62: 769–783, 1974. 108. R. J. Doviak and D. S. Zrnic, Doppler Radar and Weather Observations, Orlando, FL: Academic Press, 1984. 109. R. M. Lhermitte, Cloud and precipitation remote sensing at 94 GHz, IEEE Trans. Geosci. Remote Sens., 26: 256–270, 1988. 110. P. V. Hobbs et al., Evaluation of a 35 GHz radar for cloud physics research, J. Atmos. Oceanic Technol., 2: 35–48, 1985. 111. S. P. Lohmeier et al., Classification of particles in stratiform clouds using the 33 and 95 GHz polarimetric cloud profiling radar system (CPRS), IEEE Trans. Geosci. Remote Sens., 35: 256–270, 1997. 112. A. L. Pazmany et al., An airborne 95 GHz dual polarized radar for cloud studies, IEEE Trans. Geosci. Remote Sens., 32: 731–739, 1994. 113. G. A. Sadowy et al., The NASA DC-8 airborne cloud radar: Design and preliminary results, Proc. IGARSS ’97, 1997, pp. 1466–1469. 114. Z. S. Haddad et al., The TRMM ‘day-1’ radar/radiometer combined rain-profiling algorithm, J. Meteorol. Soc. Jpn., 75: 799–809, 1997.
115. J. Simpson, R. F. Adler, and G. R. North, A proposed tropical rainfall measuring mission (TRMM) satellite, Bull. Amer. Meteorol. Soc., 69: 278–295, 1988. 116. S. L. Durden et al., ARMAR: An airborne rain-mapping radar, J. Atmos. Oceanic Technol., 11: 727–737, 1994.

JAKOB J. VAN ZYL
YUNJIN KIM
California Institute of Technology

Wiley Encyclopedia of Electrical and Electronics Engineering
Remote Sensing Geometric Corrections
Standard Article
Jose F. Moreno, University of Valencia, Valencia, Spain
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W3605
Article Online Posting Date: December 27, 1999

Abstract. The sections in this article are: Spatial Resolution; General Parametric Approach for Image Geocoding; Polynomial Approximations; Automatic Registration Techniques; The Need of Ground Control Points for Accurate Registration; Cartographic Projections; The Problem of Topographic Distortions; Resampling; Calibration and Removal of Technical Anomalies; Spatial Mosaicking; Multitemporal Compositing; Operational Processing and Data Volume Considerations.

Keywords: calibration; distortion; global positioning system; ground control points; mosaicking; orbital mechanics; preprocessing; projection; rectification; registration; resampling; resolution


REMOTE SENSING GEOMETRIC CORRECTIONS

Satellite remote-sensing data have become an essential tool in many applications, mainly environmental monitoring. Although remote-sensing techniques have been applied in many different fields, operational use is still far from being achieved. New applications and increased possibilities are expected for the coming years due to the availability of a new series of advanced sensors with improved capabilities. Apart from the need for a better understanding of the remotely sensed signals, most of the problems in using remote-sensing data have been due to inadequate data processing. Data-processing aspects become essential for actually obtaining useful information from remote-sensing data and for


deriving usable products. Within these data-processing aspects, geometric corrections play an essential role. Multitemporal analyses are required for monitoring changes and evolution, while multisensor data are typically needed to solve a particular problem. Integration of the different data, acquired under different viewing geometries and with different geometric characteristics (resolution, pixel shape, sensor spatial response function, interpixel overlaps, etc.), requires careful processing to avoid misleading results. The article is structured as follows. After an introduction of the topic and the driving concept of spatial resolution, a method is presented to account for the geometric distortions introduced into remote sensing data by orbital motion, scanning, and panoramic view. Although the relevant details depend very much on each particular sensor, or even on each particular acquisition mode of a sensor, a general parametric approach is presented that can potentially be applied to any sensor, airborne or spaceborne, with minimal changes of input parameters. Given the many sensors and systems available for a given application, a general, consistent method to process the data is preferable to a specific approach for each new particular sensor. After a step-by-step description of the approach, some traditionally used approximations are briefly presented (polynomial distortion models, linear cross-correlation), and then the need for an external cartographic reference is discussed, addressing the problems arising from the use of many different cartographic projections and the difficulties in the mathematical modeling of the earth as a simple geometrical figure. The problems introduced by topographic structure are then discussed.
Then, once the image-to-map geometric transformation model is defined, the resampling of the image to a different pixel size/shape is discussed, pointing out difficulties with classical approaches and new algorithms based on image restoration. Other external factors that must also be taken into account, such as calibration and removal of technical anomalies (noise), are then considered as additional problems. Finally, the generation of mosaics representing a large geographical area by compositing several or many individual images, and multitemporal compositing for monitoring changes, are discussed, with a final comment about the challenges posed by the large amount of data to be processed with current or near-future sensors in the context of operational use of the data in practical applications. To clarify some of the aspects, most of the examples given correspond to the case of NOAA (National Oceanic and Atmospheric Administration) AVHRR (Advanced Very High Resolution Radiometer) data, one of the most widely used types of remote sensing data and a good example of how geometric distortions affect remote sensing images and how methods can be applied to correct such distortions. By using general parametric approaches, similar methods have been applied to a wide range of sensors, from very high resolution hyperspectral airborne data (1) to low resolution data from meteorological satellites (2).

SPATIAL RESOLUTION

A critical aspect to be considered prior to any processing of remote-sensing data is spatial resolution. Currently available systems can produce images with resolutions ranging from a few centimeters (for very-high-resolution airborne sensors) to about tens of kilometers (for low-resolution satellite passive microwave sensors). Obviously, the techniques, approaches, and limitations are quite different for each resolution range. However, to solve a particular problem it is often necessary to merge data from different sensors with different resolutions, and then problems appear in handling the varying resolutions. An additional problem is that high spatial resolution instruments generally have low temporal resolution, while medium spatial resolution sensors have greater temporal resolution, since the limiting factor is the total amount of data to be transmitted or stored on board.

The concept of spatial resolution is well defined, especially in optoelectronic systems. The use of this concept in remote sensing is, however, not always clear and often confusing. For one of the sensors described in more detail, the multispectral scanner (MSS) on board Landsat, the resolution has been defined in terms of the geometric instantaneous field of view (IFOV), with values ranging from 73.4 m to 81 m depending on the analysis criteria. The IFOV response function is another criterion, giving values between 66 m and 135 m for the corresponding derived resolution. Other criteria are based on the concept of effective resolution, giving a range between 87 m and 125 m. Still other criteria, taking into account atmospheric spatial blurring and feature discrimination, assign resolutions ranging from 220 m to 500 m, depending on the final application. An aspect distinct from spatial resolution is pixel spacing in the image. The correspondence between resolution and pixel spacing is sometimes unclear, unless a complete spatial characterization of the sensor is available for the actual flight configuration. In practice, the spatial resolution of a sensor is a combination of the optical point spread function, the sampling, and the downstream electronics.

In the case of synthetic-aperture radar (SAR) data the concept of spatial resolution is also somewhat confusing, since the resolution depends on the way the original raw data are processed. Typical available systems can provide a resolution of only a few meters. However, this resolution cannot be used in regular applications due to the presence of noise (or undesired signals). Two approaches are followed: spatial averaging and multilooking, which reduce the spatial resolution by increasing pixel spacing; and local filtering, which reduces the spatial resolution while keeping the same pixel spacing. In both cases, the local filtering or averaging is performed over a window determined from the local level of noise. If the statistical properties of the data are such that the local entropy can be determined, optimal resolutions can be established. Otherwise, the effective resolution can be critical for meeting the requirements of the selected applications. Spatial resolution considerations play a significant role in SAR data processing, especially in approaches based on statistical analysis.

GENERAL PARAMETRIC APPROACH FOR IMAGE GEOCODING

Many different approaches have been described in the literature for geometric registration or geocoding of remotely sensed data. Some of them are quite sensor-specific. Owing to the currently available technologies for global positioning, a quite general approach can be adopted, valid for satellites, aircraft, and most sensors, including optical and SAR (2,3). First, the platform trajectory is determined; second, the attitude angles of the platform are instantaneously calculated; third, the viewing geometry of each particular sensor is related to the platform orientation, and the instantaneous viewing direction is derived, parametrized as a function of universal time t. Each point over the surface can be observed only at a given time t (Fig. 1). After derivation of the time t, by solving a system of coupled nonlinear equations, the exact viewing geometry can be derived for each point over the surface (map) or, alternatively, for each pixel in the image.
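The "solve for the observation time t" step can be illustrated with a deliberately simplified model: a circular orbit, and a sensor that observes a ground point when the point crosses the zero-Doppler plane (the plane through the satellite perpendicular to its velocity). The orbit radius, bracketing interval, and target location below are all illustrative assumptions, not any specific sensor's geometry:

```python
import math

MU = 398600.4418                     # km^3/s^2, Earth's gravitational parameter
R_ORBIT = 7160.0                     # km, circular orbit radius (illustrative, ERS-class altitude)
R_EARTH = 6378.0                     # km
OMEGA = math.sqrt(MU / R_ORBIT**3)   # rad/s, angular rate of the circular orbit

def sat_pos(t):
    a = OMEGA * t
    return (R_ORBIT * math.cos(a), R_ORBIT * math.sin(a), 0.0)

def sat_vel(t):
    a = OMEGA * t
    return (-R_ORBIT * OMEGA * math.sin(a), R_ORBIT * OMEGA * math.cos(a), 0.0)

def zero_doppler(t, P):
    """f(t) = v(t) . (P - S(t)); zero when P lies in the plane through S(t) normal to v(t)."""
    S, V = sat_pos(t), sat_vel(t)
    return sum(v * (p - s) for v, p, s in zip(V, P, S))

def observation_time(P, t_lo, t_hi, tol=1e-6):
    # Bisection on the zero-Doppler condition over a bracketing interval.
    f_lo = zero_doppler(t_lo, P)
    while t_hi - t_lo > tol:
        tm = 0.5 * (t_lo + t_hi)
        if f_lo * zero_doppler(tm, P) <= 0.0:
            t_hi = tm
        else:
            t_lo, f_lo = tm, zero_doppler(tm, P)
    return 0.5 * (t_lo + t_hi)

# Ground point 0.3 rad ahead along the ground track: it should be observed at t = 0.3 / OMEGA
P = (R_EARTH * math.cos(0.3), R_EARTH * math.sin(0.3), 0.0)
t_obs = observation_time(P, 0.0, 1000.0)
print(t_obs)
```

In a real processor the same idea applies, but the function of t couples trajectory, attitude, scan geometry, and the earth model, and is solved numerically for each ground point or image pixel.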

Platform Trajectory Determination

The platform trajectory can be derived by different methods, and typically becomes known quite accurately for non-real-time data processing and less accurately for real-time applications.

Table 1. NASA/NORAD two-line orbital element set format, detailing the orbital information available daily for all orbiting satellites. For many of them additional, more detailed information is also available from other specific sources.

Line 1:
  01–01  Line number of element data
  03–07  Satellite number
  10–18  International designator (last two digits of launch year + launch number of the year + piece of launch)
  19–20  Last two digits of epoch year
  21–32  Julian day and fractional portion of the day
  34–43  First time derivative of the mean motion (or ballistic coefficient)
  45–52  Second time derivative of the mean motion
  54–61  Drag term, General Perturbation-4 (GP4) perturbation theory (or radiation pressure coefficient)
  63–63  Ephemeris type
  65–68  Element number

Line 2:
  01–01  Line number of element data
  03–07  Satellite number
  09–16  Inclination (degrees)
  18–25  Right ascension of the ascending node (degrees)
  27–33  Eccentricity
  35–42  Argument of perigee (degrees)
  44–51  Mean anomaly (degrees)
  53–63  Mean motion (revolutions per day)
  64–68  Revolution number at epoch (revolutions)

An additional number is appended at the end of each line as a checksum. Example (ERS-2):
1 23560U 95021A 97188.14082342 .00000503 00000-0 20289-3 0 4065
2 23560 98.5461 262.6711 0001003 102.2708 257.8609 14.32248108115670

[Figure 1. (a) Variables involved in the definition of local observation geometry: the platform trajectory O ≡ (x(t), y(t), z(t)) and orientation [α(t), β(t), γ(t)] are functions of time t. A location P, with surface normal n, can be observed only at some time t′, which is determined by numerical solution of the system of parametric equations in t for the whole set of variables involved. (b) Steps in the geometrical processing of satellite remote sensing data: satellite orbital information (ephemeris data) is transformed through the celestial (inertial) reference frame, the Earth-center reference frame, the orbital reference frame, and the attitude reference frame to relate image coordinates (line, pixel) and geographical coordinates (x, y, z) or (h, ϑ, ϕ), via direct and inverse modeling.]
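The fields of Table 1 can be extracted programmatically. The sketch below parses line 2 of the ERS-2 example; note that real TLE records are fixed-width (the column ranges above), whereas the example as reproduced here has collapsed whitespace, so token splitting is used instead, and the fused mean-motion/revolution field (columns 53–63 and 64–68) is separated by character count:

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def parse_tle_line2(line2):
    """Parse the whitespace-separated fields of a TLE line 2 (see Table 1)."""
    tok = line2.split()
    inclination_deg = float(tok[2])
    raan_deg = float(tok[3])                 # right ascension of the ascending node
    eccentricity = float("0." + tok[4])      # the decimal point is implied in the TLE format
    mean_motion = float(tok[7][:11])         # rev/day; the revolution count is fused behind it
    return inclination_deg, raan_deg, eccentricity, mean_motion

# ERS-2 example from Table 1
line2 = "2 23560 98.5461 262.6711 0001003 102.2708 257.8609 14.32248108115670"
inc, raan, ecc, n = parse_tle_line2(line2)

period_min = 1440.0 / n                                              # orbital period, minutes
a_km = (MU_EARTH * (period_min * 60 / (2 * math.pi)) ** 2) ** (1/3)  # semi-major axis (Kepler's third law)
print(inc, ecc, period_min, a_km - 6378.0)
```

The derived values (inclination ≈ 98.5°, period ≈ 100.5 min, altitude ≈ 780 km) are consistent with the quasi-polar, sun-synchronous orbits discussed below.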

Satellite Sensors: Orbital Mechanics. Most remote-sensing satellites are placed in quasi-polar orbits because of the preferable observation repeatability, i.e., repeatable solar positions, mainly due to their helio-synchronous capabilities (4–6). Two types of satellite orbits must be distinguished: some satellites are kept in a given, predefined orbit by means of constant operations to compensate for deviations due to perturbations in the Earth's gravitational field; other satellites are allowed to drift as a consequence of gravitational and other (such as atmospheric drag) perturbations. In both cases, daily routine monitoring of satellite positions allows determination of instantaneous positions with relative accuracy (a few kilometers) on a global basis. Several sources, mostly military, are available to navigate satellite data by means of actual ephemeris information. NORAD [currently U.S. Space Command (USSC)] two-line orbital element (TLE) sets (also called NASA Prediction Bulletins) for all orbiting satellites, together with some documentation and software, are available via the World Wide Web and anonymous file transfer protocol from several sources, and they are updated almost daily. Each satellite is identified by a 22-character name (NORAD SATCAT name). A brief description of the format is included in Table 1 for reference. For more detailed information see Ref. 7. Other sources are also available. However, it is very important to point out that each ephemeris data set [TLE, Television Infrared Observation Satellite

REMOTE SENSING GEOMETRIC CORRECTIONS

(TIROS) Bulletin United States (TBUS), etc.] is derived by means of a particular reduction of observations to some specific model (7-9). For each ephemeris type, the corresponding orbital extrapolation model has to be used in order to get results consistent with the model used to derive the orbital elements. Ephemeris types are not interchangeable, and the use of orbital information with the wrong orbit extrapolation model is one of the major sources of problems in some current processing algorithms. Thanks to recent advances in the global positioning system (GPS), very significant progress in the geometric processing of satellite data is becoming possible, mainly due to the improved capabilities in positioning observation platforms. GPS (10) was developed by the U.S. Department of Defense as an all-weather, satellite-based system for military use, but it is now also available for civilian use, including scientific and commercial applications. The current GPS space segment includes 24 satellites, each traveling in a 12 h circular orbit at 20,200 km above the Earth. The satellites are positioned so that about six are observable nearly all the time from any point on the earth. There is also a GPS control segment of ground-based stations that perform satellite monitoring and command functions. The GPS satellites transmit ranging codes on two radio-frequency carriers at L-band frequencies, allowing the locations of receivers (the user segment) to be determined with an accuracy of about 16 m in a stand-alone, real-time mode. By using corrections sent from another GPS receiver at a known location, an accuracy in relative position of 1 m can be obtained. Subcentimeter accuracies can be achieved by postprocessing, through the simultaneous analysis of the dual-band carrier-phase data received from all GPS satellites by a global network of GPS receivers.
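As a small illustration of how TLE fields are used (a sketch only: the line below is a made-up example following the TLE column layout, not real ephemeris data, and operational processing must use the matching extrapolation model as stressed above), the mean-motion field on line 2 directly gives the nominal orbital period:

```python
def tle_period_minutes(line2):
    """Nominal orbital period from TLE line 2.

    The mean motion occupies columns 53-63 (revolutions per day),
    i.e., Python slice [52:63] of the 69-character line.
    """
    mean_motion = float(line2[52:63])  # rev/day
    return 1440.0 / mean_motion        # minutes per revolution

# Made-up example line (column layout only, not a real element set)
line2 = "2 25544  51.6400 208.9163 0006317  69.9862  25.2906 15.54225995 67660"
period = tle_period_minutes(line2)  # about 92.65 min for 15.54 rev/day
```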
The use of GPS techniques in the processing of remote-sensing data is not limited to satellite positioning; GPS is also used for the identification of surface reference points to increase the accuracy of automatic registration. For more recent satellites, such as the European Earth Remote Sensing Satellites (ERS 1/2), a product called the precise orbit is provided by ESA in a specific format (after refinement of models plus observations with laser tracking data), with radial nominal accuracy below 1 m. Other systems, like SPOT and Topex/Poseidon, use a combined system of satellite laser ranging and a dual-Doppler tracking system (DORIS) that allows determination of the satellite's position to within a few centimeters from the earth's center. With the new GPS systems, accuracies in satellite positioning within 2.5 cm have been demonstrated. When all these techniques become fully operational in new platforms, geometric processing of the data they acquire will be much easier and more accurate.

Airborne Sensors

The same positioning capabilities apply to airborne sensors. In the airborne case, geometric distortions are critical because of changes in aircraft flight patterns, especially for low-altitude flights. Moreover, no simplified orbital modeling is possible here: the trajectory must be kept in the form of (x, y, z) coordinates, and the local trajectory reconstructed by means of polynomials in time t.


Platform Orientation and Sensor Geometry

Once the platform trajectory is determined, a reference basis must be defined at each point of the trajectory. For instance, one axis can be chosen in the direction of the instantaneous velocity vector, another axis normal to the plane defined by the velocity vector and the local geocentric or geodetic (see Ref. 11) direction, with the third axis completing the orthogonal reference. In this reference frame, each sensor on board the platform has specific viewing direction vectors, which are sensor-dependent but firmly linked to the platform orientation. A convenient way of optimizing computations is to reduce the formulation to a given vector u that defines the local viewing direction. This vector changes with time. If it is selected properly, we can guarantee that variations in u can be described as small-angle rotations, the so-called attitude variations. If we rotate the vector u by an angle σ around the direction given by a unit vector w, the components of u change according to

u′ = R(w, σ)u = (w · u)w + cos σ (w × u) × w + sin σ (w × u)   (1)

with u′ the transformed vector. The transformation R(w, σ) can also be expressed as a three-axis rotation of angles σ1 (pitch), σ2 (roll), and σ3 (yaw) around the three unit vectors that define the instantaneous local orbital reference frame, as in the classical modeling of attitude angle effects:

R(w, σ) = R(e3, σ3) R(e2, σ2) R(e1, σ1)   (2)

As attitude angles are always very small, the order of rotations is not very important. Moreover, Eq. (1) can be reduced, at first order, to

u′ = u + σ (w × u)   (3)
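Equations (1) and (3) can be checked numerically; a minimal sketch (assuming NumPy; the function names are illustrative, not from the cited references):

```python
import numpy as np

def rotate(u, w, sigma):
    # Eq. (1): rotation of u by angle sigma about the unit vector w
    w = w / np.linalg.norm(w)
    return (np.dot(w, u) * w
            + np.cos(sigma) * np.cross(np.cross(w, u), w)
            + np.sin(sigma) * np.cross(w, u))

def rotate_small(u, w, sigma):
    # Eq. (3): first-order approximation for small attitude angles
    return u + sigma * np.cross(w, u)

u = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 0.0, 1.0])      # yaw axis
exact = rotate(u, w, 1e-3)         # 1 mrad attitude perturbation
approx = rotate_small(u, w, 1e-3)
# the two results differ only at second order in sigma
```

For a 1 mrad rotation the exact and first-order results differ by about sigma²/2 ≈ 5 × 10⁻⁷, which is also why the order of the three rotations in Eq. (2) hardly matters.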

Parameterization of w and σ as three functions of time t (three, because w is unitary) allows a very adequate description of platform attitude variations (see Refs. 3 and 12 and references therein).

Determination of Instantaneous Surface Observation Geometry

Since all intervening variables are parametrized as functions of time t, the resulting approach requires solving a system of nonlinear equations in parametric form (13), the universal time (UT) being used as the iterative parameter. Once the observation time is obtained by solving the corresponding equations, the satellite position, platform orientation, and sensor angles can be immediately determined, which allows direct derivation of the exact pixel number in the image. The observation time also directly determines the line number in the image (shifts of the whole image are sometimes necessary due to satellite internal clock drifts). Moreover, the observation time allows derivation of the instantaneous solar position, which makes exact pixel-by-pixel illumination corrections possible; this is critical for optical data, especially for mosaicking and multitemporal composites. In the case of SAR data the general parameterization is similar, but because of the way the synthetic image is generated, other concepts appear in the parameterization of SAR image



geometric corrections (slant range, Doppler frequency). Because of the way in which the image is generated, motion compensation schemes applied in the processing of the raw data are preferable to a posteriori geometric corrections of the resulting (flat) image in slant range, especially if topography plays a significant role. The approach just described is called inverse mapping, because the observation geometry (line, column) is derived for each ground point. An alternative is the so-called direct mapping, in which each image point is mapped into the surface cartographic projection. The first approach can result in oversampling, while the second can result in gaps. The second approach is typically more efficient, and useful if no topographic information is available; however, for adequate resampling procedures the inverse mapping is more appropriate.

POLYNOMIAL APPROXIMATIONS

Under some circumstances (small, almost linear distortions), the general image transformation functions

x′ = f(x, y),  y′ = g(x, y)   (4)

can be approximated as simple polynomials

x′ ≈ a0 + a1x + a2y + a3x² + a4y² + a5xy + ···
y′ ≈ b0 + b1x + b2y + b3x² + b4y² + b5xy + ···   (5)
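The coefficients in Eq. (5) are typically estimated by least squares from pairs of corresponding points in the two geometries; a minimal sketch (assuming NumPy and a second-order truncation; the function names are illustrative):

```python
import numpy as np

def design_matrix(x, y):
    # terms of Eq. (5), truncated at second order
    return np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])

def fit_poly_warp(x, y, xp, yp):
    """Least-squares fit of (a0..a5, b0..b5) from control points
    (x, y) -> (xp, yp)."""
    A = design_matrix(x, y)
    a, *_ = np.linalg.lstsq(A, xp, rcond=None)
    b, *_ = np.linalg.lstsq(A, yp, rcond=None)
    return a, b

def apply_poly_warp(a, b, x, y):
    A = design_matrix(x, y)
    return A @ a, A @ b

# Recover a known warp from 12 simulated control points
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 100.0, 12)
y = rng.uniform(0.0, 100.0, 12)
a_true = np.array([1.0, 1.01, 0.02, 1e-4, -2e-4, 5e-5])
b_true = np.array([-2.0, -0.01, 0.99, 0.0, 1e-4, 0.0])
xp, yp = apply_poly_warp(a_true, b_true, x, y)
a_fit, b_fit = fit_poly_warp(x, y, xp, yp)
```

At least six control points are needed for the second-order model, and the numerical conditioning of the fit worsens rapidly with the polynomial degree, consistent with the ninth-degree double-precision limit noted in the text.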

Note that this is only an approximation, useful when few terms need to be retained for a given accuracy requirement; it cannot be considered a general geometric correction procedure, even if a high-degree polynomial is used. Typical distortions cannot be described by polynomials, and numerical problems arise for high-degree polynomials (a ninth-degree polynomial is about the limit for double-precision numerical calculations).

AUTOMATIC REGISTRATION TECHNIQUES

When the amount of data does not allow detailed processing of each single image, automatic registration techniques are used. The most widely used approach is based on linear cross-correlation, i.e., it is applicable only for almost-linear distortions. The essential idea is to define the transformation between two images in the form

I(x, y) → I(x + Δx, y + Δy) ↔ I′(x′, y′)   (6)

The linear parameters Δx and Δy are determined by iteratively maximizing the correlation between I(x, y) and I′(x′, y′). The procedure is accelerated by reducing the range of Δx, Δy and by working in a small window. The method is applicable only for small (linear) distortions, and it is typically used as a second step in the refinement of automatic procedures based on stand-alone geometrical models (using ephemeris or attitude data as a single input without any reference point), by cross-registration to a reference image, or for images with very small geometric distortions (close-nadir viewing cameras with optical or electronic compensation of attitude deviation effects).
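The maximization of the correlation over Δx, Δy in Eq. (6) can be sketched, for integer shifts in a small search window, as follows (an illustration assuming NumPy; operational implementations use windowed, accelerated searches):

```python
import numpy as np

def register_shift(ref, img, max_shift=5):
    """Return the integer (dx, dy) that maximizes the correlation
    between ref and img shifted by (dx, dy), as in Eq. (6)."""
    best_c, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            c = np.corrcoef(ref.ravel(), shifted.ravel())[0, 1]
            if c > best_c:
                best_c, best_shift = c, (dx, dy)
    return best_shift

# Simulate a known misregistration and recover it
ref = np.random.default_rng(1).standard_normal((32, 32))
img = np.roll(np.roll(ref, 2, axis=0), -3, axis=1)
dx, dy = register_shift(ref, img)
```

Note that np.roll wraps around at the image borders, which is acceptable only in this synthetic test; real implementations crop a common window instead.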

More general transformations, such as

I(x, y) → I(λ(x + Δx), μ(y + Δy))   (7)

are possible, but extremely time-consuming and impractical unless λ and μ can be considered constant across the whole image.

THE NEED FOR GROUND CONTROL POINTS FOR ACCURATE REGISTRATION

Unfortunately, completely automated techniques still cannot provide accurate registration. Although the platform trajectory can be known very accurately, orientation angles and attitude changes cannot yet be known with the necessary accuracy, so that to achieve subpixel registration among different images, some reference points [ground control points (GCPs)] have to be identified and used in a refinement procedure. Satellite clock drifts and assumptions in the orbital model are additional sources of error that need to be compensated. The refinement procedures allow slight changes in the orientation directions of the platform or sensor, redefined by means of GCPs, and/or in the platform trajectory, to compensate for the errors observed when "nominal" values are used first. In other cases, the refinement procedure is actually a second geometric correction (based on a polynomial transformation) after the image has been precorrected by a nominal full geometrical model. But the identification of GCPs in images and maps is not easy. The use of GPS for identifying GCPs has tremendously increased the accuracy of GCP techniques. Automatic correlation techniques can be applied only over boundaries (coasts, lakes, rivers), whereas the identification of GCPs in the image allows the use of single-point features in a very precise way. Still, the major problem in using GCPs is the lack of operationality, since the method necessarily requires the intervention of an operator, which not only slows the whole procedure but can also introduce subjective bias due to the different skills of each operator. Unfortunately, accurate registration still requires GCPs. The spectacular advance in platform positioning techniques in the past few years now makes it possible to reduce the number of GCPs to an absolute minimum.
The determination of the instantaneous attitude angles will also soon be possible by means of differential GPS techniques, with different receivers located at the edges of the platform, although this is still insufficient for accurate positioning, especially for very-high-resolution data. However, the major problems that make the use of GCPs necessary are the deficiencies in the cartographic modeling of the earth and, in many cases, the lack of elevation information for each GCP.

CARTOGRAPHIC PROJECTIONS

When images have to be transformed from the original acquisition geometry (typically not useful for most applications) to some cartographic reference, a specific map projection has to be chosen. The local maps available for each area define the type of cartographic projection, which differs from country to country, and it is often difficult to identify the best projection for a given problem. To avoid such problems, in many cases data are simply projected onto a latitude-longitude grid, where the relative latitude-longitude scale factor is compensated to make the resulting pixel (in kilometers over the surface) almost square. This is possible only for some reference latitude, which is chosen as the latitude of the central point of the reference area. One advantage of this projection, apart from its simplicity, is that most applications require computation of derived quantities that are given as functions of latitude and longitude, so that computations are easy if a latitude-longitude grid is used. For cartographic applications, however, local references are needed. Fortunately, computational tools are available that allow transfer of data from any cartographic projection to any other by means of mathematical transformations. The problem is then the resampling of the data. Each time the geometry of the data is changed, resampling is needed, with the unavoidable loss of some information and the introduction of interpolation artifacts. Single-step procedures from the original acquisition geometry to the final cartographic product, using a single resampling of the data, are always preferable.

THE PROBLEM OF TOPOGRAPHIC DISTORTIONS

Two effects have to be taken into account. On the one hand, given a sensor altitude over a reference surface (typically the earth ellipsoid), the effect of varying target altitude over the same reference surface is to introduce a horizontal displacement ΔX [see Fig. 2(a)], with additional geometric distortion, plus a change in the sensor-target distance, relevant for radiometric corrections and with significant effects in radar data processing. On the other hand, the change from a flat to a rugged surface alters the effective illumination angle and the effective scattering area, and adds reflections coming from adjacent slopes that perturb the desired signal from the target [Fig. 2(b)].

Figure 2. Effects introduced by topography in the geometric processing of remote sensing data. (a) Geometric distortions due to relief for off-nadir viewing geometry, including the horizontal displacement ΔX of apparent positions, the change in sensor-target distance D, and the change in the local illumination angle α′ with respect to the nominal illumination angle α for a horizontal surface. (b) Radiometric distortions due to changes in illumination angle and viewed area, as well as additional reflections from adjacent slopes, for topographic terrain (bottom) compared with a flat surface (top).

The first effect is purely geometric and can easily be accounted for, provided enough information about the earth's topographic structure (a digital terrain model) is available. The second is much more difficult to correct, and usually only first-order effects are compensated in correction approaches. The parametric geometric model incorporates topographic effects directly, especially in the inverse mapping approach. Several techniques have been suggested to include (at least first-order) topographic corrections in polynomial models, but they are applicable only in areas with small topographic distortions, such as close-nadir viewing or limited altitude changes across the area. For more general cases, a full 3-D geometrical model is required to account for geometric projections of objects onto the plane perpendicular to the viewing direction. Rigorous modeling of the radiometric effects due to topography is quite difficult: at least the local slope and orientation, plus the local horizon line, have to be determined for each point in the image, and each viewing/illumination condition determines a changed geometry in the resulting scene. Although most studies consider very simple approaches to describe radiation exchanges in rugged terrain, other approaches take full advantage of computer graphics technologies for a realistic description of the intervening effects.
Ways to speed up calculations while still keeping a realistic description of the major intervening effects have been developed (14), but a proper description of the effects introduced by topography, suitable for correcting/normalizing the data for such perturbations, remains an open issue.
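The purely geometric first effect has a simple first-order, flat-earth form: a target at height h above the reference surface, viewed at an off-nadir angle θ, is displaced horizontally by ΔX ≈ h tan θ. A minimal sketch of this relation behind Fig. 2(a) (a first-order illustration only; the full parametric model described earlier handles the general case):

```python
import math

def relief_displacement(h_target_m, off_nadir_deg):
    """First-order horizontal displacement (m) of a target at height
    h_target_m (m) above the reference surface, flat-earth approximation."""
    return h_target_m * math.tan(math.radians(off_nadir_deg))

# A 1000 m peak viewed 20 degrees off nadir is displaced by roughly 364 m
dx = relief_displacement(1000.0, 20.0)
```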

RESAMPLING

Once the remotely sensed data can be located geographically or registered over a spatial database, resampling techniques become the next critical issue. The simplest method is the nearest-neighbor technique, in which each new pixel is simply assigned the value of the closest pixel in the original image, without any recalculation; but many more advanced techniques have been developed (15-20). Assign a value of 1.0 to the central processing unit (CPU) time of the nearest-neighbor algorithm, considered as the basic procedure for a typical geometric processing chain including registration, UTM projection, and resampling. The relative increase in CPU time required by the different interpolation algorithms applied in the resampling is only a few percent for "standard" interpolation approaches: bilinear (1.02), cubic convolution (1.07), cubic B-spline (1.10). However, if sensor-specific optimum-interpolation approaches are used (see Refs. 18 and 20), CPU time increases drastically: analytical optimum interpolation (7.7), fully numerical optimum interpolation (360.5). Even if the more sophisticated processing achieves more accurate results, an increase in CPU time by a factor of 360 is not acceptable for operational use; analytical simplifications are more practical and still give reasonable accuracies (18). Resampling considerations become critical only for multisensor studies, especially when data of very different spatial resolutions have to be merged. In all optimum-interpolation methods, the basic idea is not to increase the apparent resolution, but to provide interpolated values for the new pixels resulting from geometrical transformations. Another type of resampling is specifically oriented to creating "super-resolution" products by enhancing the high-frequency content of the image. This is possible with the help of data at two or more resolutions. Some recent approaches use many different low-resolution images of a given area, each acquired under a slightly different viewing geometry (only partial overlaps among the IFOVs in the different images, or images shifted with respect to each other by a small fraction of the IFOV); high-resolution information is then reconstructed by iterative processing of the multiple views (21,22). Super-resolution resampling techniques have been used intensively to increase the usefulness of low-resolution passive microwave or scatterometer data, for which the spatial resolution is typically very poor but many views are available for each area. Although these techniques are still in early stages of development, they appear to be really successful only in areas with highly contrasted spatial substructures.

CALIBRATION AND REMOVAL OF TECHNICAL ANOMALIES

Several sensor-specific technical anomalies have to be considered in the geometric processing (23). Calibration typically refers to radiometric calibration; in the case of SAR data, it refers not only to radiometric but also to some geometric aspects. Correct interpretation of SAR data requires deconvolution of the signal from artifacts due to the antenna gain pattern, which are directly related to local variations in the incidence angle for each observed area. Calibration also means intersensor normalization.
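Returning to the resampling step above: the nearest-neighbor and bilinear schemes can be sketched per pixel as follows (a minimal illustration assuming NumPy and interior fractional coordinates; these are not the optimum-interpolation algorithms of Refs. 18 and 20):

```python
import numpy as np

def resample_nearest(img, xf, yf):
    """Nearest neighbor: value of the closest original pixel."""
    return img[int(round(yf)), int(round(xf))]

def resample_bilinear(img, xf, yf):
    """Bilinear: distance-weighted average of the four neighbors.
    Assumes (xf, yf) lies strictly inside the image."""
    x0, y0 = int(np.floor(xf)), int(np.floor(yf))
    dx, dy = xf - x0, yf - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

grid = np.array([[0.0, 1.0], [2.0, 3.0]])
center = resample_bilinear(grid, 0.5, 0.5)  # mean of the four pixels, 1.5
```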
Only a few sensors acquire images on a strictly pixel-by-pixel basis [the Advanced Very High Resolution Radiometer (AVHRR) is a typical example]. In the case of the Landsat Thematic Mapper (TM), 16 lines are acquired simultaneously by an array of detectors. In the case of the Satellite Pour l'Observation de la Terre (SPOT), each whole line is acquired simultaneously by charge-coupled device (CCD) arrays. Electro-optical multidetectors, with CCD technology and advanced optical-fiber devices, will be the common sensor technology in the near future. Since the multiple detectors composing the final image do not behave identically, intersensor normalization or recalibration is strictly required to avoid artifacts in the image (vertical strips of different intensity, horizontal striping, and nonregular spatial noise). Additional problems are due to the scanning devices, especially for dual-scan systems (forward and reverse) such as Landsat TM: nonlinear figure-8-shaped distortions in the line geometry must be compensated optically, but the resulting local geometric distortions in the images are quite difficult to remove.

SPATIAL MOSAICKING

Satellite data acquisitions are typically done along strips of limited width. The width varies from hundreds of meters, for very-high-resolution systems, to thousands of kilometers, for low-resolution systems. The reason for this variation is mainly the limited capability of data transmission from satellites to Earth: since the whole data volume is limited, an increase in spatial coverage comes at the expense of a reduction in spatial resolution. For many applications, mainly those requiring high-resolution data, a single strip is not enough to cover the study area, and several images have to be "mosaicked" to make a single image of the area. A large image area is defined and all pixels are set to zero values; then each single image is referenced over the large frame background. Where two single images overlap, a decision has to be made about how to combine both pixels to define the unique value in the mosaic. Accurate geometric registration of each single image forming the mosaic is not enough to make the mosaic look like a single image. Single images are acquired under different viewing geometries, and illumination corrections are needed in order to avoid artifacts at the boundaries between the original single images. Since the images are acquired at different times, motions or changes in targets (e.g., clouds) can result in discontinuities. Simple image-processing techniques are often used (local histogram equalization plus local cross-correlation and linear compositing across overlaps) to improve the appearance. However, physically based methods of compensating for the perturbing effects are preferred, especially if the mosaic images are to be used in numerical studies or as input to physical models.

Figure 3. Schematic diagram illustrating the whole processing chain for NOAA AVHRR data, from orbital data (TBUS/TLE), orbital extrapolation (Brouwer-Lyddane, SGP4), internal and external calibration data, and atmospheric data through geometric correction and resampling, topographic normalization, atmospheric corrections, and cloud screening to surface temperature, NDVI, and albedo products, multitemporal composites, and database management. Similar processing schemes are used for most remote-sensing data systems. SGP = simplified general perturbation; NDVI = normalized difference vegetation index; wv = water vapor; aeros = aerosols.

MULTITEMPORAL COMPOSITING

Monitoring the surface condition by remote sensing implies the use of multitemporal data. Obviously, the geometric registration among all the multidate images must be within one pixel for the composites to be meaningful. A critical issue is the necessity of accounting for the illumination dependence of the measured data.
It is therefore necessary to keep track during the geometric processing not only of the observation angles but also of the illumination angles, which are absolutely critical in the case of passive optical data. In the case of SAR data, this is no longer a serious limitation. This illumination independence of SAR data, together with its cloud-transparency properties, is the major reason for the superior capabilities of SAR remote sensing over optical systems for operational all-weather monitoring. However, proper corrections of antenna-gain-pattern effects, which vary with the local incidence angle, are needed, especially over topographically structured areas, to avoid artifacts due simply to changing illumination conditions in multitemporal studies.

OPERATIONAL PROCESSING AND DATA VOLUME CONSIDERATIONS

Since remote-sensing data are used in practical applications, operational constraints limit the use of sophisticated algorithms, which in many cases demand an unacceptable amount of computer time or memory. Most of the considerations noted previously are strongly limited by the amount of data to be processed, which points to the need for simplified algorithms and numerically optimized techniques. Although it is true that computer technology has experienced tremendous improvements in processing speed and memory capabilities, the increase in the amount of

remote sensing data to be processed, as well as in the sophistication of the algorithms used to process them, has been such that limitations remain. The optimum compromise between accuracy requirements and practicality should be sought in each particular application through code optimization and advanced memory management in the processing facilities. AVHRR data have become a typical example, partly because of the many applications of these data owing to their low cost and global availability, and partly because of the peculiar characteristics of the system, with highly nonlinear geometric distortions due to the panoramic view and circular scanning. Figure 3 indicates the many steps involved in the whole AVHRR data-processing scheme; most data-processing schemes for other sensors or systems follow similar steps. The developments in AVHRR data processing (3,12) are a good example of how improvements in data-processing techniques can drastically increase the usefulness of data in many new potential applications. However, the future is really challenging. The Earth Observing System (EOS) platforms will provide data at a rate of 13.125 Mbyte/s for the first EOS platform, and slightly higher for the subsequent series. Similar or even higher rates are expected for other systems, especially those using active sensors such as SAR. These data rates represent a real challenge for current computational algorithms and hardware technologies (24).
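The quoted EOS data rate translates directly into daily storage volume; a back-of-the-envelope sketch:

```python
rate_mbyte_s = 13.125        # first EOS platform, as quoted in the text
seconds_per_day = 86_400
gbytes_per_day = rate_mbyte_s * seconds_per_day / 1024
# roughly 1107 Gbyte (over 1 Tbyte) of raw data per platform per day
```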

BIBLIOGRAPHY

1. P. Meyer, A parametric approach for the geocoding of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data in rugged terrain, Remote Sens. Environ., 49: 118-130, 1994.
2. H. De Groof, G. De Grandi, and A. J. Sieber, Geometric rectification and geocoding of JPL's AIRSAR data over hilly terrain, Proc. 3rd Airborne Synthetic Aperture Radar Workshop, J. J. van Zyl (ed.), NASA JPL Pub. 91-30, 1991, pp. 195-204.
3. J. Moreno and J. Meliá, A method for accurate geometric correction of NOAA AVHRR HRPT data, IEEE Trans. Geosci. Remote Sens., 31: 204-226, 1993.
4. R. R. Bate, D. D. Mueller, and J. E. White, Fundamentals of Astrodynamics, New York: Dover, 1971.
5. K. I. Duck and J. C. King, Orbital mechanics for remote sensing, in R. N. Colwell (ed.), Manual of Remote Sensing, 2nd ed., Falls Church, VA: American Society of Photogrammetry, 1983, vol. I, chap. 16, pp. 699-717.
6. A. E. Roy, Orbital Motion, 3rd ed., Bristol: Hilger, 1988.
7. F. R. Hoots and R. L. Roehrich, Models for propagation of NORAD element sets, Spacetrack Rep. No. 3, NORAD, Aerospace Defense Command, Peterson AFB, CO, 1980.
8. D. Brouwer, Solution to the problem of artificial satellite theory without drag, Astron. J., 64: 378-397, 1959.
9. P. R. Escobal, Methods of Orbit Determination, New York: Wiley, 1965.
10. E. D. Kaplan (ed.), Understanding GPS: Principles and Applications, Norwood, MA: Artech House, 1996.
11. J. Morrison and S. Pines, The reduction from geocentric to geodetic coordinates, Astron. J., 66: 15-16, 1961.
12. G. W. Rosborough, D. G. Baldwin, and W. J. Emery, Precise AVHRR image navigation, IEEE Trans. Geosci. Remote Sens., 32: 644-657, 1994.
13. W. H. Press et al., Numerical Recipes, Cambridge, UK: Cambridge Univ. Press, 1986.
14. J. Dozier and J. Frew, Rapid calculation of terrain parameters for radiation modeling from digital elevation data, IEEE Trans. Geosci. Remote Sens., 28: 963-969, 1990.
15. R. Bernstein, Digital image processing of Earth observation sensor data, IBM J. Res. Develop., 20: 40-57, 1976.
16. R. Bernstein et al., Image geometry and rectification, in R. N. Colwell (ed.), Manual of Remote Sensing, 2nd ed., Falls Church, VA: American Society of Photogrammetry, vol. I, chap. 21, pp. 873-922, 1983.
17. R. G. Keys, Cubic convolution interpolation for digital image processing, IEEE Trans. Acoust. Speech Signal Process., 29: 1153-1160, 1981.
18. J. Moreno and J. Meliá, An optimum interpolation method applied to the resampling of NOAA AVHRR data, IEEE Trans. Geosci. Remote Sens., 32: 131-151, 1994.
19. S. K. Park and R. A. Schowengerdt, Image reconstruction by parametric cubic convolution, Comput. Vision Graphics Image Process., 23: 258-272, 1983.
20. G. A. Poe, Optimum interpolation of imaging microwave radiometer data, IEEE Trans. Geosci. Remote Sens., 28: 800-810, 1990.
21. D. Baldwin, W. Emery, and P. Cheeseman, Higher resolution Earth surface features from repeat moderate resolution satellite imagery, IEEE Trans. Geosci. Remote Sens., 36: 244-255, 1998.
22. P. Cheeseman et al., Super resolved surface reconstruction from multiple images, NASA-AMES Tech. Rep. FIA-94-12, 1994.
23. P. N. Slater, Remote Sensing Optics and Optical Systems, Reading, MA: Addison-Wesley, 1980.
24. M. Halem, Scientific computing challenges arising from spaceborne observations, Proc. IEEE, 77: 1061-1091, 1989.

JOSE F. MORENO University of Valencia

RENDERING (COMPUTER GRAPHICS). See GLOBAL ILLUMINATION.

RENEWABLE ENERGY. See OCEAN THERMAL ENERGY CONVERSION.

RENEWABLE SYSTEMS. See REPAIRABLE SYSTEMS.



Wiley Encyclopedia of Electrical and Electronics Engineering
Visible and Infrared Remote Sensing
Standard Article
William J. Emery, University of Colorado, Boulder, CO
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W3616
Article Online Posting Date: December 27, 1999
Abstract | Full Text: HTML PDF (753K)


Abstract. The sections in this article are: Development of Meteorological Satellites; The AVHRR; Data Ingest, Processing, and Distribution; NOAA's Comprehensive Large Array-data Stewardship System (CLASS); National Polar-orbiting Operational Satellite System (NPOESS); NASA's MODerate resolution Imaging Spectroradiometer (MODIS); The Geostationary Observing Environmental Satellite (GOES); Development of Earth Resources Technology Satellites; The French Spot Satellites; Summary.


VISIBLE AND INFRARED REMOTE SENSING

The earliest satellite remote sensing information was supplied by video-style cameras on polar-orbiting satellites. Measuring an integral over a wide range of visible wavelengths, these cameras provided a new perspective on the study of the Earth, making it possible to view features from space that had previously been studied only from the Earth's surface itself. One of the most obvious applications of these satellite data was monitoring clouds and their motion as indicators of changes in weather patterns. This application to meteorology was in fact the primary driving force in the early development of satellite remote sensing. As we matured from polar-orbiting to geostationary satellites, our ability to monitor atmospheric conditions improved greatly, resulting in modern systems that give us very high-resolution images of atmospheric pattern changes in time and space. Early (1960s) weather satellites were spin stabilized, with their axis and camera pointed at the Earth over the United States, meaning that the satellite could view the Earth only at the latitudes of North America. Today almost everyone on Earth is accustomed to seeing frequent satellite images collected by geostationary satellites as part of their local weather forecast. This is a dramatic change that has taken place in slightly over 30 years. As the meteorological satellites improved, it became apparent that satellite data were useful for a great many other disciplines. Some of these other applications used data from the meteorological satellites directly, while others awaited new sensors and satellite systems. During those developmental times the National Aeronautics and Space Administration (NASA) sponsored the NIMBUS program, in which a common satellite "bus" was used to carry into space various instruments requiring testing and evaluation.
This excellent program ran for about 20 years and was amazingly successful in flying and demonstrating the capabilities of a variety of sensors. One of its greatest successes was the start of the land surface remote sensing satellite series known as LANDSAT. The first few of these, starting in 1972, actually used the NIMBUS platform as the spacecraft. Starting with LANDSAT-4 and -5, a new bus was built just for this satellite series. The operation of LANDSAT was privatized in the 1980s, and unfortunately the privately developed LANDSAT-6 failed upon launch. LANDSAT-7 was planned to launch in 1998; once again a NASA satellite would usher in a new phase in the study of land surface remote sensing images. Land surface sensing satellites have not been exclusively American. Also designed to measure over the same wavelengths as LANDSAT was the French Système Pour l'Observation de la Terre (SPOT) satellite system. The series began with the operation of SPOT-1 on February 22, 1986, and continues today. Companies have been set up to process and sell the SPOT satellite data products.

DEVELOPMENT OF METEOROLOGICAL SATELLITES

NASA's Television Infrared Observation Satellite (TIROS-1) (1), launched on 1 April 1960, gave us our first systematic
images of Earth from space. This single television camera was aligned with the axis of this spin-stabilized satellite, which meant that it could point at the Earth only for a limited time each orbit (and so naturally collected pictures at the latitudes of North America). This experimental satellite series eventually carried a variety of sensors, evolving as technology and experience increased. Working together, NASA and the Environmental Science Services Administration (ESSA), merged into the National Oceanic and Atmospheric Administration (NOAA) at the latter's formation in 1970, stimulated improved designs. TIROS-1 through TIROS-X contained simple television cameras, while four of the ten satellites also included infrared sensors. One interesting development was the change in location of the camera from the spin axis of the satellite to pointing outward from this central axis. The satellite axis was also turned 90° so that its side, rather than the central axis, pointed toward the Earth. Called the "wheel" satellite, this new arrangement allowed the camera to collect a series of circular images of the Earth, which when mosaicked together provided the first global view of the Earth's weather systems from space. Using this wheel concept, cooperation between NASA and ESSA initiated the TIROS Operational System (TOS), with its first launch in 1966. Odd-numbered satellites carried improved Vidicon cameras and data storage/replay systems that provided global meteorological data; even-numbered satellites provided direct-readout Automatic Picture Transmission (APT) video to low-cost VHF receiving stations. APT, now derived from Advanced Very High Resolution Radiometer (AVHRR) imagery, is still provided to thousands of simple stations in schools, on ships, and elsewhere worldwide. Nine wheel satellites, called ESSA-1 through ESSA-9, were launched between 1966 and 1969.
The 1970s saw the Improved TOS (ITOS), which combined APT and global data collection/recording in each satellite. The major improvement was the use of guidance systems developed for ballistic missiles, which made it possible to stabilize the three axes of the spacecraft. Thus a single camera could be aimed at the Earth, eliminating the need to assemble a series of circular images to map the world's weather. ITOS also introduced day/night acquisitions and a new series of Scanning Radiometers (SRs), which offered vastly improved data. Later, ITOS carried the Very High Resolution Radiometer (VHRR). As part of international weather data exchange, NOAA introduced the direct reception of VHRR data at no charge to ground stations built by an increasing number of users, beginning in 1972. ITOS-1 and NOAA-1, launched in 1970, were transition satellites of the ITOS series, whereas NOAA-2 through NOAA-5, launched in 1972–1976, carried the VHRR instrument. The latest generation of this series has been operational since 1978. TIROS-N (for TIROS-NOAA) and NOAA-7 through the latest NOAA-12/NOAA-14 include the AVHRR, discussed in the following section. The major advance introduced with this satellite series was the shift from an analog data relay to a fully digital system: the data are now digitized onboard the spacecraft before being transmitted to Earth. Also, the size and weight of the satellite has changed from under 300 kg with the ESSA series of satellites to over 1200 kg with the TIROS-N satellites.

Figure 1. Advanced TIROS-N satellite.

There has been one change in the TIROS-N series, and we now have the advanced A-TIROS-N, shown in Fig. 1, along with the advanced A-AVHRR, or the AVHRR-2. The primary difference in the AVHRR is the addition of a second thermal infrared band to help in the correction for water vapor attenuation when computing sea surface temperature. The two polar orbiters flying at the time this was first written were NOAA-12 (morning orbit) and NOAA-14 (afternoon orbit). Usually the even-numbered satellites fly in the morning and the odd-numbered satellites fly in the afternoon; because of the premature failure of NOAA-13 it was necessary to put NOAA-14 in the afternoon orbit. A comparison between NOAA-12 and NOAA-14 is given in Table 1. We are now (2006) flying NOAA-18, with one satellite (NOAA-N') left to be launched in this series.

THE AVHRR

Throughout the developmental process, NOAA has followed a philosophy of meeting operational requirements with instruments whose potential has been proven in space. Predecessor instruments were flown experimentally on experimental satellites before they were accepted and implemented on operational monitoring satellites. These instruments were redesigned to meet both the scientific and technical requirements of the mission; the goal of the redesign was to improve the reliability of the instrument and the quality of the data without changing the previously proven measurement concepts (2). This philosophy brings both benefits and challenges to the user. Benefits revolve around relative reliability, conservative technology, continuity of access, and application of the data compared with other satellite systems. Challenges include desires to use the system beyond its original design, which could have been advanced more rapidly (but at considerably more cost and/or risk of loss of continuity of data characteristics).
Challenges also include conﬂicting desires by users for greater support for their own particular scientiﬁc disciplines with more advanced sensors and more sophisticated customer support (while often also desiring even lower-cost

imagery) from NOAA. AVHRR's ancestors were the Scanning Radiometers (SRs), first orbited on ITOS-1 in 1970. These early SRs had a relatively low spatial resolution (8 km) and fairly low radiometric fidelity. The VHRR was the first improvement over the SR and for a while flew simultaneously with the SR. Later the VHRR was replaced by the AVHRR, which combined the high-resolution and monitoring functions. There are two series of AVHRR instruments. Built by ITT Aerospace/Optical Division in the mid-1970s, the AVHRR/1 is a four-channel, filter-wheel spectrometer/radiometer, while the AVHRR/2, built in the early 1980s, is identical except for the addition of a second longwave channel (5). The AVHRR instrument is made up of five modules: the scanner module, the electronics module, the radiant cooler, the optical system, and the base plate. Schwalb (3, 4) and ITT (5) provide detailed descriptions of AVHRR hardware. Starting in November 1998, a new version called the AVHRR/3 (Table 2) was launched on NOAA-15. In this modification of the sensor, the mid-wavelength infrared channel operates at approximately 1.6 µm during the day and switches back to the previous 3.7 µm at night; the sensor thus switches this channel approximately every 45 min during its 90 min orbit. This switch was introduced to provide new snow and ice detection during the day without a major change in the data downlink format. AVHRR channels 1 and 2 are calibrated before launch and designed to provide direct, quasi-linear conversion between the 10-bit digital numbers and albedo. In addition,

the thermal channels are designed and calibrated before launch as well as in space (using an onboard blackbody at a measured temperature and a view of cold space as a near-zero reference) to provide direct, quasi-linear conversion between digital numbers and temperature in degrees Celsius. Because the thermal infrared channels were optimized for measuring the skin temperature of the sea surface, their ranges are approximately −25°C to +49°C for channel 3, −100°C to +57°C for channel 4, and −105°C to +50°C for channel 5 for a typical NOAA-11 scene.

Table 2. AVHRR/3 Channel Characteristics

Band           Wavelength (µm)   Nadir Resol. (km)   Samples per Line   Typical Use
Band 1 (VIS)   0.58 to 0.68      1.1                 2048               Daytime cloud and surface mapping
Band 2 (NIR)   0.725 to 1.1      1.1                 2048               Land-water boundaries
Band 3A (NIR)  1.58 to 1.64      1.1                 2048               Snow and ice detection
Band 3B (MIR)  3.55 to 3.93      1.1                 2048               Night cloud mapping, sea surface temperature
Band 4 (TIR)   10.3 to 11.3      1.1                 2048               Night cloud mapping, sea surface temperature
Band 5 (TIR)   11.5 to 12.5      1.1                 2048               Sea surface temperature
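The two-point calibration described above (one reference view of an onboard blackbody at a known temperature, one view of cold space) amounts to fitting a straight line between two (count, value) pairs and applying it to every Earth-view sample. The sketch below illustrates that arithmetic only; the count values and temperatures are made-up illustrative numbers, not real AVHRR telemetry, and operational processing adds a small nonlinearity correction on top of this.

```python
# Illustrative two-point linear calibration, as used for radiometer channels:
# two reference views give two (digital number, calibrated value) pairs, from
# which a gain and offset are derived. All numbers here are hypothetical.

def linear_calibration(dn_space, val_space, dn_blackbody, val_blackbody):
    """Return a function mapping raw digital numbers to calibrated values."""
    gain = (val_blackbody - val_space) / (dn_blackbody - dn_space)
    offset = val_space - gain * dn_space
    return lambda dn: gain * dn + offset

# Two reference views per scan (made-up counts and temperatures, deg C):
to_celsius = linear_calibration(dn_space=990, val_space=-80.0,
                                dn_blackbody=400, val_blackbody=15.0)

earth_counts = [512, 700, 850]          # raw Earth-view samples (invented)
temps = [round(to_celsius(dn), 1) for dn in earth_counts]
print(temps)
```

The same gain/offset form applies to the visible channels' count-to-albedo conversion; only the reference points differ.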

DATA INGEST, PROCESSING, AND DISTRIBUTION

There are four classes of AVHRR data: (1) High Resolution Picture Transmission (HRPT) data are full-resolution (1 km) data received directly in real time by ground stations; (2) Global Area Coverage (GAC) data are sampled on-board to represent a 4.4 km pixel, allowing daily global coverage to be systematically stored and played back to NOAA ground stations at Wallops Island, Virginia, and Fairbanks, Alaska, and a station operated at Lannion, France, by the Centre National d'Études Spatiales (CNES); (3) Local Area Coverage (LAC) data are 1 km data recorded on-board for later replay to the NOAA ground stations; and (4) Automatic Picture Transmission (APT) is an analog derivative of HRPT data transmitted at lower resolution and high power for low-cost VHF ground stations. Kidwell (6) provides a handbook for users of AVHRR data. Special acquisitions of LAC data may be requested by anyone (7). HRPT, LAC, and GAC data are received by the three stations just mentioned and processed at NOAA facilities in Suitland, Maryland. In addition, relatively low-cost, direct-readout stations can be set up to read the continuously broadcast HRPT data. Further information about these data, and about NOAA's on-line cataloging and ordering system, can be obtained from:

National Oceanic and Atmospheric Administration
National Environmental Satellite Data and Information Service (NESDIS)
National Climatic Data Center (NCDC)
Satellite Data Services Division (SDSD)
Princeton Executive Square, Suite 100
Washington, DC 20233
Tel: (301) 763-8400
FAX: (301) 763-8443

Users of large quantities of data may be able to obtain them more rapidly from NOAA by making the appropriate individual arrangements with:

Chief, NOAA/NESDIS Interactive Processing Branch (E/SP22)
Room 510, World Weather Building
Washington, DC 20233
Tel: (301) 763-8142
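The GAC reduction mentioned under class (2) is commonly described in the AVHRR user literature as onboard averaging of four of every five full-resolution samples along a scan line, with only every third scan line processed, which is how 1.1 km samples come to "represent" a roughly 4 km pixel. The sketch below is an illustrative reading of that scheme, not the flight processor's actual implementation.

```python
# Hypothetical sketch of GAC-style onboard data reduction: keep every third
# scan line; along each kept line, average four of every five samples and
# skip the fifth. Input is a list of full-resolution scan lines.

def gac_reduce(scan_lines):
    """Reduce full-resolution scan lines to GAC-style samples."""
    reduced = []
    for line in scan_lines[::3]:                  # every third scan line
        out = []
        for i in range(0, len(line) - 4, 5):      # groups of 5 samples
            out.append(sum(line[i:i + 4]) / 4.0)  # average 4, skip 1
        reduced.append(out)
    return reduced

# Toy example: 6 scan lines of 10 samples each.
lines = [[float(j) for j in range(10)] for _ in range(6)]
print(gac_reduce(lines))
```

With 1.1 km input samples, each output value stands for a patch roughly 4 to 5 km across, consistent with the 4.4 km figure quoted above.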


NOAA'S COMPREHENSIVE LARGE ARRAY-DATA STEWARDSHIP SYSTEM (CLASS)

What was discussed earlier as the Satellite Active Archive (SAA) was subsumed by NOAA's new and comprehensive CLASS system. CLASS is a web-based data archive and distribution system for NOAA's environmental data. It is NOAA's premier online facility for the distribution of NOAA and U.S. Department of Defense (DoD) operational environmental satellite data (geostationary and polar-orbiting, GOES and POES, and DMSP) and derived data products. CLASS is evolving to support additional satellite data streams, such as MetOp, EOS/MODIS, NPP, and NPOESS, as well as NOAA's in situ environmental sensors, such as NEXRAD, USCRN, COOP/NERON, and oceanographic sensors and buoys, plus geophysical and solar environmental data. CLASS is in its second year of a major 10-year growth program, adding new data sets and functionality to support a broader user base. To learn more about CLASS, please visit their online system at http://www.class.noaa.gov.

CLASS Goals

The CLASS project is being conducted in support of the NESDIS mission to acquire, archive, and disseminate environmental data. NESDIS has been acquiring these data for more than 30 years, from a variety of in situ and remote sensing observing systems throughout the National Oceanic and Atmospheric Administration (NOAA) and from a number of its partners. NESDIS foresees significant growth in both the data volume and the user population for these data, and has therefore initiated this effort to evolve current technologies to meet future needs. The long-term goal for CLASS is the stewardship of all environmental data archived at the NOAA National Data Centers (NNDC). The initial objective for CLASS is to support specifically the following campaigns:

NOAA and Department of Defense (DoD) Polar-orbiting Operational Environmental Satellites (POES) and Defense Meteorological Satellite Program (DMSP)
NOAA Geostationary Operational Environmental Satellites (GOES)
National Aeronautics and Space Administration (NASA) Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS)
National Polar-orbiting Operational Environmental Satellite System (NPOESS)
The NPOESS Preparatory Program (NPP)


EUMETSAT Meteorological Operational Satellite (Metop) Program
NOAA NEXt generation weather RADAR (NEXRAD) Program

The development of CLASS is expected to be a long-term, evolutionary process, as current and new campaigns are incorporated into the CLASS architecture. This master project management plan defines project characteristics that are expected to be applicable over the life of the project. However, as conditions change over time, this plan will be updated as necessary to provide current and relevant project management guidance. The goal of CLASS is to provide a single portal for access to NOAA environmental data, some of which is stored in CLASS and some available from other archives. The major processes required to meet this goal that are in scope for CLASS are:

Ingest of environmental data from CLASS data providers
Extraction and recording of metadata describing the data stored in CLASS
Archiving data
Browse and search capability to assist users in finding data
Distribution of CLASS data in response to user requests
Identification and location of environmental data that are not stored within CLASS, and connection with the owning system
Charging for data, as appropriate (see the out-of-scope note below)
Operational support processes: 24 × 7 availability, disaster recovery, help desk/user support

While the capability of charging for media delivery of data is a requirement for CLASS, the development of an e-commerce system to support financial transactions is out of scope. CLASS will interface with another system, the NESDIS e-commerce System (NeS), for financial transactions; definition and implementation of the CLASS side of that interface is in scope for this project.

Location/Organization

CLASS is being developed and operated under the direction of the NESDIS Office of Systems Development, having been under the direction of the NESDIS Chief Information Officer (CIO) during the period 2001–2003 (through Release 2.0). CLASS is being developed by NOAA and contractor personnel associated with the Office of Systems Development (OSD) at the CLASS-MD site in Suitland, MD, and at the CLASS-WV site in Fairmont, WV, the National Climatic Data Center (NCDC) in Asheville, NC, and the National Geophysical Data Center (NGDC) in Boulder, CO. The operational system is currently located at the NOAA facility in Suitland, MD, with a second operational facility at the NCDC facility in Asheville, NC.

The project management is conducted by the CLASS Project Management Team (CPMT), with representatives from each government and contractor team participating in CLASS development and operations, chaired by the OSD Project Manager. Section 2.2 describes the organizational structure for the CLASS project.

Data

Data stored in CLASS include the following categories:

Environmental data ingested and archived by CLASS; currently this includes data for each of the campaigns listed in Section 1.1, and certain product and in situ data
Descriptive data received with the environmental data, used to support browsing and searching the CLASS archive
Descriptive data maintained by CLASS to support searching for data maintained in other (non-CLASS) repositories
Operational data required to support the general operation of the system that are not related to environmental data (e.g., user information, system parameters)

System Overview

An overview of the CLASS system is shown in Fig. 2, which is a flow chart for the system. The system functions are to: ingest satellite data and derived data products; create browse products for user-selected products and store them online; create netCDF files from selected product data; store some files in permanent cache and others in temporary cache; and archive all data in a robotic system. In terms of user services, CLASS is to: set up the user profile; initiate catalog searches; view search results (catalog data, browse images, dataset coverage maps); order data; view order status; and visualize and download product data netCDF files. Visualization is done at the user interface in response to requests for browse images of the selected data set. The primary value of these browse images is to determine whether the right area was selected and whether or not an image contains an abundance of cloud cover. The system delivers the URL of the browse image to the user's computer for display. Once it is determined that the data granule is to be ordered, the order is confirmed by the user and the order module locates the data set in terms of data type, time range, geographic range, or other criteria. The data files are located in the robotic storage and retrieved for processing. The requested file is generated from the storage file and put in a temporary cache for user retrieval.
The data are kept there for several days. These files can be transferred by standard FTP for future analysis and storage by the users. The system notifies users by email when the data files are ready for transfer. It is also possible to set up a regular subscription for certain types and amounts of data, and to "bulk order" data on CLASS.

Figure 2. CLASS System Overview Diagram.

The system command unit oversees the operation of CLASS and monitors its activities. There is a backup system to ensure continued operation and secure archival of all the data in CLASS.

NATIONAL POLAR-ORBITING OPERATIONAL SATELLITE SYSTEM (NPOESS)

On May 15, 1994, the U.S. President issued a "Decision Directive" that the three separate polar-orbiting satellite programs operated by NOAA, the DoD, and NASA should be converged into a single program of polar-orbiting satellites. An Integrated Program Office (IPO), staffed by NOAA, NASA, and DoD personnel contributed by each agency to this joint effort, was formed and located in Silver Spring, MD. Although a NOAA program, the procurement of this system followed DoD lines, and representatives of these agencies met and defined a list of Environmental Data Records (EDRs) that the sensors on NPOESS must be able to measure and provide. For each EDR, a required accuracy and precision were also given by the IPO. The primary sensors were then defined by the IPO and requests for proposals were initiated. The instruments were initially contracted separately by the government. The primary sensor is the Visible/Infrared Imaging Radiometer Suite (VIIRS), which was to take over the functions of the AVHRR in terms of the visible and infrared measurements. In addition, VIIRS was to include the bands of NASA's MODerate resolution Imaging Spectroradiometer (MODIS), which has ocean color bands instead of the broad visible band of the AVHRR. The NPOESS satellite (Fig. 3) is a relatively large satellite and will carry a number of instruments. There will first be an NPOESS Preparatory Project (NPP) satellite, a NASA satellite that will carry three of the future NPOESS instruments: the VIIRS just mentioned, the Cross-track Infrared Sounder (CrIS), and the Advanced Technology Microwave Sounder (ATMS).

Figure 3. NPOESS satellite.

These latter two instruments exploit the infrared and passive microwave portions of the spectrum to profile the lower atmosphere, providing information critical for operational weather forecasting. The VIIRS will have 21 channels ranging from 412 nm to 11450 nm. The visible channels are quite narrow (20 nm) to allow for the computation of ocean color indications of chlorophyll and to sense land surface vegetation accurately. A schematic for the VIIRS instrument is presented in Fig. 4, which shows the processing of a photon received by the instrument to the end product.


A prime contractor was selected to build the NPOESS satellites and the corresponding ground system. This contract was awarded to Northrop Grumman together with Raytheon Corp. It was decided to have the prime contractor also handle the administration of the subcontracts to the selected instrument vendors. The VIIRS had been awarded to Raytheon Santa Barbara Research Systems, the CrIS to ITT in Fort Wayne, Indiana, and the ATMS to Aerospace Corp., which was later taken over by Northrop Grumman as well. During its development phase this program dramatically overran its budget and finally hit a threshold at which a DoD program was required by Congress to undergo a review; this review was agreed to by NOAA and NASA. As a result, NPOESS has been dramatically scaled back, with the cancellation of some planned instruments and the reduction from three to only two satellites operating at the same time. Also, the launch of the first NPOESS satellite was delayed from 2009 to 2012.

Figure 4. VIIRS schematic operation description.

NASA'S MODERATE RESOLUTION IMAGING SPECTRORADIOMETER (MODIS)

MODIS was the primary imager selected for NASA's Earth Observing System. Awarded to Raytheon's Santa Barbara lab, it became a precursor for the NPOESS VIIRS instrument. First launched on NASA's TERRA satellite on December 18, 1999, MODIS began collecting data on February 24, 2000. The MODIS instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 µm to 14.4 µm. The responses are custom tailored to the individual needs of the user community and provide exceptionally low out-of-band response. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. A ±55-degree scanning pattern at the EOS orbit of 705 km achieves a 2,330-km swath and provides global coverage every one to two days. The Scan Mirror Assembly uses a continuously rotating double-sided scan mirror to scan ±55 degrees and is driven by a motor encoder built to operate at 100 percent duty cycle throughout the 6-year instrument design life. The optical system consists of a two-mirror off-axis afocal telescope, which directs energy to four refractive objective assemblies, one for each of the VIS, NIR, SWIR/MWIR, and LWIR spectral regions, to cover a total spectral range of 0.4 to 14.4 µm. A high-performance passive radiative cooler provides cooling to 83 K for the 20 infrared spectral bands on two HgCdTe Focal Plane Assemblies (FPAs). Novel photodiode-silicon readout technology for the visible and near infrared provides unsurpassed quantum efficiency and low-noise readout with exceptional dynamic range. Analog programmable gain and offset and FPA clock and bias electronics are located near the FPAs in two dedicated electronics modules, the Space-viewing Analog Module (SAM) and the Forward-viewing Analog Module (FAM). A third module, the Main Electronics Module (MEM), provides power, control systems, command and telemetry, and calibration electronics. The system also includes four onboard calibrators as well as a view to space: a Solar Diffuser (SD), a v-groove Blackbody (BB), a Spectroradiometric Calibration Assembly (SRCA), and a Solar Diffuser Stability Monitor (SDSM). A second MODIS instrument was launched on the AQUA satellite on May 4, 2002, giving us both a morning (TERRA) and afternoon (AQUA) Earth view with MODIS instruments. The afternoon orbit of AQUA is one in which a whole series of satellites will be operated; it has become known as the A-Train.
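The quoted scan geometry is internally consistent: a ±55° scan from 705 km altitude spans about 2330 km on the ground once Earth curvature is accounted for (a flat-Earth tangent calculation gives only about 2000 km). The quick check below uses a nominal mean Earth radius, an assumed constant not taken from the article.

```python
import math

def swath_width_km(altitude_km, half_scan_deg, earth_radius_km=6371.0):
    """Ground swath width for a cross-track scanner on a spherical Earth.

    In the satellite / Earth-center / ground-intercept triangle, the law of
    sines gives the angle at the intercept; subtracting the scan angle yields
    the Earth-central half-angle subtended by half the swath.
    """
    r, h = earth_radius_km, altitude_km
    theta = math.radians(half_scan_deg)
    eta = math.asin((r + h) / r * math.sin(theta))  # angle at ground intercept
    central_half_angle = eta - theta                # Earth-central half-angle
    return 2 * r * central_half_angle               # arc length, both sides

# MODIS-like geometry: 705 km orbit, ±55-degree scan.
print(round(swath_width_km(705.0, 55.0)))   # close to the quoted 2,330 km
```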


THE GEOSTATIONARY OPERATIONAL ENVIRONMENTAL SATELLITE (GOES)

To monitor changes in the atmosphere related to the Earth's weather accurately, it is necessary to sample much more frequently than is possible with a polar-orbiting satellite. Thus geostationary satellite sensors were created to make it possible to sample rapidly from geostationary orbit. The much greater altitude of geostationary orbits (36,000 km versus 800 km for polar orbits) requires much more sensitive instruments than were flown earlier in polar orbit. In addition, because the early geostationary satellites were spin-stabilized, the sensors needed to be able to operate on a spinning satellite: each scan could view the Earth only for a period of time set by the rotation of the spacecraft. The solution came in the form of what was originally called the spin scan camera, later called the spin scan radiometer. The key to the success of this unit was that it used the spin of the satellite to cause the telescope to scan the Earth's surface mechanically and also used the satellite's spin to increment the scan mirror vertically. Called the Visible Infrared Spin Scan Radiometer (VISSR, Fig. 5), it would scan the surface of the Earth from north to south or vice versa. This basic system continued up to the early 1990s, when a new generation of Geostationary Operational Environmental Satellites (GOES) was deployed. Using control systems similar to those used in the polar orbiters, the new GOES are three-axis stabilized and maintain their orientation to the Earth (Fig. 6). This orientation makes it possible to "stare" at the Earth, increasing the amount of radiative energy available for Earth surface sensing and so resulting in higher spatial resolutions. This new constant orientation to the Earth also results in a satellite that is constantly heated at one end and cooled at the other; a complex system is required to maintain thermal equilibrium. The GOES Imager (Fig.
7) is a multichannel instrument designed to sense radiant and solar-reﬂected energy from sampled areas of the Earth. The multielement spectral channels simultaneously sweep east-west and west-east along a north-to-south path by means of a two-axis mirror scan system. The instrument can produce full-Earth disk images, sector images that contain the edges of the Earth, and various sizes of area scans completely enclosed within the Earth scene using a new ﬂexible scan system. Scan selection permits rapid continuous viewing of local areas for monitoring of mesoscale (regional) phenomena and accurate wind determination. The GOES system produces a large number of primary data products. They include

Figure 5. GOES spinner spacecraft.

Figure 6. Three-axis stabilized GOES.

Basic day/night cloud imagery and low-level cloud and fog imagery,
Upper and lower tropospheric water vapor imagery,
Observations of land surface temperature data with strong diurnal variation,
Sea surface temperature data,
Winds from cloud motions at several levels and hourly cloud-top heights and amounts,
Albedo and infrared radiation flux to space, important for climate monitoring and climate model validation,
Detection and monitoring of forest fires resulting from natural causes and/or human-made causes and monitoring of smoke plumes,
Precipitation estimates,
Total column ozone concentration (potential data product), and
Relatively accurate estimates of total outgoing longwave radiation flux (potential data product).

These data products are summarized in Table 3. These data products enable users to monitor severe storms accurately, to determine winds from cloud motion, and, when combined with data from conventional meteorological sensors, to produce improved short-term weather forecasts.

Figure 7. The GOES Imager.

The major operational use of 1 km resolution visible and 4 km resolution infrared multispectral imagery is to provide early warnings of threatening weather. Forecasting the location of probable severe convective storms and the landfall position of tropical cyclones and hurricanes is heavily dependent upon GOES infrared and visible pictures. The quantitative temperature, moisture, and wind measurements are useful for isolating areas of potential storm development. GOES data products are used by a wide variety of both operational and research centers. The National Weather Service's (NWS's) extensive use of multispectral imagery provides early warnings of threatening weather and is central to its weather monitoring and short-term forecast function. Most nations in the western hemisphere depend on GOES imagery for their routine weather forecast functions as well as other regional applications. GOES data products are also used by commercial weather users, universities, the Department of Defense, and the global research community, particularly the International Satellite Cloud Climatology Project, through which the world's cloud cover is monitored for the purpose of detecting change in the Earth's climate. Users of GOES data products are also found in the air and ground traffic control, ship navigation, agriculture, and space services sectors. The GOES system serves a region covering the central and eastern Pacific Ocean; North, Central, and South America; and the central and western Atlantic Ocean. Pacific coverage includes Hawaii and the Gulf of Alaska. This is accomplished by two satellites, GOES West located at 135° west longitude and GOES East at 75° west longitude. A common ground station, the Central Data Antenna (CDA) station located at Wallops, Virginia, supports the interface to both satellites. The NOAA Satellite Operations Control Center (SOCC), in Suitland, Maryland, provides spacecraft scheduling, health and safety monitoring, and engineering analyses.
Delivery of products involves ground processing of the raw instrument data for radiometric calibration and Earth location information, and retransmission to the satellite for relay to the data user community. The processed data are received at the control center and disseminated to the NWS's National Meteorological Center (NMC), Camp Springs, Maryland, and to NWS forecast offices, including the National Hurricane Center, Miami, Florida, and the National Severe Storms Forecast Center, Kansas City, Missouri. Processed data are also received by Department of Defense installations, universities, and numerous private commercial users. An example infrared (IR) image from GOES-9, located in the eastern position, is shown in Fig. 8.

Figure 8. Full disk GOES-9 IR image on June 19, 1995.

GOES Generations

We are now in the second generation of the three-axis stabilized GOES series of satellites, which will take us into the next decade, and we are planning for their follow-on in 2012. This will be the GOES-R program, which will see a dramatic improvement in the capability of the GOES imager. The new Advanced Baseline Imager (ABI), already awarded to ITT, will have channels similar to those of MODIS and the future VIIRS instruments. Even though it is in geostationary orbit, the ABI will have capabilities closer to those of MODIS, including multiple infrared channels and narrow visible channels for ocean color and land surface vegetation studies. In addition, the ABI will be able to rapidly scan not only the entire hemisphere that it views but also, even more rapidly, the central U.S. and, more quickly still, smaller "mesoscale" regions where the development of severe weather is anticipated. Recently, because of anticipated cost and complexity, the future GOES-R sounder, known as the Hyperspectral Environmental Sounder (HES), has been cancelled, at least for the first couple of GOES-R satellites. Instead, there is an expectation that the present generation of GOES sounding instruments will be carried by these satellites.

DEVELOPMENT OF EARTH RESOURCES TECHNOLOGY SATELLITES

The first satellite in the Earth Resources Technology Satellite (ERTS) program was launched in 1972 and designated ERTS 1. In 1975 the program was redesignated the LANDSAT program to emphasize its primary area of interest, land resources. The mission of LANDSAT is to provide for repetitive acquisition of high-resolution multispectral data of the Earth's surface on a global basis. As mentioned earlier, the first three Earth Resources Technology Satellites (ERTS-1, 2, 3) were actually modified NIMBUS satellites with a sensor suite focused on sensing the land surface. These were later referred to as LANDSAT-1, 2, and 3, and all three greatly outlasted their design life of just one year.
They carried a four-channel Multispectral Scanner (MSS), a three-camera return beam vidicon (RBV), a data collection system (DCS), and two video tape recorders. The MSS operated over the following spectral intervals: band 4 (0.5 to 0.6 µm), band 5 (0.6 to 0.7 µm), band 6 (0.7 to 0.8 µm), and band 7 (0.8 to 1.1 µm). Three independent cameras,
making up the RBV, covered three spectral bands: blue-green (0.47 to 0.575 µm), yellow-red (0.58 to 0.68 µm), and near-infrared (0.69 to 0.83 µm). Both systems viewed a ground scene of approximately 185 km by 185 km with a ground resolution of about 80 m. On LANDSAT-1, the RBV was turned off because of a malfunction. LANDSAT-3 added a fifth channel in the thermal infrared (10.4 to 12.6 µm) to the MSS. The LANDSAT satellites are in a sun-synchronous, near-polar orbit with a ground swath extending 185 km (115 mi) in both directions. LANDSAT-4 and -5 were inclined 98° and had an orbital cycle of 16 days and an equatorial crossing of 9:45 A.M. local time. The altitude of the satellites is 705 km (437 mi) for both LANDSAT-4 and -5. Working together, LANDSAT-4 and -5 offer repeat coverage of any location every 8 days. At the equator, the ground track separation is 172 km, with a 7.6% overlap. This overlap gradually increases as the satellites approach the poles, reaching 54% at 60° latitude.

The data from both instruments were transmitted directly to a ground receiving station when the satellite was within range. During those periods when the satellite was not in view of a US owned and operated ground station, the satellites were turned on only according to a predetermined schedule. The satellites did not carry enough tape recording capacity to record very many of the MSS images, and this motivated the international science community to download the images directly from these satellites and make them available to interested parties. This practice led to the establishment of LANDSAT receiving stations all over the world. At present, these stations must pay an annual license fee to EOSAT Corporation in order to receive and use LANDSAT data in their own area of recording coverage. LANDSAT-4 and -5 were built as purpose-specific satellites for their applications, dropping the NIMBUS bus along the way.
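The quoted sidelap figures can be checked to first order by assuming the east-west ground-track separation shrinks as the cosine of latitude. The sketch below is our own simplification (function name and model are not from the article):

```python
import math

def swath_overlap(latitude_deg, swath_km=185.0, equator_separation_km=172.0):
    """Fraction of sidelap between adjacent LANDSAT ground swaths.

    Assumes the east-west separation of adjacent ground tracks shrinks
    roughly as cos(latitude), as for a near-polar orbit.
    """
    separation = equator_separation_km * math.cos(math.radians(latitude_deg))
    return (swath_km - separation) / swath_km

print(f"overlap at equator: {swath_overlap(0):.1%}")   # 7.0%
print(f"overlap at 60 deg:  {swath_overlap(60):.1%}")  # 53.5%
```

With a 185 km swath and 172 km equatorial separation, this simple model lands close to the quoted values of 7.6% at the equator and 54% at 60° latitude.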
Not only was the spacecraft a new system, but a new scanner had also been developed for these satellites. Called the thematic mapper (TM), the instrument was designed to give some very specific signatures for various types of resource-related surface features. The TM has seven spectral bands covering four regions of the electromagnetic spectrum: bands 1 to 3 in the visible range (0.45 µm to 0.69 µm), band 4 in the near-infrared (0.76 µm to 0.90 µm), bands 5 and 7 in the mid-infrared (1.55 µm to 1.75 µm and 2.08 µm to 2.35 µm), and band 6 in the thermal infrared (10.4 µm to 12.5 µm). There was also a physical difference between the TM and the MSS in that the MSS scanned only in one direction, whereas the TM scanned and collected data in two different directions. Also, in the TM the target energy fell almost directly on the detector face, whereas the MSS energy had to travel through fiber optics before reaching the detector, making it easier for the TM to sense changes in the target. A summary of all LANDSAT satellites is given in Table 4, which shows that only one satellite was operating in 1997, and then only on a reduced scale. The loss of LANDSAT-6 has severely curtailed research efforts with data from these satellites.


THE FRENCH SPOT SATELLITES

A program also designed to measure in the same wavelength domain as LANDSAT was the French Système Pour l'Observation de la Terre (SPOT) satellite system. The series began with the launch of SPOT-1 on February 22, 1986, which carried the multimission platform that could be modified for future payloads. Operational activities of SPOT-1 ceased at the end of 1990, but the satellite was reactivated on March 20, 1992 and finally taken out of service on August 2, 1993. SPOT-1 was put into a sun-synchronous orbit with a repeat cycle of 26 days (5 days with pointing capability). The SPOT-1 payload consisted of two High Resolution Visible (HRV) scanners, which were pointable CCD pushbroom scanners with off-nadir viewing of ±27° (in steps of 0.6°), or about 460 km on the ground. These scanners could be operated in two modes: multispectral and panchromatic. In the panchromatic mode the sensor was capable of a 10 m spatial resolution, which in the multispectral mode doubled to 20 m. In both configurations the swath width was 60 km. The sensor bands were 0.50 µm to 0.59 µm, 0.61 µm to 0.68 µm, and 0.79 µm to 0.89 µm, plus a 0.51 µm to 0.73 µm panchromatic band. This configuration was retained for SPOT-2, which was made operational on January 21, 1990. The next satellite, SPOT-3, was launched on September 26, 1993 and made operational on May 24, 1994. On November 13, 1996, the satellite entered safehold mode and was no longer operational. Since then, SPOT-2 has continued to operate. In 1997 both SPOT-1 and SPOT-2 operated simultaneously, providing greatly improved coverage. The SPOT-3 payload consisted of two HRVs (the same as on SPOT-1), POAM II (Polar Ozone and Aerosol Measurement), and DORIS (Doppler Orbitography and Radio-positioning Integrated by Satellite). As an example of this class of satellites, the 2 m × 2 m × 4.5 m SPOT-1 weighed 1907 kg and had a solar array system capable of producing 1 kW from a span of 8.14 m.
The orbital altitude at the equator was 822 km, and the orbital period was 101.4 min. The on-board tape recorders were capable of 22 min of data collection. The SPOT orbit is sun-synchronous and nearly circular. The altitude is again about 822 km, the inclination is 98°, and there are 14 5/26 orbits in each Earth day, for a repeat cycle of 26 days during which the satellite completes 369 orbits. Because the valid comparison of images of a given location acquired on different dates depends on the similarity of the conditions of illumination, the orbital plane must form a constant angle relative to the sun direction. This is achieved by ensuring that the satellite overflies any given point at the same local time, which in turn requires that the orbit be sun-synchronous (descending node at 10:30 A.M.).
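The repeat-cycle arithmetic above can be verified directly; exact fractions make the 369-orbit closure explicit. A minimal sketch using only the numbers quoted in the text:

```python
from fractions import Fraction

# SPOT completes 14 5/26 orbits per day; over a 26-day repeat cycle the
# ground track must close on itself after a whole number of orbits.
orbits_per_day = Fraction(14) + Fraction(5, 26)
orbits_per_cycle = orbits_per_day * 26
print(orbits_per_cycle)          # 369

# The corresponding orbital period in minutes.
period_min = float(24 * 60 / orbits_per_day)
print(round(period_min, 1))      # 101.5, close to the quoted 101.4 min
```

The small discrepancy with the quoted 101.4 min period simply reflects the fact that the 14 5/26 figure is itself a rounded description of the orbit.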


Figure 9. SPOT’s stereo viewing capability.

A big change is planned for SPOT-4, which will carry two HRV Infrared (HRVIR) radiometers. The spatial resolutions and swath widths are unchanged from earlier systems. The spectral bands are changed slightly: 0.50 µm to 0.59 µm, 0.61 µm to 0.68 µm, 0.79 µm to 0.89 µm, 1.58 µm to 1.75 µm, and a monospectral 0.61 µm to 0.68 µm band. In addition, SPOT-4 will carry the Vegetation Monitoring Instrument (VMI), which has a resolution of 1 km and a swath width of 2,000 km. Its sensor bands are 0.43 µm to 0.47 µm, 0.50 µm to 0.59 µm, 0.61 µm to 0.68 µm, 0.79 µm to 0.89 µm, and 1.58 µm to 1.75 µm. A SPOT-5 is planned for late 1999 and will likely carry the same sensor array as SPOT-4.

One of the unique capabilities of the SPOT series of satellites is their ability to produce stereo imagery for making three-dimensional images of ground subjects. This ability is achieved by pointing the sensor in a desired direction (Fig. 9). SPOT's oblique viewing capacity makes it possible to produce stereo pairs by combining two images of the same area acquired on different dates and at different angles, exploiting the parallax thus created. A base/height (B/H) ratio of 1 can be obtained for viewing angles of 24° to the east and to the west. For a stereo pair comprising a vertical view and one acquired at 27°, a B/H of 0.5 is obtained. Stereo pairs are mainly used for stereo-plotting, topographic mapping, and automatic stereo-correlation, from which Digital Elevation Models (DEMs) can be derived directly without the need for maps.

SPOT satellites can transmit image data to the ground in two ways, depending on whether or not the spacecraft is within range of a receiving station. As the satellite proceeds along its orbit, four situations arise concerning imagery acquisition and image data transmission to the ground:

1. The satellite is within range of a Direct Receiving Station (DRS), so imagery can be down-linked in real time provided both satellite and DRS are suitably programmed. The DRS locations are shown in Fig. 10.
2. The satellite is not within range of a SPOT DRS. Programmed acquisitions are executed, and the image data are stored on the on-board recorders.
3. The satellite is within range of a main receiving station (Kiruna or Toulouse). It can thus be programmed either to down-link image data in real time or to play back the on-board recorders and transmit image data recorded earlier during the same orbital revolution.
4. The rest of the time, the satellite is on standby, ready to acquire imagery in accordance with uplinked commands.
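The quoted base-to-height ratios follow from simple flat-Earth viewing geometry, in which B/H is the sum of the tangents of the two viewing angles (the spacecraft altitude cancels). The helper function below is our own sketch of that geometry:

```python
import math

def base_to_height(angle1_deg, angle2_deg):
    """Base-to-height ratio for a stereo pair built from two viewing
    angles on opposite sides of (or including) the vertical."""
    return math.tan(math.radians(angle1_deg)) + math.tan(math.radians(angle2_deg))

print(round(base_to_height(24, 24), 2))  # 0.89, roughly the quoted B/H of 1
print(round(base_to_height(0, 27), 2))   # 0.51, matching the quoted B/H of 0.5
```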

Figure 10. SPOT Direct Receiving Stations.

Applications of Satellite Imagery

In this section we present some of the applications of visible, near-infrared, and thermal infrared satellite data. This review of satellite data applications can by no means be comprehensive; we instead present examples of the types of applications that can be made with these satellite images. Our review naturally represents the experience of the author, and it is not intended to neglect other important applications. It is simply easier to present those applications most familiar to the author.

Visible and Near-Infrared Applications

Meteorological Applications. The very first use of satellite data was the depiction of changes in cloud patterns associated with atmospheric systems. The first Earth views were restricted by a spin-stabilized satellite to a limited latitude band. Coverage increased to daily views of the entire Earth when the sensor was oriented to point out from the axis of rotation, which was changed to be perpendicular to the Earth's surface. These global maps were a collage of different synoptic conditions collected over 24 h. Even though these global world views were new and unique, meteorologists really wanted a time series of their areas of interest, which led to the development of the GOES satellites with their ability to sample a full hemisphere every half hour (Fig. 5). For limited locations, the sampling frequency can be increased to an image every few minutes. Initially the satellite images themselves were the primary sources of information for weather forecasters. Cloud images were interpreted together with atmospheric pressure maps and some early numerical model results to create the local forecast. The ability to follow the movement of cloud features in time greatly enhanced the understanding of the evolution of various atmospheric phenomena.
Later, a system called the VISSR Atmospheric Sounder (VAS) was developed to profile the atmospheric temperature from geostationary orbit. Like TOVS, the VAS used infrared channels to estimate atmospheric temperature profiles. The advent of the three-axis stabilized GOES satellites ushered in a new era of atmospheric profiling with the new GOES sounder. These sounding profiles are assimilated into the numerical models that do the prediction. Because it is much easier to define the relationship between the numerical models and the satellite temperature profiles, the profiles are now primary sources of forecast information. The images continue to have value, particularly
for the study of poorly understood weather phenomena. The ability to sample rapidly in time has made the new GOES satellites very important for scientific studies that are trying to understand the formation of severe weather and its causes.

Vegetation Mapping. There are many land surface applications that benefit from the combination of near-infrared sensing with visible sensor data. For this reason we treat it together with the purely visible channels, which are widely exploited for many different applications. It is not possible to treat each application individually here, so we make a few comments about some of the more important applications of these near-infrared data. One of the most popular combinations of AVHRR data uses the visible (0.53 µm to 0.68 µm) channel together with the near-infrared (NIR; 0.71 µm to 1.6 µm) to form the Normalized Difference Vegetation Index (NDVI), which is defined as

NDVI = (NIR − VIS) / (NIR + VIS)    (1)

This vegetation index combines the visible and near-infrared channels to form an index that, normalized by the sum of the two channels, ranges between +1 and −1. This index indicates how well the vegetation is growing rather than the amount of biomass present. The visible channel responds to the greening up of the vegetation as it emerges either as a new plant or as one that has lain dormant. With SPOT and LANDSAT, it is possible to actually view this change in the green radiation band. For other satellites, such as the NOAA AVHRR, the visible channel is so broad that it covers all the visible wavelengths. We can still use the formulation in Eq. (1) to compute the NDVI because the visible channel is centered in the red. The NIR channel responds to the mesophyll structure in the leaves, which is a function of the amount of water held in the leaves. This is an important sensing mechanism because the leaves go from brown to green early in the growing season. Later on, their greenness saturates, and the leaves do not get any greener. During this time, the leaves indicate the condition of the plant as they reflect the NIR wavelengths back up to the satellite. In reality, the NDVI does not really range from −1 to +1 but instead runs from low negative values up to 0.7 or 0.8. For display on a satellite image, the NDVI must be converted back to a raster image, which requires a scaling from the NDVI value to an 8-bit raster display value (which limits the dynamic range). As an example we present in Fig. 11 the NDVI for the state of Kansas, which is predominantly wheat-growing country. Here we show a series of years, all using the same color scale, so that changes in color are related to changes in the health of the vegetation. The substantial variability between individual years can clearly be seen even in these small images. Note that the color scale has been created to be brown when the NDVI is low and yellow to red when the vegetation is very healthy.
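The NDVI computation and the 8-bit display scaling can be sketched for a single pixel as follows. The function names are ours, and the scaling convention (−1 maps to 0, +1 maps to 255) follows the text's description:

```python
def ndvi(vis, nir):
    """NDVI = (NIR - VIS) / (NIR + VIS) for one pixel pair of reflectances."""
    return (nir - vis) / (nir + vis)

def ndvi_to_8bit(index):
    """Linearly map NDVI in [-1, +1] onto the 8-bit display range [0, 255]."""
    return max(0, min(255, round((index + 1.0) / 2.0 * 255.0)))

v = ndvi(0.05, 0.40)     # healthy vegetation: strong NIR, weak red reflectance
print(round(v, 2))       # 0.78
print(ndvi_to_8bit(v))   # 227
```

Under this mapping a scaled value of 220 corresponds to an NDVI of about 0.73, consistent with the observed upper limit discussed in the text.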
Also, the scaled NDVI is never greater than 220, even though it has a theoretical maximum of 255 (the 8-bit equivalent of +1), nor does it go below 120 (−1 would map to 0). These NDVI images have been constructed from a series of daily AVHRR images using channels 1 and 2. Because

Figure 11. The NDVI for Kansas for 6 years 1982–1987.

clouds will obscure the surface in either of these channels, it is necessary to eliminate the cloud cover. This is done in a series of steps. First, a cloud classifier or locator program is run to determine which features are clouds. Those pixels clearly identified as clouds are set to a uniform value (usually 0 or 255) in the individual image. The non-cloud pixels of the individual images are then composited to construct a single image that covers a number of days. Compositing of the NDVI is done by retaining the maximum value wherever it occurs, overwriting any values that are not the maximum. Clouds have the effect of lowering the NDVI, so this type of maximum compositing further reduces the effects of clouds. Realize, however, that even this NDVI composite is not totally free from cloud effects. Clouds that are smaller than the AVHRR pixel size will be brought into the calculation. Here again, the NDVI maximum composite will minimize the effects of these subpixel clouds. Other sources of error in the NDVI calculation are the effects of atmospheric aerosols and possible errors in geolocation of the pixel. The latter very dramatically affects the NDVI, particularly in areas of strong inhomogeneity. It is often a difficult problem to detect, as the changes will just appear as normal temporal changes in the NDVI. The aerosol influence is also difficult to recognize and difficult to correct for. There are few direct measurements of atmospheric aerosols to use for correcting the satellite-derived NDVI. Unfortunately, the satellite technique for estimating aerosols is also based on a combination of the visible and near-infrared channels: the very data used to construct the NDVI. Cases where the aerosol effect becomes large are often associated with volcanic eruptions that introduce ash into the upper atmosphere, changing the visible and near-infrared signals.
These events can last for weeks and even months depending on the strength and location of the ash plume. Very strong eruptions with a marked ash component have produced ash plumes that have circled the globe before the ash has diffused sufﬁciently to no longer inﬂuence the NDVI computation. There is considerable discussion on how to interpret the NDVI and how to relate the satellite-derived model to climate and terrestrial phenomena (15, 16). However, there is little doubt that the NDVI provides a useful space perspective monitor of vegetation, land cover, and climate, if used carefully. The index has been produced and utilized globally (6) and regionally (16–20).
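The maximum-value compositing step described above can be sketched as follows. This is a minimal illustration in which cloud pixels are masked as None; the names and the toy data are ours:

```python
CLOUD = None  # pixels flagged as cloud by the classifier are masked out

def max_value_composite(daily_images):
    """Composite a stack of co-registered daily NDVI images by keeping,
    for each pixel, the maximum cloud-free value over the period.
    Clouds depress the NDVI, so the maximum is the most cloud-free view."""
    composite = []
    for pixel_series in zip(*daily_images):
        clear = [v for v in pixel_series if v is not CLOUD]
        composite.append(max(clear) if clear else CLOUD)
    return composite

# Three days over the same two pixels; day 2 is cloudy over pixel 0.
days = [[0.31, 0.55], [CLOUD, 0.48], [0.42, 0.60]]
print(max_value_composite(days))  # [0.42, 0.6]
```

As the text notes, subpixel clouds and aerosols can still leak through this scheme; the maximum rule only suppresses contamination, it does not remove it.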


Figure 12. Sea surface temperature (◦ C) from infrared AVHRR data.

Thermal Infrared Applications

Sea Surface Temperature (SST). Initially, thermal infrared imagery was collected to provide information on the atmosphere when the Earth was not in sunlight. Later it became clear that there were many nonmeteorological applications of these thermal infrared images. One well-known application is the computation of sea surface temperature. SST was traditionally computed from the reports of ships at sea and later from autonomous drifting buoys; satellite infrared data provided a substantial increase in the area covered for any specific time. Cloud cover was still a major problem because it obscured the infrared signal from the ocean. In addition, atmospheric moisture in the form of water vapor significantly altered the SST signal. In spite of these limitations, the satellite SST data were such a large increase in the time/space coverage of SST measurements that they completely changed the way SST was routinely calculated. Prior to the advent of satellite data, SST measurements were limited to the major shipping routes and the limited areas where drifting buoys were deployed. Later the distribution of buoy data increased, and buoys became a major source of in situ SST data used to "calibrate" the thermal infrared satellite SST data. In this calibration, the buoy data are used as "ground truth," and the satellite SST algorithm is adjusted to fit the buoy in situ SST measurement. Even with satellite data, global SST fields are not usually complete on a daily basis, and a compositing procedure is needed to create a global SST map (Fig. 12). This map is a two-week composite of the Multi-Channel (MC) SST (21) with a 40 km spatial resolution. The satellite data used to generate this SST map were the 4 km spatial resolution Global Area Coverage from the NOAA AVHRR. Each image was first navigated to locate the pixels accurately. Clouds were then filtered from each of the individual scenes used in the composite.
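Multi-channel SST retrievals of this kind are based on a split-window form in which the difference between the 11 µm and 12 µm brightness temperatures corrects for water-vapor attenuation. The sketch below shows only the shape of such a formula; the coefficient values are purely illustrative and are not the operational MCSST coefficients:

```python
def split_window_sst(t11, t12, a=1.0, b=2.5, c=-273.15):
    """Split-window SST sketch: the 11 um - 12 um brightness-temperature
    difference grows with atmospheric water vapor, so it is used to
    correct the 11 um channel.  Coefficients are illustrative only."""
    return a * t11 + b * (t11 - t12) + c

# Brightness temperatures in kelvin for a moist (tropical) scene;
# the result is an SST estimate in degrees Celsius.
print(round(split_window_sst(292.0, 290.5), 1))  # 22.6
```

In operational practice the coefficients are regressed against the in situ buoy "ground truth" described in the text.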
In computing the composite SST, a maximum temperature has been retained because residual clouds will always lower the temperatures. This is much the same logic as is used with the maximum NDVI composite. The resultant ﬁeld has further been smoothed by a Laplacian ﬁlter to reduce any noise in the spatial signal. Notice the white patches that indicate cloud cover even after the compositing procedure. The striped appearance of Hudson Bay is due to an error in the smoothing program. The main features of the SST ﬁeld are the tropical core of high SST, the coldest temperatures in the Polar Regions, and the highly variable regions at the boundaries of these

Figure 13. SST for May 1986 from the AVHRR with coincident buoy tracks (white).

fields. In the tropics, much of the spatial variability is caused by waves propagating in the equatorial waveguide. It is notable that strong warm currents such as the Kuroshio and the Gulf Stream are not readily apparent as patches of warm SST. This is likely the result of the low spatial resolution and the Laplacian smoothing. The strong western boundary currents have simply been smoothed out of the map. It is difficult to judge just how much credibility to give to individual features. One would like to say something about various current systems, but most known currents do not appear on this map. Furthermore, heating and cooling can have very dramatic effects, particularly on the tropical warm SSTs. In the southern polar region, it is tempting to suggest that the dark blue band represents the polar front with its meanders and eddy formation. Still, in Fig. 12 this dark blue band extends much farther north than is usually thought to be the case for the polar front. Another suggestion of the expression of ocean currents in the SST pattern of the southern ocean can be seen in Fig. 13, where a month-long satellite SST similar to that in Fig. 12 has been computed and the coincident tracks of drifting buoys have been superimposed. Note how well the buoys appear to follow the SST isotherms, particularly in the area around Cape Horn and into the Indian Ocean. Buoy trajectories just north of the light-blue area exhibit north and south excursions that are matched by the frontal features in the SST map. Because these are regions occupied by the Antarctic Circumpolar Current (ACC), it is realistic to have these apparently large velocities in the area. These fronts are known to be locations of strong currents that together make up the ACC. This is also the case south of the Australian Bight, where the buoy tracks now appear to follow the Subantarctic Front to the south of the light blue band. The continuation of strong currents is emphasized by the long trajectories for this monthly period.
This is particularly apparent when these trajectories are compared with the monthly trajectories located to the north of the light-blue area. Comparing the buoy locations also emphasizes that the coverage by drifting buoys alone is meager and that any SST made without the satellite data would be forced to interpolate between widely separated data points. For this particular comparison, it should be noted that there was a speciﬁc program to drop buoys in the southern ocean. When ship tracks are added, the data coverage increases, but then only along shipping routes. There are many areas that a ship does not visit nor are there usually any drifting buoy measured SSTs. Thus satellite data must be used to
compute a global SST with any hope of decent coverage. There are many problems involved in combining infrared satellite SSTs with in situ measured SSTs. In situ measurement of SST is not a simple task, and there is the potential that errors exist both in archived in situ SSTs and in satellite SST ground-truthing procedures using in situ SST. Heating or cooling of the sea surface occurs as a result of the air-sea fluxes, horizontal and vertical heat advection, and mixing within the ocean. It is the nature of the air-sea fluxes of heat to complicate the measurement of SST in the ocean. The heat exchange associated with three of the four components of the net heat flux (the sensible heat flux, the latent heat flux, and the longwave heat flux) occurs at the air-sea interface. The heat exchange associated with the fourth component, the shortwave radiation, occurs over the upper few tens of meters of the water column. Typically, the sum of the three surface fluxes is negative (the ocean gives heat to the atmosphere), and the sea surface is cooled relative to temperatures just a few millimeters below the surface. During daylight hours, the shortwave radiation penetrates and warms the water where it is absorbed. The absorption is an exponential function of depth; red light is absorbed more rapidly (e-folding depth of approximately 3 m) than blue-green light (e-folding depth of roughly 15 m), with the exact rate of absorption at each wavelength depending on water clarity. Heating caused by solar insolation can produce a shallow, warm, near-surface layer meters to tens of meters in thickness. The surface is still cooled by evaporation, longwave radiation, and sensible heat flux, but the "cooler" surface temperature can, because it is the surface of the layer warmed by the sun, be greater than a temperature from below the shallow diurnal mixed layer [Fig. 14(a)].
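The wavelength-dependent absorption described above can be illustrated with a simple exponential (Beer's-law) decay using the e-folding depths quoted in the text; the function is our own sketch:

```python
import math

def fraction_remaining(depth_m, efold_depth_m):
    """Fraction of surface irradiance remaining at a given depth,
    assuming simple exponential absorption with the given e-folding depth."""
    return math.exp(-depth_m / efold_depth_m)

# E-folding depths quoted in the text: ~3 m for red, ~15 m for blue-green.
for z in (1.0, 3.0, 10.0):
    print(f"{z:>4} m  red: {fraction_remaining(z, 3.0):.2f}  "
          f"blue-green: {fraction_remaining(z, 15.0):.2f}")
```

At 3 m depth, only about a third of the red light remains while most of the blue-green light survives, which is why the diurnal warming is spread over the upper part of the water column rather than deposited at the surface.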
During the day, the strong vertical temperature gradients within and at the base of the region that experiences diurnal warming make the measurement of sea surface temperature from engine intakes, temperature sensors attached to buoys, and other fixed-depth instruments difficult. At night, this shallow warm layer cools, and the continued surface cooling results in a skin temperature below that found a few centimeters deeper [Fig. 14(b)]. As discussed in Schlüssel et al. (22), the difference between the skin SST and the bulk SST, 2 m below the surface, ranges from −1.0°C to 1.0°C over a 6-week period, with a mean difference of 0.1°C to 0.2°C (positive values indicate a cool skin temperature). Attempts to make in situ measurements of SST fall into two classes. Traditionally, measurements have been made from ships with "bucket samplers." These buckets sample a temperature in the upper meter of the ocean. Because of its ease of operation, shipboard SST measurement shifted to the reporting of "ship-injection" cooling water temperature collected from intakes ranging from 1 m to 5 m below the sea surface. A study by Saur (23) indicated that heating in the engine room resulted in a bias of about +1.5°C in these injection temperatures. Such problems with ship SSTs have led many people to the exclusive use of SST data from moored and drifting buoys. These systems sample temperature at a variety of depths ranging from 0.5 m to 1.5 m, depending on the hull shape and behavior in the wave field. Moored buoys with large reserve buoyancy


Figure 14. Ideal near-surface temperature proﬁles.

do not oscillate as much vertically as do the drifting buoys, which are often designed to minimize wind drag. Moored buoys provide temperature time series at a point, although the buoy does move up and down as it travels within its "watch circle." All these in situ SST measurements sample a near-surface temperature that is commonly referred to as the bulk SST. This bulk SST has been used in the common "bulk formulas" for the computation of the surface fluxes of heat and moisture. Depending on the surface wind, wave, and solar heating conditions, the bulk temperature can differ greatly from the skin SST. It is the temperature of this thin molecular boundary layer (24) that radiates out and can be detected by infrared sensors. The sharp temperature gradient in this molecular sublayer (Fig. 14) is always present at surface wind speeds up to 10 m/s (25). At higher wind speeds, the skin layer is destroyed by breaking waves. Earlier studies (25, 26) have shown, however, that the skin layer reestablishes itself within 10 s to 12 s after the cessation of the destructive force. Direct measurement of the skin temperature is not possible because any intrusion will destroy the molecular layer. Even drifting buoys cannot measure the temperature of this radiative skin layer because contact with the skin will destroy it for the duration of the contact. Ship-mounted infrared radiometers have been used (22, 27) to observe the ocean's skin temperature without the atmospheric attenuation of the SST signal suffered by satellite-borne infrared sensors. To overcome the many errors inherent in this type of measurement, a reference bucket calibration system was developed. In this system, the bucket is continuously refreshed with sea water to ensure that no skin layer develops. The temperature of the reference bucket is continuously monitored,
and the radiometer alternately views the sea surface and the bucket, providing regular calibration of the radiometer. This calibration not only includes an absolute reference to overcome any drift of the radiometer blackbody references or instrument electronics but also provides a calibration of the radiometer for the effects of the nonblackness of the sea surface, the contributions of reflected radiation from sky and clouds, and the possible contamination of the entrance optics by sea spray. Monitoring the bucket temperatures with platinum resistance thermometers accurate to 0.0125°C made it possible to measure skin temperatures to an accuracy of 0.05°C. Other investigators (28) have relied on internal reference temperatures for the calibration of the skin SST radiometer, using pre- and post-cruise calibrations to ensure the lack of drift in the reference values. To account for the nonblackness of the sea surface and reflected sky radiation, the radiometer was periodically turned to point skyward during a variety of conditions. The resulting correction was an increase in the measured radiometric SST by an average of 0.25°C. This rather large average value emphasizes the need for a continuous reference calibrator for ship skin radiometer measurements of SST. All previously used radiometers have employed a single channel. Because the satellite radiometers that these ship measurements are designed to calibrate have more than a single channel, it would be useful if the shipborne radiometer also had multiple thermal infrared channels. Unlike the satellite measurements, where the different thermal channels are intended for atmospheric water vapor attenuation correction, the multichannel shipboard radiometer would provide redundant measures of the emitted surface radiation and a better source of calibration information for the satellite SST.

Sea Surface Temperature Patterns.
Some applications of satellite SST estimates do not require a precise SST; it is the SST pattern that matters, not the SST magnitude. Much as was seen in Fig. 8, where the drifting buoy trajectories matched the shapes of the temperature contours, the pattern indicates the current structure of interest. This is most clearly the case for a western boundary current such as the Gulf Stream (Fig. 15), where the dark red indicates the core of the current, which splits off into meanders and then eddies. In this image, we see three cold eddies that have separated from the Gulf Stream. The westernmost two are clearly still attached to the Gulf Stream, but the eddy to the east appears to have separated, as marked by its colder (green) center. Note the meander to the northwest, where colder slope water has been entrained from the north. This is the source water of the eddies that separate to the south. Different studies have used the shape of the Gulf Stream core to estimate the long-term behavior of the current system. A line separating the cold waters to the north from those of the Gulf Stream can be drawn; this is done every 2 weeks, and the resulting lines are then analyzed for the time/space character of the Gulf Stream. Other studies have examined the interaction between the eddies and coastal bottom topography. Warm rings that

Figure 15. Sea surface temperature of the Gulf Stream region.

separate to the north eventually reach the continental shelf break, where a strong upwelling occurs that feeds the local fauna. These warm eddies are known to be locations of good fishing conditions, and some groups sell SST maps as a service to fishermen.

Combining Visible and Thermal Infrared Data

Snow Cover Estimation. The process of routinely estimating seasonal snow cover and converting it to snow water equivalence (SWE) for water resource management is currently labor intensive, expensive, and inaccurate because in situ measurements cannot cover all the spatial variations in a large study area. It is attractive to use satellite data to monitor the snow cover because of the unique synoptic view of the satellite (29). If data from operational weather satellites can be used to reliably estimate the annual snow pack and annual SWE, seasonal snow pack assessments can be improved, and the water supply for the coming summer season can be better predicted. We introduce a new method that uses satellite data from the Advanced Very High Resolution Radiometer (AVHRR) to estimate the snow cover and the SWE from a series of images composited over time to remove cloud contamination. A pseudo-channel 6 that combines two AVHRR channels is introduced to discriminate snow from cloud objectively. Daily snow retrievals in individual AVHRR images are composited over one-week intervals to produce a clear image of the snow cover for that week. This technique for routinely producing snow cover maps from AVHRR imagery differs from that used operationally (30): our April 1990 snow cover was based on a composite of 7 to 14 different images, whereas the operational estimates are derived from a relatively limited number of AVHRR images. A key to estimating SWE from a snow cover map derived from satellite data is knowledge of the relationship between snow cover, snow depth, and the resultant SWE.
A long (25 year) time series of historical snow pack measurements from Colorado SNOTEL and SNOW COURSE


sites was used to develop correlations between snow depth and elevation in the snow-covered regions. It is well known (31) that above the tree line, or in areas where trees are so sparse that they provide little protection from the wind, there is no clear relationship between SWE and elevation. Instead, wind moves much of the snow after it falls, and other weather variables, such as solar radiation, cause snow ablation to be larger in exposed areas. In most of the basins that we studied, less than 15% of the ground area is above the tree line. Thus, we ignored this effect and assumed that SNOTEL and SNOW COURSE measurements provided good estimates of the snowpack at the elevations in each drainage basin. Colorado was separated into seven river drainage basins, and the relationship between snow depth and elevation was calculated for each of these basins. The lowest correlations were not for the areas with the greatest elevations but rather for those with the poorest coverage of ground measurement sites of snow depth. In one case, it was necessary to eliminate some sites in order to improve the correlations; these sites were located in the same region and were apparently not representative. The AVHRR snow cover maps were merged with a digital terrain map (DTM) of Colorado, and the snow elevations were computed from this merged set. Using linear regressions between snow depth and elevation from the historical data, these snow elevations were converted into SWE for each individual drainage basin. In each basin, river run-off was estimated from the gauged measurements of each river. There were large differences between the individual basin gauge values (accumulated from all river gauges in each basin) and the SWE estimates from the satellite and DTM data. These differences appeared to cancel out when the entire state was considered.
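The regression step just described can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the function and array names are hypothetical, and the fit is the simple least-squares line implied by the text.

```python
import numpy as np

def basin_swe_estimate(site_elev_m, site_swe_cm, pixel_elev_m, pixel_area_km2):
    """Fit SWE vs. elevation for one basin, then apply the fit to the
    elevations of the satellite-derived snow-covered pixels.

    site_elev_m, site_swe_cm: historical SNOTEL/SNOW COURSE samples.
    pixel_elev_m: DTM elevations of pixels flagged as snow-covered.
    Returns the total basin SWE volume in km^3
    (1 cm of SWE over 1 km^2 equals 1e-5 km^3 of water).
    """
    slope, intercept = np.polyfit(site_elev_m, site_swe_cm, 1)
    # Clip at zero so low elevations cannot contribute negative SWE.
    pixel_swe_cm = np.clip(slope * pixel_elev_m + intercept, 0.0, None)
    return float(np.sum(pixel_swe_cm) * pixel_area_km2 * 1e-5)
```

Summing the per-pixel SWE over a basin gives the quantity that was compared against the gauged river run-off.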
There are transmountain diversions (human-made tunnels) that routinely shift water from one basin to another to satisfy demand. The only valid comparison was therefore to lump the seven basins together and make an estimate for the state as a whole. This statewide estimate of SWE from the satellite imagery was found to be within 11.2% of the combined river run-off from the gauged rivers (taken from the 1990 maximum) when the first two weeks of April 1990 were used to make the composite image of snow coverage. To determine how to use AVHRR data to assess winter snowpack conditions, it was necessary to decide when during the year such an assessment should be made. Because we are interested only in the long-term snowpack and not in short-term transient snow, a long time series of snowpack measurements and multiple consecutive days of spatial snow coverage (from AVHRR) were considered. We analyzed 25 years of data from the US Department of Agriculture Soil Conservation Service (SCS) based on automated SNOTEL stations and manual SNOW COURSE stations across the state of Colorado. The SNOTEL stations measure snow depth by monitoring snow pillows, collapsible bags filled with antifreeze solution. The SNOW COURSE data are seasonal measurements of snow depth and water equivalence made manually at bimonthly intervals during the snow season. Each of the SNOTEL stations is manually verified once each year. The SNOTEL measurements are recorded each day but are transmitted from these sites to a central location only on the first and fifteenth of each month. These values are published yearly in the Colorado Annual Data Summary, and digital values are available from the SCS West National Technical Center computer in Portland, Oregon. The 25 years of SNOTEL data were averaged to find the mean annual SWE. The steplike character of the resulting time series reflects the fact that it is built from the biweekly observation averages. There is a linear increase from October to the maximum at the beginning of April, followed by a sharp decrease during the melt season of late May through July. We therefore selected the first part of April as the time of maximum snow extent and hence the time when we should estimate the annual snow cover from the AVHRR imagery.

Snow Cover from AVHRR Imagery

Snow Image Calculation. The primary advantage of satellite imagery for snow mapping is the synoptic areal coverage provided by a single satellite image. This eliminates the changes over time and space that take place while collecting SNOW COURSE measurements. Satellite images also provide much greater spatial resolution than is possible even by combining SNOW COURSE measurements with automated SNOTEL sites. The disadvantage of the satellite imagery is that snow and clouds appear similar in the visible and thermal infrared images. Both are bright, reflective targets having very similar values in the AVHRR visible channel 1 (0.58 µm to 0.635 µm). Clouds and snow cover also have similar temperatures, resulting in similar thermal infrared (channels 3, 4, and 5) signatures. To discriminate between cloud and snow, we introduce a fictitious channel 6 that is a combination of channels 3 (3.7 µm to 3.935 µm) and 4 (10.3 µm to 11.35 µm). We define channel 6 as Chan 6 = Chan 3 − Chan 4

(2)

Because channel 3 contains both thermally emitted radiation and reflected radiation, the difference between channels 3 and 4 removes the thermal portion from channel 3. In channel 6, clouds should appear much "brighter" than the snow pixels. The channel 6 image is combined with the channel 1 visible image to determine clearly which areas of the visible image are clouds. Both clouds and snow are bright in channel 1, whereas clouds appear brighter than snow cover in the synthetic channel 6 image. Where bright values in channel 1 coincide with bright features in the channel 6 image, they are confirmed as cloud cover and masked out in the channel 1 image. The remaining bright values in channel 1 then represent the snow cover. This procedure is very different from the supervised classification procedure used by Baumgartner et al. (32) and Baumgartner (29), in which it is possible to define "pure snow cover pixels" based on in situ measurements (33). It is not possible to assess quantitatively the degree to which a "snow cover" pixel is occupied by snow, but we believe that the snow signatures in channels 1 and "6" indicate that the pixel is more than 75% snow covered. Most of the pixels will exceed that fraction, with most of them at 100%. No in situ data were collected to determine which of the snow pixels were 100% covered and which were less. This is an area for future study to better establish the accuracy of this type of remote sensing for mapping snow cover. The procedure just described was used to create "snow images" for each day in the month of April 1990. In general, at least one AVHRR image per day was available for analysis. In some cases, there were two images (morning and afternoon satellite passes), and a correction was applied to account for differences in solar azimuth angle. Snow and clouds both reflect light in the visible part of the spectrum. Radiation with wavelengths greater than 1.2 µm is strongly absorbed by snow, and a snow-cloud discrimination channel centered at 1.6 µm is planned for later versions of the AVHRR. In the thermal infrared, both snow and clouds absorb strongly between 10 µm and 12 µm, but at 3.7 µm clouds reflect more light than snow. Hence the virtual channel 6 value will be greater for clouds. The problem that arises with high cirrus clouds is that they have a high ice crystal content and an albedo similar to snow. For this reason, some additional processing was necessary to remove cloud residuals.

Cloud Residue Removal. After this mapping procedure was applied to each image, two different types of cloud-versus-snow discrimination errors remained. First, some snow pixels may have been mistakenly eliminated as clouds. Second, some clouds improperly identified as snow will remain. These errors are partially accounted for by producing a composite image over time. Because clouds move over short time periods while snow changes more slowly, it is possible to retrieve those portions of individual images that are identified as snow and then composite them over a period of time.
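The channel-1/channel-6 masking step described above can be sketched as follows. The thresholds and function names here are illustrative assumptions, not the published values.

```python
import numpy as np

def snow_mask(ch1_refl, ch3_bt, ch4_bt, vis_thresh=0.4, ch6_thresh=10.0):
    """Flag snow pixels in a single AVHRR scene.

    ch1_refl: channel 1 visible reflectance (0-1).
    ch3_bt, ch4_bt: channel 3 (3.7 um) and channel 4 (11 um) brightness values.
    """
    ch6 = ch3_bt - ch4_bt            # pseudo-channel 6: reflected 3.7 um signal
    bright = ch1_refl > vis_thresh   # bright in the visible: snow OR cloud
    cloud = bright & (ch6 > ch6_thresh)  # bright in ch1 AND bright in ch6 -> cloud
    return bright & ~cloud           # remaining bright ch1 pixels -> snow
```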
Thus bright features that change rapidly in time (between two images) are identified as clouds rather than snow even if they have passed the earlier filter. In this way, we compensate for image pixels that are clouds incorrectly identified as snow. Only those pixels that are identified as snow in more than one image are retained as snow cover. This temporal composite has an additional benefit. Because we are interested in the snow cover that contributes to the annual basin run-off, we are not interested in the transient snow cover that may exist right after a snowfall. We are instead interested in the continuing seasonal snow cover found above the snowline (approximately 9,000 ft), which is the primary contributor to the spring watershed in Colorado. Our temporal image composites will eliminate the possible contribution from short-term transient snow cover.

Composite Snow Images. For our study, week-long periods were used to composite the snow cover retrievals. Separate composite "snow images," as well as a combined composite, were computed for the first and second weeks of April 1990. In these images, the bright white areas are the snow-covered regions and clearly depict the mountain ridges. The images seem to indicate a heavier snow cover for the first week than for the second week. This was caused by fresh

snowfall early in the first week that then settled (melting, compacting, etc.) into the snowpack in the second week. During this first week, the transient snow cover leads to an overestimate of the overall snowpack. The composite for both weeks together is likely to be more representative of the total snowpack, as these transient conditions are smoothed out over the longer time period. A technique was thus introduced to distinguish objectively between clouds and snow cover in multichannel AVHRR images. By introducing a pseudo-channel 6 as the difference between infrared channels 3 and 4, it was possible to discriminate snow cover from clouds using channel 6 and the visible image of channel 1. Both clouds and snow cover appear bright in the visible channel, whereas clouds are brighter in channel 6, so it is possible to identify those portions of the channel 1 image that represent clouds. These are then masked off, and the remaining bright portions of the channel 1 image are determined to be snow cover. Using a relationship between elevation and snow water equivalent developed for each major river basin from 25 years of historical in situ measurements, the satellite snow cover estimates for April 1990 were converted to a snow water equivalent. These values were then compared with the gauged values of river run-off and found to agree to within about 11% for the first two weeks of April when averaged over the entire state of Colorado. It was not possible to use this analysis for individual basins because water is diverted from one basin to another according to need. In addition, the individual weeks in April did not perform as well, primarily because of a strong snowfall early in the first week. These results suggest that it is possible to enhance snow cover assessments for Colorado using satellite remote sensing.
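The temporal compositing rule described above (retain only pixels flagged as snow in more than one daily image) reduces, in a sketch, to a count over the stack of daily masks; the function name and the `min_hits` parameter are hypothetical.

```python
import numpy as np

def weekly_composite(daily_snow_masks, min_hits=2):
    """Composite daily boolean snow masks over a week.

    Pixels flagged as snow in at least `min_hits` images are kept; clouds
    that slipped past the channel-6 filter move between days, and very
    short-lived transient snow disappears, so both are rejected.
    """
    stack = np.stack(daily_snow_masks)   # shape: (days, rows, cols)
    return stack.sum(axis=0) >= min_hits
```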
Even though this approach will not replace the in situ measurements, it may make it possible to reduce the number of expensive SNOW COURSE measurements or to improve the present snowpack assessments without any additional in situ measurements.

Ice Motion. It is possible to use a sequence of ice images to estimate ice motion. Here visible and infrared imagery can be used in addition to passive microwave imagery, which will not be discussed here. The basic requirement is excellent geolocation for each of the images; any misregistration will result in an error in the estimated ice motion. For visible and infrared images, there is the usual requirement that the image be cloud free, something that rarely happens in polar regions, where cloud cover is more prevalent than in other locations. Because of this extreme cloudiness, we use a different compositing technique. Instead of compositing the brightness values of the individual images, we composite only those areas that are clear enough to calculate the ice velocity vector. Thus we seek areas that are clear in both the first and second image so that we can compute the ice velocity vector. We then composite the vectors from each pair of images over a period of time, much as we did before for the individual images. Ice motion can be clearly seen in the two images in Fig. 16, which are of the area between Greenland and Spitsbergen known as Fram Strait. In this area, ice from the Arctic basin flows as in a funnel through this region, after which it continues to flow south. The strong shears present in this current regime are reflected in the very distorted ice features in the images. We note that such cloud-free clarity is extremely unusual for this area, and one cannot expect to view it this clearly on any given occasion.

Figure 16. (a) Near-infrared image of sea ice in Fram Strait on April 21, 1986. (b) Near-infrared image of sea ice in Fram Strait on April 22, 1986.

The technique used to detect the ice motion is the maximum cross correlation (MCC) technique (34), which uses the cross correlation between the first and second images. To find the motion between the images, a large search window in the second image is paired with a small "template" from the first image. The goal is to find the location at which the cross correlation between the template and the search window is a maximum. That location defines the end of a velocity vector that reaches from the center of the template window in the first image to the position of maximum correlation in the second. When search windows are overlapped, a densely populated velocity vector field is produced. This procedure can either be repeated for every image pair, or, as above, the clear portions of the images can be used to compute vectors that are themselves then composited to form a larger vector field. It is more attractive, however, to use the MCC approach to calculate the ice motion from the passive microwave images of the Special Sensor Microwave Imager (SSM/I), which has been done for both hemispheres by Emery et al. (35). Thanks to the insensitivity of the passive microwave measurements to atmospheric constituents such as water vapor, it is possible to view the ice surface regardless of cloud cover or the presence of atmospheric moisture. The only drawback is a decrease in spatial resolution, from 1 km with the AVHRR to, at best, 12.5 km with the 85.5 GHz channel of the SSM/I. Lower-frequency SSM/I channels have correspondingly coarser resolution, up to about 25 km with the 37 GHz channel.
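The MCC search described above can be sketched as an exhaustive template match; this version uses the normalized correlation coefficient, and the window sizes, function name, and parameters are illustrative assumptions, not the published implementation.

```python
import numpy as np

def mcc_displacement(img1, img2, tl_row, tl_col, tsize, search):
    """Locate a (tsize x tsize) template from the first image inside a
    search region of the second image by maximum cross correlation.

    Returns the (drow, dcol) displacement of the best match; dividing by
    the time separation of the image pair gives the ice velocity vector.
    """
    tpl = img1[tl_row:tl_row + tsize, tl_col:tl_col + tsize]
    tpl = tpl - tpl.mean()
    best, best_rc = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = tl_row + dr, tl_col + dc
            if r < 0 or c < 0 or r + tsize > img2.shape[0] or c + tsize > img2.shape[1]:
                continue  # candidate window falls outside the second image
            win = img2[r:r + tsize, c:c + tsize]
            win = win - win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
            if denom == 0:
                continue  # featureless window: correlation undefined
            rho = float((tpl * win).sum() / denom)
            if rho > best:
                best, best_rc = rho, (dr, dc)
    return best_rc
```

Sliding the template over a grid of positions and repeating this search produces the vector field described in the text.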

Forest Fire Detection and Monitoring

Another application that uses both the visible and thermal infrared channels is forest fire detection and monitoring. The vast forested areas of most continents require constant observation so that fires can be spotted when they first start and fire-fighting crews and equipment deployed to control and extinguish them. A variety of systems are used today, including ground-based electrical systems that detect lightning strikes reaching the ground, along with observation towers built so that forest observers can see greater parts of the forest. These are supplemented with aerial surveillance, which also requires human observers. All these methods are labor intensive and cannot possibly cover the entire area where forest fires occur. By comparison, satellite-based methods are, or at least can be made, fairly automated, requiring human intervention mainly in the response to a site suspected to be on fire. Because of the potential for false fire identification, additional confirmation is usually needed before resources are deployed to fight the fire. It may be that, after considerable experience, it will become clear which satellite image signatures attend definite fires, reducing the number of fires that require additional confirmation. There are two fundamental signatures of fires in weather satellite imagery. First is the "hot spot," which is the active fire itself (Fig. 17). This will be seen in the thermal infrared image as an extremely hot group of pixels. The fire must be fairly large to fill the 1 km pixels of the AVHRR. The infrequent coverage of the SPOT and LANDSAT satellites makes them less than optimal for rapid detection and monitoring. The frequent coverage of GOES is available only at 1 km resolution in the visible channel; the lower resolution in the thermal infrared makes it impossible to map the hot spot unless it is greater than 4 km in size.
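A hot-spot test of the kind described above can be sketched with two thresholds: a pixel must be very hot at 3.7 µm and much hotter at 3.7 µm than at 11 µm, since warm bare ground raises both channels together. The threshold values and function name are illustrative assumptions, not an operational algorithm.

```python
import numpy as np

def hot_spot_mask(bt37, bt11, t37_thresh=320.0, diff_thresh=15.0):
    """Flag candidate fire pixels from brightness temperatures (kelvin).

    bt37: 3.7 um (AVHRR channel 3) brightness temperature.
    bt11: 11 um (AVHRR channel 4) brightness temperature.
    """
    return (bt37 > t37_thresh) & ((bt37 - bt11) > diff_thresh)
```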
Figure 17. Thermal infrared (top) and 3.7 µm (bottom) images from the 1988 Yellowstone fires.

One of the real problems in finding the hot spot is that the smoke plume often obscures the actively burning part of the fire. This smoke plume can, however, also be used as a supplemental source of information on the fire itself. Here the problem is distinguishing the smoke plume from the ambient cloud cover. In the visible channel, smoke and clouds look quite similar, and it is often
very difficult to discriminate between the two. The thermal infrared image provides some additional information because the clouds should be much colder than the smoke plume. The shape and position of the smoke plume can also be used to infer something about the intensity of the fire and its position. All this information can be pooled to develop a description of the fire even before it is seen on the ground. The major shortcoming of this approach is the need for more frequent coverage than is available from polar-orbiting sensors. With the present two polar-orbiter satellite system, it is only possible to view the fire every 4 h to 6 h, which is enough time for the fire to have changed its intensity, direction, and extent. GOES gives the required temporal resolution with its half-hourly imagery, but at lower spatial resolution in the thermal infrared channels. This is mostly the effect of sampling from a much higher altitude for GOES (about 36,000 km) as compared with about 800 km for the AVHRR. For the present, it is best to combine the visible GOES imagery with the AVHRR images to construct a more complete picture of the fire and how it is progressing over time. This information can be used to augment (rather than replace) the ground-based fire detection and monitoring systems. In regions where there are no fire-monitoring stations, the satellite data may be the primary source of information for both detecting and monitoring the fire; there they will guide the deployment of resources to fight the fire. Although much research still needs to be done on the detection and monitoring of forest fires with satellite data, it is possible to construct a system that supplies the pertinent agencies with important information on the onset of fires, their location, and selected information about fire intensity and progression in space and time.
This system can be largely automated, with no need for operator intervention until it comes to analyzing image features to evaluate the effect of cloud cover or perhaps estimating which way the fire will progress. This is an important benefit of the satellite system when compared with any land-based system.

SUMMARY

There are a great many applications of visible and infrared satellite imagery other than those that have been presented in this article. In addition, new applications of these data are being discovered all the time, and it is difficult to stay abreast of the different processes being sensed by the satellite or at least derived from the satellite data. One area of particular potential is merging the traditional visible and infrared satellite imagery with data from passive and active microwave sensors. These different bands are very complementary, with the microwave offering all-weather viewing at a substantially reduced spatial resolution. The optical channels, on the other hand, cannot provide uniform or comprehensive coverage because of cloud cover, but their temporal sampling is excellent. Thus we need to develop indices that take advantage of both types of data to yield new information about Earth's surface processes. Another important research focus will be to find the best possible ways to integrate satellite data with model data. The assimilation of satellite data is a promising development in weather forecasting and should be equally beneficial in studies of the Earth's surface. By merging models and satellite data, we are able to infer something about the fundamental processes influencing the Earth as sensed by the satellite.

BIBLIOGRAPHY

1. L. J. Allison and E. A. Neil, Final Report on the TIROS 1 Meteorological Satellite System, NASA Tech. Rep. R-131, Goddard Space Flight Center, Greenbelt, MD, 1962.
2. J. C. Barnes and M. D. Smallwood, TIROS-N Series Direct Readout Services Users Guide, Washington, DC: National Oceanic and Atmospheric Administration, 1982.
3. A. Schwalb, The TIROS-N/NOAA A-G satellite series, NOAA Tech. Memo. NESS 95, NOAA, Washington, DC, 1978.
4. A. Schwalb, Modified version of the TIROS N/NOAA A-G Satellite Series (NOAA E-J)—Advanced TIROS N (ATN), NOAA Tech. Memo. NESS 116, Washington, DC, 1982.
5. ITT, AVHRR/2 Advanced Very High Resolution Radiometer Technical Description, prepared by ITT Aerospace/Optical Division, Ft. Wayne, IN, for NASA Goddard Space Flight Center, Greenbelt, MD, under NASA Contract No. NAS5-26771, 1982.
6. K. B. Kidwell (ed.), Global Vegetation Index Users Guide, Washington, DC: NOAA/NESDIS/NCDC/SDSD, 1990.
7. M. Weaks, LAC scheduling, Proc. North Amer. Polar Orbiter Users Group, 1st Meet., available from the NOAA National Geophysical Data Center, Boulder, CO, 1987, pp. 63–70.
8. G. R. Rosborough, D. Baldwin, and W. J. Emery, Precise AVHRR image navigation, IEEE Trans. Geosci. Remote Sens., 32: 644–657, 1994.
9. D. Baldwin and W. J. Emery, AVHRR image navigation, Ann. Glaciology, 17: 414–420, 1993.
10. D. Baldwin, W. Emery, and P. Cheeseman, Higher resolution earth surface features from repeat moderate resolution satellite imagery, IEEE Trans. Geosci. Remote Sens., in press.
11. W. J. Emery and M. Ikeda, A comparison of geometric correction methods for AVHRR imagery, Can. J. Remote Sens., 10: 46–56, 1984.
12. F. M. Wong, A unified approach to the geometric rectification of remotely sensed imagery, Univ. British Columbia, Tech. Rep. 84-6, 1984.

13. D. Ho and A. Asem, NOAA AVHRR image referencing, Int. J. Remote Sens., 7: 895–904, 1986.
14. L. Fusco, K. Muirhead, and G. Tobiss, Earthnet's coordination scheme for AVHRR data, Int. J. Remote Sens., 10: 625–636, 1989.
15. C. J. Tucker et al., Remote sensing of total dry matter accumulation in winter wheat, Remote Sens. Environ., 13: 461, 1981.
16. C. J. Tucker and P. J. Sellers, Satellite remote sensing of primary production, Int. J. Remote Sens., 7: 139, 1986.
17. C. J. Tucker et al., Satellite remote sensing of total herbaceous biomass production in the Senegalese Sahel: 1980–1984, Remote Sens. Environ., 17: 233–249, 1985.
18. S. N. Goward et al., Comparison of North and South American biomass from AVHRR observations, Geocarto International, 1: 27–39, 1987.
19. S. N. Goward et al., Normalized difference vegetation index measurements from the Advanced Very High Resolution Radiometer, Remote Sens. Environ., 35: 257–277, 1991.
20. C. Sakamoto et al., Application of NOAA polar orbiter data for operational agricultural assessment, Proc. North Amer. NOAA Polar Orbiter Users Group, 1st Meet., NOAA National Geophysical Data Center, Boulder, CO, 1987, pp. 134–160.
21. E. P. McClain, W. G. Pichel, and C. C. Walton, Comparative performance of AVHRR-based multichannel sea surface temperatures, J. Geophys. Res., 90: 11587–11601, 1985.
22. P. Schluessel et al., On the skin-bulk temperature difference and its impact on satellite remote sensing of sea surface temperature, J. Geophys. Res., 95: 13341–13356, 1990.
23. J. F. T. Saur, A study of the quality of sea water temperatures reported in logs of ships weather observations, J. Appl. Meteor., 2: 417–425, 1963.
24. R. Saunders, The temperature at the ocean-air interface, J. Atmos. Sci., 24: 269–273, 1967.
25. E. Clauss, H. Hinzpeter, and J. Mueller-Glewe, Messungen der Temperaturstruktur im Wasser an der Grenzfläche Ozean-Atmosphäre, "Meteor" Forschungsergeb., Reihe B, 5: 90–94, 1970.
26. G. Ewing and E. D. McAlister, On the thermal boundary layer of the ocean, Science, 131: 1374–1376, 1960.
27. H. Grassl and H. Hinzpeter, The cool skin of the ocean, GATE Rep. 14, 1, pp. 229–236, WMO/ICSU, Geneva, 1975.
28. P. A. Coppin et al., Simultaneous observations of sea surface temperature in the western equatorial Pacific ocean by bulk, radiative and satellite methods, J. Geophys. Res., suppl., 96: 3401–3409, 1991.
29. M. F. Baumgartner, Snow cover mapping and snowmelt runoff simulations on microcomputers, Remote Sens. Earth's Environ., Proc. Summer School, Alpbach, Austria, 1989.
30. T. R. Carroll et al., Operational mapping of snow cover in the United States and Canada using airborne and satellite data, Proc. 1989 Int. Geosci. Remote Sens. Symp., 12th Canad. Symp. Remote Sens., Vancouver, B.C., Canada, 1989.
31. K. Elder, J. Dozier, and J. Michaelsen, Snow accumulation and distribution in an alpine watershed, Water Resources Res., 27: 1541–1552, 1991.
32. M. F. Baumgartner, K. Seidel, and J. Martinec, Toward snowmelt runoff forecast based on multisensor remote-sensing information, IEEE Trans. Geosci. Remote Sens., GE-25: 746–750, 1987.
33. A. Rango, Assessment of remote sensing input into hydrologic models, Water Res. Bull., 21: 423–432, 1985.


34. R. N. Ninnis, W. J. Emery, and M. J. Collins, Automated extraction of sea ice motion from AVHRR imagery, J. Geophys. Res., 91: 10725–10734, 1986.
35. W. J. Emery, C. W. Fowler, and J. A. Maslanik, Satellite derived Arctic and Antarctic sea ice motions: 1988–1994, Geophys. Res. Lett., 24: 897–900, 1997.

WILLIAM J. EMERY University of Colorado, Boulder, CO


Wiley Encyclopedia of Electrical and Electronics Engineering

Electromagnetic Subsurface Remote Sensing

Standard Article. S. Y. Chen and W. C. Chew, University of Illinois at Urbana-Champaign. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3602. Online Posting Date: December 27, 1999.


Abstract. The sections in this article are Borehole EM Methods, Ground Penetrating Radar, Magnetotelluric Methods, and Airborne Electromagnetic Methods.


ELECTROMAGNETIC SUBSURFACE REMOTE SENSING

Subsurface electromagnetic (EM) methods are applied to obtain underground information that is not available from surface observations. Since electrical parameters such as the dielectric permittivity and conductivity of subsurface materials may vary dramatically, the response of electromagnetic waves can be used to map the underground structure. This technique is referred to as geological surveying. Another major application of subsurface EM methods is to detect and locate underground anomalies such as mineral deposits. Subsurface EM methods include a variety of techniques differing in application, surveying method, system, and interpretation procedure, and thus a "best" method simply does not exist. Even though each system has its own characteristics, they still share some common features. In general, each system has a transmitter, which can be either natural or artificial, to send out the electromagnetic energy that serves as an input signal, and a receiver to collect the response signal. The underground can be viewed as a system characterized by its material parameters and geometry. The task of subsurface EM methods is to derive the underground information from the response signal. The EM transmitter radiates the primary field into the subsurface, which consists of conductive earth material. This primary field induces currents, which in turn radiate a secondary field. Either the secondary field or the total field is detected by the receiver. After data interpretation, one can obtain the underground information.

One of the most challenging parts of subsurface EM methods is interpretation of the data. Since the incident field interacts with the subsurface in a very complex manner, it is never easy to extract the information from the receiver signal. Many definitions, such as apparent conductivity, are introduced to facilitate this procedure. Data interpretation is also a critical factor in evaluating the effectiveness of the system: how good the system is always depends on how well the data can be explained. In the early development of subsurface EM systems, data interpretation largely depended on the personal experience of the operator, due to the complexity of the problem. Only with the aid of powerful computers and improvements in computational EM techniques has it become possible to analyze such a complicated problem in a reasonable time. Computer-based interpretation and inversion methods are attracting more and more attention. Nevertheless, data interpretation is still "an artful balance of physical understanding, awareness of the geological constraints, and pure experience" (1).

In the following sections, we will use several typical applications to outline the basic principles of subsurface EM methods. Physical insight is emphasized rather than rigorous mathematical analysis. Details of each method can be found in the references.

BOREHOLE EM METHODS

Borehole EM methods are an important part of well-logging methods. Since water is conductive and oil is an insulator, resistivity measurements are good indicators of oil presence. Water also has an unusually high dielectric constant, so permittivity measurement is a good detector of moisture content. Early borehole EM methods consisted mainly of electrical measurements using very simple low-frequency electrodes such as the short normal and the long normal. More sophisticated electrode tools were developed later. Some of these tools are mounted on a mandrel, which performs measurements centered in the borehole; these are called mandrel tools. Alternatively, the sensors can be mounted on a pad, and the corresponding tool is called a pad tool. One of the most successful borehole EM methods is induction logging. Since Doll published his first paper in 1949 (2), this technique has been used widely with confidence in the petroleum industry, and extensive research work has been done in this area. The systems in use now are so sophisticated that many modern electrical techniques are involved. Nevertheless, the principles remain the same and can be understood by studying a simple case.

The induction logging technique, as proposed by Doll, makes use of several coils wound on an insulating mandrel, called a sonde. Some of the coils, referred to as transmitters, are powered with alternating current (ac). The transmitters radiate the field into the conductive formation and induce a secondary current, which is nearly proportional to the formation conductivity. The secondary current radiates a secondary field, which can be detected by the receiver coils. The receiver signal (voltage) is normalized with respect to the transmitter current and represented as an apparent conductivity, which serves as an indication of underground conductivity. To obtain information from the apparent conductivity, we need to understand how apparent conductivity and true conductivity are related. According to Doll's theory, the relation in cylindrical coordinates is given by

σ_a = ∫_{−∞}^{+∞} dz′ ∫_0^{+∞} dρ′ g_D(ρ′, z′) σ(ρ′, z′)    (1)

where σ(ρ′, z′) is the formation conductivity. The kernel g_D(ρ′, z′) is the so-called Doll geometrical factor, which weights the contribution of the conductivity from various regions in the vicinity of the sonde. We notice that g_D(ρ′, z′) is not a function of the true conductivity and hence is determined only by the tool configuration. The interpretation of the data would be simple if Doll's theory were exact. Unfortunately, this is rarely the case: further studies show that Eq. (1) is true only in some extreme cases. The significance of Doll's theory, however, is that it relates the apparent conductivity and the formation conductivity, even though the theory is not exact. In the early development of induction logging techniques, tool design and data interpretation were based on Doll's theory, and in most cases it gives reasonable answers. To establish a firm understanding of induction logging theory, we need to perform a rigorous analysis by using Maxwell's equations as follows:

∇ × H = −iωεE + J_s + σE    (2)

∇ × E = iωμH    (3)

∇ · H = 0    (4)

∇ · D = ρ    (5)

where ∇ · J_s = iωρ. In the preceding equations, the time dependence e^{−iωt} is assumed, and J_s corresponds to the impressed current source. The parameters μ, ε, and σ are the magnetic permeability, dielectric permittivity, and electric conductivity, respectively. To simplify the analysis, we assume that both the impressed source and the geometry of the problem are axisymmetric; consequently, all the field components are independent of the azimuthal angle. Furthermore, it can be shown that there is no stored charge under the preceding assumption. The working frequency of induction logging is about 20 kHz, so the displacement current −iωεE is very small compared to the conduction current σE and hence is neglected in the following discussion. After these simplifications, we have

∇ × H − σE = J_s    (6)

∇ × E − iωμH = 0    (7)

∇ · H = 0    (8)

∇ · E = 0    (9)

where we assume ∇ · J_s = iωρ = 0. For convenience, an auxiliary vector potential is introduced. Since ∇ · H = 0 and ∇ · (∇ × A) = 0, it is possible to define H = ∇ × A. To specify the field uniquely, we choose E = iωμA, which is only true when there is no charge accumulation. Substituting these expressions into Eq. (6), we have

∇ × ∇ × A − iωμσA = J_s    (10)

By using the vector identity, we have

∇²A + k²A = −J_s    (11)

where

k² = iωμσ    (12)

To demonstrate how the apparent conductivity and formation conductivity are related, we first write down the solution of Eq. (11) in a homogeneous medium as follows (3,4):

A(ρ, z, φ) = (1/4π) ∫_V J_s(ρ′, z′, φ′) (e^{ikr₁}/r₁) dV′    (13)

where

r₁ = {(z − z′)² + ρ² + ρ′² − 2ρρ′ cos(φ − φ′)}^{1/2}    (14)

The volume integration is evaluated over regions containing the impressed current sources; the coordinate system used in Eq. (13) is shown in Fig. 1.

Figure 1. Induction logging tool transmitter and receiver coil pair used to explain the geometric factor theory. (Redrawn from Ref. 4.)

Usually, a small current

loop is used as an excitation, which implies that only A_φ exists. Hence, Eq. (13) can be further simplified as

A_φ(ρ, z) = (1/4π) ∫_V J_φ(ρ′, z′) cos(φ − φ′) (e^{ikr₁}/r₁) dV′    (15)

When the radius of the current loop becomes infinitely small, it can be viewed as a magnetic dipole and thus the preceding integration can be approximated as

A_φ = (m/4π) (ρ/r₁³) (1 − ikr₁) e^{ikr₁}    (16)

where m = N_T I(πa²) is the magnetic dipole moment and N_T is the number of turns wound on the mandrel. At the receiver point, the voltage induced on the receiver with N_R turns can be represented as

V = 2πaN_R E_φ = iωμ [2N_T N_R (πa²)² I/(4π)] (1 − ikL) (e^{ikL}/L³)    (17)

where

E_φ = iωμ A_φ(a, L)    (18)

and L is the distance between the transmitter and receiver. Since the voltage is a complex quantity, it can be separated into real and imaginary parts and expanded in powers of kL as follows (3):

V_R = −Kσ [1 − 2L/(3δ) + ···]    (19)

V_X = Kσ (δ²/L²) [1 − (2/3)(L³/δ³) + ···]    (20)

where

K = (ωμ)² (πa²)² N_T N_R I/(4πL)    (21)

and

δ = [2/(ωμσ)]^{1/2}    (22)

The quantity K is known as the tool constant and is totally determined by the configuration of the tool; δ is the so-called skin depth, which describes the attenuation in a conductor in terms of the field penetration distance. The quantity V_R is called the R signal. The apparent conductivity is defined as (3)

σ_a = −V_R/K ≅ σ [1 − 2L/(3δ)]    (23)

In the preceding analysis, there are some important facts that need to be mentioned. In Eq. (19), we see that the apparent conductivity is a nonlinear function of the true conductivity, even in a homogeneous medium. The lower the working frequency or the lower the true conductivity, the more linear it will be. The difference between true conductivity and apparent conductivity is defined as the skin effect signal,

σ_s = σ − σ_a    (24)

The leading term of the imaginary part V_X is not a function of the true conductivity. In fact, it corresponds to the direct coupling field, which does not contain any formation information. What remains in V_X is the so-called X signal. Since the direct term is much larger than the residual part including V_R, it is difficult to separate the X signal. The importance of the X signal is seen by comparing Eqs. (19) and (20), from which we find that the X signal is the first-order approximation of the nonlinear term in V_R, the R signal. This fact can be used to compensate for the skin effect.

So far we have introduced the concept of apparent conductivity by studying the homogeneous case. In practice, the formation conductivity distribution is far more complicated. The apparent conductivity and formation conductivity are related through a nonlinear convolution. As a proof, we derive the solution in an integral form, instead of directly solving the differential equations. To this end, we first rewrite Eq. (11) as

∇²A = −J_s − J_i    (25)

where J_i = −k²A is the induced current. The solution of Eq. (25) can be written in the integral form as

A = (1/4π) ∫_V (J_s/r_s) dV′ + (1/4π) ∫_V (J_i/r₂) dV′    (26)

The first integral is evaluated over the regions containing the impressed sources, and the second one is performed over the entire formation. Under the same assumption as we have made in the preceding analysis, the receiver voltage can be written as (4)

V = [i2πaN_R ωμ/(4π)] ∫_V (J_φ/r_s) dV′ − [2πaN_R ω²μ²/(4π)] ∫_V [σ(ρ′, z′) A_φ(ρ′, z′)/r₂] dV′    (27)

The vector potential can also be separated into real and imaginary parts:

A_φ = A_φR + iA_φI    (28)

Substituting Eq. (28) into Eq. (27) and separating out the real part of the receiver voltage, we have

V_R = −[(ωμ)² (2πaN_R)/(4π)] ∫_{−∞}^{∞} dz′ ∫_0^{∞} dρ′ σ(ρ′, z′) A_φR ∫_0^{2π} [cos(φ − φ′)/r₂] dφ′    (29)

Applying the same procedure, we obtain the apparent conductivity as

σ_a = −V_R/K = ∫_0^{∞} dρ′ ∫_{−∞}^{∞} dz′ σ(ρ′, z′) g_P(ρ′, z′)    (30)

where

g_P = [2πLρ′/((πa)³ N_T I)] A_φR ∫_0^{2π} [cos(φ − φ′)/r₂] dφ′    (31)

The function g_P is the exact definition of the geometrical factor. In comparison with Doll's geometrical factor, g_P depends not only on the tool configuration but also on the formation conductivity, since the vector potential depends on the formation conductivity. The integral-form solution does not provide any computational advantage, since the differential equation for the vector potential A_φR must still be solved. But it is now clear from Eq. (30) that the apparent conductivity is the result of a nonlinear convolution. Equation (30) also represents the starting point of inverse filtering techniques, which make use of both the R and X signals to reconstruct the formation conductivity. Finding the vector potential A is still a challenge. Analytic solutions are available only for a few simple geometries. In most cases, we have to use numerical techniques such as the finite element method (FEM), the finite difference method (FDM), numerical mode matching (NMM), or the volume integral equation method (VIEM). Interested readers may find Refs. 5 through 8 useful.

Previously, we mentioned that Doll's geometrical factor theory is valid only under some extreme conditions. In fact, it can be derived from the exact geometrical factor as a special case (4). In a homogeneous medium, the vector potential A_φR can be calculated as

A_φR ≅ [(πa²) N_T I/(4π)] (ρ′/r₁³) Re{(1 − ikr₁) e^{ikr₁}}    (32)

The integration with respect to φ′ in Eq. (31) can also be performed for r₂ ≫ a. The final result is

σ_a = σ ∫_{−∞}^{∞} ∫_0^{∞} g_D(ρ′, z′) Re{(1 − ikr₁) e^{ikr₁}} dρ′ dz′    (33)

where

g_D(ρ′, z′) = (L/2) [ρ′³/(r₁³ r₂³)]    (34)

It is now clear that Doll's geometric factor and the exact geometric factor are the same when the medium is homogeneous and the wave number approaches zero.

So far we have discussed the basic theory of induction logging. We now use a simple example to show some practical concerns and briefly discuss the solutions. In Fig. 2, we show an apparent resistivity (the inverse of apparent conductivity) response of a commercial logging tool, the 6FF40 (trademark of the Schlumberger Company), in the Oklahoma benchmark. The black line is the formation resistivity, and the red line is the unprocessed data of the 6FF40. We notice that the apparent resistivity data roughly indicate the variation of the true resistivity, but around 4850 ft the apparent resistivity R_a is much higher than the true resistivity R_t, which results from the "skin effect" (9). From 4927 to 4955 ft, R_a is substantially lower than R_t, which is caused by the so-called shoulder effect. The shoulder effect arises when two adjacent low-resistance layers generate strong signals, even though the tool is not in these two regions. Around 5000 ft, there are a number of thin layers, but the tool's response fails to indicate them. This failure results from the tool's limited resolution, which is represented in terms of the smallest thickness that can be identified by the tool.
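The behavior of Doll's geometrical factor of Eq. (34) can be checked numerically. The following sketch (with an illustrative coil spacing of L = 1 m and a hypothetical two-layer conductivity profile, not taken from any real tool or log) integrates g_D on a grid: the factor integrates to unity over the whole formation, and Eq. (1) then averages the conductivity seen by the tool.

```python
import numpy as np

def doll_g(rho, z, L=1.0):
    # Doll geometrical factor of Eq. (34) for a formation ring at (rho, z),
    # with the transmitter at z = -L/2 and the receiver at z = +L/2 on the axis.
    r1 = np.sqrt(rho**2 + (z + L / 2) ** 2)  # ring-to-transmitter distance
    r2 = np.sqrt(rho**2 + (z - L / 2) ** 2)  # ring-to-receiver distance
    return (L / 2) * rho**3 / (r1**3 * r2**3)

L = 1.0
rho = np.linspace(1e-3, 40.0, 1600)
z = np.linspace(-40.0, 40.0, 3200)
R, Z = np.meshgrid(rho, z, indexing="ij")
g = doll_g(R, Z, L)
cell = (rho[1] - rho[0]) * (z[1] - z[0])

# The geometrical factor integrates to unity over the whole formation, so in
# a homogeneous medium Eq. (1) returns the true conductivity.
total = g.sum() * cell
print(total)  # close to 1 (the finite grid truncates the slowly decaying tail)

# Hypothetical two-layer formation: 1 S/m below the tool midpoint, 0.1 S/m above.
sigma = np.where(Z < 0.0, 1.0, 0.1)
sigma_a = (g * sigma).sum() * cell
print(sigma_a)  # near (1 + 0.1)/2, by symmetry of g about z = 0
```

The symmetric weighting also illustrates the shoulder effect discussed above: conductive material well away from the coil pair still contributes to σ_a.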

Figure 2. Apparent resistivity responses of different tools in the Oklahoma benchmark (HRI log and 6FF40 log; curves: true resistivity, raw data of 6FF40, 6FF40 with DC & SB, and HRI data; resistivity in Ω·m versus depth in ft). The improvement in resolution of the HRI tool is significant.

The blue line is the processed 6FF40 data after skin effect boosting and a three-point deconvolution. Skin effect boosting is based on Eq. (19), which is solved iteratively for the true conductivity from the apparent conductivity. The three-point deconvolution is performed under the assumption that the convolution in Eq. (30) is almost linear (10). These two methods do improve the final results to some degree, but they also cause the spurious artifacts observed near 4880 ft, since the two effects are considered separately. The green curve is the response of the HRI (high-resolution induction) tool (trademark of Halliburton) (11). A complex coil configuration is used to optimize the geometrical factor. After the raw data are obtained, a nonlinear deconvolution based on the X signal is performed. The improvement in the final results is significant. Schlumberger recently released its AIT (array induction image tool), which uses eight induction-coil arrays operating at different frequencies (12). The deconvolutions are performed in both the radial and vertical directions, and a quantitative two-dimensional image of formation resistivity is possible after a large number of measurements (13,14). The aforementioned data processing techniques are based on the inverse deconvolution filter, which is computationally efficient and easily run in real time on a logging truck computer. An alternative approach is to use inverse scattering theory, which is becoming increasingly practical and promising with the development of high-speed computers (8,15). Besides the induction method, there are other methods, such as electrode methods and propagation methods. Induction methods are suitable for fresh-water mud, oil-based mud, or air-filled boreholes, since the low or zero conductivity of the borehole fluid has little effect on the measurement.
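The skin effect boosting just mentioned can be sketched as a fixed-point iteration on Eq. (23): given a measured σ_a, repeatedly update σ ← σ_a/[1 − 2L/(3δ(σ))]. The coil spacing and conductivity below are illustrative values, not those of any particular tool:

```python
import math

MU0 = 4e-7 * math.pi  # magnetic permeability of free space, H/m

def skin_effect_boost(sigma_a, L=1.0, f=20e3, iters=50):
    """Recover the true conductivity from the apparent conductivity by
    iterating Eq. (23): sigma_a ≈ sigma * (1 - 2L/(3*delta)),
    with delta = sqrt(2/(omega*mu*sigma)) from Eq. (22)."""
    omega = 2 * math.pi * f
    sigma = sigma_a  # initial guess: ignore the skin effect
    for _ in range(iters):
        delta = math.sqrt(2.0 / (omega * MU0 * sigma))
        sigma = sigma_a / (1.0 - 2.0 * L / (3.0 * delta))
    return sigma

# Forward-model an apparent reading for a 1 S/m formation, then boost it back.
sigma_true = 1.0  # S/m (illustrative)
omega = 2 * math.pi * 20e3
delta = math.sqrt(2.0 / (omega * MU0 * sigma_true))
sigma_app = sigma_true * (1.0 - 2.0 / (3.0 * delta))  # Eq. (23) with L = 1 m
print(sigma_app, skin_effect_boost(sigma_app))
```

At 20 kHz and 1 S/m the skin effect depresses the apparent reading by roughly 20%, and the iteration recovers the true value; the artifacts noted above arise because this correction and the deconvolution are applied separately.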
If the mud is very conductive, it will generate a strong signal at the receiver and hence seriously degrade the tool's ability to make a deep reading. In such a case, electrode methods are preferable, since the conductive mud places the electrodes in better electrical contact with the formation. In the electrode methods, very low frequencies (≪1000 Hz) are used, and Laplace's equation is solved instead of the Helmholtz equation. Typical tools are the DLL (dual laterolog) and the SFL (spherical focusing log), both from Schlumberger. The dual laterolog is intended for both deep and shallow measurements, while the SFL is for shallow measurements (16–19). In addition, there are many tools mounted on pads to perform shallow measurements on the borehole wall. These may be just button electrodes mounted on a metallic pad. Due to their small size, they have high resolution but a shallow depth of investigation. Their high resolution can be used to map out fine stratifications on the borehole wall. When four pads are equipped with these button electrodes, the resistivity logs they measure can be correlated to obtain the dip of a geological bed. An example of this is the SHDT (stratigraphic high-resolution dip meter tool), also from Schlumberger (20). When an array of buttons is mounted on a pad, it can be used to generate a resistivity image of the borehole wall for formation evaluation, such as dips, cracks, and stratigraphy. Such a tool is called an FMS (formation microscanner) and is available from Schlumberger (21). For oil-based mud the SHDT does not work well, and microinduction sensors have been mounted on a pad for dipping-bed evaluation. Such a tool is known as the OBDT (oil-based mud dip meter tool) and is manufactured by Schlumberger (22,23).

Sometimes information is needed not only on the conductivity but also on the dielectric permittivity. In such cases, the EPT (electromagnetic wave propagation tool), from Schlumberger, can be used. The working frequency of the EPT can be as high as hundreds of megahertz to 1 GHz. At such high frequencies, the real part ε′ of the complex permittivity is dominant:

ε = ε′ + i(σ/ω)    (35)

EPT measurements provide information about the dielectric permittivity and hence can better distinguish fresh water from oil: water has a much higher dielectric constant (80ε₀) than oil (2ε₀). Phase delays at two receivers are used to infer the wave phase velocity and hence the permittivity. Interested readers can find material on these methods in Refs. 24 and 25. Other techniques in electrical well logging include the use of borehole radar. In such a case, a pulse is sent to a transmitting antenna in the borehole, and the pulse echo from the formation is measured at the receiver. Borehole radar finds application in salt domes, where the electromagnetic loss is low. In addition, the nuclear magnetic resonance (NMR) technique can be used to detect the percentage of free water in a rock formation. The NMR signal in a rock formation is proportional to the spin echoes from free protons that abound in free water. An example of such a tool is the PNMT (pulsed nuclear magnetic resonance tool), from Schlumberger (26).

GROUND PENETRATING RADAR

Another outgrowth of subsurface EM methods is ground penetrating radar (GPR). Because of its numerous advantages, GPR has been widely used in geological surveying, civil engineering, artificial target detection, and some other areas. The GPR design is largely application oriented. Even though various systems have different applications and considerations, their advantages can be summarized as follows: (1) because the frequency used in GPR is much higher than that used in the induction method, GPR has a higher resolution; (2) since the antennas do not need to touch the ground, rapid surveying can be achieved; (3) the data retrieved by some GPR systems can be interpreted in real time; and (4) GPR is potentially useful for organic contaminant detection and nondestructive testing (27–31). On the other hand, GPR has some disadvantages, such as shallow investigation depth and site-specific applicability.
The working frequency of GPR is much higher than that used in the induction method. At such high frequencies, the soil is usually very lossy. Even though there is always a tradeoff between investigation depth and resolution, a typical depth is no more than 10 m and is highly dependent on soil type and moisture content.

The working principle of GPR is illustrated in Fig. 3(a) (28). The transmitter T generates transient or continuous EM waves that propagate into the underground. Whenever a change in the electrical properties of the underground is encountered, the wave is reflected and refracted. The receiver R detects and records the reflected waves. From the recorded data, information pertaining to the depth, geometry, and material type can be obtained. As a simple example, we use Figs. 3(b) and 3(c) to illustrate how the data are recorded and interpreted. The underground contains one interface, one cavity, and one lens. At a single position, the receiver signals at different times are stacked along the time axis. After completing the measurement at one position, the procedure is repeated at all subsequent positions. The final results are presented in a two-dimensional map, which is called an echo sounder–type display. To locate objects or interfaces, we need to know the wave speed in the underground medium. The wave speed in a medium of relative dielectric permittivity ε_r is

C_s = C₀/√ε_r    (36)

where C₀ = 3 × 10⁸ m/s. Usually, the transmitter and the receiver are close enough that the wave's propagation path can be considered vertical. The depth of the interface is then approximated as

D = 0.5 × (C_s × T_total)    (37)

where T_total is the total wave propagation time.

Figure 3. Working principle of the GPR: (a) transmitter T and receiver R over an interface; (b) survey over an interface, a cavity, and a lens; (c) the resulting time-versus-distance display. (Redrawn from Ref. 20.)

A practical GPR system is much more complicated, and a block diagram of a typical baseband GPR system is shown in Fig. 4. Generally, a successful system design should meet the following requirements (27): (1) efficient coupling of the EM energy between antenna and ground; (2) adequate penetration with respect to the target depth; (3) a sufficiently large return signal for detection; and (4) adequate bandwidth for the desired resolution and noise control.

Figure 4. Block diagram showing operation of a typical baseband GPR system: source and modulation feed the transmit antenna; energy couples through the ground (soil, water, ice, etc.) to the targets and back to the receive antenna; the received signal passes through signal sampling and digitization, data storage, and signal processing to the display. (Redrawn from Ref. 19.)

The working frequency of a typical GPR ranges from a few tens of megahertz to several gigahertz, depending on the application. The usual tradeoff holds: the wider the bandwidth, the higher the resolution but the shallower the penetration depth. A good choice is usually a compromise between resolution and depth. Soil properties are also critical in determining the penetration depth. It is observed experimentally that the attenuation of different soils can vary substantially. For example, dry desert and nonporous rocks have very low attenuation (about 1 dB m⁻¹ at 1 GHz), while the attenuation of sea water can be as high as 300 dB m⁻¹ at 1 GHz. Some typical

applications and preferred operating frequencies are listed in Table 1 (27). To meet the requirements of different applications, a variety of modulation schemes have been developed; they can be classified into three categories: amplitude modulation (AM), frequency-modulated continuous wave (FMCW), and continuous wave (CW). We will briefly discuss the advantages and limitations of each modulation scheme.

There are two types of AM transmission used in GPR. For investigation of low-conductivity media, such as ice and fresh water, a pulse-modulated carrier is preferred (32,33). The carrier frequency can be chosen as low as tens of megahertz. Since the reflectors are well spaced, a relatively narrow transmission bandwidth is needed. The receiver signal is demodulated to extract the pulse envelope. For shallow and high-resolution applications, such as the detection of buried artifacts, a baseband pulse is preferred to avoid the problems caused by high soil attenuation, since most of the energy is in the low-frequency band. A pulse train with a duration of 1 to 2 ns, a peak amplitude of about 100 V, and a repetition rate of 100 kHz is applied to the broadband antenna. The received signal is downconverted by sampling circuits before being displayed. There are three primary advantages of the AM scheme: (1) it provides a real-time display without the need for subsequent signal processing; (2) the measurement time is short; and (3) it is implemented with small equipment and without synthesized sources and hence is cost effective. On the other hand, with the AM scheme it is difficult to control the transmission spectrum, and the signal-to-noise ratio (SNR) is not as good as that of the FMCW method. For the FMCW scheme, the frequency of the transmitted signal is continuously swept, and the receiver signal is mixed with a sample of the transmitted signal.
The Fourier transform of the received signal then yields the time domain pulse that would have been received had a time domain pulse been transmitted. The frequency sweep must be linear in time to minimize signal degradation, and a stable output is required to facilitate signal processing. The major advantage of the

Table 1. Desired Frequencies for Different Applications (a)

Material                     Typical Desired         Approximate Maximum Frequency at Which
                             Penetration Depth (b)   Operation May Be Usefully Performed
Cold pure fresh-water ice    10 km                   10 MHz
Temperate pure ice           1 km                    2 MHz
Saline ice                   10 m                    50 MHz
Fresh water                  100 m                   100 MHz
Sand (desert)                5 m                     1 GHz
Sandy soil                   3 m                     1 GHz
Loam soil                    3 m                     500 MHz
Clay (dry)                   2 m                     100 MHz
Salt (dry)                   1 km                    250 MHz
Coal                         20 m                    500 MHz
Rocks                        20 m                    50 MHz
Walls                        0.3 m                   10 GHz

(a) Redrawn from Ref. 19. (b) The figures under this heading are the depths at which radar probing gives useful information, taking into account the attenuation normally encountered and the nature of the reflectors of interest.

FMCW scheme is easier control of the signal spectrum; filtering techniques can be applied to obtain a better SNR. A shortcoming of the FMCW system is the use of a synthesized frequency source, which makes the system expensive and bulky. Additional data processing is also needed before display (34,35).

A continuous wave scheme was used in the early development of GPR, but now it is mainly employed in synthetic aperture and subsurface holography techniques (36–38). In these techniques, measurements are performed at a single frequency or at a few well-spaced frequencies over an aperture at the ground surface. A wavefront extrapolation technique is applied to reconstruct the underground region, with the resolution depending on the size of the aperture. Narrowband transmission is used, and hence high-speed data capture is avoided. The difficulty of the CW scheme comes from the requirement for accurate scanning of the two-dimensional aperture. The operating frequencies should be carefully chosen to minimize resolution degradation (27).

Antennas play an important role in system performance. An ideal antenna should introduce the least distortion of the signal spectrum, or else distortion that can be easily compensated. Unlike the antennas used in atmospheric radar, the antennas used in GPR should be considered as loaded antennas: the radiation pattern of a GPR antenna can be quite different because of the strong interaction between the antenna and the ground. Separate antennas for transmission and reception are commonly used, because it is difficult to make a switch that is fast enough to protect the receiver from the direct coupling signal. The direct breakthrough signals would seriously reduce the SNR and hence degrade the system performance. Moreover, in a separate-antenna system, the orientation of the antennas can be carefully chosen to further reduce the cross-coupling level.
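The FMCW principle described above can be sketched numerically: mixing the received sweep with a sample of the transmitted sweep leaves a beat tone whose frequency is proportional to the echo delay, and a Fourier transform recovers it. All parameter values here are illustrative, not taken from any specific system:

```python
import numpy as np

# Linear sweep of B = 1 GHz over T = 1 ms; a target echo delayed by tau
# produces a beat frequency f_b = (B/T)*tau after mixing.
B, T = 1e9, 1e-3
fs = 10e6                      # sample rate of the dechirped (low-frequency) output
t = np.arange(0, T, 1 / fs)
k = B / T                      # sweep rate, Hz/s

tau = 20e-9                    # 20 ns round-trip delay (illustrative)
phase_tx = np.pi * k * t**2            # transmitted chirp phase (baseband)
phase_rx = np.pi * k * (t - tau) ** 2  # delayed echo phase
beat = np.cos(phase_tx - phase_rx)     # mixer output: the phase-difference tone

spec = np.abs(np.fft.rfft(beat))
f_peak = np.fft.rfftfreq(len(beat), 1 / fs)[np.argmax(spec)]
print(f_peak, k * tau)         # measured versus expected beat frequency
```

The narrowband beat tone is what permits the filtering, and hence the better SNR, credited to the FMCW scheme above.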
Except for the CW scheme, the other modulation types require wideband transmission, which greatly restricts the choice of antenna. Four types of antennas, namely element antennas, traveling wave antennas, frequency-independent antennas, and aperture antennas, have been used in GPR designs. Element antennas, such as monopoles, cylindrical dipoles, and biconical dipoles, are easy to fabricate and hence widely used in GPR systems. An orthogonal arrangement is usually chosen to maintain a low level of cross coupling. To overcome the narrow transmission bandwidth of thin dipole or monopole antennas, the distributed loading technique is used to expand the bandwidth at the expense of reduced efficiency (39–42). Another commonly used antenna type is the traveling wave antenna, such as long wire antennas, V-shaped antennas, and rhombic antennas. Traveling wave antennas distinguish themselves from standing wave antennas in the sense that the current pattern is a traveling wave rather than a standing wave. Standing wave antennas, such as half-wave dipoles, are also referred to as resonant antennas and are narrowband, while traveling wave antennas are broadband. The disadvantage of traveling wave antennas is that half of the power is wasted in the matching resistor (43,44). Frequency-independent antennas are often preferred in impulse GPR systems. It has been proved that if the antenna geometry is specified only by angles, its performance will be independent of frequency. In practice, we have to truncate the antenna due to its limited outer size and inner feeding region, which determine the lower bound and upper bound of the frequency, respectively. In general, this type of antenna introduces nonlinear phase distortion, which results in an extended pulse response in the time domain (27,45). A phase correction procedure is needed if the antenna is used in a high-resolution GPR system. A wire antenna is a one-dimensional antenna that has a small effective area and hence lower gain. For some GPR systems, higher gain or a more directive radiation pattern is required. Aperture antennas, such as horn antennas, are then preferred because of their large effective area. A ridge design can be used to improve the bandwidth and reduce the size: ridged horns with gain better than 10 dB over a range of 0.3 GHz to 2 GHz and VSWR lower than 1.5 over a range of 0.2 GHz to 1.8 GHz have been reported (46). Since many aperture antennas are fed via waveguides, the phase distortion associated with the different modulation schemes needs to be considered. Generally, antennas used in GPR systems require broad bandwidth and linear phase in the operating frequency range. Since the antennas work in close proximity to the ground surface, the interaction between them must be taken into account.

Signal processing is one of the most important parts of a GPR system. Some modulation schemes directly give time domain data, while the signals of other schemes need to be demodulated before the information is available. Signal processing can be performed in the time domain, frequency domain, or space domain. A successful signal processing scheme usually consists of a combination of several processing techniques applied at different stages. Here, we outline some basic signal processing techniques involved in GPR systems. The first commonly used method is noise reduction by time averaging. It is assumed that the noise is random, so that it can be reduced by averaging N identical measurements spaced in time.
This technique only works for random noise but has no effects on the clutter. Clutter reduction can be achieved by subtracting the mean. This technique is performed under the assumption that the statistics of the underground are independent of position. A number of measurements are performed at a set of locations over the same material type to obtain the mean, which can be considered as a measure of the system clutter. The frequency filter technique is commonly used in the FMCW system. Signals that are not in the desired information bandwidth are rejected. Thus the SNR of FMCW scheme is usually higher than that of the AM scheme. In some very lossy soils, the return signal is highly attenuated, which makes interpretation of the data difficult. If the material attenuation information is available, the results can be improved by exponentially weighting the time traces to counter the decrease in signal level due to the loss. In practice, this is done by using a specially designed amplifier. Caution is needed when using this method, since the noise can also increase in such a system (27).
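As a rough illustration, the three techniques just described (trace averaging, mean subtraction, and exponential gain) can be sketched in a few lines of Python; the pulse shape, noise level, and attenuation constant below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_repeat, n_pos, n_samples = 64, 32, 512
t = np.arange(n_samples, dtype=float)

# Hypothetical trace: a target echo on top of system clutter (assumed shapes).
echo = np.exp(-((t - 200.0) / 15.0) ** 2)
clutter = 0.8 * np.exp(-t / 100.0)

# 1. Random-noise reduction: averaging N repeated measurements reduces
#    the random-noise amplitude by a factor of sqrt(N).
repeats = echo + clutter + 0.5 * rng.standard_normal((n_repeat, n_samples))
stacked = repeats.mean(axis=0)

# 2. Clutter reduction: the mean over many measurement positions (same
#    soil, targets at different depths) estimates the position-independent
#    system clutter, which is then subtracted from each trace.
shifts = rng.integers(-100, 100, n_pos)
scene = np.stack([clutter + np.roll(echo, s) for s in shifts])
decluttered = scene - scene.mean(axis=0)

# 3. Exponential time gain to counter soil attenuation (alpha is an
#    assumed attenuation rate; note that noise is amplified as well).
alpha = 1.0 / 300.0
gained = stacked * np.exp(alpha * t)
```

Averaging improves the random-noise floor by √N = 8 in this sketch, while the mean subtraction removes only the clutter common to all positions, exactly as stated above.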

MAGNETOTELLURIC METHODS

The basic idea of the magnetotelluric (MT) method is to use natural electromagnetic fields to investigate the electrical conductivity structure of the earth. The method was first proposed by Tikhonov in 1950 (47), who assumed that the earth's crust is a planar layer of finite conductivity lying upon an ideally conducting substrate, so that a simple relation between the horizontal components of the E and H fields at the surface can be found (48):

    iμ0ω Hx ≅ Ey γ coth(γl)        (38)

where

    γ = (iσμ0ω)^(1/2)        (39)

The author used data observed at Tucson (Arizona) and Zui (USSR) to compute the values of conductivity and thickness of the crust that best fit the first four harmonics. For Tucson, the conductivity and thickness were about 4.0 × 10⁻³ S/m and 1000 km, respectively; for Zui, the corresponding values were 3.0 × 10⁻¹ S/m and 100 km.

The MT method distinguishes itself from other subsurface EM methods in that very low frequency natural sources are used. The actual mechanisms of the natural sources were debated for a long time, but it is now well accepted that the sources above 1 Hz are thunderstorms, while the sources below 1 Hz are the current systems in the magnetosphere caused by solar activity. In comparison with other EM methods, the use of a natural source is a major advantage. The frequencies used range from 0.001 Hz to 10⁴ Hz, so investigation depths from 50 m to 100 m up to several kilometers can be achieved. Installation is much simpler and has less impact on the environment. The MT method has also proved very useful in some extreme areas where conventional seismic methods are expensive or ineffective. Its main shortcomings are limited resolution and difficulty in achieving a high SNR, especially in electrically noisy areas (49).

In MT measurements, the time-varying horizontal electric and magnetic fields at the surface are recorded simultaneously. The data recorded in the time domain are first converted into frequency-domain data using a fast Fourier transform (FFT). An apparent conductivity is then defined as a function of frequency. To interpret the data, theoretical apparent conductivity curves are generated by model studies; the model whose apparent conductivity curve best matches the measured data is taken as an approximate model of the subsurface.
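As a sketch of this processing chain, the following Python fragment converts synthetic Ex and Hy records into an apparent resistivity (the reciprocal of the apparent conductivity) using the standard Cagniard relation ρa = |Ex/Hy|²/(ωμ0); the sample rate, record length, source frequency, and half-space conductivity are all assumed values:

```python
import numpy as np

mu0 = 4e-7 * np.pi
fs, n = 1.0, 4096                 # sample rate (Hz) and record length (assumed)
t = np.arange(n) / fs

# Synthetic surface records over a uniform half-space of conductivity sigma,
# for which |Ex/Hy| = sqrt(omega * mu0 / sigma) and the phase lead is 45 deg.
sigma = 0.01                      # S/m (assumed)
f0 = 200 * fs / n                 # source frequency, chosen to fall on an FFT bin
omega0 = 2 * np.pi * f0
hy = np.cos(omega0 * t)
ex = np.sqrt(omega0 * mu0 / sigma) * np.cos(omega0 * t + np.pi / 4)

# FFT both records and form the impedance at the source frequency.
Ex, Hy = np.fft.rfft(ex), np.fft.rfft(hy)
k = np.argmin(np.abs(np.fft.rfftfreq(n, d=1 / fs) - f0))
Z = Ex[k] / Hy[k]

# Apparent resistivity from the field ratio.
rho_a = abs(Z) ** 2 / (omega0 * mu0)
```

For this homogeneous half-space the procedure recovers 1/sigma = 100 Ω·m, as it should.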
Since it is more convenient and meaningful to represent the apparent conductivity in terms of skin depth, we first introduce the concept of skin depth by studying a simple case. The model we use is shown in Fig. 5, which consists of a homogeneous medium of conductivity σ and a uniform current sheet flowing along the x direction in the xy plane.

Figure 5. Current sheet flowing on the earth's surface, used to explain the magnetotelluric method.

If the density of the current at the ground (z = 0) is represented as (50)

    Ix = cos ωt,  Iy = Iz = 0        (40)

then the current density at depth z is

    Ix = e^(−z√(ωμσ/2)) cos(ωt − z√(ωμσ/2)),  Iy = Iz = 0        (41)

As z increases, the amplitude of the current decreases exponentially, while the phase retardation progressively increases. To describe the amplitude attenuation, we introduce the skin depth p as (50)

    p = √(2/(ωμσ))        (42)

the depth at which the current amplitude decreases to e⁻¹ of its value at the surface. Since the units in Eq. (42) are not convenient, some prospectors prefer the formula

    p = (1/2π) √(10ρT)        (43)

where T is the period in seconds, ρ is the resistivity in Ω·m, and p is in kilometers. The skin depth indicates how deeply the wave can penetrate the ground. For example, if the resistivity of the underground is 10 Ω·m and the period of the wave is 3 s, the skin depth is 2.76 km. Few other subsurface methods have such a great penetration depth.

The data interpretation of the MT method is based on model studies. The earth is modeled as a two- or three-layer medium. For the two-layer model shown in Fig. 6, the general expressions for the fields can be written as (50)

Figure 6. Two-layer model of the earth's crust (first layer of conductivity σ1 and thickness h above z = h, second layer of conductivity σ2 below), used to demonstrate the responses of the magnetotelluric method.

For 0 ≤ z ≤ h:

    Ex = A e^(a√σ1 z) + B e^(−a√σ1 z)        (44a)
    Hy = e^(iπ/4) √(2σ1T) [−A e^(a√σ1 z) + B e^(−a√σ1 z)]        (44b)

For h ≤ z < ∞:

    Ex = e^(−a√σ2 z)        (45a)
    Hy = e^(iπ/4) √(2σ2T) e^(−a√σ2 z)        (45b)

where h is the thickness of the upper layer, and σ1, σ2 are the conductivities of the upper and lower layers, respectively. Matching the boundary conditions at z = h, we have

    A = [(√σ1 − √σ2)/(2√σ1)] e^(−ah(√σ1 + √σ2))        (46)
    B = [(√σ1 + √σ2)/(2√σ1)] e^(ah(√σ1 − √σ2))        (47)

Since we are interested in the ratio between the E and H fields on the surface, Eq. (44) can be rewritten for z = 0 as

    Ex/Hy = [1/√(2σ1T)] (M/N) e^(−i(π/4 + φ + ψ))        (48)

where M, N, φ, and ψ satisfy the following equations:

    M cos φ = cosh(h/p1) cos(h/p1) + (p1/p2) sinh(h/p1) cos(h/p1)        (49a)
    M sin φ = sinh(h/p1) sin(h/p1) + (p1/p2) cosh(h/p1) sin(h/p1)        (49b)
    N cos ψ = sinh(h/p1) cos(h/p1) + (p1/p2) cosh(h/p1) cos(h/p1)        (49c)
    N sin ψ = cosh(h/p1) sin(h/p1) + (p1/p2) sinh(h/p1) sin(h/p1)        (49d)

where p1, p2 are the skin depths of the upper and lower layers, respectively. For a multilayer medium, applying the same procedure yields exactly the same relation between Ex and Hy as in Eq. (48), except that the expressions for M, N, φ, and ψ are much more complicated. Because of this similarity, we can write

    |Ex/Hy| = [1/√(2σ1T)] (M/N) = 1/√(2σa T)        (50)

where σa is defined as the apparent conductivity. If the medium is homogeneous, the apparent conductivity is equal to the true conductivity; in a multilayer medium the apparent conductivity is an averaged effect of all the layers.

To obtain a better understanding of the preceding formulas, we first study two two-layer models and their corresponding apparent resistivity curves, as shown in Fig. 7 (51): in both models, conductive sediments of resistivity ρ1 overlie a resistive basement.

Figure 7. Diagrammatic two-layer apparent resistivity curves for the models shown. (Redrawn from Ref. 43.)
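The two skin-depth formulas above are easy to check numerically; the short sketch below reproduces the worked example (the two forms agree exactly because μ0 = 4π × 10⁻⁷ H/m):

```python
import numpy as np

mu0 = 4e-7 * np.pi

def skin_depth_m(rho, period):
    """Skin depth in meters from Eq. (42), with sigma = 1/rho."""
    omega = 2 * np.pi / period
    return np.sqrt(2 * rho / (omega * mu0))

def skin_depth_km(rho, period):
    """Prospector's formula, Eq. (43): p = (1/2pi) sqrt(10*rho*T), in km."""
    return np.sqrt(10 * rho * period) / (2 * np.pi)

# The worked example from the text: rho = 10 ohm-m, T = 3 s -> about 2.76 km.
p = skin_depth_km(10.0, 3.0)
```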

At very low frequencies, the wave can easily penetrate the upper layer, and thus its conductivity has little effect on the apparent conductivity. Consequently, the apparent resistivity approaches the true resistivity of the lower layer. As the frequency increases, less energy can penetrate the upper layer because of the skin effect, and the effect of the upper layer becomes dominant; as a result, the apparent resistivity is asymptotic to ρ1. Comparing the two curves, we note that both change smoothly and that, at the same frequency, case A has a lower apparent resistivity than case B, since the conductive sediments of case B are thicker.

Our next example is the three-layer model shown in Fig. 8 (51). The center layer is more conductive than the two adjacent ones. As expected, the curve approaches ρ1 and ρ3 at the two ends. The existence of the center conductive bed is obvious from the curve, but the apparent resistivity never reaches the true resistivity of the center layer, since its effect is averaged with the effects of the other two layers.

Figure 8. Diagrammatic three-layer apparent resistivity curve for the model shown: resistive sediments ρ1 over conductive sediments ρ2 over a resistive basement ρ3. (Redrawn from Ref. 43.)

So far we have discussed only the horizontally layered medium, which is a one-dimensional model. In practice, two-dimensional or even three-dimensional structures are often encountered. In a 2-D case, the conductivity changes not only along the z direction but also along one of the horizontal directions; the other horizontal direction is called the "strike" direction. If the strike direction is not in the x or y direction, we obtain a general relation between the horizontal field components as (51)

    Ex = Zxx Hx + Zxy Hy        (51a)
    Ey = Zyx Hx + Zyy Hy        (51b)

Since Ex, Ey, Hx, and Hy are generally out of phase, the Zij are complex numbers. It can also be shown that the Zij have the following properties:

    Zxx + Zyy = 0        (52)
    Zxy − Zyx = constant        (53)


Figure 9. Diagrammatic response curves for a simple vertical contact at frequency f: (a) model of a vertical contact between ρ1 and ρ2 (ρ2 > ρ1); (b) apparent resistivities for E∥ and E⊥; (c) tipper (Hz); (d) relative H⊥; (e), (f) current flow at depth. (Redrawn from Ref. 43.)
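Equations (49)-(50) are easy to evaluate numerically, and doing so reproduces the asymptotic behavior described above for the two-layer curves of Fig. 7. The sketch below assumes conductive sediments of 1 Ω·m, 1 km thick, over a 100 Ω·m basement (all values invented for illustration):

```python
import numpy as np

mu0 = 4e-7 * np.pi

def apparent_resistivity(freq, h, sigma1, sigma2):
    """Two-layer apparent resistivity from Eqs. (49)-(50):
    sigma_a = sigma1 * (N/M)**2, with p1, p2 the layer skin depths."""
    omega = 2 * np.pi * freq
    p1 = np.sqrt(2 / (omega * mu0 * sigma1))
    p2 = np.sqrt(2 / (omega * mu0 * sigma2))
    u = h / p1
    r = p1 / p2
    # M and N from Eqs. (49a)-(49d), combined by Pythagoras.
    M = np.hypot(np.cosh(u) * np.cos(u) + r * np.sinh(u) * np.cos(u),
                 np.sinh(u) * np.sin(u) + r * np.cosh(u) * np.sin(u))
    N = np.hypot(np.sinh(u) * np.cos(u) + r * np.cosh(u) * np.cos(u),
                 np.cosh(u) * np.sin(u) + r * np.sinh(u) * np.sin(u))
    sigma_a = sigma1 * (N / M) ** 2
    return 1.0 / sigma_a

# Conductive sediments (1 ohm-m, 1 km thick) over a resistive basement (100 ohm-m).
freqs = np.logspace(-4, 2, 200)
rho_a = apparent_resistivity(freqs, 1000.0, 1.0, 1.0 / 100.0)
# High frequencies see only the sediments (rho_a -> 1 ohm-m); as the
# frequency falls, rho_a climbs toward the basement resistivity.
```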

A simple vertical contact model and its corresponding curves are shown in Fig. 9 (51). In Fig. 9(b), the apparent resistivity with respect to E∥ changes slowly from ρ1 to ρ2, because H⊥ and E∥ are continuous across the interface. On the other hand, the apparent resistivity corresponding to E⊥ changes abruptly across the contact, since E⊥ is discontinuous at the interface. The relative amplitude of H⊥ varies significantly around the interface and approaches a constant at large distance, as shown in Fig. 9(d); this is caused by the change in current density near the interface, shown in Fig. 9(f). We also observe that an Hz component appears near the interface, as shown in Fig. 9(c), because the partial derivative of E∥ with respect to the ⊥ direction is nonzero there. We have now discussed the responses of some idealized models; for more complicated cases, the response curves can be obtained by forward modeling. Since the measurement data are in the time domain, they must be converted into the frequency domain by a Fourier transform. In practice, five components are measured. There are four unknowns in Eqs. (51a) and (51b) but only two equations. This difficulty can be overcome by making use of the fact that the Zij change very slowly with frequency; in fact, each Zij is computed as an average over a frequency band that contains several frequency sample points. A commonly used method is given in Ref. 52,


according to which Eq. (51a) is rewritten as

    Ex A* = Zxx Hx A* + Zxy Hy A*        (54)

and

    Ex B* = Zxx Hx B* + Zxy Hy B*        (55)

where A* and B* are the complex conjugates of any two of the horizontal field components. The cross powers are defined as

    AB*(ω1) = (1/Δω) ∫_{ω1−Δω/2}^{ω1+Δω/2} AB* dω        (56)

There are six possible combinations, and the pair (Hx, Hy) is preferred in most cases because of its greater degree of independence. Solving Eqs. (54) and (55), we have

    Zxx = (Ex A* Hy B* − Ex B* Hy A*) / (Hx A* Hy B* − Hx B* Hy A*)        (57a)

and

    Zxy = (Ex A* Hx B* − Ex B* Hx A*) / (Hy A* Hx B* − Hy B* Hx A*)        (57b)

Applying the same procedure to Eq. (51b), we have

    Zyx = (Ey A* Hy B* − Ey B* Hy A*) / (Hx A* Hy B* − Hx B* Hy A*)        (57c)

and

    Zyy = (Ey A* Hx B* − Ey B* Hx A*) / (Hy A* Hx B* − Hy B* Hx A*)        (57d)

After the Zij are obtained, they can be substituted into Eqs. (51a) and (51b) to solve for the other pair (Ex, Ey), which is then checked against the measurement data. Any difference is due either to noise or to measurement error, so this procedure is usually used to verify the quality of the measured data.
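The cross-power solution of Eqs. (54)-(57) can be sketched as follows for one frequency band, with A = Hx and B = Hy; the impedance values, band size, and noise level are invented, and a discrete average over nearby frequency samples stands in for the integral of Eq. (56):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed "true" impedance elements for the synthetic test.
Zxx_true, Zxy_true = 0.2 + 0.1j, 1.0 - 0.5j

# Synthetic Fourier coefficients at n sample points within one band:
# random source fields Hx, Hy, and Ex generated via Eq. (51a) plus noise.
n = 200
Hx = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Hy = rng.standard_normal(n) + 1j * rng.standard_normal(n)
noise = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
Ex = Zxx_true * Hx + Zxy_true * Hy + noise

def cross(a, b):
    # Band-averaged cross power A B*, the discrete analog of Eq. (56).
    return np.mean(a * np.conj(b))

A, B = Hx, Hy   # the usual choice of auxiliary pair
Zxx = ((cross(Ex, A) * cross(Hy, B) - cross(Ex, B) * cross(Hy, A)) /
       (cross(Hx, A) * cross(Hy, B) - cross(Hx, B) * cross(Hy, A)))   # Eq. (57a)
Zxy = ((cross(Ex, A) * cross(Hx, B) - cross(Ex, B) * cross(Hx, A)) /
       (cross(Hy, A) * cross(Hx, B) - cross(Hy, B) * cross(Hx, A)))   # Eq. (57b)
```

With many independent samples in the band, the estimates converge to the true tensor elements despite the additive noise.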

AIRBORNE ELECTROMAGNETIC METHODS

Airborne EM (AEM) methods are widely used in geological surveys and in prospecting for conductive ore bodies. These methods are suitable for large-area surveys because of their speed and cost effectiveness, and they are also preferred in areas where access is difficult, such as swamps or ice-covered areas. In contrast to ground EM methods, airborne EM methods are usually used to outline large-scale structures, while ground EM methods are preferred for more detailed investigations (53). The difference between airborne and ground EM systems results from the technical limitations inherent in the use of aircraft. The limited separation between transmitter and receiver determines the shallow investigation depth, usually from 25 m to 75 m. Greater penetration depth can be achieved by placing the transmitter and receiver on different aircraft, but the disadvantages are obvious. The transmitters and receivers are usually 200 ft to 500 ft above the surface; consequently, the secondary field becomes very small relative to the primary field, and thus the resolution of airborne EM methods is not very high. The operating frequency is usually chosen between 300 Hz and 4000 Hz; the lower limit is set by the transmission effectiveness, and the upper limit by the skin depth. Based on different design principles and application requirements, many systems have been built and operated all over the world since the 1940s. Despite this diversity, most airborne EM systems can be classified into one of the following categories according to the quantities measured: phase component measuring systems, quadrature systems, rotating field systems, and transient response systems (54).

In a phase component measuring system, the in-phase and quadrature components are measured at a single frequency and recorded as parts per million (ppm) of the primary field. In the system design, vertical loop arrangements are preferred, since they are more sensitive to a steeply dipping conductor and less sensitive to a horizontally layered conductor (55). Accurate maintenance of the transmitter-receiver separation is essential and can be achieved by fixing the transmitter and receiver at the two wing tips. Once this requirement is satisfied, a sensitivity of a few ppm can be achieved (54). A diagram of the phase component measuring system is shown in Fig. 10 (55). A balancing network associated with the reference loop is used to buck out the primary field at the receiver. The receiver signal is then fed to two phase-sensitive demodulators to obtain the in-phase and quadrature components. Low-pass filters reject very-high-frequency signals that do not originate from the earth. The data are interpreted by matching curves obtained from modeling; some response curves of typical structures are given in Ref. 56.

Figure 10. Block diagram showing operation of a typical phase component measuring system: transmitter and transmitting loop; reference loop with balance network and bucking loop; receiving loop and preamplifier; and parallel in-phase and quadrature channels (amplifier, demodulator, integrator and filter, recorder), with a 90° phase shifter supplying the quadrature reference. (Redrawn from Ref. 43.)
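The two phase-sensitive demodulators of Fig. 10 can be mimicked digitally by multiplying the received signal with the reference and with its 90°-shifted copy and then low-pass filtering (here, simply averaging over an integer number of cycles); the carrier frequency, sample rate, and ppm levels below are invented for illustration:

```python
import numpy as np

fs = 100_000.0                 # sample rate, Hz (assumed)
f0 = 1_000.0                   # transmitter frequency, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)  # 0.1 s record = 100 full carrier cycles

# Received secondary field: small in-phase and quadrature parts,
# expressed in ppm of the (already bucked-out) primary field.
inphase_ppm, quad_ppm = 120.0, -45.0
sig = 1e-6 * (inphase_ppm * np.cos(2 * np.pi * f0 * t)
              + quad_ppm * np.sin(2 * np.pi * f0 * t))

# Phase-sensitive demodulation: the factor 2 undoes the 1/2 from
# averaging cos^2 (or sin^2); cross terms average to zero.
i_out = 2 * np.mean(sig * np.cos(2 * np.pi * f0 * t)) / 1e-6
q_out = 2 * np.mean(sig * np.sin(2 * np.pi * f0 * t)) / 1e-6
# i_out = 120.0 ppm, q_out = -45.0 ppm (to rounding)
```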

The quadrature system employs a horizontal coil placed on the airplane as a transmitter and a vertical coil towed behind the plane as a receiver; the vertical coil is referred to as a "towed bird." Since only the quadrature component is measured, the separation distance is less critical. To reduce the noise further, an auxiliary horizontal coil, powered with a current 90 degrees out of phase with respect to the main transmitter current, is used to cancel the secondary field caused by the metal body of the aircraft. Since the response at a single frequency may have two interpretations, two frequencies are used to eliminate the ambiguity; the lower frequency is about 400 Hz, and the higher one is chosen between 2000 Hz and 2500 Hz. The system responses in different environments can be obtained by model studies. Reference 57 gives a number of curves for thin sheets and shows the effects of variations in depth, dipping angle, and conductivity.

In an airborne system it is hard to control the relative rotation of receiver and transmitter. The rotating field method is introduced to overcome this difficulty. Two transmitter coils are placed perpendicular to each other on the plane, and a similar arrangement is used for the receiver. The two transmitters are powered with currents of the same frequency, shifted 90 degrees out of phase, so that the resultant field rotates about the axis, as shown in Fig. 11 (58). The two receiver signals are phase shifted by 90 degrees with respect to each other, and the in-phase and quadrature differences at the two receivers are then amplified and recorded on two different channels. Over a barren area the outputs are set to zero; when the system is within a conducting zone, conductivity anomalies are indicated by nonzero outputs in both the in-phase and quadrature channels. This scheme reduces the noise introduced by fluctuations of orientation, but it is relatively expensive, and data interpretation is complicated by the complex coil system (58).

Figure 11. Working principle of the rotary field AEM system. (Redrawn from Ref. 50.)

The fundamental problem of airborne EM systems is the difficulty of detecting the relatively small secondary field in the presence of a strong primary field. This difficulty can be alleviated by using the transient field method. A well-known system based on the transient field method is INPUT (INduced PUlsed Transient) (59), which was designed by Barringer during the 1950s. In the INPUT system, a large horizontal transmitting coil is placed on the aircraft, and a vertical receiving coil is towed in the bird with its axis aligned with the flight direction. The working principle of INPUT is shown in Fig. 12 (60). A half sine wave with a duration of about 1.5 ms, followed by a quiet period of about 2.0 ms, is generated as the primary field, as shown in Fig. 12(a). If there are no conducting zones, the current in the receiver is induced only by the primary field, as shown in Fig. 12(b). In the presence of conductive anomalies, the primary field induces an eddy current; after the primary field is cut off, the eddy current decays exponentially. The duration of the eddy current is proportional to the conductivity of the anomaly, as shown in Fig. 12(c): the higher the conductivity, the longer the decay. The decay curve in the quiet period is sampled successively in time by six channels and then displayed on a strip chart, as shown in Fig. 13. The distortion caused by a good conductor appears in all the channels, while the distortion corresponding to a poor conductor registers only on the early channels.

Figure 12. Working principle of the INPUT system: (a) primary half-sine current pulse (about 1.5 ms) and quiet period (about 2.0 ms); (b) current induced in the receiver by the primary field alone; (c) total field in the receiver, with decay curves due to good and poor conductors and the channel sample periods in the quiet time. (Redrawn from Ref. 52.)

Figure 13. Responses of different anomalies appearing on different channels: poor-conductor anomalies appear only on the early channels, while good-conductor anomalies appear on all six. (Redrawn from Ref. 52.)

Since the secondary field can be measured more accurately in the absence of the primary field, transient systems provide greater investigation depths, which may reach 100 m under favorable conditions. In addition, they can provide a direct indication of the type of conductor encountered (58). On the other hand, this system design gives rise to other problems inherent in the transient method. Since the eddy current in the quiet period becomes very small, a more intense source must be used to obtain the same signal level as in a continuous-wave method. The circuitry of a transient system is much more complicated, and it is more difficult to reject noise because of the wideband nature of the transient signal.

BIBLIOGRAPHY

1. C. M. Swift, Jr., Fundamentals of the electromagnetic method, in M. N. Nabighian (ed.), Electromagnetic Methods in Applied Geophysics: Theory, Tulsa, OK: Society of Exploration Geophysicists, 1988.
2. H. G. Doll, Introduction to induction logging and application to logging of wells drilled with oil based mud, J. Petroleum Tech., 1: 148–162, 1949.
3. J. H. Moran and K. S. Kunz, Basic theory of induction logging and application to study of two-coil sondes, Geophysics, 27(6): 829–858, 1962.
4. S. J. Thandani and H. E. Hall, Propagated geometric factors in induction logging, Trans. SPWLA, 2: paper WW, 1981.
5. B. Anderson, Induction sonde response in stratified medium, The Log Analyst, XXIV(1): 25–31.
6. W. C. Chew, Response of a current loop antenna in an invaded borehole, Geophysics, 49: 81–91, 1984.
7. J. R. Lovell, Finite Element Method in Resistivity Logging, Ridgefield, CT: Schlumberger Technology Corporation, 1993.
8. W. C. Chew and Q. H. Liu, Inversion of induction tool measurements using the distorted Born iterative method and CG-FFHT, IEEE Trans. Geosci. Remote Sens., 32(4): 878–884, 1994.
9. S. Gianzero and B. Anderson, A new look at skin effect, The Log Analyst, 23(1): 20–34, 1982.
10. L. C. Shen, Effects of skin-effect correction and three-point deconvolution on induction logs, The Log Analyst, July–August issue, p. 217, 1989.
11. R. Strickland et al., New developments in the high resolution induction log, Trans. SPWLA, 2: paper ZZ, 1991.
12. T. D. Barber and R. A. Rosthal, Using a multiarray induction tool to achieve high resolution logs with minimum environmental effects, Trans. SPE, paper SPE 22725, 1991.

13. G. P. Grove and G. N. Minerbo, An adaptive borehole correction scheme for array induction tools, presented at the 32nd Ann. SPWLA Symposium, Midland, TX, 1991.
14. Schlumberger Educational Services, AIT Array Induction Image Tool, 1992.
15. R. Freedman and G. N. Minerbo, Maximum entropy inversion of induction log data, Trans. SPE, 5: 381–394, paper SPE 19608, 1989.
16. S. J. Grimaldi, P. Poupon, and P. Souhaite, The dual laterolog-Rxo tool, Trans. SPE, 2: 1–12, paper SPE 4018, 1972.
17. R. Chemali et al., The shoulder bed effect on the dual laterolog and its variation with the resistivity of the borehole fluid, Trans. SPWLA, paper UU, 1983.
18. Q. Liu, B. Anderson, and W. C. Chew, Modeling low frequency electrode-type resistivity tools in invaded thin beds, IEEE Trans. Geosci. Remote Sens., 32(3): 494–498, 1994.
19. B. Anderson and W. C. Chew, SFL interpretation using high speed synthetic computer generated logs, Trans. SPWLA, paper K, 1985.
20. Y. Chauvel, D. A. Seeburger, and C. O. Alfonso, Application of the SHDT stratigraphic high resolution dipmeter to the study of depositional environments, Trans. SPWLA, paper G, 1984.
21. A. R. Badr and M. R. Ayoub, Study of a complex carbonate reservoir using the Formation MicroScanner (FMS) tool, Proc. 6th Middle East Oil Show, Bahrain, March 1989, pp. 507–516.
22. R. L. Kleinberg et al., Microinduction sensor for the oil-based mud dipmeter, SPE Formation Evaluation, 3: 733–742, December 1988.
23. W. C. Chew and R. L. Kleinberg, Theory of microinduction measurements, IEEE Trans. Geosci. Remote Sens., 26: 707–719, 1988.
24. W. C. Chew and S. Gianzero, Theoretical investigation of the electromagnetic wave propagation tool, IEEE Trans. Geosci. Remote Sens., GE-19: 1–7, 1981.
25. W. C. Chew et al., An effective solution for the response of electrical well-logging tools in a complex environment, IEEE Trans. Geosci. Remote Sens., 29: 303–313, 1991.
26. D. D. Griffin, R. L. Kleinberg, and M. Fukuhara, Low-frequency NMR spectrometer, Meas. Sci. Technol., 4: 968–975, 1993.
27. D. J. Daniels, D. J. Gunton, and H. F. Scott, Introduction to subsurface radar, IEEE Proc., 135: 278–320, 1988.
28. D. K. Butler, Elementary GPR overview, Proc. Government Users Workshop on Ground Penetrating Radar Application and Equipment, pp. 25–30, 1992.
29. W. H. Weedon and W. C. Chew, Broadband microwave inverse scattering for nondestructive evaluation, Proc. Twentieth Annu. Review of Progress in Quantitative Nondestructive Evaluation, Brunswick, ME, 1993.
30. F. C. Chen and W. C. Chew, Time-domain ultra-wideband microwave imaging radar system, Proc. IEEE Instrum. Meas. Technol. Conf., St. Paul, MN, pp. 648–650, 1998.
31. F. C. Chen and W. C. Chew, Development and testing of the time-domain microwave nondestructive evaluation system, Review of Progress in Quantitative Nondestructive Evaluation, vol. 17, New York: Plenum, 1998, pp. 713–718.
32. M. Walford, Exploration of temperate glaciers, Phys. Bull., 36: 108–109, 1985.
33. D. K. Hall, A review of the utility of remote sensing in Alaskan permafrost studies, IEEE Trans. Geosci. Remote Sens., GE-20: 390–394, 1982.

34. A. Z. Botros et al., Microwave detection of hidden objects in walls, Electron. Lett., 20: 379–380, 1984.
35. P. Dennis and S. E. Gibbs, Solid-state linear FM/CW radar systems—their promise and their problems, Proc. IEEE MTT Symp., 1974, pp. 340–342.
36. A. P. Anderson and P. J. Richards, Microwave imaging of subsurface cylindrical scatterers from cross-polar backscatter, Electron. Lett., 13: 617–619, 1977.
37. K. Lizuka et al., Hologram matrix radar, Proc. IEEE, 64: 1493–1504, 1976.
38. N. Osumi and K. Ueno, Microwave holographic imaging method with improved resolution, IEEE Trans. Antennas Propag., AP-32: 1018–1026, 1984.
39. M. C. Bailey, Broad-band half-wave dipole, IEEE Trans. Antennas Propag., AP-32: 410–412, 1984.
40. R. P. King, Antennas in material media near boundaries with application to communication and geophysical exploration, IEEE Trans. Antennas Propag., AP-34: 483–496, 1986.
41. M. Kanda, A relatively short cylindrical broadband antenna with tapered resistive loading for picosecond pulse measurements, IEEE Trans. Antennas Propag., AP-26: 439–447, 1978.
42. C. A. Balanis, Antenna Theory, chapter 9, New York: Wiley, 1997.
43. P. Degauque and J. P. Thery, Electromagnetic subsurface radar using the transient radiated by a wire antenna, IEEE Trans. Geosci. Remote Sens., GE-24: 805–812, 1986.
44. C. A. Balanis, Antenna Theory, chapter 10, New York: Wiley, 1997.
45. C. A. Balanis, Antenna Theory, chapter 11, New York: Wiley, 1997.
46. J. L. Kerr, Short axial length broadband horns, IEEE Trans. Antennas Propag., AP-21: 710–714, 1973.
47. A. N. Tikhonov, Determination of the electrical characteristics of the deep strata of the earth's crust, Proc. Acad. Sci. USSR, 83(2): 295–297, 1950.
48. J. R. Wait, Theory of magneto-telluric fields, J. Res. Natl. Bur. Stand. D, Radio Propagation, 66D: 509–541, 1962.
49. K. Vozoff, The magnetotelluric method, in M. N. Nabighian (ed.), Electromagnetic Methods in Applied Geophysics: Application, Tulsa, OK: Society of Exploration Geophysicists, 1988.
50. L. Cagniard, Basic theory of the magneto-telluric method of geophysical prospecting, Geophysics, 18: 605–635, 1953.
51. K. Vozoff, The magnetotelluric method in the exploration of sedimentary basins, Geophysics, 37: 98–141, 1972.
52. T. Madden and P. Nelson, A defense of Cagniard's magnetotelluric method, in Geophysics Reprint Series No. 5: Magnetotelluric Methods, Tulsa, OK: Society of Exploration Geophysicists, 1985, pp. 89–102.
53. G. J. Palacky and G. F. West, Airborne electromagnetic methods, in M. N. Nabighian (ed.), Electromagnetic Methods in Applied Geophysics: Theory, Tulsa, OK: Society of Exploration Geophysicists, 1988.
54. J. C. Gerkens, Foundation of Exploration Geophysics, New York: Elsevier, 1989.
55. G. V. Keller and F. Frischknecht, Electrical Methods in Geophysical Prospecting, New York: Pergamon, 1966.
56. D. Boyd and B. C. Roberts, Model experiments and survey results from a wing-tip-mounted electromagnetic prospecting system, Geophys. Prospect., 9: 411–420, 1961.
57. N. R. Patterson, Experimental and field data for the dual frequency phase-shift method of airborne electromagnetic prospecting, Geophysics, 26: 601–617, 1961.
58. P. Kearey and M. Brooks, An Introduction to Geophysical Exploration, Boston, MA: Blackwell Scientific, 1984.


59. A. R. Barringer, The INPUT electrical pulse prospecting system, Min. Cong. J., 48: 49–52, 1962. 60. A. E. Beck, Physical Principles of Exploration Methods, New York: Macmillan, 1981.

S. Y. CHEN
W. C. CHEW
University of Illinois at Urbana-Champaign


Wiley Encyclopedia of Electrical and Electronics Engineering Geographic Information Systems Standard Article J. Raul Ramirez1 1The Ohio State University, Columbus, OH Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3604 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (97K)

Abstract. The sections in this article are: Hardware and its Use; Software and its Use; Using GIS; Quality and its Impact in GIS; The Future of GIS.


GEOGRAPHIC INFORMATION SYSTEMS

GEOGRAPHIC INFORMATION SYSTEMS A Geographic Information System (GIS) is a set of computerbased tools that collects, stores, retrieves, manipulates, displays, and analyzes geographic information. Some definitions of GIS include institutions and people besides the computerbased tools and the geographic data. These definitions refer more to a total GIS implementation than to the technology. Here, computer-based tools are hardware (equipment) and software (computer programs). Geographic information describes facts about the earth’s features, for example, the location and characteristics of rivers, lakes, buildings, and roads. Collection of geographic information refers to the process of gathering, in computer-compatible form, facts about features of interest. Facts usually collected are the location of features given by sets of coordinate values (such as latitude, longitude, and sometimes elevation), and attributes such as feature type (highway), name (Interstate 71), and unique characteristics (the northbound lane is closed). Storing of geographic information is the process of electronically saving the collected information in permanent computer memory (such as a computer hard disk). Information is saved in structured computer files. These files are sequences of only two characters, 0 and 1, called bits, organized into bytes (eight bits) and words (16–64 bits). These bits represent information stored in the binary system. Retrieving geographic information is the process of accessing the computer-compatible files, extracting sets of bits and translating them into information we can understand (for example, information given in our national language). Manipulation of geographic data is the process of modifying, copying, and removing from computer permanent memory selected sets of information bits or complete files. Display of geographic information is the process of generating and making visible a graphic (and sometimes textual) representation of the information. 
Analysis of geographic information is the process of studying the geographic information, computing new facts from it, and asking questions (and obtaining answers from the GIS) about features and their relationships. For example: what is the shortest route from my house to my place of work?
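A shortest-route query like the one above is typically answered by a graph search over a road network stored in the GIS. A minimal sketch of the idea (the street network, node names, and distances below are invented for illustration):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a road network stored as an adjacency dict."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Reconstruct the route by walking back from the goal to the start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical road segments with lengths in kilometers.
roads = {
    "house":   {"corner": 1.2, "highway": 2.5},
    "corner":  {"house": 1.2, "work": 3.0},
    "highway": {"house": 2.5, "work": 1.0},
    "work":    {"corner": 3.0, "highway": 1.0},
}
path, km = shortest_route(roads, "house", "work")
```

Here the route through the highway (3.5 km) beats the route through the corner (4.2 km); a real GIS runs the same kind of search over thousands of stored road segments.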

HARDWARE AND ITS USE

The main component is the computer (or computers) on which the GIS runs. Currently, GIS systems run on anything from desktop computers to mainframes (used stand-alone or in a network configuration). In general, GIS operations require handling large amounts of information (file sizes of fifty megabytes or larger are common), and in many cases GIS queries and graphic displays must be generated very quickly. Therefore, the important characteristics of computers used for GIS are processing speed, quantity of random access memory (RAM), size of permanent storage devices, resolution of display devices, and speed of communication protocols.

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.


Several peripheral hardware components may be part of the system: printers, plotters, scanners, digitizing tables, and other data collection devices. Printers and plotters are used to generate text reports and graphics (including maps). High-speed printers with graphics and color capabilities are commonplace today. The number and sophistication of the printers in a GIS organization depend on the amount of text reports to be generated.

Plotters allow the generation of oversized graphics. The most common graphic products of a GIS are maps. As defined by Thompson (1), "Maps are graphic representations of the physical features (natural, artificial, or both) of a part or the whole of the earth's surface. This representation is made by means of signs and symbols or photographic imagery, at an established scale, on a specified projection, and with the means of orientation indicated." As this definition indicates, there are two different types of maps: (1) line maps, composed of lines, the type of map we are most familiar with, usually in paper form, for example a road map; and (2) image maps, which are similar to a photograph. Plotters able to plot only line maps are usually less sophisticated (and less expensive) than those able to plot high-quality line and image maps. Plotting size and resolution are other important characteristics of plotters. With some plotters it is possible to plot maps larger than one meter. Higher plotting resolution allows plotting a greater amount of detail; plotting resolution is especially important for images. Usually, the larger the map size needed and the higher the plotting resolution, the more expensive the plotter.

Scanners are devices that sense and decompose a hardcopy image or scene into equal-sized units called pixels and store each pixel in computer-compatible form with corresponding attributes (usually a color value per pixel). The most common use of scanning technology is in fax machines: they take a hardcopy document, sense it, and generate a set of electric pulses. Sometimes the fax machine stores the pulses to be transferred later; other times they are transferred right away. In the case of scanners used in GIS, these pulses are stored as bits in a computer file. The image generated is called a raster image. A raster image is composed of pixels, generally square units. Pixel size (the scanner resolution) ranges from a few micrometers (for example, five) to a few hundred micrometers (for example, 100). The smaller the pixel size, the better the quality of the scanned image, but the larger the computer file and the higher the scanner cost. Scanners are used in GIS to convert hardcopy documents, especially paper maps, to computer-compatible form.

Some GIS cannot use raster images to answer geographic questions (queries), and those that can are usually limited in the types of queries they can perform (queries about individual locations but not about geographic features). Most queries need information in vector form. Vector information represents individual geographic features (or parts of features) as ordered lists of vertex coordinates. Figure 1 shows the differences between raster and vector. Digitizing tables are devices that collect vector information from hardcopy documents (especially maps). They consist of a flat surface, on which documents can be attached, and a cursor or puck with several buttons, used to locate and input coordinate values (and sometimes attributes) into the computer. The result of digitizing is a computer file with a list of coordinate values

Figure 1. The different structures of raster and vector information, feature representation, and data storage: (a) area covered, (b) feature, (c) data stored. Raster uses a finite number of pixels of fixed area and dimension, stored as a grid of values; vector uses an unlimited number of dimensionless, arealess geometric points, stored as ordered lists of coordinates (X1, Y1 . . . X5, Y5).


and attributes per feature. This method of digitizing is called "heads-down digitizing." Currently, there is a different technique to generate vector information. This method uses a raster image as a backdrop on the computer terminal. Usually, the image has been georeferenced (transformed into a coordinate system related in some way to the earth). The operator uses the computer mouse to collect the vertices of a geographic feature and to attach attributes. As in the previous case, the output is a computer file with a list of coordinate values and attributes for each feature. This method is called "heads-up digitizing."

SOFTWARE AND ITS USE

Software, as defined by the AGI dictionary (2), is the collection of computer programs, procedures, and rules for the execution of specific tasks on a computer system. A computer program is a logical set of instructions that tells a computer to perform a sequence of tasks. GIS software provides the functions to collect, store, retrieve, manipulate, query, analyze, and display geographic information. An important component of software today is the graphical user interface (GUI). A GUI is a set of graphic tools (icons, buttons, and dialogue boxes) that can be used to communicate with a computer program to input, store, retrieve, manipulate, display, and analyze information and to generate different types of output. Most GUI graphic tools are operated by pointing with a device such as a mouse to select a particular software application. Figure 2 shows a GUI.

Figure 2. A graphic user interface (GUI) for a GIS in a restaurant setting and the graphic answers to questions about table occupancy, service, and the shortest route to Table 18.

GIS software can be divided into five major components (besides the GUI): input, manipulation, database management system, query and analysis, and visualization. Input software allows the import of geographic information (location and attributes) into the appropriate computer-compatible format. Two different issues need to be considered: how to transform (convert) analog (paper-based) information into digital form, and how to store information in the appropriate format. Scanning, and heads-down and heads-up digitizing software with different levels of automation, transform paper-based information (especially graphic information) into computer-compatible form. Text information (attributes) can be imported by a combination of scanning and character-recognition software, by manual input using a keyboard, and/or by voice-recognition software. In general, each commercial GIS software package has a proprietary format used to store locations and attributes, and only information in that particular format can be used in that particular GIS. When information is converted from paper into digital form using the tools from that GIS, the result is in the appropriate format. When information is collected using other alternatives, a file-format translation needs to be made. Translators are computer programs that take information stored in a given format and generate a new file (with the same information) in a different format. In some cases, translation results in information loss.

Manipulation software allows changing the geographic information by adding, removing, modifying, or duplicating pieces or complete sets of information. Many tools in manipulation software are similar to those in word processors: create, open, and save a file; cut, copy, and paste; undo graphic and attribute changes. Many other manipulation tools support drafting operations, such as drawing parallel lines, squares, rectangles, circles, and ellipses; moving a graphic element; and changing color, line width, and line style. Other tools allow the logical connection of different geographic features. For example, geographic features that are physically different and unconnected can be grouped as part of the same layer, level, or overlay (usually, these words have the same meaning).
By doing this, they are considered part of a common theme (for example, all rivers in a GIS can be considered part of the same layer: hydrography). One can then manipulate all features in this layer with a single command; for example, one could change the color of all rivers in the hydrography layer from light to dark blue with a single command.

A database management system (DBMS) is a collection of software for organizing information in a database. This software performs three fundamental operations: storage, manipulation, and retrieval of information from the database. A database is a collection of information organized according to a conceptual structure describing the characteristics of the information and the relationships among the corresponding entities (2). Usually, a database contains at least two computer files or tables and a set of known relationships, which allows efficient access to specific entities. Entities in this context are geographic objects (such as a road, a house, or a tree). Multipurpose DBMSs are classified into four categories: inverted list, hierarchical, network, and relational. Healey (3) indicates that for GIS there are two common approaches to DBMS: the hybrid and the integrated. The hybrid approach is a combination of a commercial DBMS (usually relational) and direct-access operating-system files. Positional information (coordinate values) is stored in direct-access files, and attributes in the commercial DBMS. This approach increases access speed to positional information and takes advantage of DBMS functions, minimizing development costs. Guptill (4) indicates that in the integrated approach the Structured Query Language (SQL) used to ask questions about the database is replaced by an expanded SQL with spatial operators able to handle points, lines, polygons, and even more complex structures and graphic queries. This expanded SQL sits on top of the relational database and simplifies geographic information queries.

Query and analysis software provides new explicit information about the geographic environment. The distinction between query and analysis is somewhat unclear. Maguire and Dangermond (5) indicate that the difference is a matter of emphasis: "Query functions are concerned with inventory questions such as 'Where is . . .?' Analysis functions deal with questions such as 'What if . . .?'." In general, query and analysis use the location of geographic features, distances, directions, and attributes to generate results. Two characteristic operations of query and analysis are buffering and overlay. Buffering is the operation that finds and highlights an area of user-defined dimension (a buffer) around a geographic feature (or a portion of a geographic feature), and retrieves information inside the buffer or generates a new feature. Overlay is the operation that compares layers: layers are compared two at a time by location and/or attributes. Query and analysis are the capabilities that differentiate GIS from other geographic data applications such as computer-aided mapping, computer-aided drafting (CAD), photogrammetry, and mobile mapping.

Visualization in this context refers to the software for visual representation of geographic data and related facts, facilitating the understanding of geographic phenomena, their analysis, and their interrelations. The term visualization in GIS encompasses a larger meaning. As defined by Buttenfield and Mackaness (6), "visualization is the process of representing information synoptically for the purpose of recognizing, communicating, and interpreting pattern and structure.
Its domain encompasses the computational, cognitive, and mechanical aspects of generating, organizing, manipulating, and comprehending such representation. Representation may be rendered symbolically, graphically, or iconically and is most often differentiated from other forms of expression (textual, verbal, or formulaic) by virtue of its synoptic format and with qualities traditionally described by the term ‘Gestalt.’ ’’ It is the confluence of computation, cognition, and graphic design. Visualization is accomplished through maps, diagrams, and perspective views. A large amount of information is abstracted into graphic symbols. These symbols are endowed with visual variables (size, value, pattern, color, orientation, and shape) that emphasize differences and similarities among those facts represented. The joint representation of the facts shows explicit and implicit information. Explicit information can be accessed by other means such as tables and text. Implicit information requires, in some cases, performing operations with information such as computing the distance between two points on a road. In other cases, by looking at the graphic representation, we can access implicit information. For example, we can find an unexpected relationship between the relief and erosion that is not obvious from the explicit information. This is the power of visualization!
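Buffering, one of the characteristic query operations described earlier, reduces in its simplest case to a distance test: a feature falls inside the buffer if its distance to the target geometry is no greater than the buffer width. A minimal sketch for point features around a single road segment (all feature names and coordinates below are invented for illustration):

```python
import math

def dist_point_segment(p, a, b):
    """Distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    # Parameter t of the closest point on the segment, clamped to [0, 1].
    t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def buffer_query(points, segment, width):
    """Return the point features lying inside a buffer of the given width."""
    a, b = segment
    return [name for name, p in points.items() if dist_point_segment(p, a, b) <= width]

# Hypothetical houses near a road running from (0, 0) to (10, 0); buffer width 1.5.
houses = {"h1": (2.0, 1.0), "h2": (5.0, 3.0), "h3": (9.0, -1.2)}
inside = buffer_query(houses, ((0.0, 0.0), (10.0, 0.0)), 1.5)
```

A production GIS applies the same test to lines and polygons as well, and can generate the buffer outline itself as a new feature.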

USING GIS

GIS is widely used. Users include national, state, and local agencies; private businesses (from delivery companies to restaurants, from engineering to law firms); educational institutions (from universities to school districts, from administrators to researchers); and private citizens. As indicated earlier, the use of GIS requires software (which can be acquired from a commercial vendor), hardware (which allows running the GIS software), and data (with the information of interest). As indicated by Worboys (7), "data are only useful when they are part of a structure of interrelationships that form the context of the data. Such a context is provided by the data model." Depending on the problem of interest, the data model may be simple or complex.

In a restaurant, information about seating arrangement, seating time, drinks, and food is well defined and easily expressed by a simple data model. Fundamentally, you have information for each table about its location, the number of people it seats, and its status (empty or occupied). Once a table is occupied, additional information is recorded: how many people occupy the table; when it was occupied; what drinks were ordered; what food was ordered; and the status of the order (drinks are being served, food is being prepared, etc.). Questions such as What table is empty? How many people can be seated at a table? What table seats seven people? Has the food ordered by table 11 been served? How long before table 11 is free again? are easily answered from the above information with a simple data model (see Figure 2).

Of course, a more sophisticated data model is required if more complex questions are asked of the system, for example: What is the most efficient route to reach a table based on the current table occupancy? If alcoholic drinks are ordered at a table, how much longer will it be occupied than if nonalcoholic drinks are ordered? How long will it be before food is served to table 11 if the same dish has been ordered nine times in the last few minutes? Many problems require a complex data model.
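The simple restaurant data model can be sketched as a table of records, after which the queries above become straightforward lookups (the table numbers, capacities, and statuses below are invented for illustration):

```python
# One record per restaurant table: capacity, status, and order state.
tables = {
    11: {"seats": 4, "status": "occupied", "party": 4, "food_served": False},
    12: {"seats": 7, "status": "empty"},
    14: {"seats": 2, "status": "occupied", "party": 2, "food_served": True},
}

def empty_tables(tables):
    """What table is empty?"""
    return [n for n, t in tables.items() if t["status"] == "empty"]

def tables_seating(tables, people):
    """What table seats this many people?"""
    return [n for n, t in tables.items() if t["seats"] >= people]

empty = empty_tables(tables)
seats7 = tables_seating(tables, 7)
food11 = tables[11]["food_served"]   # Has the food ordered by table 11 been served?
```

The more complex questions (efficient routes, occupancy forecasts) need exactly what this flat structure lacks: spatial relationships between tables and historical records to estimate from, which is what a richer data model adds.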
A nonexhaustive list of GIS applications that require complex models is presented next. This list gives an overview of the many fields and applications of GIS:

Siting of a new business. Find the best location in this region for a new factory, based on natural and human resources.

Network analysis. Find the shortest bus routes to pick up students for a given school.

Utility services. Find the most cost-efficient way to extend the electric service to a new neighborhood.

Land information system. Generate an inventory of the natural resources of a region and the property-tax revenue, using land parcels as the basic unit.

Intelligent car navigation. What are the recommended speeds, the geographic coordinates of the path to be followed, the street classification, and the route restrictions to go from location A to location B?

Tourist information system. What is the difference in driving time between the scenic route and the business route from location A to location B? And where along the scenic route are the major places of interest located?

Political campaigns. Set the most time-efficient schedule to visit the largest possible number of cities where undecided voters could make the difference during the last week of a political campaign.


Marketing branch location analysis. Find the location and major services to be offered by a new bank branch, based on population density and consumer preferences.

Terrain analysis. Find the most promising site in a region for oil exploration, based on topographic, geological, seismic, and geomorphological information.

QUALITY AND ITS IMPACT IN GIS

The unique advantage of GIS is the capability to analyze and answer geographic questions. If no geographic data are available for a region, it is of course not possible to use GIS there. On the other hand, the validity of the analysis and the quality of the answers in GIS are closely related to the quality of the geographic data used: if poor-quality or incomplete data are used, query and analysis provide poor or incomplete results. Therefore, it is fundamental to know the quality of the information in a GIS. Of course, the quality of the analysis and query capabilities of a GIS is also very important; perfect geographic data used with poor-quality analysis and query tools generate poor results.

Quality is defined by the U.S. National Committee for Digital Cartographic Data Standards (NCDCDS) (8) as "fitness for use." This definition states that quality is a relative term: data may be fit for use in one application but unfit for another. Therefore, we need a very good understanding of the scope of our application to judge the quality of the data to be used. The same committee identifies five quality components in the context of GIS in the Spatial Data Transfer Standard (SDTS): lineage, positional accuracy, attribute accuracy, logical consistency, and completeness. SDTS is the U.S.
Federal Information Processing Standard 173 and states that "Lineage is information about the sources and processing history of the data." Positional accuracy is "the correctness of the spatial (geographic) location of features." Attribute accuracy is "the correctness of semantic (nonpositional) information ascribed to spatial (geographic) features." Logical consistency is "the validity of relationships (especially topological ones) encoded in the data," and completeness is "the mapping and selection rules and exhaustiveness of feature representation in the data." The International Cartographic Association (ICA) has added two more quality components: semantic accuracy and temporal information. As stated by Guptill and Morrison (9), "semantic accuracy describes the number of features, relationships, or attributes that have been correctly encoded in accordance with a set of feature representation rules." Guptill and Morrison (10) also state that "temporal information describes the date of observation, type of update (creation, modification, deletion, unchanged), and validity periods for spatial (geographic) data records."

Most of our understanding of the quality of geographic information is limited to positional accuracy, specifically point positional accuracy. Schmidley (11) has conducted research on line positional accuracy. Research on attribute accuracy has been done mostly in the remote sensing area, and some in GIS [see Chapter 4 of (9)]. Very little research has been done on the other quality components [see (9)]. To make the problem worse, because of limited digital geographic coverage worldwide, GIS users often combine different sets of geographic information, each set of a different quality level. Most commercial GIS products have no tools to judge the quality of the data used; therefore, it is up to the GIS user to judge and keep track of information quality. Another limitation of GIS technology today is the fact that GIS systems, including analysis and query tools, are sold as "black boxes": the user provides the geographic data, and the GIS provides results. In many cases the methods, algorithms, and implementation techniques are considered proprietary, and there is no way for the user to judge their quality. More and more users are starting to recognize the importance of quality GIS data. As a result, many experts are conducting research on the different aspects of GIS quality.

THE FUTURE OF GIS

GIS is in its formative years. All types of users have accepted the technology, and it is a worldwide multibillion-dollar industry. This acceptance will create a great demand for digital geographic information in the near future. Commercial satellites and multisensor platforms generating high-resolution images, mobile mapping technology, and efficient analog-to-digital data conversion systems are some of the promising approaches to the generation of geographic data. GIS capabilities are also improving as a result of the large amount of ongoing research, which includes the areas of visualization, user interfaces, spatial relation languages, spatial analysis methods, geographic data quality, three-dimensional and spatio-temporal information systems, and open software design. These efforts will result in better, more reliable, faster, and more powerful GIS.

BIBLIOGRAPHY

1. M. M. Thompson, Maps for America, 2nd ed., Reston, VA: U.S. Geological Survey, p. 253, 1981.
2. Association for Geographic Information, AGI GIS Dictionary, 2nd ed., http://www.geo.ed.ac.uk/agidexe/term638, 1993.
3. R. G. Healey, Database management systems, in D. J. Maguire, M. F. Goodchild, and D. W. Rhind (eds.), Geographical Information Systems, Harlow, UK: Longman Scientific Group, 1991.
4. S. C. Guptill, Desirable characteristics of a spatial database management system, Proc. AUTO-CARTO 8, ASPRS, Falls Church, VA, 1987.
5. D. J. Maguire and J. Dangermond, The functionality of GIS, in D. J. Maguire, M. F. Goodchild, and D. W. Rhind (eds.), Geographical Information Systems, Harlow, UK: Longman Scientific Group, 1991.
6. B. P. Buttenfield and W. A. Mackaness, Visualization, in D. J. Maguire, M. F. Goodchild, and D. W. Rhind (eds.), Geographical Information Systems, Harlow, UK: Longman Scientific Group, 1991.
7. M. F. Worboys, GIS: A Computing Perspective, London: Taylor & Francis, 1995, p. 2.
8. Digital Cartographic Data Standard Task Force, The proposed standard for digital cartographic data, The American Cartographer, 15: 9-140, 1988.
9. S. C. Guptill and J. L. Morrison, Elements of Spatial Data Quality, Oxford, UK: Elsevier Science, 1995, p. 10. (Reprinted with kind permission from Elsevier Science Ltd, The Boulevard, Langford Lane, Kidlington OX5 1GB, UK.)
10. Ref. 9, p. 11.
11. R. W. Schmidley, Framework for the Control of Quality in Automated Mapping, unpublished dissertation, Ohio State University, Columbus, OH, 1996.

J. RAUL RAMIREZ The Ohio State University

GEOMETRIC CORRECTIONS FOR REMOTE SENSING. See REMOTE SENSING GEOMETRIC CORRECTIONS.



Wiley Encyclopedia of Electrical and Electronics Engineering
Geophysical Signal and Image Processing, Standard Article
Marwan A. Simaan, University of Pittsburgh, Pittsburgh, PA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3603. Article online posting date: December 27, 1999.


The sections in this article are: Seismic Data Generation and Acquisition; Seismic Wave Propagation; Determination of Seismic Propagation Velocity; Stacking and Velocity Analysis; Seismic Deconvolution; Conclusion.


GEOPHYSICAL SIGNAL AND IMAGE PROCESSING

The science of geophysics is concerned with the application of principles from physics to the study of the earth. Exploration geophysics involves the investigation of properties of the subsurface layers of the earth by taking measurements at or near the earth's surface. Processing and analysis of these measurements may reveal how the physical properties of the earth's interior vary vertically and laterally. Information of this type is extremely important in the search for hydrocarbons, minerals, and water in the earth. This article summarizes some of the basic results that deal with the application of signal and image processing techniques to the field of exploration geophysics, specifically as it relates to the search for hydrocarbons.

Hydrocarbons are typically found in association with sedimentary sequences in major sedimentary basins in the earth. Thus scientific methods for hydrocarbon exploration depend heavily on our ability to image the earth's subsurface geological structures down to about 12,000 m. Potential hydrocarbon deposits, in the form of petroleum or natural gas, are often associated with certain geological formations such as faults, anticlines, salt domes, stratigraphic traps, and others (Fig. 1). Such formations may be detected on a seismic image, also called a seismic section, only if sophisticated data acquisition and processing methods are used to generate this image.

One of the most popular and successful methods for imaging the earth's subsurface is the seismic exploration method. This method involves generating a disturbance of the surface of the earth by means of the detonation of an explosive charge placed either on the ground, in the case of land exploration, or in water, in the case of offshore marine exploration (Fig. 2).
The resulting ground motion propagates downward inside the earth, is reflected at the various interfaces of the geological strata, and finally is recorded as a time series, or trace, by sensors placed at some distance from the source of the disturbance. An example of many such traces placed side by side is shown in Fig. 3. Geophysical signal processing is a field that deals primarily with computer methods for analyzing and filtering a large number of such time series for the purpose of extracting the information necessary to develop an image of the subsurface layers (or geology) of the earth (1-12). To give the reader an idea of the volume of geophysical data available for processing, it is estimated that in the 1990s, on average, approximately 2 million traces were recorded every day for the purpose of exploring for petroleum and natural gas. Thus careful processing of this enormous amount of data must take advantage of the state of the art in computer technology and make full use of the most advanced techniques in digital signal and image processing.
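A recorded trace of this kind is commonly modeled as the convolution of the source wavelet with the earth's reflectivity series (plus noise, omitted here); this convolutional model underlies much of the processing discussed later in the article. A minimal sketch with an invented three-sample wavelet and two reflectors:

```python
def convolve(wavelet, reflectivity):
    """Discrete convolution: each reflector contributes a scaled, delayed wavelet."""
    out = [0.0] * (len(wavelet) + len(reflectivity) - 1)
    for i, r in enumerate(reflectivity):
        for j, w in enumerate(wavelet):
            out[i + j] += r * w
    return out

wavelet = [1.0, -0.5, 0.25]               # short-duration source wavelet (illustrative)
reflectivity = [0, 0, 0.8, 0, 0, -0.4]    # two interfaces, at samples 2 and 5
trace = convolve(wavelet, reflectivity)
```

Each nonzero reflectivity sample stamps a copy of the wavelet into the trace at the corresponding two-way travel time; seismic deconvolution tries to undo exactly this operation to recover the reflectivity.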

SEISMIC DATA GENERATION AND ACQUISITION

The first step in any application of a geophysical data-processing method is to understand the process by which

the signals to be processed have been generated and recorded. Seismic data for imaging the subsurface layers of the earth, down to possibly 12,000 m, are typically generated by a source of seismic energy and recorded by an array of sensors placed on the surface of the earth at some distance from the source. There are several types of seismic sources. The most common are dynamite explosives or vibroseis for land data, and air guns for offshore marine data. Land dynamite explosives and marine air guns, which inject a bubble of highly compressed air into the water, are short-duration (about 0.5 s or less) sources, usually referred to as wavelets (see Fig. 4). Vibroseis land sources, on the other hand, are long-duration (typically 8 s or more) low-amplitude sinusoidal waveforms whose frequency varies continuously from about 10 Hz to 80 Hz. The sweep signal is matched-filtered to give the equivalent of a short-duration correlation function for the outgoing signal. The waveform created by the seismic source propagates downward into the earth, is reflected, and returns to the surface carrying information about the subsurface structure. Each sensor in the receiving array is a device that transforms the reflected seismic energy into an electrical signal. In the case of land data, recording is done by a geophone, which typically measures particle velocity; in the case of marine data, recording is done by a hydrophone, which typically measures pressure. The recording of every sensor (geophone or hydrophone) is a time series called a seismic trace. Each trace may consist of 1,000 to 12,000 samples of data, representing 4 s to 6 s of earth motion sampled at periods that can vary anywhere between 0.5 ms and 8 ms. Typical spacing between the sensors is 10 m to 200 m. The recorded traces can then be sorted so that all traces corresponding to some criterion, such as common shot point or common midpoint, are displayed side by side to form what is known as a shot gather.
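The vibroseis compression step mentioned above can be sketched as follows: correlating the long linear sweep with itself (matched filtering) collapses it into a short pulse that peaks at zero lag and decays away from it. The sweep duration and sampling interval below are scaled down for illustration:

```python
import math

def linear_sweep(f0, f1, duration, dt):
    """Linear up-sweep: instantaneous frequency rises from f0 to f1 over the duration."""
    n = int(duration / dt)
    rate = (f1 - f0) / duration
    # Phase of a linear chirp: 2*pi*(f0*t + 0.5*rate*t^2).
    return [math.sin(2 * math.pi * (f0 + 0.5 * rate * (i * dt)) * (i * dt)) for i in range(n)]

def xcorr(x, y, lag):
    """Cross-correlation of x and y at a single lag."""
    return sum(x[i] * y[i + lag] for i in range(len(x) - abs(lag)) if 0 <= i + lag < len(y))

# A 1 s, 10-80 Hz sweep (a real vibroseis sweep is 8 s or more).
sweep = linear_sweep(f0=10.0, f1=80.0, duration=1.0, dt=0.001)
peak = xcorr(sweep, sweep, 0)    # autocorrelation peaks at zero lag
side = xcorr(sweep, sweep, 50)   # and is much smaller 50 ms away
```

In practice the recorded data, not the sweep itself, are correlated with the sweep, which replaces the long source waveform in every trace with this short correlation pulse.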
Several source-receiver configurations for recording/sorting seismic traces are illustrated in Fig. 5. A sample common-midpoint (CMP) shot gather is shown in Fig. 3. A seismic survey typically consists of a large number of shot gathers collected by moving the combination of source and array of receivers along specified survey lines and repeating the data collection process (see Fig. 2). The ultimate goal of geophysical signal processing is to extract information on the physical properties and construct an image of the subsurface structure of the earth by processing these recorded data. Until about the mid-1980s, emphasis was mostly on 2-D imaging along specified survey lines. However, the advent of very powerful data acquisition, storage, and processing capabilities has made it possible to construct 3-D images of subsurface structures (12). Most geophysical signal processing work in the mid-1990s and beyond has been focused on 3-D imaging. Surveys that yield 3-D data, however, are more complicated and costly than those that yield 2-D data. For example, 3-D surveys normally utilize methods in which the seismic sensors are distributed along several parallel lines and the seismic sources along other lines, hence building a dense array of seismic data. A typical 3-D survey could involve collecting anywhere between several hundred thousand and a few million traces (12).
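Sorting traces into common-midpoint gathers amounts to grouping trace records by the midpoint of each source-receiver pair. A sketch with invented 1-D positions (in meters):

```python
from collections import defaultdict

def cmp_gathers(traces):
    """Group trace records by source-receiver midpoint."""
    gathers = defaultdict(list)
    for t in traces:
        midpoint = (t["source_x"] + t["receiver_x"]) / 2.0
        gathers[midpoint].append(t["id"])
    return dict(gathers)

# Two shots recorded into a line of receivers (hypothetical geometry).
traces = [
    {"id": "s0r1", "source_x": 0.0,   "receiver_x": 200.0},
    {"id": "s0r2", "source_x": 0.0,   "receiver_x": 400.0},
    {"id": "s1r1", "source_x": 200.0, "receiver_x": 200.0},
    {"id": "s1r2", "source_x": 200.0, "receiver_x": 400.0},
]
gathers = cmp_gathers(traces)
```

Note that traces from different shots (s0r2 and s1r1) share the midpoint at 200 m; collecting such traces together, at a range of offsets but a common subsurface point, is precisely what a CMP gather does.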

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 2007 John Wiley & Sons, Inc.


Geophysical Signal and Image Processing

Figure 1. Some geological formations associated with hydrocarbon deposits (2, 5).

Figure 2. A typical land seismic data acquisition experiment (9).

SEISMIC WAVE PROPAGATION

A seismic wave propagates outward from the source at a velocity that is typically determined by the physical properties of the propagation medium (the surrounding rocks). Seismic rays are thin pencils of seismic energy traveling along raypaths that are perpendicular to the wavefronts. At the interface between two rock layers, there is a change in the physical properties of the media, which results in a change in the propagation velocity. When encountering such an interface, the energy of an incident seismic wave is partitioned into a reflected wave and a transmitted wave. The relative amplitudes of these waves are determined by the velocities and densities of the two layers. Thus, in order to understand the nature of the recorded seismic data, it is essential to understand the propagation mechanism of a seismic signal through a multilayered medium. Note that, as mentioned earlier, in the case of land seismic data a geophone records vertical displacement velocity, while in the case of marine data a hydrophone records water pressure. Consider a simple model of the earth, which consists of a horizontally layered medium (8) where the seismic propagation velocity α(z) and the medium density ρ(z) vary only as a function of depth z. In the case of land data, it is known that the vertical stress component σ(z, t) is related to the vertical displacement velocity v(z, t) by the standard equation

of motion and by Hooke's law. In the Fourier transform domain, these expressions are written in terms of S(z, ω) and V(z, ω), the Fourier transforms of σ(z, t) and v(z, t), respectively. The propagating waveform can be decomposed into its upcoming and downgoing components U and D using a linear transformation, which can be rewritten in a form involving the acoustic impedance λ(z) = ρ(z)α(z). Combining Eq. (5) with Eqs. (4), it can be shown that U and D must satisfy a pair of coupled equations in which γ(z), the reflectivity function, is related to the acoustic impedance λ(z).

The above equations can be used to derive synthetic seismograms at any depth. As such, they are necessary prerequisites for solving the inverse problem, where the requirement is to determine the reflectivity function from the available seismogram. If one applies the boundary conditions at the interface between the kth and (k + 1)st layers, as illustrated in Fig. 6, the reflection coefficient ck and transmission coefficient tk at the interface zk can be expressed in terms of λk, the acoustic impedance above the interface, and λk+1, the acoustic impedance below the interface. The seismic trace s(t) recorded at the surface is often modeled as a convolution of the source waveform w(t) and a reflectivity function r(t),

s(t) = r(t) * w(t)    (9)

The reflectivity function r(t) is related to λ(z) by observing that the two-way travel time t is related to depth z through the seismic propagation velocity.

Figure 3. An example of a common depth point (CDP) gather.

Figure 4. An example of a marine, single air gun, wavelet.

Figure 5. Various source–receiver configurations.

Figure 6. Reflection and transmission coefficients at interface zk.

DETERMINATION OF SEISMIC PROPAGATION VELOCITY

The recorded seismic trace can be processed to estimate the travel times of the reflected ray paths from the source to the receiver. If one knows the velocity distribution as a function of depth between the surface and the reflecting planes, the travel time information can be transformed into information on the depths of the reflecting boundary planes. To illustrate how this is done, consider the simple model of a single horizontal reflector at a depth z beneath a homogeneous layer with constant velocity V, as shown in Fig. 7. This simple case will probably never occur in practice, but understanding it makes it easier to understand the more complicated cases that are encountered in real situations. Using simple triangle geometry, the travel time from the source, to the reflector, to a receiver at a distance x from the source can be easily computed as

tx = √(x² + 4z²)/V    (10)

or, equivalently,

tx² = t0² + x²/V²    (11)

where t0 = 2z/V is the two-way travel time obtained from Eq. (10) by setting x = 0. This is called the zero-offset travel time and corresponds to a hypothetical source–receiver combination placed exactly at the common-midpoint (CMP). The above expression shows that the relationship between tx and x is hyperbolic, as illustrated in Fig. 8. The difference in travel time between a ray path arriving at an offset distance x and one arriving at zero offset is called normal move-out (NMO). For cases where the offset is much smaller than the depth (i.e., x ≪ z), which is normally the case in practice, Eq. (11) can be approximated as follows:

tx ≈ t0 + x²/(2V²t0)    (12)

and the NMO Δt = tx − t0 can be expressed as:

Δt ≈ x²/(2V²t0)    (13)

The above expression can be rearranged as follows:

V ≈ x/√(2t0Δt)    (14)

which means that the velocity of the medium above the reflector can be computed from knowledge of the NMO Δt and the zero-offset travel time t0. In practice, however, this calculation is made using a large number of reflected ray paths to obtain a statistical average of the velocity. As mentioned earlier, once t0 is known, the depth of the reflector can be computed as z = Vt0/2. The above analysis may be easily extended to the case of a multilayer medium, as illustrated in Fig. 9. In this case, it can be shown that the two-way travel time of the ray path reflected from the nth interface at a depth z is given by

tx,n² ≈ t0,n² + x²/Vrms,n²    (15)

where t0,n is the zero-offset two-way travel time down to the nth layer, and Vrms,n is the root-mean-square velocity of the section of the earth down to the nth layer. The expression for Vrms,n is

Vrms,n² = (Σi=1..n Vi² τi) / (Σi=1..n τi)    (16)

where Vi is the interval velocity of the ith layer and τi is the one-way travel time of the reflected ray through the ith layer. As in the single-reflector case, the NMO for the nth reflector can be approximated as

Δtn ≈ x²/(2Vrms,n² t0,n)    (17)

and this expression can be used to compute the rms velocity of the layers above the reflector. Once the rms velocities down to the different reflectors have been determined, the interval velocity Vn of the nth layer can be computed using the formula (known as Dix' formula)

Vn = [(Vrms,n² tn − Vrms,n−1² tn−1) / (tn − tn−1)]^(1/2)    (18)

where Vrms,n and Vrms,n−1 are the rms velocities for layers n and n − 1, respectively, and tn and tn−1 are the corresponding two-way zero-offset travel times (5).

Figure 7. Travel time versus offset.

Figure 8. Travel time versus offset.

Figure 9. Ray path from source to receiver in a multilayered medium.

STACKING AND VELOCITY ANALYSIS

As mentioned earlier, the most common configuration for collecting and arranging seismic data is the common-midpoint (CMP) reflection profiling shown in Fig. 5. A typical CMP gather is shown in Fig. 3. It is important to point out that a CMP gather represents the best possible data-sorting configuration for using Eq. (15) to estimate seismic velocities from the effects of NMO. To illustrate how this is done, assume that one wishes to estimate the velocity at a given zero-offset time t0,j (where j refers to a sample position on the zero-offset trace). For each value of rms velocity Vrms,j that may be guessed, there is a hyperbola defined by Eq. (15). The sum (stack) of the trace samples falling on this hyperbola can therefore be computed, and a measure of coherent energy (the square of the sum) can be determined, as illustrated in Fig. 10. The hyperbola that produces the maximum coherent energy represents the best fit to the data, and the corresponding velocity represents the best estimate of the rms velocity at time t0,j. This velocity, denoted by Vs,j, is called the stacking velocity at time t0,j. If this process is repeated for all possible sample locations j, a three-dimensional plot of coherent energy as a function of velocity and time on the zero-offset trace can be produced. An example of such a plot is shown in Fig. 11. The peaks on this plot, which is often referred to as a "velocity spectrum," are used to determine a stacking velocity profile versus two-way travel time. This velocity profile can be used to perform NMO corrections for all times t0,j. Interval velocities can then be calculated from the stacking velocities by means of Dix' formula [Eq. (18)]. Once accurate velocity information is available, it becomes possible to correct for the effects of NMO. This is achieved by shifting the samples on each trace (i.e., flattening the hyperbola that corresponds to the stacking velocity) to obtain an estimate of a corresponding zero-offset trace at the CMP of the gather.
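The velocity-analysis scan and the Dix inversion can be sketched as follows; the synthetic gather, the trial-velocity grid, and the layer values are illustrative assumptions, not survey data:

```python
import numpy as np

def velocity_spectrum(gather, offsets, dt, t0, trial_velocities):
    """Coherent (stack) energy at zero-offset time t0 for each trial rms
    velocity: the square of the sum of the samples falling on the
    hyperbola tx**2 = t0**2 + (x/v)**2 of Eq. (15)."""
    energies = []
    for v in trial_velocities:
        total = 0.0
        for i, x in enumerate(offsets):
            k = int(round(np.sqrt(t0**2 + (x / v) ** 2) / dt))
            if k < gather.shape[1]:
                total += gather[i, k]
        energies.append(total**2)
    return np.array(energies)

def dix_interval_velocities(v_rms, t0):
    """Dix' formula [Eq. (18)]: interval velocities from rms velocities
    and the corresponding two-way zero-offset times."""
    v2t = v_rms**2 * t0
    return np.sqrt(np.diff(v2t, prepend=0.0) / np.diff(t0, prepend=0.0))

# Synthetic single-reflector CMP gather: V = 2000 m/s, t0 = 0.8 s,
# unit spikes placed exactly on the travel-time hyperbola.
dt, nt = 0.004, 500
offsets = np.arange(100.0, 1100.0, 100.0)
gather = np.zeros((len(offsets), nt))
for i, x in enumerate(offsets):
    gather[i, int(round(np.sqrt(0.8**2 + (x / 2000.0) ** 2) / dt))] = 1.0

trials = np.arange(1500.0, 2600.0, 100.0)
v_stack = trials[np.argmax(velocity_spectrum(gather, offsets, dt, 0.8, trials))]
# v_stack == 2000.0, and the reflector depth is v_stack * 0.8 / 2 = 800 m.

# Dix inversion: rms velocities built from known interval velocities
# map back to those interval velocities exactly.
v_int = np.array([2000.0, 2500.0, 3000.0])
tau = np.array([0.4, 0.3, 0.3])              # one-way interval times
t0_n = 2.0 * np.cumsum(tau)                  # two-way zero-offset times
v_rms = np.sqrt(np.cumsum(v_int**2 * tau) / np.cumsum(tau))
recovered = dix_interval_velocities(v_rms, t0_n)
```

In practice the scan is repeated for every zero-offset sample, and the coherence measure is usually a semblance rather than the bare squared sum, but the search logic is the same.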
If done correctly, all reflections coming from the same horizontal reflector will line up at the same zero-offset time on the corrected traces, as illustrated in Fig. 12. Other coherent events, such as multiples, which have different rms velocities, and random noise will not be aligned. These traces can therefore be summed algebraically to produce one trace, corresponding to the CMP, in which the aligned reflected events have been reinforced and the other effects reduced. This process is called stacking, and the output is called a stacked trace. When a large number of stacked traces corresponding to successive common-midpoints are placed side-by-side, the resulting image is called a stacked seismic section. An example of a stacked seismic section is shown in Fig. 13. A stacked section represents an image showing the geologic formations that would be exposed if the earth were to be sliced along the line of the survey that produced the section.

Figure 10. Computation of stacking velocity from a CDP gather.

Figure 11. An example of a velocity spectrum plot.

Figure 12. NMO-corrected traces of the data in Fig. 10.

SEISMIC DECONVOLUTION

A very important step in geophysical signal processing that is often (but not always) performed prior to stacking is deconvolution. Deconvolution is a process by which the effect of the source waveform is compressed so as to improve temporal resolution. To understand the concept of deconvolution, go back to the basic model described earlier in Eq. (9), which represents a seismic trace s(t) as a convolution of a source waveform w(t) and a reflectivity sequence r(t); that is, s(t) = r(t) * w(t). Note that, for the sake of brevity, the effects of random noise, which are almost always present, have not been included in this model. The reflectivity sequence r(t) is also called the earth impulse response. It represents what would be recorded if the source waveform were purely an impulse function δ(t) (a spike). Recall that the reflectivity sequence r(t) contains information about the subsurface characteristics of the earth. The source waveform w(t) is therefore a blurring (or smearing) function that makes it difficult to recognize the reflectivity sequence by directly observing the trace s(t). If it were possible to generate a source waveform that corresponds to an impulse function δ(t), then, except for the effects of random noise, each trace would indeed be a recording of the reflectivity sequence. Generating a seismic source that is a close approximation of the impulse function (i.e., one in which most of the energy is concentrated over a very short interval of time) is a dilemma that has, for years, received considerable attention in the geophysical industry and related literature. Estimating the reflectivity sequence r(t) from s(t) = r(t) * w(t) has probably been one of the most studied problems in geophysical signal processing, and much research effort has been devoted to the development of methods for carrying out this operation. Among the most popular such methods are optimum Wiener filtering, predictive deconvolution, spiking deconvolution, and homomorphic deconvolution, among numerous others (15). For the sake of conciseness, the Wiener filtering method will be discussed in some detail, while the others will be briefly summarized.

Figure 13. An example of a stacked seismic section. Note the folded and thrust-faulted structure (9).
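The convolutional model of Eq. (9) is easy to simulate. In the sketch below, the wavelet shape and reflector values are illustrative choices, not a measured source signature:

```python
import numpy as np

# Forward model of Eq. (9): the trace is the convolution of a source
# wavelet w(t) with a sparse reflectivity sequence r(t).
dt = 0.004                                  # 4 ms sample period
r = np.zeros(250)                           # 1 s reflectivity sequence
r[[50, 120, 180]] = [0.8, -0.5, 0.3]        # three reflectors

t = np.arange(0.0, 0.06, dt)                # 60 ms wavelet support
w = np.exp(-40.0 * t) * np.sin(2 * np.pi * 30.0 * t)   # decaying 30 Hz pulse

s = np.convolve(r, w)                       # the recorded trace
```

Each reflector contributes a scaled, delayed copy of the wavelet; the overlap of these copies is exactly the "blurring" that deconvolution tries to undo.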

Wiener Filtering Method

Assume that one has a signal x(t) and that one wishes to apply a filter f(t) to this signal in order to make it resemble a desired signal d(t). The Wiener filtering method, illustrated in Fig. 14, involves designing the filter f(t) so that the least-squares error between the actual output y(t) = x(t) * f(t) and the desired output d(t) is minimized. For simplicity, the steps for deriving the filter f(t) (for the deterministic case) will be carried out using discrete rather than continuous signals and with matrix notation. Assume that the input sequence has n samples x0, x1, ..., xn−1, that the unknown filter has m samples f0, f1, ..., fm−1, and let x = [x0, x1, ..., xn−1]ᵀ and f = [f0, f1, ..., fm−1]ᵀ be the n- and m-dimensional vectors of these samples, respectively. The actual output vector, which is the convolution of the input sequence and the filter coefficients, can be expressed in matrix form as

y = Xf    (21)

where X is an (n + m − 1) × m lower triangular (convolution) matrix whose entries are derived from the input sequence, and y is the (n + m − 1)-dimensional vector of actual output samples. The error vector can now be defined as

e = d − Xf    (22)

where d is the vector of desired output samples. The optimum filter is derived by minimizing the squared norm of the error, that is,

er = ‖d − Xf‖²    (23)

It can be easily shown that the optimum vector f* that minimizes er is the solution of the linear matrix equation

(XᵀX)f* = Xᵀd    (24)

Upon computing the entries of the m × m matrix XᵀX, the above equation can be written as the Toeplitz system

Σj=0..m−1 φ|i−j| f*j = gi,    i = 0, 1, ..., m − 1    (25)

where φi are the autocorrelation lags of the input signal and gi are the crosscorrelations between the input signal and the desired output; that is,

φi = Σk xk xk+i,    gi = Σk xk dk+i    (26)

It is important to mention that the autocorrelation matrix is Toeplitz in nature, and hence the optimum filter coefficients can be calculated using the Levinson recursion algorithm (3). Also, note that in the above analysis, the only requirement for the derivation of the filter coefficients is a priori knowledge of the autocorrelation coefficients of the input signal and the crosscorrelation coefficients of the input signal with the desired output signal. Clearly, the filter length m must be specified a priori and cannot be changed during or after the computation of the filter coefficients without repeating the entire computation.
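A minimal numerical sketch of this derivation follows; the function name and test signals are invented for illustration, and a dense solve stands in for the Levinson recursion, which would exploit the Toeplitz structure:

```python
import numpy as np

def wiener_filter(x, d, m):
    """Length-m least-squares (Wiener) shaping filter f such that x * f
    approximates d, solved from the Toeplitz normal equations
    (autocorrelation lags phi on the left, crosscorrelation lags g on
    the right). Requires len(d) == len(x) + m - 1."""
    n = len(x)
    phi = np.array([np.dot(x[: n - i], x[i:]) for i in range(m)])
    g = np.array([np.dot(x, d[i : i + n]) for i in range(m)])
    R = np.array([[phi[abs(i - j)] for j in range(m)] for i in range(m)])
    return np.linalg.solve(R, g)

# Shaping a short decaying pulse toward a spike at lag 0:
x = np.array([1.0, 0.5, 0.25])
m = 4
d = np.zeros(len(x) + m - 1)
d[0] = 1.0
f = wiener_filter(x, d, m)
y = np.convolve(x, f)            # close to the desired spike

# Prediction special case (d_k = x_{k+1}): for a first-order
# autoregressive series x_k = 0.9 x_{k-1} + e_k, the one-step
# predictor should approximately recover the coefficient 0.9.
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
xar = np.empty_like(e)
xar[0] = e[0]
for k in range(1, len(e)):
    xar[k] = 0.9 * xar[k - 1] + e[k]
fpred = wiener_filter(xar, np.append(xar[1:], 0.0), 1)
```

Because the coefficient matrix is Toeplitz, a production implementation would use a Levinson-type solver (e.g., scipy.linalg.solve_toeplitz) in O(m²) instead of the O(m³) dense solve shown here.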

Prediction Error Filtering

A special case of the above derivation occurs when the desired output signal is an advanced version of the input signal; that is, dk = xk+p. In such a case, the filter is called a p-step-ahead predictor: at sample k, it predicts xk+p from past values of the input. The derivation of such a filter is essentially the same as described above, except that xk+p is used in place of dk in Eq. (26); that is,

gi = Σk xk xk+p+i = φp+i

The desired output in this case is the predictable part of the input series. This, for example, could include events such as multiples. The error signal contains the unpredictable part, which is the uncorrelated reflectivity sequence that we are trying to extract from the measurements. Of special interest is the one-step-ahead predictor (p = 1). In that case, it can be shown that the minimum error can be added as an additional unknown to Eq. (24), which can then be solved for along with the filter coefficients. Also, the error series (or reflectivity sequence) can now be computed using Eq. (22) as e = h * x, where the vector h = [1, −f*0, −f*1, ..., −f*m−1]ᵀ. It should be noted that this deconvolution approach for estimating the reflectivity sequence, also known as predictive deconvolution, is based on two important assumptions: first, that the reflectivity sequence is a random series (i.e., it has no predictable patterns), and second, that the wavelet is minimum phase.

Spiking Deconvolution Method

Assume that it is possible to find a filter f(t) such that, when applied to the seismic source waveform w(t), one gets the impulse function δ(t); that is,

f(t) * w(t) = δ(t)    (29)

Then, if one applies this filter to the seismic trace s(t) = r(t) * w(t), one gets:

f(t) * s(t) = f(t) * w(t) * r(t) = δ(t) * r(t) = r(t)

which means that it would be possible to recover the reflectivity sequence r(t). The filter f(t), if it exists, is called the inverse filter of the seismic source w(t). The nature of this inverse filter can also be examined in the frequency domain. Taking the Fourier transform of both sides of Eq. (29), one obtains

F(ω)W(ω) = 1

where F(ω) and W(ω) are the Fourier transforms of f(t) and w(t), respectively. From this, it follows that:

F(ω) = 1/W(ω)    (33)

This means that the amplitude spectrum of the inverse filter is the inverse of that of the seismic wavelet, and the phase spectrum of the inverse filter is the negative of that of the seismic wavelet. A problem therefore immediately arises if the amplitude spectrum of the wavelet has frequencies at which it is equal to zero. Clearly, at those frequencies the amplitude spectrum of the inverse filter becomes infinite (or undefined), and hence the filter will be unbounded in the time domain. Similar problems arise even if the amplitude spectrum of w(t) is merely very small at some frequencies. Clearly, using Eq. (33) to calculate the inverse filter f(t) is not feasible in almost all realistic applications.

Suppose instead that the Wiener filtering approach is used and a filter f(t) is designed which, when applied to the source waveform w(t), produces an output as close as possible to a desired impulse function (a spike). In other words, referring to the Wiener filtering approach discussed earlier, suppose the source waveform has n samples w0, w1, ..., wn−1, the unknown filter has m samples f0, f1, ..., fm−1, and the desired output is an impulse vector δ of n + m − 1 samples whose entries are all 0 except for a 1 at one location, say the jth location. Then the filter coefficients are determined so as to minimize the error function:

er = ‖δ − Wf‖²    (34)

where W is the (n + m − 1) × m convolution matrix of the source waveform. As discussed in the derivation of the Wiener filter, and using Eq. (24), the optimum vector that minimizes er is given by the expression:

f* = (WᵀW)⁻¹Wᵀδ    (36)

This can be written as a Toeplitz system in which φi are the autocorrelation lags of the source waveform and gi are the crosscorrelations between the source waveform and the desired output. Note that it is also possible to choose the optimal location j* of the spike in the δ vector in order to achieve the smallest possible error. This can be done by noting that, when Eq. (36) is substituted in Eq. (34), the minimum value of er reduces to the quadratic expression emin = δᵀMδ, where M is the (n + m − 1) × (n + m − 1) matrix M = I − W(WᵀW)⁻¹Wᵀ. Given that the δ vector is all zeroes except for the number 1 in one location, emin will be smallest when j* is chosen to correspond to the location of the smallest term on the diagonal of the matrix M.

Figure 14. The Wiener filtering method.

Homomorphic Deconvolution

In the late 1960s a class of nonlinear systems, called homomorphic systems (16, 17), which satisfy a generalization of the principle of superposition, was proposed. Homomorphic filtering is essentially the use of a homomorphic system to remove an undesired component from a signal. Homomorphic deconvolution involves the use of homomorphic filtering for separating two signals that have been convolved in the time domain. An important aspect of the theory of homomorphic deconvolution is that it can be represented as a cascade of three operations. The following summarizes how this theory can be used to separate the reflectivity sequence r(t) and the source waveform w(t) from the seismic trace s(t) = r(t) * w(t). The first operation involves taking the Fourier transform of s(t); that is, S(ω) = R(ω)W(ω). The second operation involves taking the logarithm of S(ω):

log S(ω) = log R(ω) + log W(ω)    (39)
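The full three-operation cascade, including the phase unwrapping discussed next, can be sketched with toy minimum-phase sequences (illustrative values; real traces rarely behave this cleanly, and practical implementations must treat unwrapping and cepstral aliasing with much more care):

```python
import numpy as np

def complex_cepstrum(s, nfft):
    """Forward cascade: Fourier transform, complex logarithm with the
    phase unwrapped to a continuous function, inverse Fourier
    transform. The imaginary residual is dropped, which is valid for
    the well-behaved real sequences used here."""
    S = np.fft.fft(s, nfft)
    log_S = np.log(np.abs(S)) + 1j * np.unwrap(np.angle(S))
    return np.fft.ifft(log_S).real

def inverse_complex_cepstrum(c):
    """Reverse cascade: Fourier transform, inverse logarithm (complex
    exponential), inverse Fourier transform."""
    return np.fft.ifft(np.exp(np.fft.fft(c))).real

# Convolution in time becomes addition in the cepstral domain:
w = np.array([1.0, 0.5])             # toy minimum-phase "wavelet"
r = np.array([1.0, 0.4, 0.2])        # toy "reflectivity"
s = np.convolve(r, w)                # the trace
c_sum = complex_cepstrum(w, 64) + complex_cepstrum(r, 64)
c_s = complex_cepstrum(s, 64)        # equals c_sum to rounding error
```

Windowing c_s before applying the reverse cascade is the filtering step: if the wavelet and reflectivity occupy different cepstral regions, keeping only one region and inverting recovers that component.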

Note that since the Fourier transform is a complex function, it is necessary to define the logarithm of a complex quantity. An appropriate definition for a complex function X(ω) is:

log X(ω) = log |X(ω)| + jθx(ω)

where θx(ω) is the phase of X(ω).

In the above expression, the real part, log |X(ω)|, causes no problem. Problems of uniqueness, however, arise in defining the imaginary part, since the phase θx(ω) is defined only to within an integer multiple of 2π. One approach to dealing with this problem is to require that θx(ω) be a continuous odd function of ω. That is, the phase function θx(ω) must be unwrapped. It is important to point out that Eq. (39) shows that the multiplication operation in the frequency domain has now been changed to an addition operation in the log-frequency domain. The third operation is to take the inverse Fourier transform of log S(ω). The resulting function is called the complex cepstrum of s(t). Now, if the characteristics of the two signals r(t) and w(t) are such that they appear nonoverlapping in this domain, then they can be separated by an appropriate window function. This operation is essentially the filtering operation. In general, it is very unlikely to have a seismic trace in which the two components are completely nonoverlapping in the complex cepstrum of s(t). Once a separation has been achieved, however, the reverse process can be applied to each of the separated signals: Fourier transform, followed by the inverse logarithm (complex exponential), followed by the inverse Fourier transform. The first applications of homomorphic systems were in the area of speech processing (18). Homomorphic deconvolution of seismic data was introduced by Ulrych (19) in the early 1970s and later extended by Tribolet (20). It should be pointed out that one of the major problems encountered in using homomorphic deconvolution is the unwrapping of the phase.

CONCLUSION

The field of geophysical signal processing deals primarily with computer methods for processing geophysical data collected for the purpose of extracting information about the subsurface layers of the earth. In this article, some of the main steps involved in acquiring, processing, displaying, and interpreting geophysical signals have been outlined.
Clearly, many other important steps have not been covered, and considerably more can be said about each of the steps that were covered. For example, acquisition of geophysical data also involves issues in geophone/hydrophone array design and placement, field operations, noise control, and digital recording systems. Processing the data is typically an iterative process that also involves issues in static corrections, multiple suppression, numerous deconvolution applications, migration, imaging beneath complex structures, and F-K filtering, to mention several. Displaying and interpreting geophysical data also involve issues in data demultiplexing and sorting, amplitude adjustments and gain applications, 2-D and 3-D imaging, geological modeling, as well as identification of stratigraphic boundaries and structural features on the final image.

BIBLIOGRAPHY

1. A. I. Levorsen, Geology of Petroleum, San Francisco: Freeman, 1958.
2. M. B. Dobrin, Geophysical Prospecting, New York: McGraw-Hill, 1960.
3. J. F. Claerbout, Fundamentals of Geophysical Data Processing, New York: McGraw-Hill, 1976.
4. E. A. Robinson and S. Treitel, Geophysical Signal Analysis, Englewood Cliffs, NJ: Prentice-Hall, 1980.
5. C. H. Dix, Seismic Prospecting for Oil, Boston: IHRDC, 1981.
6. M. A. Simaan, Advances in Geophysical Data Processing, Greenwich, CT: JAI Press, 1984.
7. E. A. Robinson and T. S. Durrani, Geophysical Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1986.
8. O. Yilmaz, Seismic Data Analysis: Processing, Inversion, and Interpretation of Seismic Data, Investigations in Geophysics, No. 10, Tulsa: SEG Press, 2001.
9. L. R. Lines and R. T. Newrick, Fundamentals of Geophysical Interpretation, Tulsa: SEG Press, 2004.
10. L. T. Ikelle and L. Amundsen, Introduction to Petroleum Seismology, Investigations in Geophysics, No. 12, Tulsa: SEG Press, 2005.
11. D. K. Butler, Near Surface Geophysics, Investigations in Geophysics, No. 13, Tulsa: SEG Press, 2005.
12. B. L. Biondi, 3D Seismic Imaging, Investigations in Geophysics, No. 14, Tulsa: SEG Press, 2006.
13. E. S. Robinson and C. Coruh, Basic Exploration Geophysics, New York: Wiley, 1988.
14. B. Ursin and K. A. Bertussen, Comparison of some inverse methods for wave propagation in layered media, Proc. IEEE, 3: 389–400, 1986.
15. V. K. Arya and J. K. Aggarwal, Deconvolution of Seismic Data, Stroudsburg, PA: Hutchinson & Ross, 1982.
16. A. V. Oppenheim, R. W. Schafer, and T. G. Stockham, Jr., Nonlinear filtering of multiplied and convolved signals, Proc. IEEE, 8: 1264–1291, 1968.
17. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1989.
18. L. R. Rabiner and R. W. Schafer, Digital Processing of Speech Signals, Englewood Cliffs, NJ: Prentice-Hall, 1989.
19. T. Ulrych, Application of homomorphic deconvolution to seismology, Geophysics, 36 (4): 650–660, 1971.
20. J. M. Tribolet, Seismic Applications of Homomorphic Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1979.

MARWAN A. SIMAAN
University of Pittsburgh, Pittsburgh, PA


Wiley Encyclopedia of Electrical and Electronics Engineering
Information Processing for Remote Sensing
Standard Article
James C. Tilton (NASA's Goddard Space Flight Center, Greenbelt, MD), David Landgrebe (Purdue University, West Lafayette, IN), and Robert A. Schowengerdt (University of Arizona, Tucson, AZ)
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W3607
Article Online Posting Date: December 27, 1999

Abstract. The sections in this article are: Feature Extraction; Multispectral Image Data Classification; Classification Using Neural Networks; Image Segmentation; Hyperspectral Data.


INFORMATION PROCESSING FOR REMOTE SENSING

Remote sensing is a technology through which information about an object is obtained by observing it from a distance.


This article is specifically concerned with obtaining information about Earth through remote sensing. Earth can be observed remotely in many ways. One of the earliest approaches to remote sensing was observing Earth from a hot air balloon using a camera, or just the human eye. Today, remotely sensed Earth observational data are routinely obtained from instruments onboard aircraft and spacecraft. These instruments observe Earth through various means, including optical telescopes and microwave devices, at wavelengths from the optical through the microwave: the visible, infrared, passive microwave, and radar. Other articles in this series discuss the most widely employed approaches for obtaining remotely sensed data. This article discusses methods for effectively extracting information from the data once they have been obtained.

Most information processing of Earth remote sensing data assumes that Earth's curvature and terrain relief can be ignored. In most practical cases, this is a good assumption. It is beyond the scope of this article to deal with the special cases where it is not, such as a relatively low-flying sensor over mountainous terrain or a sensor pointed toward Earth's horizon. This article deals with information processing of two-dimensional image data from down-looking sensors.

Remotely sensed image data can have widely varying characteristics, depending on the sensor employed and the wavelength of the radiation sensed. This variation can be very useful, as in most cases it corresponds to information about what is being sensed on Earth. A key task of information processing for remote sensing is to extract the information contained in the variations of remotely sensed image data with changes in spatial scale, spectral wavelength, and the time at which the data are collected. Data containing these types of variations are referred to as multiresolution (or multiscale) data, multispectral data, and multitemporal data, respectively.
In some cases, Earth scientists may find it useful to perform a combined analysis of image data taken at different spatial scales and/or orientations by separate sensors. Such analysis will become even more desirable over the next several years as the number and variety of sensors increase under programs such as NASA's Earth Observing System. This type of analysis requires determining the correspondence of data points in one image to data points in the other image. The process of finding this correspondence and transforming the images to a common spatial scale and orientation is called image registration. More information on image registration can be found in REMOTE SENSING GEOMETRIC CORRECTIONS. Multispectral data are often collected by an instrument designed to collect the data in such a way that they are already registered. In other cases, however, small shifts in location need to be corrected by image registration. Multitemporal data must almost always be brought into spatial alignment using image registration, as must multiresolution data obtained from separate sensors.

Several approaches have been developed for analyzing registered multiscale/spectral/temporal data. Because most of these techniques were originally developed for analyzing multispectral image data, they will be discussed in that context. However, many of these techniques can also be used in analyzing multiscale and/or multitemporal data. In the following discussion, each scale, spectral, or temporal manifestation of the image data is referred to as an image band. Figure 1 gives an example of remotely sensed multispectral image data. Sometimes important information identifying the observed ground objects is contained in the ratios between bands. Ratios taken between spectrally adjacent bands correspond to the discrete derivative of the spectral variation. Such band ratios measure the rate of change in spectral response and distinguish classes with a small rate of change in spectral response from those with a large rate of change. Other spectral ratios have been defined such that they relate to the amount of photosynthetic vegetation on the Earth's surface.
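A band-ratio computation of this kind can be sketched with a hypothetical two-band scene. The Normalized Difference Vegetation Index (NDVI), defined as (NIR − red)/(NIR + red), is one widely used spectral ratio of the vegetation-related type; the reflectance values below are invented for illustration:

```python
import numpy as np

# A hypothetical 2x2-pixel scene with red and near-infrared bands.
red = np.array([[0.10, 0.30], [0.05, 0.25]])
nir = np.array([[0.50, 0.35], [0.45, 0.28]])

simple_ratio = nir / red                  # plain band ratio
ndvi = (nir - red) / (nir + red)          # bounded to (-1, 1)
# High NDVI (about 0.67 at pixel [0, 0]) suggests photosynthetic
# vegetation; values near 0 suggest bare soil or built surfaces.
```

Because an overall illumination factor multiplies both bands, it cancels in the ratio, which is why ratioing reduces the effect of topographic shading.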

These are called vegetation indices. Spectral ratios are also useful in the analysis of image data containing significant amounts of topographic shading. The process of spectral ratioing tends to reduce the effect of this shading. The data contained in each band of multispectral image data are often correlated with the data from some of the other bands. When desirable to do so, this correlation can be reduced by transforming the data in such a way that most of the data variation is concentrated in just a few transformed bands. Reducing the number of image bands in this way not only may make the information content more apparent but also serves to reduce the computation time required for analysis; it can be used, in effect, to "compress" the data by discarding transformed bands with low variation. There are many such transformations for accomplishing this concentration of variation. One is called Principal Component Analysis (PCA) or the Principal Component Transform (PCT). Other useful transforms are the Canonical Components Transform (CCT) and the Tasseled Cap Transform (TCT).

Figure 1. An example of remotely sensed multispectral imagery data. Displayed are selected spectral bands from a seven-band Landsat 5 Thematic Mapper image of Washington, DC: (a) spectral band 2 (0.52–0.60 μm), (b) spectral band 4 (0.76–0.90 μm), (c) spectral band 5 (1.55–1.75 μm), and (d) spectral band 7 (2.08–2.35 μm).

The process of labeling individual pixels in the image data as belonging to a particular ground cover class is called image classification. (An image data vector from a particular spatial location is called an image picture element or pixel.) This labeling process can be carried out directly on the remotely sensed image data, on image features derived from the original image data (such as band ratios or data transforms), or on combinations of the original image data and derived features. Whatever the origin of the data, the classification feature space is the n-dimensional vector space spanned by the data vectors formed at each image pixel. The two main types of image classification are unsupervised and supervised. In unsupervised classification, an analysis procedure is used to find natural divisions, or clusters, in the image feature space. After the clustering process is complete, the analyst associates class labels with each cluster. Several clustering algorithms are available, ranging from the simple K-means algorithm, where the analyst must prespecify the number of clusters, to the more elaborate ISODATA algorithm, which automatically determines an appropriate number of clusters. In supervised classification, the first step is to define a description of how the classes of interest are distributed in feature space. Then each pixel is given the class label whose description is closest to its data value. Determining the description of how the classes of interest are distributed in feature space is the training stage of supervised classification.
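The unsupervised path described above (cluster first, label afterward) can be sketched with a minimal K-means loop. This is a generic sketch with hypothetical two-band toy pixels, not code from the article (NumPy assumed):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal K-means: `pixels` is an (N, bands) array; the analyst
    prespecifies k. Returns (cluster labels, cluster means)."""
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to the nearest cluster mean (Euclidean distance).
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each cluster mean from its current members.
        means = np.array([pixels[labels == i].mean(axis=0)
                          if np.any(labels == i) else means[i]
                          for i in range(k)])
    return labels, means

# Two well-separated spectral clusters in a two-band feature space.
pix = np.array([[0.1, 0.1], [0.2, 0.1], [0.9, 0.8], [1.0, 0.9]])
labels, means = kmeans(pix, k=2)
```

After clustering, the analyst would attach ground-cover labels to each cluster, completing the unsupervised classification.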
An approach commonly used in this stage is to identify small areas throughout the image data that contain image pixels of the classes of interest. This is usually done using image interpretation combined with ground reference information (e.g., a map of the locations of areas of classes of interest obtained through a ground survey, knowledge from a previous time, or other generalized knowledge about the area in question). Then the classes are characterized according to the model used for the next step: the classification stage. One of the simplest classification algorithms is the minimum-distance-to-means classifier. When this classifier is used, the vector mean value of each class is calculated in the training stage, and each data pixel is labeled as belonging to the closest class by some distance measure (e.g., the Euclidean distance measure). This classifier can work very well if all classes have similar variance and well-separated means. However, its performance may be poor when the classes of interest have a wide range of variance. A relatively simple classification algorithm that can account for differing ranges of variation of the classes is the parallelepiped classifier. When this classifier is used, the range of pixel values in each band is noted for each class from the training stage, and image data pixels that do not fall uniquely into the range values for just one class are labeled as ‘‘unknown.’’ This classifier gets its name from the fact that the feature space locations of pixels belonging to individual classes form parallelepiped-shaped regions in feature space. The number of pixels in the unknown class can be reduced

by modeling each class by a union of several parallelepiped-shaped regions. One of the most commonly used classification algorithms for remotely sensed data is the Gaussian maximum likelihood classifier (also called the ML classifier). The ML classifier often performs very well in cases where the minimum-distance-to-means classifier or the parallelepiped classifier perform poorly. This is because the ML classifier not only accounts for differences in variance between classes but also accounts for differences in between-band correlations. An even more general classification approach is a neural network classifier. The flexibility of the neural network classifier comes from its ability to generate totally arbitrary feature space partitions. The analysis approaches discussed to this point have treated the data at each spatial location separately. This per-pixel analysis ignores the information contained in the spatial variation of the image data. One approach that can exploit the spatial information content in the data is image segmentation. Image segmentation is a partitioning of an image into regions based on the similarity or dissimilarity of feature values between neighboring image pixels. An image region is defined as a collection of image pixels in which, for any two pixels in this collection, there exists a spatial path connecting these two pixels, which travels only through pixels contained in the region. After an image is segmented into regions, the image can be labeled region by region using one of the classification approaches mentioned previously. The combination of image segmentation and image classification often produces superior results to per-pixel image classification (see Table 1). A relatively recent development in remote sensing instrumentation is the imaging spectrometer, such as the Airborne Visible-InfraRed Imaging Spectrometer (AVIRIS).

Table 1. Accuracy Comparison (Percent Correct Classification) Between Classifications of the Original and Presegmented Landsat Thematic Mapper Images [from (1)]

Ground Cover Class                    Original Image (%)    Presegmented Image (%)
Water/marsh                                 73.7                   79.3
Forest                                      74.8                   75.6
Residential                                 54.4                   64.9
Agricultural and domestic grasses           81.9                   83.4
Overall                                     79.2                   80.9
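The simplest supervised classifier mentioned above, minimum distance to means, can be sketched in a few lines. The two-band training pixels below are hypothetical toy values, not data from the article (NumPy assumed):

```python
import numpy as np

def train_means(samples):
    """Training stage: `samples` maps class name -> (N, bands) pixel array;
    the class description is simply the mean vector."""
    return {name: x.mean(axis=0) for name, x in samples.items()}

def classify_min_distance(pixel, class_means):
    """Label `pixel` with the class whose mean vector is nearest
    (Euclidean distance)."""
    return min(class_means, key=lambda c: np.linalg.norm(pixel - class_means[c]))

# Hypothetical two-band training samples (e.g., red and NIR reflectance).
training = {
    "water":  np.array([[0.05, 0.02], [0.07, 0.03]]),
    "forest": np.array([[0.04, 0.40], [0.06, 0.45]]),
}
means = train_means(training)
label = classify_min_distance(np.array([0.05, 0.42]), means)
```

As the text notes, this works well only when classes have similar variance and well-separated means; the ML classifier relaxes that assumption.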
Imaging spectrometers produce hyperspectral data, consisting of hundreds of spectral bands taken at narrow and closely spaced spectral intervals. Two main types of specialized analysis approaches are currently under development for this type of data. One approach is an attempt to match laboratory or field reflectance spectra with remotely sensed imaging spectrometer data. The success of this approach depends on precise calibration of the remotely sensed data and careful compensation or corrections for atmospheric, solar, and topographic effects. The other approach depends on exploiting the unique mathematical characteristics of very high dimensional data. This approach does not necessarily require corrected data.

FEATURE EXTRACTION

The multispectral image data provided by a remote sensing instrument can be analyzed directly. However, in some cases,


it may be beneficial to analyze features extracted from the original data. Such feature extraction commonly takes the form of subsetting and/or mathematically transforming the original data. It is used to compensate for one or more of the following problems often encountered with remotely sensed data: atmospheric effects, topographic shading effects, spectral band correlation, and lack of optimization for a particular application.

Atmospheric Effects

Most remote sensing data are collected from sensors on satellite platforms orbiting above Earth's atmosphere. Earth's atmosphere can have a significant effect on the quality and characteristics of such satellite-based remote sensing data. For this article, it is sufficient to introduce the following first-order model for the input radiance to an Earth-orbiting sensor (2):

L(x, y, λ) = (1/π) Ts(λ) Tv(λ) E0(λ) cos[θ(x, y)] ρ(x, y, λ) + Lh(λ)    (1)

The solar irradiance from the sun E0(λ) provides the source radiation for the remote sensing process. This is the irradiance as it would be measured at the top of Earth's atmosphere and is referred to as the exo-atmospheric solar irradiance. The atmosphere affects the signal received by the sensor on two paths: (1) between the top of the atmosphere and Earth's surface (solar path) and (2) between the surface and the sensor (view path). The spectral transmittance of the atmosphere, Ts(λ) along the solar path or Tv(λ) along the view path, is generally high except in prominent molecular absorption bands attributable mainly to carbon dioxide and water vapor, as illustrated in Fig. 2. The cos[θ(x, y)] term is the spatial variation of irradiance at the surface resulting from the solar zenith angle and topography, which determine the angle at which the incident radiation strikes the surface. The spatial and spectral variations in diffuse surface reflectance are modeled by the function ρ(x, y, λ).

Figure 2. Atmospheric transmittance for a nadir path as estimated with the atmospheric modeling program MODTRAN (3). The transmittance is generally over 50% throughout the visible to short-wave infrared (SWIR) spectral region, except for prominent absorption bands resulting from atmospheric molecular constituents (chiefly CO2 and H2O). Remote sensing of the Earth is not possible at wavelengths corresponding to the strongest absorption bands. The relatively lower transmittance below about 0.6 μm results from Rayleigh scattering losses.

A Lambertian, or perfectly

diffuse, reflecting surface is assumed in Eq. (1). The atmospheric view path radiance Lh(λ) is additive and increases at shorter visible wavelengths as a result of Rayleigh molecular scattering. This is the effect that causes the clear sky to appear blue. A related, second-order effect from down-scattered radiation (skylight) that is subsequently reflected at the surface into the sensor view path is not included in Eq. (1). This effect allows the surface-related signal in shadowed areas to be recovered, although with a spectral bias toward shorter wavelengths. Correction for atmospheric effects requires modeling or measurement of the various independent terms in Eq. (1), namely, Ts(λ), Tv(λ), E0(λ), and Lh(λ), and, given the remotely sensed data measurements L(x, y, λ), solution of Eq. (1) for the surface spatial and spectral variations ρ(x, y, λ). The cos[θ(x, y)] term is a topographic effect that is described in the next section. The path radiance term Lh(λ) is primarily of concern at short, blue-green wavelengths, and the transmittance terms Ts(λ) and Tv(λ) are usually ignored for coarse multispectral sensing, such as with Landsat TM, where the bands are placed within atmospheric "windows" of relatively high and spectrally flat transmittance. For hyperspectral data, however, knowledge of and correction for transmittance is usually required if the data are to be compared to reflectance spectra measured in a laboratory.

Topographic Effects

Most areas of Earth have topographic relief. The irradiance from solar radiation is proportional to the cosine of the angle between the normal vector to the surface and the vector pointing to the sun. A surface element normal to the solar vector receives the maximum possible irradiance. Any element at some other angle will receive less. This spatially variant factor is the same in all solar reflective bands and, therefore, introduces a correlation across these bands.

Spectral Band Ratios.
The pixel-by-pixel ratio of adjacent spectral bands corresponds to the discrete derivative of the spectral function. It therefore measures the rate of change in spectral signature and distinguishes classes with a small rate of change from those with a large rate of change. For example, the ratio of a near infrared (NIR) band to a red band will show a high value across the vegetation edge at 700 nm, whereas a ratio of a red band to a green band will show a small value for both vegetation and soil. For bands where the atmospheric path radiance is small [e.g., in the NIR or short-wave infrared (SWIR) spectral regions], the spectral band ratio will be proportional to the surface reflectance ratio. In this case, the spectral band ratio is insensitive to topographic effects. If the path radiance is not small, then it should be reduced or removed using a technique such as Dark Object Subtraction (DOS) before spectral band ratios are calculated (2). Vegetation Indices. A number of specific ratio formulae have been defined in attempts to obtain features that relate to the amount of photosynthetic vegetation on the Earth’s surface. All depend on the red and NIR spectral reflectances (i.e., calibrated data). They are summarized in Table 2 and plotted as isolines in the NIR-red reflectance space in Fig. 3.


Table 2. Definition of Common Vegetation Indices

Index                                            Formula                               Remarks
Ratio (R)                                        NIR/red                               —
Normalized Difference Vegetation Index (NDVI)    (NIR − red)/(NIR + red)               —
Soil-Adjusted Vegetation Index (SAVI)            (1 + L)(NIR − red)/(NIR + red + L)    L is an empirical constant, typically 0.5 for partial cover.

Even though vegetation indices can be used as features in classifications, they are commonly produced as an end-product indicating photosynthetic activity, particularly on a global scale from Advanced Very High-Resolution Radiometer (AVHRR) data.

Spectral Band Correlation

Spectral band correlation can result from several factors. First, the sensor spectral sensitivities sometimes overlap between adjacent spectral bands. Second, the spectral reflectance of most natural materials on the Earth, particularly over spectral bandwidths of 10 nm or greater, changes slowly with wavelength. Therefore, the reflectance in one band will be similar to that in an adjacent band. A notable exception is the "vegetation edge" at about 700 nm, where the reflectance of photosynthetic vegetation increases dramatically from the red to the NIR spectral regions. Finally, topographic shading

can introduce into remotely sensed data an apparent spectral correlation because it affects all solar reflective bands equally. Principal Components. The Principal Component Transformation is often used to eliminate spectral band correlation. The PCT also produces a redistribution of spectral variance into fewer components, isolates spectrally uncorrelated signal components and noise, and produces features that, in some cases, align with physical variables. It is a data-dependent, linear matrix transform of the original spectral vectors into a new coordinate system that corresponds to a specific coordinate axes rotation in n-dimensions (2,5). The PCT for a particular data set is derived from the eigenvalues and eigenvectors of the spectral covariance of the data, which is represented in matrix form as

Σ = [1/(N − 1)] ∑_{j=1}^{N} (x_j − μ)(x_j − μ)^T    (2)

where N is the number of pixels in the image, x_j is the jth image data vector (pixel), the superscript T denotes the vector transpose, and μ is the vector mean value of the image given by

μ = (1/N) ∑_{j=1}^{N} x_j    (3)

The eigenvalues and eigenvectors of Σ are the solutions of the equation

Σφ = λφ    (4)

assuming φ is not the zero vector (6,7). The eigenvalues are ordered in decreasing order, and the corresponding eigenvectors are combined to form the eigenvector matrix

Φ = [φ_1 φ_2 · · · φ_n]    (5)

The PCT is then given by

y = Φ^T x    (6)

Figure 3. Isolines for three different vegetation indices in the NIR–red spectral reflectance space (ρ_red versus ρ_NIR; isolines shown for R = 1, 2, 3, equivalently NDVI = 0, 0.33, 0.5, and for SAVI = 0, 0.33, 0.5). The spectral ratio R and the NDVI are redundant in that either one can be expressed in terms of the other (see Table 2). The SAVI requires an empirically determined constant (4). A value of 0.5 is used for this graph and is appropriate under most conditions of partial vegetation cover with soil background. SAVI has a smaller slope than does NDVI in this graph. Therefore, SAVI is less sensitive to the ratio of the NIR reflectance than NDVI, reflecting the former's adjustment for soil background.

Each output axis is a linear combination of the input axes (e.g., the spectral bands) and is orthogonal to the other output axes (this characteristic can isolate uncorrelated noise in the original bands). The weights on the inputs x are the eigenvectors, and the variances of the output axes y are the eigenvalues. Because the eigenvalues are ordered in decreasing order,


the PCT achieves a compression of data variation into fewer dimensions when a subset of PCT components corresponding to the larger eigenvalues is selected. A disadvantage of the PCT is that it is a global, data-dependent transform and must be recalculated for each image. The greatest computation burden is usually the covariance matrix for the input features. Figure 4 displays the first four principal components of the Landsat 5 TM scene displayed in Fig. 1.

Figure 4. (a)–(d) The first four principal components from the PCT of the seven-band Landsat 5 TM image of Washington, DC (see Fig. 1). These four principal components contain 98.95% of the data variance contained in the original seven spectral bands.

Canonical Components. The Canonical Components Transform is similar to the PCT, except that the data are not lumped into one distribution in n-dimensional space when deriving the transformation matrix. Rather, training data for each class are used to find the transformation that maximizes the separability of the defined classes. A compression of significant information into fewer dimensions results, but it is not optimal as in the case of the PCT. Selection of the first three canonical components for a three-band color composite produces a color image that visually separates the classes better than any combination of three of the original bands. The CCT is a linear transformation on the original feature space such that the transformed features are optimized and arranged in order of decreasing maximum separability of the classes. The optimization is accomplished based upon maximizing the ratio of the between-class variance to the within-class variance. The specific quantities are

Σ_W = ∑_{i=1}^{n} P(ω_i) Σ_i    (within-class scatter matrix)    (7)

Σ_B = ∑_{i=1}^{n} P(ω_i)(μ_i − μ_0)(μ_i − μ_0)^T    (between-class scatter matrix)    (8)

μ_0 = ∑_{i=1}^{n} P(ω_i) μ_i    (9)

where μ_i, Σ_i, and P(ω_i) are the mean vector, covariance matrix, and prior probability, respectively, for class ω_i. The optimality criterion then is defined as

J_1 = tr(Σ_W^{−1} Σ_B)    (10)
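The PCT of Eqs. (2)–(6) can be sketched directly from its definition: form the spectral covariance, eigendecompose it, and project the (mean-centered) pixels onto the eigenvectors sorted by decreasing eigenvalue. This is a generic sketch, not code from the article; NumPy is assumed, and the two-band data are a toy example:

```python
import numpy as np

def pct(pixels):
    """Principal component transform of (N, bands) data, per Eqs. (2)-(6)."""
    mu = pixels.mean(axis=0)                 # Eq. (3)
    cov = np.cov(pixels, rowvar=False)       # Eq. (2): 1/(N-1) normalization
    vals, vecs = np.linalg.eigh(cov)         # Eq. (4): symmetric eigenproblem
    order = np.argsort(vals)[::-1]           # decreasing eigenvalue order
    phi = vecs[:, order]                     # Eq. (5): eigenvector matrix
    # Eq. (6); the data are mean-centered here so that the output
    # variances equal the eigenvalues.
    return (pixels - mu) @ phi, vals[order]

# Two perfectly correlated bands: all variance lands in the first component,
# so the second component can be discarded ("compression").
x = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y, eigvals = pct(x)
```

The eigenvalue spectrum shows how many components carry meaningful variance, mirroring the compression argument in the text.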

The transformation results in new features that are linear combinations of the original bands. The size of the feature eigenvalues indicates the relative class discrimination value. Thus, the size of the eigenvalues gives some idea as to how many features should be used.

Tasseled Cap Components. The Tasseled Cap Transform is a linear matrix transform, just as the PCT and CCT, but is fixed and independent of the data. It is, however, sensor dependent and must be newly derived for each sensor. The TCT produces a new set of components that are linear combinations of the original bands. The coefficients of the transformation matrix are derived relative to the "tasseled cap," which describes the temporal trajectory of vegetation pixels in the n-dimensional spectral space as the vegetation grows and matures during the growing season. The TCT was originally derived for crops in temperate climates, namely the U.S. Midwest, and is most appropriately applied to that type of data (8–11). For the Landsat MSS (Multispectral Scanner) data, four new axes are defined: soil brightness, greenness, yellow stuff, and non-such. For the Landsat TM (Thematic Mapper) data, six new axes are defined: soil brightness, greenness, wetness, haze, and an otherwise unnamed fifth and sixth axes. The transformed data in the tasseled cap space can be compared directly between sensors (e.g., Landsat MSS soil brightness and Landsat TM soil brightness).

Spectral Band Selection

Global satellite sensors must be designed to image a wide range of materials of interest in many different applications. The sensor design is thus a compromise for any particular application (continuous spectral sensing, such as that produced by hyperspectral sensors, is a way to provide data suitable for all applications, at the expense of large data volumes). A multispectral sensor may have bands in the red and NIR suitable for vegetation mapping but lack bands in the SWIR suitable for mineral mapping.
In spectral band selection, an optimal set of spectral bands is selected for analysis. The spectral characteristics of the material classes of interest must be defined before this technique is applied. The spectral characteristics are obtained from training data and may consist of the class mean vectors or the class mean vectors and covariance matrices (second-order statistics), depending on the metric to be used. Various band combinations can be compared to find the combination that best separates (distinguishes) the given classes. Many metrics have been defined to measure separability. Each can be interpreted as a type of distance in spectral space (Table 3). The angular distance metric is particularly interesting because it conforms to the general shape of the scattergram between spectral bands in many cases. Topographic shading introduces a scatter of spectral signatures along a line through the origin of the spectral space. The angular metric directly measures the angular separation of two distributions and is insensitive to the distance of a class distribution from the origin. To select the optimum spectral bands from a sensor band set, an exhaustive calculation is performed to find the average interclass separability for each possible combination of bands. For example, bands 2, 3, and 4 of Landsat TM may show the highest average transformed divergence of any three-band combination of the seven TM bands for a vegetation and soil classification. The full classification can then be performed using only bands 2, 3, and 4.

MULTISPECTRAL IMAGE DATA CLASSIFICATION

Data classification is the process of associating a thematic label with elements of the data set. The data elements so labeled are typically individual pixels, but they may be groups of pixels that have been associated with one another, for example, by having previously segmented the scene into regions (i.e., spectrally homogeneous areas). Mathematically, the process of classification may be described as mapping the data from a vector-valued space (spectral feature space) to a scalar space that contains the list of final classes desired by the user (i.e., mapping from the data to the desired output).
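The exhaustive band-combination search described above can be sketched in a few lines. For brevity this sketch scores combinations by average pairwise Euclidean distance between class means (one of the simpler metrics in Table 3) rather than transformed divergence; the four-band class means are hypothetical toy values (NumPy assumed):

```python
import itertools
import numpy as np

def best_band_subset(class_means, n_select):
    """Exhaustively score every combination of n_select bands by average
    pairwise Euclidean distance between class mean vectors; return the
    best combination and its score."""
    n_bands = len(next(iter(class_means.values())))
    best, best_score = None, -1.0
    for combo in itertools.combinations(range(n_bands), n_select):
        idx = list(combo)
        pairs = itertools.combinations(class_means.values(), 2)
        score = np.mean([np.linalg.norm(a[idx] - b[idx]) for a, b in pairs])
        if score > best_score:
            best, best_score = combo, float(score)
    return best, best_score

# Hypothetical 4-band class means: bands 2 and 3 separate the classes best.
means = {
    "vegetation": np.array([0.1, 0.1, 0.6, 0.4]),
    "soil":       np.array([0.1, 0.1, 0.2, 0.1]),
}
bands, score = best_band_subset(means, 2)
```

The cost grows combinatorially with the number of bands, which is why this style of search is practical for multispectral sensors but not for hyperspectral ones.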
Classification is carried out based upon ancillary information, often in terms of samples labeled by the analyst as being representative of each class of surface cover to be mapped. These samples are often called training samples or design samples. The development of an appropriate list of classes and the process of labeling these samples into these classes is a key step in the analysis process. A valid list of classes for a given data set must be, simultaneously,

1. exhaustive—There must be a logical and appropriate class to which to associate every pixel in the data set.
2. separable—It must be possible to discriminate accurately each class from the others in the list based on the spectral features available.
3. of informational value—The list of classes must contain the classes desired to be identified by the user.

Training Phase

Classification is typically carried out in two phases: the training phase and the analysis phase. During the training phase, ancillary information available to the analyst is used to define the list of classes to be used and, from it, to determine the appropriate quantitative description of each of the classes. How this is done is situation-dependent, based on the form and type of ancillary information available and the desired classification output. In some cases, the analyst may have partial knowledge of the scene contents based upon observations from the ground and photointerpretation of air photographs of a part of the scene or from generalized knowledge of the area that is to be made more quantitative and specific by the analysis. For example, the data set may be of an urban area with which the analyst is partially familiar and can designate in the data areas which are used for classes such as high-density housing, low-density housing, commercial, industrial, and recreational. The analyst would use this generalized knowledge to mark areas in the data set that are typical examples of each class. These then become the training areas from which the quantitative description of each class is calculated. Examples of other types of ancillary data from which training samples may be identified are so-called signature banks, which are databases of spectral responses of the materials to be identified that were collected at another time and location with perhaps different instruments. In this case, the additional problem exists of reconciling the differences in data collection circumstances for the database with those of the data set to be analyzed. Examples of these circumstances are the differences in the instruments used to collect the data, the spatial and spectral resolution, the atmospheric conditions, the time of day, the illumination and direction of view variables, and the season.

Table 3. Separability Metrics for Classification (6,12)

Metric                   Formula                                                            Remarks
City block               L1 = |μ_i − μ_j|                                                   Results in piecewise linear decision boundaries
Normalized city block    NL1 = ∑_{b=1}^{n} |m_ib − m_jb| / [(σ_ib + σ_jb)/2]                Normalizes for class variance
Euclidean                L2 = ||μ_i − μ_j|| = [(μ_i − μ_j)^T (μ_i − μ_j)]^{1/2}             Results in linear decision boundaries
Angular                  ANG = acos[μ_i^T μ_j / (||μ_i|| ||μ_j||)]                          Normalizes for topographic shading
Mahalanobis              MH = (μ_i − μ_j)^T [(Σ_i + Σ_j)/2]^{−1} (μ_i − μ_j)                Assumes normal distributions; normalizes for class covariance; zero if class means are equal
Divergence               D = tr[(Σ_i − Σ_j)(Σ_j^{−1} − Σ_i^{−1})]                           Zero if class means and covariances are equal; does not converge for large class separation
                             + tr[(Σ_i^{−1} + Σ_j^{−1})(μ_i − μ_j)(μ_i − μ_j)^T]
Transformed divergence   Dt = 2[1 − e^{−D/8}]                                               Asymptotically saturates for large class separation
Bhattacharyya            B = (1/8)MH + (1/2) ln[|(Σ_i + Σ_j)/2| / (|Σ_i| |Σ_j|)^{1/2}]      Zero if class means and covariances are equal; does not converge for large class separation
Jeffries–Matusita        JM = [2(1 − e^{−B})]^{1/2}                                         Asymptotically saturates for large class separation

In the formulae, m_ib is the mean value for class i and band b, σ_ib is the standard deviation for class i and band b, μ_i is the mean vector for class i, Σ_i is the covariance matrix for class i, and n is the number of spectral bands.
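The divergence and transformed-divergence metrics of Table 3 can be sketched directly. The factor conventions follow the table forms reproduced here (some references include additional factors of 1/2); the class statistics below are hypothetical toy Gaussians (NumPy assumed):

```python
import numpy as np

def divergence(mu_i, cov_i, mu_j, cov_j):
    """Divergence between two Gaussian class models (Table 3 form)."""
    ci, cj = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    dm = (mu_i - mu_j).reshape(-1, 1)
    return (np.trace((cov_i - cov_j) @ (cj - ci))
            + np.trace((ci + cj) @ dm @ dm.T))

def transformed_divergence(mu_i, cov_i, mu_j, cov_j):
    """Dt = 2[1 - exp(-D/8)]: saturates as classes grow far apart."""
    return 2.0 * (1.0 - np.exp(-divergence(mu_i, cov_i, mu_j, cov_j) / 8.0))

eye = np.eye(2)
d_near = transformed_divergence(np.zeros(2), eye, np.array([0.1, 0.0]), eye)
d_far  = transformed_divergence(np.zeros(2), eye, np.array([10.0, 0.0]), eye)
```

The saturation is the point of the transformed version: raw divergence grows without bound for well-separated classes, which would let one easy class pair dominate an average over many pairs.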

Another example of an ancillary data source that might be used for deriving training data is more fundamental knowledge about the materials to be identified. For example, in a geological mapping problem, it might be known that certain minerals of interest have molecular absorption features as used by chemical spectroscopists to identify specific molecules. If such spectral features can be extracted from the data to be analyzed, they can be used to label training samples for such classes.

Analysis Phase

During the second phase of classification, the analysis phase, the pixel or region features are compared quantitatively to the class descriptions derived during the training phase to accomplish the mapping of each of the data elements to one of the defined classes. Classifiers may be of two types: relative and absolute. A relative classifier is one that assigns a data element to a class after having compared it to the entire list of classes to see to which class it is most similar. An absolute classifier compares the data element to only one class description to see if it is sufficiently similar to it. Generally speaking, in remote sensing, relative classifiers are the more common and more powerful.
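The relative-classifier idea (compare a data element against every class description and take the most similar) can be sketched with Gaussian log-likelihoods as the similarity measure. This is a minimal illustration, not code from the article; the class statistics are hypothetical (NumPy assumed):

```python
import numpy as np

def gaussian_log_likelihood(x, mu, cov):
    """Log of the class-conditional Gaussian density at x, up to a
    constant shared by all classes; usable as a discriminant value."""
    d = x - mu
    return -0.5 * (np.log(np.linalg.det(cov))
                   + d @ np.linalg.inv(cov) @ d)

def relative_classify(x, classes):
    """Relative classifier: score x against every class and pick the
    class with the largest discriminant value."""
    return max(classes, key=lambda c: gaussian_log_likelihood(x, *classes[c]))

# Hypothetical two-band class descriptions: (mean vector, covariance).
classes = {
    "water":  (np.array([0.05, 0.02]), 0.01 * np.eye(2)),
    "forest": (np.array([0.05, 0.42]), 0.01 * np.eye(2)),
}
label = relative_classify(np.array([0.06, 0.40]), classes)
```

An absolute classifier, by contrast, would threshold a single class's score instead of taking the maximum over all classes.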


Many different algorithms are used for classification in the analysis phase (6,12). A common approach for implementing a relative classifier is through the use of a so-called discriminant function. Designate the data element to be classified as vector X, in which the elements of the vector are the values measured for that pixel in each spectral band. Then, for a k-class situation, assume that we have k functions of X, {g_1(X), g_2(X), . . ., g_k(X)}, such that g_i(X) is larger than all others whenever X is from class i. Let ω_i denote the ith class. Then the classification rule can be stated as

Decide X is in ω_i if and only if g_i(X) ≥ g_j(X) for all j = 1, 2, . . ., k    (11)

The functions g_i(X) are referred to as discriminant functions. An advantage of using this scheme is that it is easy to implement in computer software or hardware. A common scheme for defining discriminant functions is to use the class probability density functions. The classification process then amounts to evaluating the value of each class density function at X. The value of a probability density function at a specific point is called the likelihood of that value. Such a classifier is called a maximum likelihood classifier because it assigns the data element to the most likely class. Another example of a classification rule is the so-called Bayes rule strategy (13). Bayes' Theorem from the theory of probability states that

p(ω_i|X) = p(X, ω_i)/p(X) = p(X|ω_i) p(ω_i)/p(X)    (12)

where p(ω_i|X) is the probability of class ω_i given the data element valued X, p(X|ω_i) is the probability density function for class ω_i, p(ω_i) is the probability that class ω_i occurs, p(X, ω_i) is the joint probability density of the value X and the class ω_i, and p(X) is the probability density function for the entire data set. Then, to maximize the probability of correct classification, one must select the class that maximizes p(ω_i|X). Because p(X) is the same for any i, one may use as the discriminant function just the numerator of Eq. (12), p(X|ω_i) p(ω_i). Thus, the classification rule becomes

Decide X is in ω_i if and only if p(X|ω_i) p(ω_i) ≥ p(X|ω_j) p(ω_j) for all j = 1, 2, . . ., k    (13)

This classification strategy leads to the minimum error rate. Note that if all the classes are equally likely, the p(ω_i) terms may be canceled and the Bayes rule strategy reduces to the maximum likelihood strategy. Because, in a practical remote sensing problem, the prior probabilities p(ω_i) are not known, it is common practice to assume equal priors. Other factors that are significant in the analysis process are the matter of how the class probability density functions are modeled and, related to this, how many training samples are available by which to train the classifier. Parametric models, assuming that each class is modeled by one or a combination of Gaussian distributions, are very common and powerful. Within this framework, one can also make various simplifying assumptions. Some common ones, in parametric form, and the corresponding discriminant functions follow:

• Assume that all classes have the same covariance, in which there is no correlation between bands, and that all bands have unit variance:

g_i(X) = −(X − μ_i)^T (X − μ_i)    (14)

The decision boundary that results is linear in spectral feature space and is oriented perpendicular to the line connecting the class mean values at the midpoint of the line. This is the minimum-distance-to-means classifier.

• Assume that all classes have the same covariance but account for correlation between bands and for different variances in each:

g_i(X) = −(X − μ_i)^T Σ^{−1} (X − μ_i)    (15)

The resulting decision boundary in spectral feature space is linear, but its orientation and location are dependent upon the common covariance Σ.

• Assume that classes have different covariances:

g_i(X) = −(1/2) ln|Σ_i| − (1/2)(X − μ_i)^T Σ_i^{−1} (X − μ_i)    (16)

The resulting decision boundary in spectral space is a second-order hypersurface whose shape and location are dependent upon the individual mean vectors μi and covariance matrices Σi. This is the maximum likelihood classifier.

• Assume that the class densities have a more complex structure such that a combination of a small number of Gaussian densities is not adequate:

gi(X) = (1/Ni) Σ_{j=1}^{Ni} (1/λ) K((X − Xj^i)/λ)

(17)

INFORMATION PROCESSING FOR REMOTE SENSING

The resulting decision boundary in spectral space can be of nearly arbitrary shape. It can be seen that this list of discriminant functions has steadily increasing generality and steadily increasing complexity, such that a rapidly increasing number of training samples is required to adequately estimate the rapidly growing number of parameters in each. The last of these, for example, though still written in parametric form, is referred to as a nonparametric Parzen density estimator with kernel K. The kernel function K, as well as the number of kernel terms Ni to be used, is selectable by the analyst. For example, one possible selection is a Gaussian-shaped function, thus making this discriminant function a direct generalization of the previous ones. There are many additional variations to this list of discriminant functions. There are also additional variations to the possible training procedures. For example, one variation that is popular at the present time is the neural network method. This method uses an iterative scheme for determining the location of the decision boundary in spectral feature space. A network is designed, consisting of as many inputs as there are spectral features, as many outputs as there are classes, and threshold devices with weighting functions connecting the inputs to the outputs. Training samples are applied to the input sequentially, and the resulting output for each is observed. If the correct classification is obtained for a given sample, as evidenced by the output port for the correct class being the largest, the weights for the correct output are augmented, and incorrect class output weights are diminished. The training set is reused as many times as necessary to obtain good classification results. The advantage of this approach is its generality and that it can be essentially automatic. Characteristics generally regarded as disadvantages are that it is nearly entirely heuristic, thus making analytical calculations and performance predictions difficult; its generality means that very large training sets are required to obtain robust performance; and a great deal of computation is required in the training process. Because, in practical circumstances, classifiers must be retrained for every new data set, characteristics affecting the training phase are especially significant.

Unsupervised Classification

A second form of classification that finds use in remote sensing is unsupervised classification, also known as clustering. In this case, data elements, usually individual pixels, are assigned to a class without the use of training samples, hence the "unsupervised" name. The purpose of this type of classification is to assign pixels to a group whose members have similar spectral properties (i.e., are near to one another in spectral feature space). There are, again, many algorithms to accomplish this. Generally speaking, three capabilities are needed.

1. A measure of distance between points. Euclidean distance is a common choice.
2. A measure of distance or separability between the sets of points comprising each cluster. Any separability measure, such as those listed in Table 3, could be used, but usually simpler measures are selected.
3. A cluster compactness criterion. An example might be the sum of squared distances from the cluster center for all pixels assigned to a cluster.

The process typically begins when one selects (often arbitrarily) a set of initial cluster centers and then assigns each pixel to the nearest cluster center using step 1.
After assigning all the pixels, one computes the new cluster centers. If any of the cluster centers have moved, all the pixels are reassigned to the new cluster centers. This iterative process continues until the cluster centers do not move or the movement is smaller than a prescribed threshold. Then steps 2 and 3 are used to test if the clusters are sufficiently distinct (separated from one another) and compact. If they are not adequately distinct, the two that are the closest are combined, and the process is repeated. If they are not sufficiently compact, an additional cluster center is created within the least distinct cluster, and the process is repeated. Clustering is ordinarily not useful for final classification as such because it is unlikely that the data would be clustered into classes of specific interest. Rather it is primarily useful as an intermediate processing step. For example, in the training process, it is often used to divide the data into spectrally homogeneous areas that might be useful in deciding on supervised classifier classes and subclasses and in selecting training samples for these classes and subclasses.
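The iterative assign-and-recompute loop described above can be sketched as follows. This is a minimal illustration (without the merge/split tests of steps 2 and 3); the pixel values and initial cluster centers are invented for the example.

```python
import numpy as np

def cluster(pixels, centers, tol=1e-6, max_iter=100):
    """Assign pixels to nearest centers, recompute centers, repeat
    until the centers stop moving (or move less than tol)."""
    pixels = np.asarray(pixels, dtype=float)
    centers = np.asarray(centers, dtype=float)
    for _ in range(max_iter):
        # Euclidean distance from every pixel to every center (step 1).
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # New center = mean of the pixels assigned to it.
        new_centers = np.array([pixels[labels == k].mean(axis=0)
                                if np.any(labels == k) else centers[k]
                                for k in range(len(centers))])
        if np.linalg.norm(new_centers - centers) < tol:  # centers stopped moving
            break
        centers = new_centers
    return labels, centers

pixels = [[0.1, 0.2], [0.0, 0.1], [0.9, 1.0], [1.0, 0.9]]
labels, centers = cluster(pixels, centers=[[0.0, 0.0], [1.0, 1.0]])
```

In a full implementation, the loop above would be followed by the separability and compactness tests (steps 2 and 3) to decide whether clusters should be merged or split before iterating again.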

CLASSIFICATION USING NEURAL NETWORKS

Unlike statistical, parametric classifiers, Artificial Neural Network (ANN) classifiers rely on an iterative error minimization algorithm to achieve a pattern match. A network consists of interconnected input (feature) nodes, hidden layer nodes, and output (class label) nodes. A wide range of network architectures have been proposed (14); here a simple three-layer network is considered to explain the basic operation. The input nodes do no processing but simply provide the paths for the data into the hidden layer. Each input node is connected to each hidden layer node by a weighted link. In the hidden layer, the weighted input features are summed and compared to a thresholding decision function. The decision function is usually "soft," with a form known as a sigmoid,

output(input) = 1 / [1 + exp(−input)]

(18)

The output from each hidden layer node is then fed through a weighted link to each output layer node. The same processing, summation and comparison to a threshold, is performed in each output node. The output node with the highest resulting value is selected as the label for the input feature vector. The decision information of the ANN is contained in its weights. To adapt the weights to the data, an iterative algorithm is required. The classic example is the Back Propagation (BP) algorithm (15,16). The BP algorithm minimizes the output error over all classes for a given set of training data. It achieves this by measuring the output error and adjusting the ANN’s link weights progressively backward through each layer to reduce the error. If local minima in the decision space of the ANN can be avoided, the BP algorithm will converge to a global minimum for the output error [although one is never sure that it is not in reality a local minimum (i.e., the algorithm cannot be proven to result in a global error minimum)]. Other convergence algorithms, such as Radial Basis Functions, have been used and are faster than BP. One parameter that must be set for ANNs is the number of hidden layer nodes. A way to specify this is to relate the total number of Degrees-Of-Freedom (DOF) in the ANN to that of another classifier for comparison (2). For example, in a three-layer ANN, the DOF are NANN = H(K + L)

(19)

where H is the number of hidden layer nodes, K is the number of input features, and L is the number of output classes. For the same number of features and classes, the ML classifier has the following DOF:

NML = LK(K + 3) / 2

(20)

Therefore, to compare the two classifiers, it is logical to set their DOF equal, obtaining

H = LK(K + 3) / [2(K + L)]

(21)

for the number of hidden layer nodes in the ANN. This analysis yields only 20 hidden layer nodes for six bands of nonthermal TM imagery, even for as many as 20 classes. Fewer hidden layer nodes result in faster BP training.

Performance Comparison to Statistical Classifiers

The ANN type of classifier has some unique characteristics that are important in comparing it to other classifiers:

1. Because the weights are initially randomized, the final output results of the ANN are stochastic (i.e., they will vary from run to run on the same training data). It has been estimated that this variation is as much as 5% (17).
2. The decision boundaries move in the feature space to reduce the total output error during the optimization process. The network weights and final classification map that result will depend on when the process is terminated.

The ANN classifier is nonparametric (i.e., it makes no assumptions about an underlying statistical distribution for each class). In contrast, the ML classifier assumes a Gaussian distribution for each class. These facts make the feature space decision boundaries totally different. It appears that the boundaries from a three-layer ANN trained with the BP algorithm are often more similar to those from the minimum-distance-to-means classifier than to those from the ML classifier. Experiments with a land-use/land-cover classification involving heterogeneous class spectral signatures indicate that the nonparametric characteristic of the ANN classifier results in superior classifications (18).
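The three-layer forward pass of Eq. (18) and the DOF bookkeeping of Eqs. (19)–(21) can be sketched together. The random weights below are placeholders standing in for values a BP-trained network would supply; only the sigmoid and the DOF formulas come from the text.

```python
import numpy as np

def sigmoid(x):
    """The soft thresholding function of Eq. (18)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(features, w_hidden, w_out):
    """One pass through a three-layer network: weighted sums and
    sigmoid at the hidden layer, then again at the output layer."""
    hidden = sigmoid(features @ w_hidden)
    outputs = sigmoid(hidden @ w_out)
    return int(np.argmax(outputs))   # largest output gives the class label

def hidden_nodes_matching_ml(K, L):
    """Eq. (21): H that equates ANN and ML classifier degrees of freedom."""
    return L * K * (K + 3) // (2 * (K + L))

K, L = 6, 20                    # six nonthermal TM bands, 20 classes
H = hidden_nodes_matching_ml(K, L)   # about 20, as stated in the text

rng = np.random.default_rng(0)
label = forward(rng.normal(size=K),
                rng.normal(size=(K, H)),
                rng.normal(size=(H, L)))
```

For K = 6 and L = 20, Eq. (21) gives 1080/52 ≈ 20.8, so about 20 hidden nodes match the DOF of the ML classifier, confirming the figure quoted above.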

IMAGE SEGMENTATION

Image segmentation is a partitioning of an image into regions based on the similarity or dissimilarity of feature values between neighboring image pixels. It is often used in image analysis to exploit the spatial information content of the image data. Most image segmentation approaches can be placed in one of three categories (19):

1. characteristic feature thresholding or clustering,
2. boundary detection, or
3. region growing.

Characteristic feature thresholding or clustering does not exploit spatial information. The unsupervised classification (clustering) approaches discussed previously are a form of this type of image segmentation. Boundary detection exploits spatial information by examining local edges found throughout the image. For simple noise-free images, detection of edges results in straightforward boundary delineation. However, edge detection on noisy, complex images often produces missing edges and extra edges, so that the detected boundaries do not necessarily form a set of closed, connected curves surrounding connected regions. Image segmentation through region growing uses spatial information and guarantees the formation of closed, connected regions. However, it can be a computationally intensive process.


Edge Detection

Edge detection approaches generally examine pixel values in local areas of an image and flag relatively abrupt changes in pixel values as edge pixels. These edge pixels are then extended, if necessary, to form the boundaries of regions in an image segmentation.

Derivative-Based Methods for Edge Detection. The simplest approaches for finding abrupt changes in pixel values compute an approximation of the gradient at each pixel. The mathematical definition of the gradient of the continuous function f(x, y) is

∇f(x, y) = (∂f/∂x (x, y), ∂f/∂y (x, y))    (22)

where ∇f(x, y) is the gradient at position (x, y), and ∂f/∂x (x, y) and ∂f/∂y (x, y) are the first derivatives of the function f(x, y) with respect to the x and y coordinates, respectively. The gradient magnitude is

|∇f(x, y)| = sqrt[ (∂f/∂x (x, y))² + (∂f/∂y (x, y))² ]    (23)

and the gradient direction (angle) is

φ = arctan[ (∂f/∂x (x, y)) / (∂f/∂y (x, y)) ]    (24)

In order to apply the concept of a mathematical gradient to image processing, ∂f/∂x (x, y) and ∂f/∂y (x, y) must be approximated by values on a discrete lattice corresponding to the image pixel locations. Such a simple discretization is

∂f/∂x (x, y) ≈ f(x + 1, y) − f(x, y)    (25)

for edge detection in the x direction and

∂f/∂y (x, y) ≈ f(x, y + 1) − f(x, y)    (26)

for edge detection in the y direction. These functions are equivalent to convolving the image with one of the two templates in Fig. 5, where (x, y) is the upper left corner of the window.
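The discretizations of Eqs. (25) and (26) amount to simple neighbor differences. The following sketch applies them to a small synthetic image containing a vertical step edge; the pixel values are invented for illustration.

```python
import numpy as np

# A tiny test image with a vertical step edge between columns 1 and 2.
img = np.array([[1, 1, 5, 5],
                [1, 1, 5, 5],
                [1, 1, 5, 5]], dtype=float)

# Eq. (25): df/dx ~ f(x+1, y) - f(x, y), a difference between adjacent columns.
dx = img[:, 1:] - img[:, :-1]

# Eq. (26): df/dy ~ f(x, y+1) - f(x, y), a difference between adjacent rows.
dy = img[1:, :] - img[:-1, :]
```

As expected, dx responds strongly at the vertical edge while dy is zero everywhere, since the image is constant along rows in the y direction.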

∂f/∂x:  [ −1  1 ]        ∂f/∂y:  [ −1  0 ]
        [  0  0 ]                [  1  0 ]

Figure 5. Convolution templates corresponding to the discretized first derivative of the image function f(x, y) in the x and y directions. These templates can be used as image edge detectors. However, their small 2 × 2 window size makes these templates very susceptible to noise.


Figure 6. The Sobel and Prewitt edge detection templates. These 3 × 3 window templates are somewhat less susceptible to noise as compared to the 2 × 2 window templates illustrated in Fig. 5.

Sobel template:
∂f/∂x:  [ −1  0  1 ]     ∂f/∂y:  [  1  2  1 ]
        [ −2  0  2 ]             [  0  0  0 ]
        [ −1  0  1 ]             [ −1 −2 −1 ]

Prewitt template:
∂f/∂x:  [ −1  0  1 ]     ∂f/∂y:  [  1  1  1 ]
        [ −1  0  1 ]             [  0  0  0 ]
        [ −1  0  1 ]             [ −1 −1 −1 ]

A disadvantage of this and other similar [e.g., Roberts template (20)] approximations of the gradient function is that the small 2 × 2 window size makes them very susceptible to noise. Somewhat less susceptible to noise are the 3 × 3 window templates devised by Sobel [see Duda and Hart (21)] and Prewitt (22), which are illustrated in Fig. 6. The edge detection templates given in Figs. 5 and 6 are approximations of an image gradient or a discretization of the first derivative of the image function. The second derivative of the image function, called the Laplacian operator, can also be used for edge detection. Whereas the first derivative produces positive or negative peaks at an image edge, the second derivative produces a zero value at the image edge, surrounded closely by positive and negative peaks. Edge detection then reduces to detecting these "zero-crossing" values from the Laplacian operator. For a continuous function f(x, y), the Laplacian operator is defined as

∇²f(x, y) = ∂²f(x, y)/∂x² + ∂²f(x, y)/∂y²    (27)

The usual discrete approximation is

∇²f(x, y) = 4f(x, y) − f(x − 1, y) − f(x + 1, y) − f(x, y − 1) − f(x, y + 1)    (28)

This can be represented by convolving a two-dimensional image with the image template shown in Fig. 7. Note that the Laplacian operator is directionally symmetric.

Image Filtering for Edge Detection. All these methods for edge detection are intrinsically noise sensitive (some more than others) because they are based upon differences between pixels in local areas of the image. Marr and Hildreth (23) suggested the use of Gaussian filters with relatively large window sizes to remove noise in images. Combining the Gaussian filter with the Laplacian operator yields the Laplacian of

[  0 −1  0 ]
[ −1  4 −1 ]
[  0 −1  0 ]

Figure 7. The Laplacian edge detection template. This edge detection template is the discretized second derivative of the image function f(x, y). This operator produces a zero value at image edges, which is surrounded closely by positive and negative peaks.

Gaussian (LOG) function

∇²G(x, y) = (1/(πσ⁴)) [ (x² + y²)/(2σ²) − 1 ] exp[ −(x² + y²)/(2σ²) ]    (29)

where σ controls the amount of smoothing provided by the filter. Edge detection through convolving the image with the LOG function and searching for zero-crossing locations is less sensitive to noise than the previously discussed methods. Even more sophisticated filtering and edge location techniques have been devised. These techniques were unified in a paper by Shen and Castan (24), in which they derive the optimal filter for the multi-edge case.

Region Growing

Region growing is a process by which image pixels are merged with neighboring pixels to form regions, based upon a measure of similarity between pixels and regions. The basic outline of region growing follows (25–27):

1. Initialize by labeling each pixel as a separate region.
2. Merge all spatially adjacent pixels with identical feature values.
3. Calculate a similarity or dissimilarity criterion between each pair of spatially adjacent regions.
4. Merge the most similar pair of regions.
5. Stop if convergence has been achieved; otherwise, return to step 3.

Beaulieu and Goldberg (25) describe a sequential implementation of this algorithm in which step 3 is kept efficient through updating only those regions involved in or adjacent to the merge performed in step 4. Tilton (26) describes a parallel implementation of this algorithm in which multiple merges are allowed in step 4 (best merges are performed in image subregions) and the (dis)similarity criterion in step 3 is calculated in parallel for all regions. Schoenmakers (27) simultaneously merges all region pairs with minimum dissimilarity criterion value in step 4. The similarity or dissimilarity criterion employed in step 3 should be tailored to the type of image being processed. A simple criterion that has been used effectively with remotely sensed data is the Euclidean spectral distance (27), as in Table 3.
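The region-growing loop described above can be sketched as follows. For brevity this illustration uses a one-dimensional "image" (so spatial adjacency is simply left/right neighbors), Euclidean distance between region means as the dissimilarity criterion, a fixed target number of regions as the stopping rule, and omits step 2 (merging identical neighbors); the pixel values are invented.

```python
import numpy as np

def grow_regions(values, n_target):
    # Step 1: every pixel starts as its own region.
    regions = [[v] for v in values]
    while len(regions) > n_target:                        # step 5: stopping rule
        means = [float(np.mean(r)) for r in regions]
        # Step 3: dissimilarity between spatially adjacent regions.
        dists = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
        i = int(np.argmin(dists))                         # step 4: most similar pair
        regions[i:i + 2] = [regions[i] + regions[i + 1]]  # merge them
    return regions

regions = grow_regions([1.0, 1.1, 0.9, 5.0, 5.2], n_target=2)
```

Because only spatially adjacent regions are ever merged, the result is guaranteed to consist of closed, connected regions, as the text notes.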
Other criteria that have been employed are the Normalized Vector Distance (28), criteria based on minimizing the mean-square error or change in image entropy (29), and a criterion based on minimizing a polynomial approximation error (25). Clear-cut convergence criteria have not been developed for region growing segmentation. Simple criteria that are satisfactory in some applications are the number of regions or a ratio of the number of regions to the total number of image pixels. Direct thresholding on the dissimilarity criterion value (i.e., performing no merges between regions with a dissimilarity criterion value greater than a threshold) has also been used, with mixed results. More satisfactory results have been obtained by defining convergence as the iteration prior to the iteration at which the maximum change in dissimilarity criterion value occurred.

Extraction and Classification of Homogeneous Objects

An image segmentation followed by a maximum likelihood classification is the basic idea behind the Extraction and Classification of Homogeneous Objects (ECHO) classifier (30,31). The segmentation scheme used by ECHO was designed for speed on the computers of the mid-1970s and could be replaced by a segmentation approach of more recent vintage. However, the formalization of the maximum likelihood classification for image regions (objects) is still appropriate. For single pixels, the maximum likelihood decision rule is

Decide X is in ωi if and only if
p(X|ωi) ≥ p(X|ωj) for all j = 1, 2, . . ., k

(30)

The rule is just Eq. (13) with p(ωj) = 1. Suppose that an image region consists of m pixels. To apply the maximum likelihood decision rule to this region, X must be redefined to include the entire region, that is, X = {X1, X2, . . ., Xm}. The evaluation of p(X|ωi), where X is redefined as a collection of pixels, is very difficult. However, this collection of pixels belongs to a homogeneous region. In this case, it is reasonable to assume that the pixels are statistically independent. This assumption allows the evaluation of p(X|ωi) as the product

p(X|ωi) = p(X1, X2, . . ., Xm|ωi) = ∏_{j=1}^{m} p(Xj|ωi)    (31)
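The region-level decision implied by Eq. (31) can be sketched directly: under the independence assumption, the region likelihood is the product of the per-pixel likelihoods, evaluated here in log form for numerical safety. The univariate Gaussian class models and pixel values are invented for illustration.

```python
import numpy as np

def log_gauss(x, mean, var):
    """Log of a univariate Gaussian density, log p(X_j | w_i)."""
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mean) ** 2 / var

def classify_region(pixels, class_params):
    """Pick the class maximizing sum_j log p(X_j | w_i), i.e. the
    log of the product in Eq. (31)."""
    scores = [sum(log_gauss(x, m, v) for x in pixels)
              for m, v in class_params]
    return int(np.argmax(scores))

region = [0.8, 1.1, 0.95, 1.05]         # pixels of one homogeneous object
params = [(0.0, 0.3), (1.0, 0.3)]       # (mean, variance) for each class
label = classify_region(region, params)
```

Summing log likelihoods over the whole region, rather than deciding pixel by pixel, is what gives the object-based classifier its noise resistance.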

Split and Merge

Seeking more efficient methods for region-based image segmentation has led to the development of split-and-merge approaches. Here the image is repeatedly subdivided until each resulting region has a minimum homogeneity. After the region-splitting process converges, the regions are grown as previously described. This approach is more efficient when large homogeneous regions are present. However, some segmentation detail may be lost. See Cross et al. (32) for an example of split-and-merge image segmentation.

Hybrids of Edge Detection and Region Growing

A number of approaches have been offered for combining edge detection and region growing. Pavlidis and Liow (33) perform a split-and-merge segmentation such that an oversegmented result is produced and then eliminate or modify region boundaries based on general criteria, including the contrast between the regions, region boundary smoothness, and the variation of the image gradient along the boundary. LeMoigne and Tilton (34) use region growing to generate a hierarchical set of image segmentations and make local selections of the best level of segmentation detail based on edges produced by an edge detector.

Hybrids of Spectral Clustering and Region Growing

Tilton (35) has recently demonstrated the potential of a hybrid of spectral clustering and region growing. In this approach, spectral clustering is performed in between each region growing iteration. The spectral clustering is constrained to merge regions that are at least as similar as the last pair of regions merged by region growing, and is not allowed to merge any spatially adjacent regions. This approach to image segmentation is very computationally intensive. However, practical processing times have been achieved by a recursive implementation on a cluster of 64 Pentium Pro PCs configured as a Beowulf-class parallel computer (35,36).

HYPERSPECTRAL DATA

Hyperspectral Data Normalization

Hyperspectral imagery contains significantly more spectral information than does multispectral imagery such as that from Landsat TM. Imaging spectrometers produce hundreds of spectral band images, with narrow (typically 10 nm or less) contiguous bands across a broad wavelength range (e.g., 400–2400 nm). Also, such new sensor systems are capable of generating more precise data radiometrically, with signal-to-noise ratios justifying 10 or more bit data systems (1024 or more shades of gray per band), as compared to 6 or 8 bit precision in previous systems. This potentially high precision requires concomitant substantially improved calibration for atmospheric, solar, and topographic effects, particularly if comparisons are to be made to laboratory or field reflectance spectra for classification. To convert remote sensing data to reflectance, one must first correct for the additive and multiplicative factors in Eq. (1).
Even though in some circumstances (e.g., multitemporal analysis) this correction may be useful for all spectral data, it is especially critical for hyperspectral imagery when the intention is to use narrow band spectral absorption features in a deterministic sense because

1. narrow atmospheric absorption bands have a severe effect on corresponding sensor bands, and
2. some algorithms for physical constituent estimation, either in the atmosphere or on the Earth's surface, require precise measurements of absorption band locations, widths, and depths.

The computational burden for calibration is, of course, much larger for hyperspectral imagery than it is for multispectral imagery. In one effective calibration technique, the empirical line method (37), the sensor values are linearly correlated to field reflectance measurements. In this single process, all the coefficients in Eq. (1) are determined except for topographic shading. Obtaining field reflectance measurements is difficult and expensive at best, so a number of indirect within-scene approaches have also been used.
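The empirical line method can be sketched as a per-band linear fit between sensor radiance and field-measured reflectance, which is then inverted for the whole scene. All numbers below are invented placeholders; real use requires measured reference targets in the scene, and a full implementation would repeat the fit for every band.

```python
import numpy as np

# Radiance and known ground reflectance for a few calibration targets (one band).
radiance = np.array([50.0, 120.0, 200.0])
reflectance = np.array([0.05, 0.30, 0.58])

# Least-squares line: radiance = gain * reflectance + offset.
gain, offset = np.polyfit(reflectance, radiance, 1)

def to_reflectance(L):
    """Invert the fitted relation to turn at-sensor radiance into reflectance."""
    return (np.asarray(L) - offset) / gain

rho = to_reflectance([50.0, 200.0])
```

The fitted gain and offset play the role of the multiplicative and additive factors of Eq. (1) (apart from topographic shading, which this method does not recover).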


An example of the use of within-scene information to achieve a partial calibration of hyperspectral imagery is termed flat-fielding (38). An object that can be assumed spectrally uniform and with high radiance ("white" in a visual sense) must be located within the scene. Its spectrum as seen by the sensor contains the atmospheric transmittance terms of Eq. (1). If the data are first corrected for the haze level, or if it can be ignored (at longer wavelengths such as in the NIR and SWIR), then a division of each pixel's spectrum by the bright reference object's spectrum will tend to cancel the solar irradiance and atmospheric transmittance factors in Eq. (1). An example is shown in Fig. 8. The data are from an Airborne Visible-InfraRed Imaging Spectrometer (AVIRIS) flight over Cuprite, Nevada, in 1990. The mineral kaolinite contains a doublet absorption feature centered at about 2180 nm, which is masked in the at-sensor radiance data by the downward trend of the solar irradiance. An atmospheric carbon dioxide absorption feature can also be seen at 2060 nm. After the flat-field operation, the relative spectral reflectance closely matches a sample reflectance curve in shape, including no atmospheric absorption features. The reflectance magnitude does not agree because the flat-field correction does not correct for topographic shading [the cosine term in Eq. (1)]. After the hyperspectral data are normalized in this way, it is possible to characterize the surface absorption features by such parameters as their location, depth (relative to a continuum, which is a hypothetical curve with no absorption features), and width (Fig. 9). These features can be used to distinguish one mineral (or any other material with narrow absorption features) from another (39). The feature extraction algorithms first detect local minima in the spectral data using

Figure 8. AVIRIS radiance data for the mineral kaolinite at Cuprite, Nevada, before and after flat-field normalization, compared to spectral reflectance data from a mineral reflectance library (sample designated CM9). It is evident that the normalization process produces a spectral signal from the image radiance data that more closely matches the shape of the spectral reflectance curve. If classification of image radiance data is performed with library spectral reflectance as the reference signal, either an empirical normalization of this type or a difficult calibration of the sensor radiance data to reflectance would be required.

Figure 9. Definition of spectral absorption features (location, depth relative to the continuum, and width) in hyperspectral data. After an absorption band is detected in the spectral data, these features can be measured and compared with the same features derived either from labeled training pixels within the image itself or from library spectral reflectance data. For surface materials with characteristic absorption bands, this approach can considerably reduce the amount of computation required for classification of hyperspectral imagery.

operations such as a spectral derivative and then calculate the depth and width of those minima. Zero crossings in second-order derivatives and a spectral scale-space can also be used to detect and measure significant spectral absorption features (40).

Classification and Analysis of Hyperspectral Data

Data in higher-dimensional spaces have substantially different characteristics from data in three-dimensional space, such that the ordinary rules of geometry of three-dimensional space do not apply. For example, two class distributions can lie right on top of one another, in the sense of having the same mean values, and yet they may be perfectly separable by a well-designed classifier. Examples of these differences of data in high-dimensional space follow (41). As dimensionality increases,

1. The volume of a hypercube concentrates in the corners.
2. The volume of a hypersphere concentrates in an outside shell.
3. The diagonals are nearly orthogonal to all coordinate axes.

When data sets contain a large number of spectral bands or features, more than 10 or so, the ability to discriminate between classes with higher accuracy and to derive greater information detail increases substantially, but some additional aspects become significant in the data analysis process in order to achieve this enhanced potential. For example, as the data dimensionality increases, the number of samples necessary to define the class distributions to adequate precision increases very rapidly. Furthermore, both first- and second-order statistics are significant in achieving optimal separability of classes. The fact that second-order statistics are significant, in addition to first-order statistics, tends to exacerbate the need for a larger number of training samples. For example, if one were to attempt to analyze a 200-dimensional data set at full dimensionality using conventional estimation methods, many thousands of samples may be necessary in order to obtain the full benefit of the 200 bands. Rarely would this number of samples be available.

Hyperspectral Feature Extraction

Quantitative feature extraction methods are especially important because of the large number of spectral bands on the one hand and the significantly enhanced amount and detail of information that is potentially extractable from such data on the other. Given the large number of spectral bands in such data, feature selection, choosing the best subset of bands of size m from a complete set of N bands, quickly becomes intractable or impossible. For example, to choose the best subset of bands of size 10 out of 100 bands, there are more than 1.7 × 10¹³ possible subsets of size 10 that must be examined if the optimum set is to be determined. It is possible to avoid working directly at such high dimensionality without a penalty in classifier performance and with a substantial improvement in computational efficiency. This is the case because, as implied by the preceding geometric characteristics, the volume in a hyperdimensional feature space increases very rapidly as the dimensionality increases. A result of this is that, for remote sensing problems, such a space is mostly empty, and the important data structure in any given problem will exist in a subspace. The particular subspace is very problem-dependent and is different for every case. Thus, if one can determine which subspace is needed for the problem at hand, one can have available all the separability that the high-dimensional data can provide, but with reduced need for training set size and reduced amounts of computation. The problem then becomes focused on finding the correct subspace containing this key data structure. Feature extraction algorithms may be used for this purpose.
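The combinatorial blow-up quoted above is easy to verify: exhaustive feature selection of the best m of N bands requires examining C(N, m) subsets.

```python
import math

def n_subsets(N, m):
    """Number of band subsets an exhaustive feature search must examine."""
    return math.comb(N, m)

# Choosing the best 10 of 100 bands: more than 1.7e13 candidate subsets.
count = n_subsets(100, 10)
```

This count grows so quickly with N and m that exhaustive search is hopeless for hyperspectral data, which is why the text turns to feature extraction (subspace projection) instead.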
The CCT introduced earlier is one of several approaches that are suitable for extracting features from hyperspectral data. Each approach tends to have unique advantages and some disadvantages. The CCT, for example, is a relatively fast calculation. However, it does not perform well when the classes to be discriminated have only a small difference in their mean values, and it produces predictably useful features only up to one less than the number of classes to be separated. It has another disadvantage common to many such algorithms, namely that it depends on parameters, class means, and covariances, which must be estimated from the training samples at full dimensionality. Thus, it may produce a suboptimal feature subspace resulting from the imprecise estimation of the class parameters. Another possible scheme is the Decision Boundary Feature Extraction algorithm (42). This scheme does not have the disadvantages of the CCT. However, it tends to be a lengthy calculation because it is based directly upon the training samples instead of the class statistics derived from them. A very fortunate characteristic of high-dimensional spaces is that, for most high-dimensional data sets, lower-dimensional linear projections tend to be normally distributed or to follow a combination of normal distributions. This tends to add to the credibility of using a Gaussian model for the classification process, reduces the need to consider nonparametric schemes, and reduces the need to consider the use of higher-order statistics.
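Discriminant subspace extraction of the general kind discussed above can be sketched with a generic Fisher-style projection: the generalized eigenvectors of the within-class and between-class scatter matrices. This is a hedged, illustrative stand-in, not necessarily the article's exact CCT; the two-class data below are synthetic, and the small ridge term on the within-class scatter is an implementation convenience.

```python
import numpy as np

def discriminant_subspace(samples, labels, n_features):
    """Columns of W span a class-separating subspace: the leading
    eigenvectors of inv(within-scatter) @ between-scatter."""
    X = np.asarray(samples, dtype=float)
    y = np.asarray(labels)
    overall = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # within-class scatter
        diff = (mc - overall)[:, None]
        Sb += len(Xc) * (diff @ diff.T)        # between-class scatter
    # Small ridge keeps Sw invertible when training samples are scarce.
    vals, vecs = np.linalg.eig(np.linalg.inv(Sw + 1e-6 * np.eye(d)) @ Sb)
    order = np.argsort(vals.real)[::-1]        # largest eigenvalues first
    return vecs.real[:, order[:n_features]]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
W = discriminant_subspace(X, y, n_features=1)  # 5 bands -> 1 feature
```

Consistent with the text's remark about the CCT, a projection of this kind yields at most one fewer useful feature than the number of classes (here, two classes give one discriminant direction), and its quality depends on how well the class means and covariances can be estimated at full dimensionality.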


Thus, based upon the algorithms referred to previously, one can expect to do a very effective analysis of high-dimensional multispectral data and, in a practical circumstance, achieve a near to optimal extraction of desired information with performance substantially enhanced over that possible with more conventional multispectral data.

BIBLIOGRAPHY

1. J. C. Tilton, Image segmentation by iterative parallel region growing with applications to data compression and image analysis, Proc. 2nd Symp. Frontiers Massively Parallel Computat., pp. 357–360, Fairfax, VA, 1988.
2. R. A. Schowengerdt, Remote Sensing—Models and Methods for Image Processing, 2nd ed., Chestnut Hill, MA: Academic Press, 1997.
3. A. Berk, L. S. Bernstein, and D. C. Robertson, MODTRAN: A Moderate Resolution Model for LOWTRAN 7, U.S. Air Force Geophysics Laboratory, No. GL-TR-89-0122, 1989.
4. A. R. Huete, A soil adjusted vegetation index (SAVI), Remote Sens. Environ., 25: 295–309, 1988.
5. J. A. Richards, Remote Sensing Digital Image Analysis: An Introduction, 2nd ed., Berlin: Springer-Verlag, 1993.
6. K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed., Boston: Academic Press, 1990.
7. G. W. Stewart, Introduction to Matrix Computations, New York: Academic Press, 1973.
8. R. J. Kauth and G. S. Thomas, The Tasselled Cap—A graphic description of spectral-temporal development of agricultural crops as seen by Landsat, Proc. 2nd Int. Symp. Remotely Sensed Data, 4B: 41–51, Purdue University, West Lafayette, IN, 1976.
9. D. R. Thompson and O. A. Whemanen, Using Landsat digital data to detect moisture stress in corn-soybean growing regions, Photogrammetric Eng. Remote Sens., 46: 1087–1093, 1980.
10. E. P. Crist and R. C. Cicone, A physically-based transformation of Thematic Mapper data—the TM Tasseled Cap, IEEE Trans. Geosci. Remote Sens., GE-22: 256–263, 1984.
11. E. P. Crist, R. Laurin, and R. C. Cicone, Vegetation and soils information contained in transformed Thematic Mapper data, Proc. 1986 Int. Geosci. Remote Sens. Symp., pp. 1465–1470, Zurich, 1986.
12. P. H. Swain and S. M. Davis, Remote Sensing: The Quantitative Approach, New York: McGraw-Hill, 1978.
13. A. Papoulis, Probability, Random Variables, and Stochastic Processes, Tokyo: McGraw-Hill, 1984.
14. J. D. Paola and R. A. Schowengerdt, A review and analysis of backpropagation neural networks for classification of remotely-sensed multi-spectral imagery, Int. J. Remote Sens., 16: 3033–3058, 1995.
15. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning internal representations by error propagation, in D. E. Rumelhart and J. L. McClelland, eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. I, pp. 318–362, Cambridge, MA: MIT Press, 1986.
16. R. P. Lippmann, An introduction to computing with neural nets, IEEE ASSP Magazine, 4 (2): 4–22, 1987.
17. J. D. Paola and R. A. Schowengerdt, The effect of neural network structure on a multispectral land-use classification, Photogrammetric Eng. Remote Sens., 63: 535–544, 1997.
18. J. D. Paola and R. A. Schowengerdt, A detailed comparison of backpropagation neural network and maximum-likelihood classifiers for urban land use classification, IEEE Trans. Geosci. Remote Sens., 33: 981–996, 1995.



JAMES C. TILTON


NASA’s Goddard Space Flight Center

DAVID LANDGREBE


Purdue University

ROBERT A. SCHOWENGERDT
University of Arizona


INFORMATION PROCESSING, OPTICAL. See OPTICAL NEURAL NETS.


Wiley Encyclopedia of Electrical and Electronics Engineering. Meteorological Radar. Standard Article. Richard J. Doviak and Dušan S. Zrnić, National Severe Storms Laboratory and The University of Oklahoma, Norman, OK. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3608. Article Online Posting Date: December 27, 1999.


Abstract: The sections in this article are Fundamentals of Meteorological Doppler Radar; Reflectivity and Velocity Fields of Precipitation; Rain, Wind, and Observations of Severe Weather; Wind and Temperature Profiles in Clear Air; and Trends and Future Technology.



METEOROLOGICAL RADAR

The origins of meteorological radar (RAdio Detection And Ranging) can be traced to the 1920s, when the first echoes from the ionosphere were observed with dekametric (tens of meters) wavelength radars. However, the greatest advances in radar technology were driven by the military's need to detect aircraft and occurred in the years leading to and during WWII. (A brief review of the earliest meteorological radar development is given in Ref. 1; a detailed review of the development during and after WWII is given in Ref. 2.) Precipitation echoes were observed almost immediately with the deployment of the first decimeter (≈0.1 m) wavelength military radars in the early 1940s. Thus, the earliest meteorological radars used to observe weather (i.e., weather radars) were manufactured for military purposes. The military radar's primary mission is to detect, resolve, and track discrete targets such as airplanes coming from a particular direction, and to direct weapons for interception. Although weather radars also detect aircraft, their primary objective is to map the intensity of precipitation (e.g., rain or hail), which can be distributed over the entire hemisphere above the radar. Each hydrometeor's echo is very weak; nevertheless, the extremely large number of hydrometeors within the radar's beam returns a continuum of strong echoes as the transmitted pulse propagates through the field of precipitation. Thus, the weather radar's objective is to estimate and map the fields of reflectivity and radial velocities of hydrometeors; from these two fields meteorologists derive the fall rate and accumulation of precipitation and warnings of storm hazards. The weather radar owes its success to the fact that centimetric waves penetrate extensive regions of precipitation (e.g., hurricanes) and reveal, like an X-ray photograph, the morphology of weather systems. The first U.S. national network of weather radars, designed to map the reflectivity fields of storms and to track them, was built in the mid-1950s and operated at 10 cm wavelengths. 1988 marked the deployment of the first network of Doppler weather radars (i.e., the WSR-88D), which, in addition to mapping reflectivity, have the capability to map radial (Doppler) velocity fields. This latter capability proved to be very helpful in identifying those severe storm cells that harbor tornadoes and damaging winds. If a hydrometeor's diameter is smaller than a tenth of the radar's wavelength, its echo strength is inversely proportional to the fourth power of the wavelength. Thus, shorter wavelength (i.e., millimeter) radars are usually the choice to detect clouds. Cloud particle diameters are less than 100 μm, and attenuation due to cloud particles is not overwhelming. However, if clouds bear rain of moderate intensity, precipitation attenuation can be severe (e.g., at a wavelength of 6.2 mm and a rain rate of 10 mm h⁻¹, attenuation can be as much as 6 dB km⁻¹) (3). Spaceborne meteorological radars also operate in the millimetric band of wavelengths in order to obtain acceptable angular resolution with the reasonable antenna diameters required to resolve clouds at long ranges (4).
Airborne weather radars operate at short wavelengths of approximately 3 and 5 cm; these are used to avoid severe storms that produce hazardous wind shear and extreme amounts of rainwater (which can extinguish jet engines), and to study weather phenomena (5). The short wavelengths are used to obtain acceptable angular resolution with the small antennas carried on aircraft; however, short waves are strongly attenuated as they propagate into heavy precipitation. At longer wavelengths (e.g., >10 cm), only hail and heavy rain significantly attenuate the radiation. Weather radars are commonly associated with the mapping of precipitation intensity. Nevertheless, the earliest of what we now call meteorological radars detected echoes from the nonprecipitating troposphere in the late 1930s (1,2). Scientists determined that these echoes are reflected from the dielectric boundaries of different air masses (1,6). The refractive index n of air is a function of temperature and humidity, and spatial irregularities in these parameters, caused by turbulence, were found to be sufficiently strong to produce detectable echoes.
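As a quick illustration of how the attenuation rates quoted above compound along a propagation path, the sketch below reuses the 6 dB km⁻¹ one-way figure given for a 6.2 mm wavelength in 10 mm h⁻¹ rain; the 5 km path depth is a made-up example, not a value from the article.

```python
# Two-way path-integrated attenuation for an echo returning from a depth d
# into rain, given a one-way specific attenuation k (dB/km). The 6 dB/km
# rate is the figure quoted in the text; the 5-km path is hypothetical.
def two_way_loss_db(k_db_per_km, depth_km):
    """Total attenuation (dB) over the out-and-back path through the rain."""
    return 2.0 * k_db_per_km * depth_km

loss = two_way_loss_db(6.0, 5.0)  # 60 dB: the echo power is reduced a millionfold
```

A 60 dB two-way loss is why such short wavelengths are reserved for cloud studies rather than for surveying deep precipitation.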

FUNDAMENTALS OF METEOROLOGICAL DOPPLER RADAR

The basic principles for meteorological radars are the same as for any radar that transmits a periodic train of short-duration microwave pulses (with period Ts, called the pulse repetition time (PRT), and duration τ) and measures the delay between the time of emission of the transmitted pulse and the time of reception of any of its echoes. The PRT (i.e., Ts) is typically of the order of milliseconds, and pulsewidths are of the order of microseconds. The radar has Doppler capability if it can measure the change in frequency or wavelength between the backscattered and transmitted signals. The Doppler radar's microwave oscillator (Fig. 1) generates a continuous-wave sinusoidal signal, which is converted to a sequence of microwave pulses by the pulse modulator. Therefore, the sinusoids in each microwave pulse are coherent with those generated by the microwave oscillator; that is, the crests and valleys of the waves in the pulse bear a fixed or known relation to the crests and valleys of the waves emitted by the microwave oscillator. The microwave pulses are then amplified by a high-power amplifier (a klystron is used in the WSR-88D) to produce about a megawatt of peak power. The propagating pulse has a spatial extent of cτ and travels at the speed of light c along the beam (the beamwidth θ1 is the one-way, 3 dB width of the beam, and is of the order of 1 degree). The transmit/receive (T/R) switch connects the transmitter to the antenna (an 8.53 m diameter parabolic reflector is used for the WSR-88D) during τ, and the receiver to the antenna during the interval Ts − τ. The echoes are mixed in the synchronous detectors with a pair of phase-quadrature signals (i.e., the sine, 90°, and cosine, 0°, outputs from the oscillator). The pair of synchronous detectors and filter amplifiers shifts the carrier frequency from the microwave band to zero frequency in one step for the homodyne radar and allows measurement of both positive and negative Doppler shifts (most practical radars use a two-step process involving an intermediate frequency).

Figure 1. Simplified block diagram of a homodyne radar (no intermediate-frequency circuits are used to improve performance) showing the essential components needed to illustrate the basic principles of a meteorological Doppler radar.

A hydrometeor intercepts the transmitted pulse and scatters a portion of its energy back to the antenna, and the echo voltage

V(r, t) = A exp{j[2πf(t − 2r/c) + ψ]} U(t − 2r/c)   (1)

at the input to the synchronous detectors is a replica of the signal transmitted; A is the echo amplitude, which depends on the hydrometeor's range r and its backscattering cross section σb, and 2πf(t − 2r/c) + ψ is the echo phase. The microwave carrier frequency is f, t is the time after emission of the transmitted pulse, and ψ is the sum of the phase shifts introduced by the radar system and by the scatterer; these shifts are usually independent of time. The function U locates the echo; it is one when its argument is between zero and τ, and zero otherwise. The output of one synchronous detector is called the in-phase (I) voltage, and the other is called the quadrature-phase (Q) voltage (Fig. 1); these are the real and imaginary parts of the echo's complex voltage [Eq. (1)] after its carrier frequency f is shifted to zero. Thus,

I(t, r) = A cos(ψe) U(t − 2r/c),   Q(t, r) = A sin(ψe) U(t − 2r/c)   (2)

where

ψe = −4πr/λ + ψ   (3)

is the echo phase, and λ = c/f is the wavelength of the transmitted microwave pulse. The time rate of the echo phase change is related to the scatterer's radial (Doppler) velocity ν,

dψe/dt = −(4π/λ)(dr/dt) = −(4π/λ)ν = ωd   (4)

where ωd is the Doppler shift (in radians per second). For typical transmitted pulse widths (i.e., τ ≈ 10⁻⁶ s) and hydrometeor velocities (tens of m s⁻¹), the changes in phase are extremely small during the time that U(t − 2r/c) is nonzero. Therefore, the echo phase change is measured over the longer PRT period (Ts ≈ 10⁻³ s) and, consequently, the pulse Doppler weather radar is both an amplitude- and phase-sampling system. Samples are taken at τs + mTs, where τs is the time delay between a transmitted pulse and an echo, and m is an integer; τs is a continuous time scale that always lies in the interval 0 ≤ τs ≤ Ts, and mTs is called sample time, which increments in Ts steps. Because the transmissions are periodic, echoes repeat, and thus there is no way to determine which transmitted pulse produced which echo (Fig. 2). That is, because τs is measured with respect to the most recent transmitted pulse and has values < Ts, the apparent range cτs/2 is always less than the unambiguous range ra = cTs/2. However, the true range r can be cτs/2 + (Nt − 1)ra, where Nt is the trip number, and Nt − 1 designates the number of cTs/2 intervals that need to be

Figure 2. Range-ambiguous echoes. The nth transmitted pulse and its echoes are crosshatched. This example assumes that the larger echo, at delay τs1, is unambiguous in range, but the smaller echo, at delay τs2, is ambiguous. This smaller second-trip echo, which has a true range time delay Ts + τs2, is due to the (n − 1)th transmitted pulse.

added to the apparent range to obtain r. There is range ambiguity if r ≥ ra. The I and Q components of echoes from stationary and moving scatterers are shown in Fig. 3 for three successive transmitted pulses. The echoes from the moving scatterer clearly exhibit a systematic change, from one mTs period to the next, caused by the scatterer's Doppler velocity, whereas there is no change in the echoes from stationary scatterers. The echo phase ψe = tan⁻¹(Q/I) is measured, and its change over Ts is proportional to the Doppler shift given by Eq. (4). The periodic transmitted pulse sequence also introduces velocity ambiguities. A set of ψe samples cannot be related to one unique Doppler frequency. As Fig. 4 shows, it is not possible to determine whether V(t) rotated clockwise or counterclockwise, or how many times it circled the origin, during the interval Ts. Therefore, any of the frequencies Δψe/Ts + 2πp/Ts (where p is a ± integer, and −π < Δψe ≤ π) could be correct. All such Doppler frequencies are called aliases, and fN = ωN/2π = 1/(2Ts) is the Nyquist frequency (in units of hertz). All Doppler frequencies between ±fN are the principal aliases, and frequencies higher or lower than ±fN are ambiguous with those between ±fN. Thus, hydrometeor radial velocities must lie within the unambiguous velocity limits, νa = ±λ/(4Ts), to avoid ambiguity. Signal design and processing methods have been advanced to deal with range-velocity ambiguities (1).
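The ambiguity limits above follow directly from Ts and λ. The small sketch below uses hypothetical radar parameters (a 1 ms PRT and a 10 cm wavelength, not values specified in the article) to show the arithmetic:

```python
C = 3.0e8  # speed of light (m/s)

def unambiguous_range(Ts):
    """ra = c*Ts/2 (m): the maximum range measurable without ambiguity."""
    return C * Ts / 2.0

def unambiguous_velocity(wavelength, Ts):
    """va = lambda/(4*Ts) (m/s): radial velocities must lie in [-va, +va]."""
    return wavelength / (4.0 * Ts)

def alias(v_true, va):
    """Fold a true radial velocity into the principal interval [-va, +va)."""
    return (v_true + va) % (2.0 * va) - va

Ts, lam = 1.0e-3, 0.10              # illustrative: 1-ms PRT, 10-cm wavelength
ra = unambiguous_range(Ts)          # 150 km
va = unambiguous_velocity(lam, Ts)  # 25 m/s
v_apparent = alias(60.0, va)        # a 60 m/s velocity displays as 10 m/s
```

Note that the product ra·va = cλ/8 is fixed: lengthening the PRT to extend the unambiguous range shrinks the unambiguous velocity interval, which is why the signal design methods cited above are needed.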

Figure 3. I(τs) and Q(τs) signal traces vs τs for three successive sampling intervals Ts have been superimposed to show the relative change of I, Q for both stationary and moving scatterers.

Figure 4. A phasor diagram used to depict frequency aliasing. The phase of the signal sample V(t) could have changed by Δψe over a period Ts.

REFLECTIVITY AND VELOCITY FIELDS OF PRECIPITATION

A weather signal is a composite of echoes from a continuous distribution of hydrometeors. After a delay (the round-trip time of propagation between the radar and the near boundary of the volume of precipitation), echoes are continuously received (Fig. 5) during a time interval equal to twice the time it takes the microwave pulse to propagate across the volume containing the hydrometeors. Because one cannot resolve each hydrometeor's echo, meteorological radar circuits sample the I and Q signals at uniformly spaced intervals along τs and convert the analog I, Q voltages to digital numbers. For each sample there is a resolution volume V6 (i.e., the volume enclosed by the surface on which the angular and range-weighting functions (1) are 6 dB below their peak values) along the beam within which hydrometeors contribute significantly to the sample. Each scatterer within V6 returns an echo and, depending on its precise position to within a wavelength, its corresponding I or Q can have any value between the maximum positive and negative excursions. Echoes from the myriad of hydrometeors constructively or destructively (depending on their phases) interfere with each other to produce the composite weather signal voltage V(mTs, τs) = I(mTs, τs) + jQ(mTs, τs) for the mth Ts interval. The random sizes and locations of hydrometeors cause the I and Q weather signals to be random functions of τs. However, these random signals have a correlation time τc (Fig. 5) dependent on the pulsewidth and the receiver's bandwidth (1). Thus, V(mTs, τs) has noise-like fluctuations along τs even if the scatterers' time-averaged density is spatially uniform. The sequence of M (m = 1 → M) samples at any τs is analyzed to determine the motion and reflectivity of hydrometeors in the corresponding V6. The dashed line in Fig. 5 depicts a possible sample-time (mTs) dependence of I(mTs, τs1) for hydrometeors having a mean motion that produces a slowly changing sample amplitude along mTs. The rate of change of I and Q vs mTs is determined by the radial motion of the scatterers. Because of turbulence, scatterers also move relative to one another and, therefore, the I, Q samples at any τs change randomly with a correlation time along mTs dependent on the relative motion of the scatterers. For example, if turbulence displaces the relative positions of scatterers by a significant fraction of a wavelength during the Ts interval, the weather signal at τs will be uncorrelated from sample to sample, and Doppler velocity measurements will not be possible; Doppler measurements require a relatively short Ts. The random fluctuations in the I, Q samples have a Gaussian probability distribution with zero mean; the signal power I² + Q² is exponentially distributed (i.e., the weakest power is most likely to occur). Using an analysis of the V(mTs, τs) sample sequence along mTs, the meteorological radar's signal processor estimates both the average sample power and the power-weighted velocity of the scatterers accurately.

Figure 5. Idealized traces for I(τs) of weather signals from a dense distribution of scatterers. A trace represents V(mTs, τs) vs τs for the mth Ts interval. Instantaneous samples are taken at sample times τs1, τs2, etc. The signal correlation time along τs is τc. Samples at fixed τs are acquired at Ts intervals and are used to compute the Doppler spectrum for scatterers located about the range cτs/2.

The samples' average power P is

P(ro) = ∫ η(r) I(ro, r) dV   (5)

in which the reflectivity η, the sum of the hydrometeor backscattering cross sections σb per unit volume, is

η(r) = ∫₀^∞ σb(D) N(D, r) dD   (6)

The factor I(ro, r) in Eq. (5) is a composite angular and range-weighting function; its center at ro depends on the beam direction as well as on τs. The values of I(ro, r) at r depend on the antenna pattern, the transmitted pulse shape and width, and the receiver's frequency or impulse transfer function (1). In general, I(ro, r) has significant values only within V6; N(D), the particle size distribution, determines the expected number density of hydrometeors with equivolume diameters between D and D + dD. The meteorological radar equation,

P(ro) = [Pt g² gs λ² η c τ / ((4π)³ ro² l² lr)] [π θ1² / (16 ln 2)]   (7)

is used to determine η from measurements of P, wherein Pt is the peak transmitted power, g is the gain of the antenna (a larger antenna directs more power density along the beam and hence has larger gain), and gs is the gain of the receiver (i.e., the net sum of losses and gains in the T/R switch, the synchronous detectors, and the filter/amplifiers in Fig. 1). Here ro ≈ cτs/2, l is the one-way atmospheric transmission loss, and lr is the loss due to the receiver's finite bandwidth (1). Equation (7) is valid for transmitted radiation having a Gaussian dependence on distance from the beam axis and for uniform reflectivity fields. Radar meteorologists have related the reflectivity η, which is general radar terminology for the backscattering cross section per unit volume, to a reflectivity factor Z, which has meteorological significance. If hydrometeors are spherical and have diameters much smaller than λ (i.e., the Rayleigh approximation), the reflectivity factor,

Z = ∫₀^∞ N(D, r) D⁶ dD   (8)

is related to η by

η = (π⁵/λ⁴) |Km|² Z   (9)

where Km = (m² − 1)/(m² + 2), m = n(1 − jκ) is the hydrometeor's complex refractive index, and κ is the attenuation index (1). The relation between the radial velocity ν(r) at a point r and the power-weighted Doppler velocity ν(ro) is

ν(ro) = (1/P(ro)) ∫ ν(r) η(r) I(ro, r) dV   (10)

It can be shown (1) that ν(ro) is the first moment of the Doppler spectrum. An example of a Doppler spectrum for echoes

from a tornado is plotted in Fig. 6. This power spectrum is the magnitude squared of the spectral coefficients obtained from the discrete Fourier transform of M = 128 V(mTs, τs) samples at a τs corresponding to a range of 35 km. The obscuration of the maximum velocity (i.e., ≈60 m s⁻¹) of the scatterers in this tornado, and the power of stronger spectral coefficients leaked through the spectral sidelobes of the rectangular window (i.e., uniform weighting function), are evident. The von Hann weighting function reduces this leakage and better defines both the true signal spectrum and the maximum velocity. Where spectral leakage is not significant (e.g., samples weighted with the von Hann window function), the spectral coefficients have an exponential probability distribution and hence there is a large scatter in their power. Thus, a 5-point running average is plotted to show the spectrum more clearly. The spectral density of the receiver's noise is also obscured by the leakage of power from the larger spectral components if the voltage samples are uniformly weighted.

Figure 6. The spectral estimates (denoted by ×) of the Doppler spectrum of a small tornado that touched down on 20 May 1977 in Del City, Oklahoma. V6 is located at azimuth 6.1°, elevation 3.1°, altitude 1.9 km. "Rect" signifies the spectrum for weather signal samples weighted by a rectangular window function (i.e., uniform weight), whereas "Hann" signifies samples weighted by a von Hann window function (1).

RAIN, WIND, AND OBSERVATIONS OF SEVERE WEATHER

Fields of the reflectivity factor Z and the power-weighted mean velocities ν(ro) are displayed on color TV monitors to depict the morphology of storms. The Z at low elevation angles is used to estimate rain rates because hydrometeors there are usually raindrops, and vertical air motion can be ignored so that the drops are falling at their terminal velocity wt, a known function of D. The rainfall rate R is usually measured as depth of water per unit time and is given by

R = (π/6) ∫₀^∞ D³ N(D) wt(D) dD   m s⁻¹   (11)

where mks units are used. To convert to the more commonly used units of millimeters per hour, multiply Eq. (11) by the factor 3.6 × 10⁶. The simplest and often observed N(D) is an exponential one, and even in this case we need to measure or specify two parameters of N(D) to use Eq. (11). A real drop-size distribution requires an indefinite number of parameters to characterize it and, thus, the radar-determined value of Z

alone cannot provide a unique measurement of R. Although radar meteorologists have attempted for many years to find a useful formula that relates R to Z, there is unfortunately no universal relation connecting these parameters. Nonetheless, it is common experience that larger rainfall rates are associated with larger Z. For stratiform rain, the relation

Z = 200 R^1.6   (12)

has often proved quite useful. Although the Doppler radar measures only the hydrometeor motion toward and away from the radar, the spatial distribution of Doppler velocities can reveal meteorological features such as tornadoes, microbursts (i.e., the divergent flow of strong thunderstorm outflow near the ground), buoyancy waves, etc. For example, the observed Doppler velocity pattern for a tornado in a mesocyclone (a larger-scale rotating air mass) is shown in Fig. 7. The strong gradient of Doppler velocities associated with a couplet of closed ± isodops (i.e., contours of constant Doppler velocity) is due to the tornado, which has a diameter of about 700 m, and the larger-scale closed isodop pattern (i.e., the −30 and +20 m s⁻¹ contours) is due to the larger-scale (i.e., 3.8 km diameter) mesocyclone. In practice, the data are shown on color TV displays wherein the regions between ± isodops are often colored with red and green hues of varying brightness to signify values of ±ν(ro).

Figure 7. The isodops for the Binger, Oklahoma tornadic storm on 22 May 1981 (Norman radar, 1909 CST, 0.4° elevation). The center of the mesocyclone is 70.8 km from the Norman Doppler radar at azimuth 284.4°; the data field has been rotated so that the radar is actually below the bottom of the figure.

A front is a relatively narrow zone of strong temperature gradients separating air masses. A dry line is simply the boundary between dry and moist air masses. Turbulent mixing along these boundaries creates relatively intense irregularities of refractive index, which return echoes through the Bragg scatter mechanism (1,2,6; also described in the following section). Figure 8 shows the reflectivity fields associated with a cold front and a dry line, as well as the storms that initiated at the intersection of these boundaries. From Doppler velocity fields and observations of the cold front position at subsequent times, it was established that the cold air mass to the northwest of the front is colliding with the ambient air flowing from the SSW. The convergence along the boundary creates a line of persistent vertical motion that can lift scattering particles, normally confined to the layers closer to the ground, making them visible as a reflectivity thin line. Thus, the reflectivity along the two boundaries could be due to these particles as well as to Bragg scatter. The intersection of cold fronts and dry lines is a favored location for the initiation of storms (seen to the northeast of the intersection). As the cold front propagates to the southeast, the intersection of it and the relatively stationary dry line progresses south-southwestward, and storms are initiated at this moving intersection point.
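Equation (12) is easily inverted to estimate a rain rate from the reflectivity a radar displays. A sketch (the 40 dBZ input is an arbitrary illustrative value, not one taken from the article):

```python
def rain_rate_mm_per_h(dbz):
    """Rain rate from the stratiform Z-R relation Z = 200 * R**1.6 [Eq. (12)],
    where Z is the linear reflectivity factor (mm^6 m^-3) and dBZ = 10*log10(Z)."""
    z = 10.0 ** (dbz / 10.0)          # convert dBZ back to linear Z
    return (z / 200.0) ** (1.0 / 1.6)  # invert Z = 200 R^1.6

r = rain_rate_mm_per_h(40.0)  # roughly 11.5 mm/h under this relation
```

Because the Z-R relation is not universal, operational systems treat such a conversion as one plausible estimate rather than a measurement.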


WIND AND TEMPERATURE PROFILES IN CLEAR AIR

In addition to particles acting as tracers of wind, irregularities of the atmosphere's refractive index can cause sufficient reflectivity to be detected by meteorological radars. Although irregularities have a spectrum of sizes, only those with scales of the order of half the radar wavelength provide echoes that coherently sum to produce a detectable signal (1). This scattering mechanism is called stochastic Bragg scatter because the half-wavelength irregularities are in a constant state of random motion due to turbulence, and thus the echo signal intensity fluctuates exactly like signals scattered from hydrometeors. The reflectivity is related to the refractive index structure parameter Cn², which characterizes the intensity of the irregularities (1,2,7):

η = 0.38 Cn² λ^(−1/3)   (13)

Mean values of Cn² range from about 10⁻¹⁴ m⁻²/³ near sea level to 10⁻¹⁷ m⁻²/³ at 10 km above sea level. Meteorological radars that primarily measure the vertical profile of the wind in all weather conditions, and in particular during fair weather, are called profilers. A wind profile is obtained by measuring the Doppler velocity vs range along beams in at least two directions (about 15° from the vertical) and along the vertical beam, and by assuming that the wind is uniform within the area encompassed by the beams. The vertical profile of the three components of the wind can be calculated from these three radial velocity measurements along the range and the assumption of wind uniformity (7). A prototype network of these profilers has been constructed across the central United States to determine potential benefits for weather forecasts (8). Temperature profiles are measured using a Radio-Acoustic Sounding System (RASS) (1,7). This instrument consists of a vertically pointed Doppler radar (for this application the wind-profiling radar is usually time-shared) and a sonic transmitter that generates a vertical beam of acoustic vibrations, which produce a backscattering sinusoidal wave of refractive index propagating at the speed of sound. The echoes from the acoustic waves are strongest under Bragg scatter conditions (i.e., when the acoustic wavelength is one-half the radar wavelength). The backscatter intensity at various acoustic frequencies is used to identify those frequencies that produce the strongest signals. Because the acoustic wave speed (and thus wavelength) is a function of temperature, this identification determines the acoustic velocity and hence the temperature. Allowance must be made for the vertical motion of the air, which can be determined by analyzing the backscatter from turbulently mixed irregularities.
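Equation (13) shows how tiny the clear-air reflectivities are that a profiler must detect. The sketch below plugs in the Cn² magnitudes quoted above; the 10 cm wavelength is an illustrative choice, not one specified in the text.

```python
def bragg_reflectivity(cn2, wavelength):
    """eta = 0.38 * Cn2 * lambda**(-1/3) [Eq. (13)]; eta is in m^-1 when
    Cn2 is in m^(-2/3) and the wavelength is in m."""
    return 0.38 * cn2 * wavelength ** (-1.0 / 3.0)

eta_low = bragg_reflectivity(1e-14, 0.10)   # near sea level
eta_high = bragg_reflectivity(1e-17, 0.10)  # ~10 km altitude: 1000x weaker
```

The thousandfold drop in η between the surface and 10 km is why profiler sensitivity, not angular resolution, usually limits the maximum height of clear-air wind measurements.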

Figure 8. Intersecting reflectivity thin lines in central Oklahoma on 30 April 1991 at 2249 UT. The thin line farthest west is along a NE–SW oriented cold front; the thin line immediately east of it is along a NNE–SSW oriented dry line. The reflectivity factor (dBZ) categories are indicated by the brightness bar. (Courtesy of Steve Smith, OSF/NWS.)

Networks of radars producing Doppler and reflectivity information in digital form have had a major impact on our capability to provide short-term warnings of impending weather hazards. Still, there are additional improvements that could significantly enhance the information derived from meteorological radars. Resolution of velocity and range ambiguities, faster coverage of the surveillance volume, better resolution, estimates of the cross-beam wind, and better measurements of precipitation type and amount are some of the outstanding problems. Signal design techniques that encode transmitted pulses or stagger the PRT are candidates to mitigate the effects of ambiguities (1). Faster data acquisition can be achieved with multiple-beam phased-array radars, and better resolution can be obtained at the expense of a larger antenna. The cross-beam wind component can be obtained using a bistatic dual-Doppler radar (i.e., combining the radial component of velocity measured by a Doppler weather radar with the Doppler shift measured at a distant receiver) or by incorporating the reflectivity and Doppler velocity data into the equations of motion and conservation. Vector wind fields in developing storms could be used in numerical weather prediction models to improve short-term forecasts. We anticipate an increase in the application of millimeter-wavelength Doppler radars to the study of nonprecipitating clouds (5). For better precipitation measurements, radar polarimetry offers the greatest promise (1,2). Polarimetry capitalizes on the fact that hydrometeors have nonspherical shapes and a preferential orientation; differently shaped hydrometeors therefore interact differently with electromagnetic waves of different polarizations. To make such measurements, the radar should be able to transmit and receive orthogonally polarized waves, e.g., horizontal and vertical polarization. Both backscatter and propagation effects depend on polarization; measurements of these can be used to classify and quantify precipitation. Large drops are oblate and scatter horizontally polarized waves more strongly; they also cause a larger phase shift of these waves along propagation paths.
The differential phase method has several advantages over the reflectivity method [Eq. (12)] for the measurement of rainfall. These include independence from receiver and transmitter calibration errors, immunity to partial beam blockage and attenuation, lower sensitivity to variations in the distribution of raindrops, less bias from either ground clutter filtering or hail, and the possibility of making measurements in the presence of ground reflections not removed by ground clutter cancelers. Polarimetric measurements have already proved their hail detection capability, and discrimination among rain, snow (wet or dry), and hail also seems quite possible. These areas of research and development could lead to improved short-term forecasting and warnings.

BIBLIOGRAPHY

1. R. J. Doviak and D. S. Zrnić, Doppler Radar and Weather Observations, 2nd ed., San Diego: Academic Press, 1993.
2. D. Atlas, Radar in Meteorology, Boston: Amer. Meteorol. Soc., 1990.
3. L. J. Battan, Radar Observations of the Atmosphere, Chicago: Univ. of Chicago Press, 1973.
4. R. Meneghini and T. Kozu, Spaceborne Weather Radar, Norwood, MA: Artech House, 1990.
5. Proc. IEEE, Spec. Issue Remote Sens. Environ., 82 (12): 1994.
6. E. E. Gossard and R. G. Strauch, Radar Observations of Clear Air and Clouds, Amsterdam: Elsevier, 1983.
7. S. F. Clifford et al., Ground-based remote profiling in atmospheric studies: An overview, Proc. IEEE, 82 (3): 1994.

8. U.S. Department of Commerce, National Oceanic and Atmospheric Administration, Wind profiler assessment report and recommendations for future use 1987–1994, Prepared by the staffs of the National Weather Service and the Office of Oceanic and Atmospheric Research, Silver Spring, MD, 1994.

RICHARD J. DOVIAK
DUŠAN S. ZRNIĆ
National Severe Storms Laboratory
The University of Oklahoma

METER, PHASE. See PHASE METERS.
METERS. See OHMMETERS; POWER SYSTEM MEASUREMENT.
METERS, ELECTRICITY. See WATTHOUR METERS.
METERS, REVENUE. See WATTHOUR METERS.
METERS, VOLT-AMPERE. See VOLT-AMPERE METERS.
METHOD OF MOMENTS SOLUTION. See INTEGRAL EQUATIONS.
METHODS, OBJECT-ORIENTED. See BUSINESS DATA PROCESSING.
METHODS OF RELIABILITY ENGINEERING. See RELIABILITY THEORY.
METRIC CONVERSIONS. See DATA PRESENTATION.
METRICS, SOFTWARE. See SOFTWARE METRICS.


Wiley Encyclopedia of Electrical and Electronics Engineering
Microwave Propagation and Scattering for Remote Sensing
Standard Article
Adrian K. Fung, University of Texas at Arlington, Arlington, TX
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W3610
Article Online Posting Date: December 27, 1999


Abstract
The sections in this article are: Basic Terms in Scattering and Transmission; Radiative Transfer Formulation; Scattering from Soil Surfaces; Scattering from a Vegetated Area; Scattering from Snow-Covered Ground.


MICROWAVE PROPAGATION AND SCATTERING FOR REMOTE SENSING

The main objective of remote sensing is to learn about the properties of a target (which can be an object, a scene, or a medium) by processing the signal scattered from it. For this reason it is desirable to remove system effects, such as the antenna gain and the range of observation, from the received signal. This leads to the introduction of quantities proportional to the scattered power: for example, the scattering cross section for isolated targets and the scattering coefficient for area-extensive targets such as a rough surface or a snowfield. These quantities depend on the exploring frequency, view angle, polarization, and the geometric and electrical properties of the target, as well as on the scattering and propagation properties that are the subjects of interest here. This article treats the scattering and propagation of microwave fields and power in various earth environments. We begin with a discussion of a number of relevant terms, as well as reflection and transmission of polarized electromagnetic waves at a plane boundary between air and a finitely conducting medium. Then a vector radiative transfer formulation (1) for scattering and propagation within an inhomogeneous layer and scattering at its boundaries is discussed to provide a framework for understanding the interrelationships among wave propagation, volume scattering, and surface scattering. A rough surface, or an inhomogeneous half-space, is a special case of this formulation. Three application areas are considered.

The first area of application is scattering and transmission at an irregular soil boundary. Scattering of waves that takes place at a surface boundary between two homogeneous media is called surface scattering. For natural ground surfaces, where the roughness can be described only statistically, the scattered field will vary from location to location. Such a variation in the received signal is called fading, and the associated field amplitude and power distributions are its fading statistics. A meaningful signature of the rough surface is the statistically averaged received power.
It follows that this average power must be a function of the statistical parameters of the surface, such as the standard deviation of the surface height (root mean square, or rms, height) and its height correlation function. In remote sensing it is the scattered field that is received by the observing antenna; thus scattering is the key mechanism. However, in the presence of an inhomogeneous medium such as vegetation, sea ice, or a snow layer, the propagating fields within the medium are equally important, since they are part of the sources of the scattered field. For an inhomogeneous layer with irregular boundaries, an incident wave will generate scattering throughout the volume of the layer. Such a scattering mechanism is called volume scattering. In general, there will also be surface scattering at the boundaries and hence surface–volume interaction; multiple volume scattering and surface scattering are also present within the layer. When the phase relationship between scatterers is needed in a volume-scattering calculation, the scattering is said to be coherent. Otherwise, the total scattered power can be calculated by adding the scattered powers from individual scatterers, and the associated scattering is said to be incoherent or independent. In the radiative transfer formulation a coherent calculation is used to derive the scattering phase function, which describes single scattering by a single scatterer in a sparse medium or by a group of scatterers in a dense medium. Our second application area is scattering by and propagation through a vegetation layer above an irregular ground surface. The volume occupied by the vegetation biomass relative to the total volume of a vegetation layer is generally less



than 1%. For this reason a vegetation medium is taken to be a sparse medium, in which multiple scattering beyond the second order is assumed to be negligible. A more precise definition of a sparse medium is one in which the scatterers are in the far field of one another. The usual condition for the far field is that the range between scatterers is greater than 2D²/λ, where D is the largest dimension of the scatterer and λ is the operating wavelength in the medium in which the wave is propagating. Under this condition the phase of the propagating wave is a linear function of the range. In such a medium it is possible to ignore phase relations (or coherency) between scatterers in scattering calculations. A vegetated medium does not really satisfy this far-field condition. However, estimates of scattering using far-field calculations have been shown to give results that compare well with measurements (2).

The third application area we consider is scattering and propagation in a snow medium above a ground surface. The volume fraction of ice particles in snow generally ranges from about 10% to 40%, while the ice particle size is in the range of 0.03 cm to 0.3 cm. Two scenarios are possible:

1. Within the distance of a wavelength there are two or more scatterers. Such a medium is called an electrically dense medium. In this case two or more scatterers scatter as a group, so a scattering phase function for the group is needed in the scattering calculation.

2. The adjacent scatterers are not in the far field of each other, but the average spacing between them is more than a wavelength. In this case the scattering phase function is for a single scatterer, but the far-field approximation is not valid. Such a medium is a spatially dense medium.

A general dense medium may be both spatially and electrically dense; hence the general scattering phase function for snow must include both of these effects (3–5). In a dense medium, scattered fields from scatterers interact at all distances.
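The far-field criterion 2D²/λ quoted above is easy to check numerically. The sketch below uses illustrative dimensions (a 1 cm vegetation element and a 0.3 cm ice grain at a hypothetical 5.6 cm wavelength); these numbers are assumptions, not values prescribed by the article.

```python
# Quick evaluation of the far-field separation criterion 2*D**2/lambda.
# Dimensions and wavelength below are illustrative assumptions.

def far_field_distance(d_m, wavelength_m):
    """Minimum separation for scatterers to be in each other's far field."""
    return 2.0 * d_m ** 2 / wavelength_m

r_leaf = far_field_distance(0.01, 0.056)  # ~3.6 mm for a 1 cm element at C band
r_ice = far_field_distance(0.003, 0.056)  # ~0.3 mm for a 0.3 cm ice grain
```

Even so, closely packed vegetation elements or snow grains may sit within these distances of one another, which is why the near-field and dense-medium caveats above matter.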
They are said to interact in the near field when the distance between scatterers is small compared to the operating wavelength.

BASIC TERMS IN SCATTERING AND TRANSMISSION

In radar remote sensing the quantity measured is the radar cross section for an isolated target or the scattering coefficient for an area-extensive target. To measure the radar cross section of a target, the target must be smaller than the coverage of the radar beam; the converse is true in measuring the scattering coefficient. Intuitively, an object can scatter an incident wave into all possible directions with varying strength, and this scattering pattern should vary with the incident direction. To compare the scattering strengths of objects in a given direction, some common reference is needed. For the radar cross section of an object, the common reference is an idealized isotropic scatterer. Thus the radar cross section of an object observed in a given direction is the cross section of an equivalent isotropic scatterer that generates the same scattered power density as the object in the observed direction. Mathematically, the radar cross section σ_r of an object observed in a given direction is the ratio of the total power scattered by an equivalent isotropic scatterer to the incident power density on the object:

σ_r = 4πR²|E^s|²/|E^i|² ≡ 4π|S|²

(1)

where R is the range between the target and the radar receiver, E^i is the incident field, E^s is the scattered field along the direction under consideration, and S is the scattering amplitude of the object, defined by |S|² = R²|E^s|²/|E^i|². For an area-extensive target such as a randomly rough soil surface, the scattered field comes from the area illuminated by the radar antenna. To avoid dependence on the size of the illuminated area A₀, we define a per-unit-area quantity, the scattering coefficient of the surface σ⁰, which is the statistically averaged radar cross section of A₀ divided by A₀. Let ⟨ ⟩ be the symbol for statistical average. Then σ⁰ can be written as

σ⁰ = ⟨σ_r⟩/A₀ = 4πR²⟨|E^s|²⟩/(A₀|E^i|²) ≡ 4π⟨|S|²⟩/A₀

(2)
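A minimal numerical sketch of Eqs. (1) and (2), treating the statistical average as a simple ensemble mean; the scattering amplitudes used below are arbitrary illustrative numbers, not data from the article.

```python
import math

# Eq. (1): radar cross section from a scattering amplitude S (in meters).
def radar_cross_section(s_amp):
    """sigma_r = 4*pi*|S|^2 for an isolated target."""
    return 4.0 * math.pi * abs(s_amp) ** 2

# Eq. (2): scattering coefficient as the ensemble-averaged cross section
# per unit illuminated area A0.
def scattering_coefficient(s_amps, area_m2):
    """sigma_0 = 4*pi*<|S|^2>/A0 over an ensemble of samples."""
    mean_s2 = sum(abs(s) ** 2 for s in s_amps) / len(s_amps)
    return 4.0 * math.pi * mean_s2 / area_m2
```

Because σ⁰ is normalized by A₀, it is dimensionless and independent of the size of the illuminated patch, which is the point of the definition.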

When the transmitting and receiving antennas of the radar are co-located, the radar is said to operate in the monostatic mode. If the locations of these antennas are separated, it is said to operate in a bistatic mode. The scattering coefficient corresponding to bistatic operation is referred to as a bistatic scattering coefficient.

Reflection and Transmission at a Finitely Conducting Boundary

In remote sensing of the earth's environment, we generally encounter plane-wave reflection from, and transmission through, a weakly conducting medium. This problem has been treated extensively in Ref. 6. As shown in Fig. 1, a plane wave incident at a plane boundary is said to be horizontally polarized, or a transverse electric (TE) wave, if its electric field vector is perpendicular to the plane of incidence, which is the plane containing the wave propagation direction and the normal vector to the boundary (the xz-plane in Fig. 1). In this case the magnetic field vector is parallel to the plane of incidence. The incident wave is said to be parallel or vertically polarized, or a transverse magnetic (TM) wave, if the direction of the electric field vector is parallel to the plane of incidence.

Figure 1. Reflection and transmission at a plane boundary between a dielectric upper medium (ε₁, μ₁) and a finitely conducting lower medium (ε₂, μ₂, σ₂).

The law of reflection requires the incident and reflected angles to be the same, θ_r = θ, and Snell's law for dielectric media shows that the angle of transmission can be computed from

θ_t = sin⁻¹[√(μ₁ε₁/(μ₂ε₂)) sin θ]   (3)

where μ and ε denote, respectively, the permeability and permittivity of a medium. When the lower medium is finitely conducting but the conductivity is small, Eq. (3) still gives a good estimate of the transmission angle, and the attenuation of the transmitted field may be estimated by the loss factor

exp[−0.5 σ₂ η₂ |z|]   (4)

where σ₂ is the conductivity, z is the distance into medium 2, and η₂ = √(μ₂/ε₂) is the intrinsic impedance of medium 2. The reader is referred to Ref. 6 for θ_t and attenuation calculations when the loss is not small. For most naturally occurring media μ₁ ≈ μ₂. From Eq. (3) we see that if ε₁ > ε₂ the sine of θ_t may exceed unity for some range of θ. The θ for which sin θ_t = 1 is called the critical angle θ_c. When θ > θ_c, sin θ_t > 1, and there is no real angle of transmission; physically, the incident field is totally reflected. The Fresnel reflection and transmission coefficients for horizontal polarization with μ₁ ≈ μ₂ ≈ μ₀ may be written for the electric fields as (6)

R_h = E^r/E^i = (k₁ cos θ − k₂ cos θ_t)/(k₁ cos θ + k₂ cos θ_t) = (η₂ cos θ − η₁ cos θ_t)/(η₂ cos θ + η₁ cos θ_t)   (5)

and

T_h = E^t/E^i = 2k₁ cos θ/(k₁ cos θ + k₂ cos θ_t) = 2η₂ cos θ/(η₂ cos θ + η₁ cos θ_t) = 1 + R_h   (6)

where k₁,₂ = ω√(μ₀ε₁,₂) is the wave number of the medium. For a finitely conducting lower medium, k₂ cos θ_t is actually a complex quantity; its exact representation may be found in Ref. 6. For a medium with small conductivity, it is possible to approximate R_h, T_h by replacing ε₂ by ε₂ − jσ₂/ω and using Eq. (3) to calculate θ_t. For example,

k₂ cos θ_t ≈ ω√[μ₂(ε₂ − jσ₂/ω)] cos θ_t,   η₂ ≈ √[μ₂/(ε₂ − jσ₂/ω)]   (7)

Analogous relations can be obtained for the Fresnel reflection and transmission coefficients of vertically polarized waves in terms of the magnetic fields by interchanging k₁, η₁ with k₂, η₂ as follows:

R_v = H^r/H^i = (k₂ cos θ − k₁ cos θ_t)/(k₂ cos θ + k₁ cos θ_t) = (η₁ cos θ − η₂ cos θ_t)/(η₁ cos θ + η₂ cos θ_t)   (8)

and

T_v = H^t/H^i = 2k₂ cos θ/(k₂ cos θ + k₁ cos θ_t) = 2η₁ cos θ/(η₁ cos θ + η₂ cos θ_t) = 1 + R_v   (9)

While it is not possible for R_h to be zero between dielectric media with ε₁ ≠ ε₂ at any incident angle, it is possible for R_v to be zero. This particular incident angle is called the Brewster angle θ_B. At this incident angle T_v = 1 and R_v = 0 for dielectric media. The Brewster angle can be found from

θ_B = tan⁻¹√(ε₂/ε₁)   (10)

To extend the scattering coefficient σ⁰ to include polarization dependence, note that its dependence on polarization is through the incident and scattered electric fields. Let p denote the incident polarization and q the scattered polarization, where p and q may represent either vertical or horizontal polarization. Then we can add qp as subscripts to the scattering coefficient, writing σ⁰_qp.

Like and Cross Polarizations

Radar measurements are generally acquired in both like (or co-) and cross polarizations. The polarization of an antenna used for measurement is defined to be the same as that of the wave it transmits, and the polarization of a wave is the direction of its electric field. The term like polarization means transmitting and receiving with matched antennas:

|â_r · â_t| = 1   (11)

where â_t, â_r are the polarizations of the transmitting and receiving antennas, respectively. Cross or orthogonal polarization is defined to be associated with zero reception:

|â_r · â_t| = 0   (12)

To illustrate the meaning of Eq. (11), consider a transmitting antenna whose polarization is defined by Eq. (13), which represents a left-handed elliptically polarized plane wave. The term left-handed means that when the thumb of the left hand points in the direction of propagation, the fingers point in the direction of rotation of the electric field vector as time increases (Fig. 2):

E^t = x̂ cos τ_t cos(ωt − kz) − ŷ sin τ_t sin(ωt − kz)   (13)

Figure 2. Illustration of the polarizations of transmitting and receiving antenna systems. A left-handed elliptically polarized transmitted field is shown.

The angle τ_t defines the relative magnitudes of the semi-axes of the ellipse and is known as the ellipticity angle. A sign change in τ_t or z would make the wave right-handed. If the receiving antenna is chosen to have the same polarization, its


radiated field will have the same mathematical form but will be expressed in the coordinates of the receiving antenna (the primed coordinates in Fig. 2). As illustrated in Fig. 2, the transmitting and receiving antenna systems point in opposite directions. This means that when the radiated field of the receiving antenna is expressed in the coordinate system of the transmitting antenna, its propagation phase must take the form ωt + kz, and the rectangular coordinate system of the receiving antenna may be related to that of the transmitting antenna as follows:

x̂′ = x̂,   ŷ′ = −ŷ   (14)

Thus the radiated field from the receiving antenna, expressed in the coordinates of the transmitting antenna, is

E^r = x̂ cos τ_r cos(ωt + kz) + ŷ sin τ_r sin(ωt + kz)   (15)

When we convert it to phasor form, it becomes

E^r = (x̂ cos τ_r − jŷ sin τ_r) exp[j(ωt + kz)] ≡ â_r e^{j(ωt+kz)}   (16)

Similarly, from Eq. (13), the polarization unit vector in phasor form for the transmitting antenna is â_t = x̂ cos τ + jŷ sin τ when we set τ_t = τ_r = τ. Clearly â_t is the complex conjugate of â_r, and |â_r · â_t| = 1. The polarization states of both antennas are left-handed elliptic; hence this case is referred to as like polarization. The special case where τ is zero yields linear polarization in the x̂ direction for both the transmitting and receiving antennas.

To illustrate cross or orthogonal polarization, consider the receiving antenna defined by Eq. (15), and set τ_r = τ_t − π/2. The polarization vector of the receiving antenna expressed in the transmitting coordinates then becomes

â_r = x̂ sin τ + jŷ cos τ   (17)

after we set τ_t = τ. If we take the dot product according to Eq. (12), we find that â_r · â_t = 0. When we check the polarization state of the receiving antenna, it is right-handed elliptic. Thus left-handed elliptic and right-handed elliptic polarizations are mutually orthogonal. The special case where τ is zero yields linear polarization in the x̂ direction for the transmitting antenna and in the ŷ direction for the receiving antenna; these directions are clearly orthogonal.
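The like- and cross-polarization conditions of Eqs. (11) and (12) can be verified numerically with the phasor polarization vectors from Eqs. (16) and (17). The ellipticity angle below is an arbitrary illustrative choice.

```python
import math

# Phasor polarization vectors for an ellipticity angle tau (arbitrary choice).
tau = math.radians(25.0)
a_t = (math.cos(tau), 1j * math.sin(tau))        # transmitting antenna
a_r_like = (math.cos(tau), -1j * math.sin(tau))  # matched receiver, Eq. (16)
a_r_cross = (math.sin(tau), 1j * math.cos(tau))  # orthogonal receiver, Eq. (17)

def dot(a, b):
    """Dot product of two 2-component polarization vectors."""
    return a[0] * b[0] + a[1] * b[1]

like = abs(dot(a_r_like, a_t))    # matched antennas: magnitude 1, Eq. (11)
cross = abs(dot(a_r_cross, a_t))  # orthogonal antennas: magnitude 0, Eq. (12)
```

The matched case gives cos²τ + sin²τ = 1, and the orthogonal case gives sin τ cos τ − cos τ sin τ = 0, for any τ.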

RADIATIVE TRANSFER FORMULATION

In this section we present the basic development of radiative transfer theory and its formulation for scattering from, and propagation through, an inhomogeneous layer with irregular boundaries. In the classical formulation of the radiative transfer equation (7), the fundamental quantity used is the specific intensity I_ν. It is defined in terms of the amount of power dP (watts) flowing in the r̂ direction within a solid angle dΩ through an elementary area dS in a frequency interval (ν, ν + dν) as follows:

dP = I_ν cos α dS dΩ dν

(18)

where α is the angle between the outward normal ŝ to dS and the unit vector r̂. The dimension of I_ν is W m⁻² sr⁻¹ Hz⁻¹. In most remote-sensing applications the radiation at a single frequency is considered; thus it is more convenient to consider the intensity I at a frequency ν, defined as the integral of I_ν over the frequency interval (ν − dν/2, ν + dν/2). In terms of intensity, the amount of power at a single frequency can be written as

dP = I cos α dS dΩ

(19)
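A small numerical sketch of Eq. (19): integrating dP = I cos α dS dΩ over the upward hemisphere for a uniform (direction-independent) intensity recovers the familiar result P = πI dS. The quadrature grid size is an arbitrary choice.

```python
import math

# Midpoint-rule integration of I*cos(alpha) over the upward hemisphere
# (alpha measured from the surface normal), per Eq. (19).
def hemisphere_power(intensity, ds, n=400):
    """Total power through area ds from uniform intensity over 2*pi sr."""
    dth = (math.pi / 2) / n
    dph = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        for j in range(n):
            # solid-angle element dOmega = sin(th) dth dph
            total += intensity * math.cos(th) * math.sin(th) * dth * dph
    return total * ds

p = hemisphere_power(1.0, 1.0)  # close to pi
```

The cos α weighting is what distinguishes intensity from a simple power density: oblique directions contribute less power through a fixed area.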

The transfer equation governs the variation of intensities in a medium that absorbs, emits, and scatters radiation. Within the medium, consider a cylindrical volume of unit cross section and length dl. The change in intensity may be a gain or a loss. The loss in intensity I propagating through the cylindrical volume along the distance dl is due to absorption and to scattering away from the direction of propagation; the gain is from thermal emission and from scattering into the direction of propagation:

dI = −κ_a I dl − κ_s I dl + κ_a J_a dl + κ_s J_s dl

(20)

where κ_a, κ_s are the volume-absorption and volume-scattering coefficients. In Eq. (20), J_a and J_s are the absorption source function (or emission source function) and the scattering source function. Equation (20) is the radiative transfer equation, in which the definition of J_s is

J_s(θ_s, φ_s) = (1/4π) ∫₀^2π ∫₀^π P(θ_s, φ_s; θ, φ) I(θ, φ) sin θ dθ dφ   (21)

where P(θ_s, φ_s; θ, φ) is the phase function accounting for scattering within the medium, to be defined in the next subsection. It is clear from Eq. (21) that J_s is not an independent source of the medium but is itself a function of the propagating intensity. On the other hand, J_a is an independent source function proportional to the temperature profile of the medium; it is the source function in passive remote-sensing problems. As such, it should be dropped in active remote-sensing problems, in which the source is an incident wave from the radar transmitter outside the scattering medium. For the active sensing problem to be considered in this section, we will treat partially polarized waves by introducing the Stokes parameters and then generalize the scalar radiative transfer equation to a matrix equation. In so doing, it is helpful first to establish the relation between the scattered intensity and the incident intensity and then to relate these intensities to the corresponding electric fields.

Stokes Parameters, Phase Matrices, and Radiative Transfer Equations

Consider an elliptically polarized monochromatic plane wave, E = (E_v v̂ + E_h ĥ) exp(jk·r), propagating through a differential solid angle dΩ in a medium with intrinsic impedance η; here v̂ and ĥ are the unit vectors denoting vertical and horizontal polarization, respectively, k is the wave number, and


r is the propagation distance. The modified Stokes parameters I, Ih, U, and V in the dimension of intensity can be defined in terms of the electric fields as 2

0.5 Re

|Eν | Iν d = 0.5 Re η∗ |Eh |2 Ih d = 0.5 Re η∗ Eν Eh∗ U d = Re η∗ Eν Eh∗ V d = Im η∗

(22) (23)

0.5 Re

(25)

Phase Matrix for Rough Surfaces. To relate the scattered intensity to the incident intensity, consider a plane wave illuminating a rough surface area A0. The relation between the vertically and horizontally polarized scattered field components Esv, Esh and those of the incident field components Evi , Ehi is

Eνs e jkR Sν ν = R Shν Ehs

Sν h Shh

Eνi Ehi

|S |2 |E i |2 |Sν ν |2 |Eνi |2 + νh ∗ h ∗ η η ∗ Sν ν S∗ν h Eνi Ehi +2 Re η∗ 1 R2

S

∗ i i∗ ν ν Sν h Eν Eh η∗

= 2 Re

[Re(Sν ν S∗ν h )

+

E i E i

j Im(Sν ν S∗ν h )]

Re

Ei Ei

+ j Im

h

η∗

h

η∗

Ei Ei ∗

=

2 Re(Sν ν S∗ν h )Re

ν

η∗

h

Ei Ei ∗

− 2 Im(Sν ν S∗ν h )Im

d

R2

+ Re(Shν S∗hh )U − Im(Shν S∗hh )V Re

Im

(27)

= |Shν |2 Iν + |Shh |2 Ih d

(28) R2

Eνs Ehs∗ d

= [2Re(Sν ν S∗hν )Iν + 2Re(S∗hh Sν h )Ih ] 2 η∗ R + [Re(Sν ν S∗hh + Sν h S∗hν )U

(29)

− Im(Sν ν S∗hh − Sν h S∗hν )V ]d /R2

Eνs Ehs∗ d

= [2Im(Sν ν S∗hν )Iν + 2Im(S∗hh Sν h )Ih ] 2 η∗ R ∗ ∗ + [Im(Sν ν Shh + Sν h Shν )U

(30)

− Re(Sν ν S∗hh − Sν h S∗hν )V ]d /R2 The left-hand sides of the above equations are in watts per square meter. To convert them to intensity, we need to divide both sides of the equation by the solid angle subtended by the illuminated area A0 at the point of observation, (A0 cos s)/R2, where s is the angle between the scattered direction and the direction normal to A0. Equation (27) becomes

0.5R2 Re

|Eνs |2 ∗ η A0 cos θs

= |Sν ν |2 Iν + |Sν h |2 Ih d

A0 cos θs d

− Im(Shν S∗hh )V A0 cos θs

+ Re(Sν ν S∗ν h )U

(31)

The term on the left-hand side of the above equation is the intensity of the scattered field. In view of Eq. (2), we can rewrite the above equation in terms of the scattering coefficients as

Is =

∗

ν

∗

ν

h

η∗

+ Re(Sν ν S∗ν h )U − Im(Sν ν S∗ν h )V

d

4π cos θs

(32)

Similarly we can convert the left-hand sides of Eqs. (28) through Eq. (30) into intensities and rewrite all four resulting equations into a matrix equation. This matrix equation relates the scattered intensities Is to the incident intensities Ii through a dimensionless quantity known as the phase matrix P,

|E s |2

= |Sν ν |2 Iν + |Sν h |2 Ih

0 Iνs = (σν0ν Iν + σν0h Ih + σν0ν ν hU + σhν hhV )

where

2 Re

ν

η∗

(26)

where Spq(p, q ⫽ v or h) is the scattering amplitude in meters, R is the distance from the center of the illuminated area to the point of observation, and k is the wave number. Consider 兩Esv兩2 / *

|Eνs |2 /η∗ =

|E s |2

(24)

where * is the symbol for the complex conjugate, ⫽ 兹애/(⑀ ⫺ j /웆) for a finitely conducting medium and the righthand side of Eq. (22) or Eq. (23) is the average Poynting vector representing power density in units of watts per meter squared. These four parameters have the same dimensions and hence are more convenient to use than amplitude and phase, which have different dimensions. It has been shown that the amplitude, phase, and polarization state of any elliptically polarized wave can be completely characterized by these parameters (8).

Recognizing the above relation, we can obtain the following quantities using Eq. (26) and Eqs. (22) through Eq. (25):

ν

η∗

h

1 PIi d

4π

(33)

The components of Ii are the Stokes parameters as defined by Eqs. (22) through Eq. (25) for the incident plane wave. The components of the scattered intensity Is are also Stokes parameters but are defined for spherical waves. They differ from the plane wave definition in the normalizing solid angle (A0 cos s)/R2. The element of the phase matrix relating Isv to Ivi is

MICROWAVE PROPAGATION AND SCATTERING FOR REMOTE SENSING 0 vv /cos s. To sum up all possible incident intensities from all directions contributing to Is along a given direction, we integrate over all solid angles: 1 Is = PIi d

(34) 4π 4π

Equation (34) is the generalized version of Eq. (21) for partially polarized waves. Here Is, Ii are column vectors whose components are the Stokes parameters. The detailed contents of the phase matrix written in terms of scattering amplitudes are summarized below:

P = [4π/(A cos θs)] M    (35)

where the Stokes matrix M is

    | |Sνν|²           |Sνh|²           Re(Sνν S*νh)              −Im(Sνν S*νh)             |
M = | |Shν|²           |Shh|²           Re(Shν S*hh)              −Im(Shν S*hh)             |
    | 2 Re(Sνν S*hν)   2 Re(Sνh S*hh)   Re(Sνν S*hh + Sνh S*hν)   −Im(Sνν S*hh − Sνh S*hν)  |
    | 2 Im(Sνν S*hν)   2 Im(Sνh S*hh)   Im(Sνν S*hh + Sνh S*hν)    Re(Sνν S*hh − Sνh S*hν)  |

Phase Matrix for an Inhomogeneous Medium. Consider a homogeneous medium with randomly embedded scatterers. Each scatterer is characterized by a bistatic radar cross section σp due to a p-polarized (p = ν or h) incident intensity. The scattering cross section Qsp of the scatterer is defined as the cross section that would produce the total scattered power surrounding the scatterer due to a unit incident Poynting vector of polarization p,

Qsp(θ, φ) = (1/4π) ∫_{4π} σp dΩs = ∫_{4π} (|Sνp|² + |Shp|²) dΩs    (36)

where θ, φ indicate the incident direction and the integration is over the scattered solid angle. The volume-scattering coefficient for the inhomogeneous medium and polarization p is

κsp = Nν Qsp    (37)

where Nν is the number of scatterers per unit volume (8). The scattering coefficient κsp represents the scattering loss per unit length and has units of Np m⁻¹. In the case of a continuous, inhomogeneous medium defined by a spatially varying permittivity function, the scattering amplitudes Sνp, Shp are for an effective volume V, and the volume-scattering coefficient is defined as (9)

κsp = ⟨Qsp⟩/V    (38)

Another important parameter for characterizing an inhomogeneous medium is its absorption loss, represented by the volume-absorption coefficient κap. This quantity may be defined in terms of the average relative permittivity εap of the medium, where p denotes the incident polarization. Letting k₀ be the free-space wave number, we define the absorption coefficient for p polarization as

κap = 2k₀ |Im √εap|    (39)

This equation may be used for either a continuous or a discrete inhomogeneous medium. In the latter case the absorption cross section Qap for one particle and p polarization can be defined as

Qap = κap/Nν    (40)

From Eq. (37) and Eq. (40) the total cross section, also known as the extinction cross section, for a scatterer is

Qep = Qap + Qsp    (41)

and the extinction coefficient is κep = Nν Qep. In Eq. (41), Qep is the effective area that generates the total scattered and absorbed power due to a unit incident Poynting vector of polarization p. The ratio of κsp to κep is the albedo of the random medium. Conceptually, either Qep or Qsp may be used in place of A cos θs in Eq. (35) to define the phase matrix of a single scatterer. However, unlike A cos θs, Qep and Qsp have polarization dependence in general (10) and hence are matrices. Let us denote them as Qe and Qs. The choice of definition for the phase matrix depends on the assumed form of the scattering source term in Eq. (20). When the term is written as κs Js, the definition is (11)

Ps = 4π Qs⁻¹ M    (42)

If the term is written as κe J′s, then the definition should be (8)

Pe = 4π Qe⁻¹ M    (43)

Both definitions have appeared in the literature. Clearly the phase matrix is a term created for convenience; the scattering source term is the fundamental quantity. Thus, while Eq. (42) is not the same as Eq. (43), the source terms are the same in both cases, as they should be:

κs Js = κe J′s = ∫_{4π} Nν ⟨M⟩ I dΩ    (44)

In view of Eq. (44) and Eq. (20), the radiative transfer equation for partially polarized waves in a discrete inhomogeneous medium is

dI/dl = −κe I + (κe/4π) ∫_{4π} Pe I dΩ + κa Ja
      = −κe I + (κs/4π) ∫_{4π} Ps I dΩ + κa Ja    (45)

or

dI/dl = −κe I + ∫_{4π} Nν ⟨M⟩ I dΩ + κa Ja    (46)
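The assembly of the 4×4 Stokes matrix M from a 2×2 complex scattering amplitude matrix can be sketched numerically. The following helper is illustrative (it is not from the article); it follows the modified Stokes parameter ordering used above.

```python
import numpy as np

def stokes_matrix(S):
    """Build the 4x4 Stokes matrix M of Eq. (35) from the 2x2 complex
    scattering amplitude matrix S = [[Svv, Svh], [Shv, Shh]].

    Ordering follows the modified Stokes parameters
    (|Ev|^2, |Eh|^2, 2 Re Ev Eh*, 2 Im Ev Eh*)."""
    Svv, Svh = S[0, 0], S[0, 1]
    Shv, Shh = S[1, 0], S[1, 1]
    return np.array([
        [abs(Svv)**2, abs(Svh)**2,
         (Svv * Svh.conjugate()).real, -(Svv * Svh.conjugate()).imag],
        [abs(Shv)**2, abs(Shh)**2,
         (Shv * Shh.conjugate()).real, -(Shv * Shh.conjugate()).imag],
        [2 * (Svv * Shv.conjugate()).real, 2 * (Svh * Shh.conjugate()).real,
         (Svv * Shh.conjugate() + Svh * Shv.conjugate()).real,
         -(Svv * Shh.conjugate() - Svh * Shv.conjugate()).imag],
        [2 * (Svv * Shv.conjugate()).imag, 2 * (Svh * Shh.conjugate()).imag,
         (Svv * Shh.conjugate() + Svh * Shv.conjugate()).imag,
         (Svv * Shh.conjugate() - Svh * Shv.conjugate()).real],
    ])
```

For a non-depolarizing identity amplitude matrix this reduces to the 4×4 identity, a quick sanity check on the sign conventions.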

188

MICROWAVE PROPAGATION AND SCATTERING FOR REMOTE SENSING

In a continuous, inhomogeneous medium it is more convenient to use the Stokes matrix than the phase matrix. Making use of Eq. (38), we have

dI/dl = −κe I + ∫_{4π} (⟨M⟩/V) I dΩ + κa Ja    (47)

In Eq. (47), V is the effective illuminated volume as given in Eq. (38) and will cancel out upon evaluating ⟨M⟩. It is clear from Eq. (46) and Eq. (47) that the fundamental quantity in scattering is the scattering amplitude; the phase function is an artificially created quantity.

The radiative transfer equation is formulated on the basis of energy balance. The phase changes of the scattered wave and its cross-correlation terms are ignored in the solution of the transfer equation. For a sparsely populated random medium it is not necessary to track the phase change between scatterers. In a dense medium a group of scatterers may scatter coherently; thus the phase relation among scatterers must be taken into account in the derivation of the phase function. However, phase effects in multiple-scattering calculations can still be ignored, because the mechanism of multiple scattering tends to destroy phase relations between scatterers. To date, the radiative transfer formulation remains the most practical approach to computing multiple scattering from an inhomogeneous medium. Furthermore, it provides a natural way to combine surface boundary scattering with volume scattering from within an inhomogeneous layer, as we will see next.

Scattering from an Inhomogeneous Layer with Irregular Boundaries

For bounded media, scattering or reflection may occur at the boundary. Both incident and scattered intensities are needed in the boundary conditions. Therefore it is necessary to split the intensity matrix into upward (I⁺) and downward (I⁻) components and rewrite Eq. (46) as two equations. For active sensing applications the thermal source term is not needed. It is also standard practice to express the slant range in terms of the vertical distance, that is, to let l = z/cos θ (Fig. 3). Consider the problem of a plane wave in air incident on an inhomogeneous layer above a ground surface. The geometry of the scattering problem is depicted in Fig. 3.
Figure 3. Scattering geometry for an inhomogeneous layer above a homogeneous half-space.

The inhomogeneous layer is assumed to have such characteristics that the upward intensity I⁺ and the downward intensity I⁻ satisfy the radiative transfer equation. On rewriting Eq. (46) in terms of these intensities, we obtain (1)

μs dI⁺(z, μs, φs)/dz = −κe I⁺(z, μs, φs)
  + (κs/4π) ∫₀^{2π} ∫₀^1 Ps(μs, μ, φs − φ) I⁺(z, μ, φ) dμ dφ
  + (κs/4π) ∫₀^{2π} ∫₀^1 Ps(μs, −μ, φs − φ) I⁻(z, μ, φ) dμ dφ    (48)

μs dI⁻(z, μs, φs)/dz = κe I⁻(z, μs, φs)
  − (κs/4π) ∫₀^{2π} ∫₀^1 Ps(−μs, μ, φs − φ) I⁺(z, μ, φ) dμ dφ
  − (κs/4π) ∫₀^{2π} ∫₀^1 Ps(−μs, −μ, φs − φ) I⁻(z, μ, φ) dμ dφ    (49)

where μs = cos θs, μ = cos θ, I⁺ and I⁻ are column vectors containing the four modified Stokes parameters, and Ps is the phase matrix. To find the upward intensity due to an incident intensity Ii, where Ii = I₀ δ(μ − μi) δ(φ − φi), δ(·) is the Dirac delta function, and (θi, φi) denotes the direction of propagation of the incident wave, we need to solve Eq. (48) and Eq. (49) subject to the following boundary conditions. At z = −d the upward and downward intensities are related through the ground-scattering phase matrix G as

I⁺(−d, μs, φs) = (1/4π) ∫₀^{2π} ∫₀^1 G(μs, −μ, φs − φ) I⁻(−d, μ, φ) dμ dφ    (50)

If the ground surface is flat, G may be written in terms of the reflectivity matrix Rg as

G = 4π Rg δ(μs − μ) δ(φs − φ)    (51)

At the top boundary z = 0, the upward and downward intensities are related through the surface-scattering and transmission phase matrices SR and ST (12):

I⁻(0, μs, φs) = (1/4π) ∫₀^{2π} ∫₀^1 SR(−μs, μ, φs − φ) I⁺(0, μ, φ) dμ dφ
  + (1/4π) ∫₀^{2π} ∫₀^1 ST(−μs, −μ, φs − φ) Ii(0, μ, φ) dμ dφ    (52)

Once I⁺(0, μs, φs) is determined within the inhomogeneous layer, the upward intensity transmitted from the layer into air can be found using the transmission scattering matrix of the surface, ST, as

I⁺(μs, φs) = (1/4π) ∫₀^{2π} ∫₀^1 ST(μs, μ, φs − φ) I⁺(0, μ, φ) dμ dφ    (53)
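Equations (48) and (49) are typically solved by discretizing μ with a quadrature rule (a discrete-ordinates approach, as in the numerical techniques of Refs. 13 and 14). Below is a minimal sketch under strong simplifying assumptions of ours: a scalar, azimuth-integrated intensity and an isotropic phase function (Ps = 1); the function name and structure are illustrative, not from the article.

```python
import numpy as np

def discrete_ordinates_matrix(kappa_e, kappa_s, n_mu=8):
    """Assemble d/dz [I+; I-] = A [I+; I-] for the scalar, azimuth-integrated
    analog of Eqs. (48)-(49) with an isotropic phase function (Ps = 1)."""
    x, w = np.polynomial.legendre.leggauss(n_mu)
    mu = 0.5 * (x + 1.0)   # Gauss-Legendre nodes mapped to (0, 1)
    w = 0.5 * w            # mapped weights; they sum to 1
    A = np.zeros((2 * n_mu, 2 * n_mu))
    for j in range(n_mu):
        #  mu_j dI+_j/dz = -ke I+_j + (ks/2) sum_k w_k (I+_k + I-_k)
        A[j, :n_mu] = 0.5 * kappa_s * w / mu[j]
        A[j, n_mu:] = 0.5 * kappa_s * w / mu[j]
        A[j, j] -= kappa_e / mu[j]
        # -mu_j dI-_j/dz = -ke I-_j + (ks/2) sum_k w_k (I+_k + I-_k)
        A[n_mu + j, :n_mu] = -0.5 * kappa_s * w / mu[j]
        A[n_mu + j, n_mu:] = -0.5 * kappa_s * w / mu[j]
        A[n_mu + j, n_mu + j] += kappa_e / mu[j]
    return A, mu, w
```

A quick energy-balance check: for albedo 1 (κa = 0, so κs = κe) the assembled matrix annihilates a constant isotropic intensity field.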


The total scattered intensity in air is given by the sum of I⁺(μs, φs) and Is, where Is is the intensity due to random surface scattering by the top layer boundary:

Is = (1/4π) ∫₀^{2π} ∫₀^1 SR(−μs, −μ, φs − φ) Ii(0, μ, φ) dμ dφ    (54)

The explicit forms of the matrices Rg, SR, and ST are available in Ref. 1. The expressions for SR and ST are for an irregular boundary. The G matrix is assumed to have the same mathematical form as SR. Once the total scattered intensity for a p-polarized component Isp of the intensity matrix is found, the scattering coefficient for this component is defined relative to the incident intensity Iqi = Iq0 δ(μ − μi) δ(φ − φi) of polarization q along the (μi, φi) direction as

σ⁰pq = 4π Isp cos θs / Iq0    (55)

The transfer equations given by Eq. (48) and Eq. (49) can be solved exactly by numerical techniques (13,14). Analytic solutions by iteration are available to first order in albedo (1, ch. 2), which is also known as the Born approximation. It is generally practical to carry the iterative solution to second order in albedo. This additional complexity is justified only for cross polarization in the plane of incidence, because its first-order result is zero. For like polarization the difference between the first- and second-order results is generally within experimental error.

First-Order Solution of the Layer Problem. In many practical applications the volume scattering from an inhomogeneous layer can be approximated by a first-order solution whenever the albedo of the medium is smaller than about 0.3. Furthermore, because volume scattering has a slowly varying angular behavior over incident angles in the range between 0° and 70°, we can approximate the transmission across an irregular boundary by that across a plane boundary. Under these conditions the first-order solution to the radiative transfer equations consists of four major terms: (1) volume scattering by the inhomogeneous layer transmitted across the top boundary, (2) scattering by the bottom layer boundary passing through the layer into the upper medium, (3) scattering between the layer volume and the lower boundary passing through the layer into the upper medium, and (4) surface scattering by the top boundary. Note that only the last term represents pure surface scattering and does not involve propagation through the layer volume. The second and third terms depend on contributions from the lower boundary and can be ignored in dealing with a half-space or a very thick layer. The volume backscattering term for p-polarized scattering has the form

σ⁰vpp(θ) = 0.5 cos θt Tp(θ, θt) Tp(θt, θ) (κs/κe) ⟨Pspp(θt, π; π − θt, 0)⟩ [1 − exp(−2κe d / cos θt)]
         = (2π cos θt / σe) Tp(θ, θt) Tp(θt, θ) ⟨|Spp(θt, π; π − θt, 0)|²⟩ [1 − exp(−2κe d / cos θt)]    (56)


where θt is the angle of transmission, Tp(θt, θ) is the Fresnel power transmission coefficient for p polarization, d is the depth of the layer, Spp(θt, π; π − θt, 0) is the scattering amplitude, and the ensemble average is over the distribution of the orientation of the scatterer; κe = Nν σe, where Nν is the number density of scatterers and the extinction cross section is given by

σe = −(4π/k) Im[Spp(π − θt, 0; π − θt, 0)]    (57)
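Equation (57) is a statement of the optical theorem: extinction is set by the imaginary part of the forward-scattering amplitude. As a self-contained illustration of ours (not from the article), the sketch below applies it to a small lossy dielectric sphere in the Rayleigh limit, where the leading-order result is the absorption cross section.

```python
import numpy as np

def extinction_cross_section(S_forward, k):
    """Eq. (57): sigma_e = -(4*pi/k) Im[S_pp(forward)], exp(-jkr) convention."""
    return -4.0 * np.pi / k * S_forward.imag

# Illustration: small lossy sphere (Rayleigh limit), for which the forward
# scattering amplitude is S_fwd = k^2 a^3 (eps - 1)/(eps + 2).
k = 2 * np.pi / 0.06        # wave number for a 6 cm wavelength (~5 GHz), 1/m
a = 0.002                   # sphere radius in m, so ka << 1
eps = 30.0 - 4.0j           # lossy permittivity; -j sign as in the article
S_fwd = k**2 * a**3 * (eps - 1) / (eps + 2)
sigma_e = extinction_cross_section(S_fwd, k)
# At leading order in ka the optical theorem returns the absorption cross
# section, 4*pi*k*a^3 * 3*eps''/|eps + 2|^2.
sigma_abs = 4 * np.pi * k * a**3 * 3 * abs(eps.imag) / abs(eps + 2)**2
```

The agreement between `sigma_e` and `sigma_abs` here is exact because Rayleigh scattering enters only at higher order in ka.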

The extinction coefficient κe is the controlling factor for propagation through the layer. Both the scattering and the extinction coefficients depend on the scattering amplitude of the scatterer. In Eq. (56) we provide two forms of the scattering coefficient because for some problems, such as Rayleigh scattering, the phase function Pspp is known, while for others only the scattering amplitude is available. Surface backscattering from the lower boundary is given by the surface scattering coefficient of the lower boundary, σ⁰spp(θ), modified by propagation loss through the layer and transmission across the top boundary as

σ⁰lpp(θ) = cos θ Tp(θ, θt) [σ⁰spp(θt)/cos θt] Tp(θt, θ) exp(−2κe d/cos θt)    (58)

The explicit form of the surface scattering coefficient σ⁰spp(θ) is given in the next section. Finally, we give the expression for the volume–surface interaction term resulting from the incident wave transmitted through the layer, reflected by the lower boundary, and then scattered by layer inhomogeneities back into the direction of the receiver. By reciprocity, the wave that traverses the same path in the reverse direction makes the same contribution to the receiver. This term has the form

σ⁰lvpp(θ) = cos θ Tp(θ, θt) [κs d |Rp(θt)|²/cos θt] Tp(θt, θ) exp(−2κe d/cos θt)
            × {⟨Pspp(π − θt, π; π − θt, 0)⟩ + ⟨Pspp(θt, π; θt, 0)⟩}
          = cos θ Tp(θ, θt) [Nν d |Rp(θt)|²/cos θt] Tp(θt, θ) exp(−2κe d/cos θt)
            × (4π){⟨|Spp(π − θt, π; π − θt, 0)|²⟩ + ⟨|Spp(θt, π; θt, 0)|²⟩}    (59)
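The first-order terms of Eqs. (56), (58), and (59) can be assembled as below. This is a sketch with placeholder inputs: the phase-function values, the transmission coefficient (assumed reciprocal, Tp(θ, θt) = Tp(θt, θ)), and the lower-boundary scattering coefficient must be supplied by the user, and all names are ours.

```python
import numpy as np

def first_order_backscatter(theta, theta_t, T_p, kappa_s, kappa_e, d,
                            P_vol, sigma0_lower, R_p_sq, P_down, P_up):
    """Sum of the first-order layer terms of Eqs. (56), (58) and (59).

    T_p          : Fresnel power transmission coefficient (reciprocity assumed)
    P_vol        : <Ps_pp(theta_t, pi; pi - theta_t, 0)>
    sigma0_lower : lower-boundary scattering coefficient at theta_t
    R_p_sq       : |R_p(theta_t)|^2
    P_down, P_up : the two phase-function values appearing in Eq. (59)
    """
    mu_t = np.cos(theta_t)
    L2 = np.exp(-2.0 * kappa_e * d / mu_t)   # two-way propagation loss
    vol = 0.5 * mu_t * T_p**2 * (kappa_s / kappa_e) * P_vol * (1.0 - L2)
    lower = np.cos(theta) * T_p**2 * sigma0_lower / mu_t * L2
    interact = (np.cos(theta) * T_p**2 * kappa_s * d * R_p_sq / mu_t
                * L2 * (P_down + P_up))
    return vol + lower + interact
```

As expected from the text, for an optically thick layer only the volume term survives, while for a vanishing layer depth only the (unattenuated) lower-boundary term remains.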

In Eq. (59), |Rp(θt)|² is the p-polarized Fresnel reflectivity and Nν is the number density. The relative importance of each of the four terms and the actual contents of the phase function or scattering amplitude depend on the specific application. This is illustrated in the subsequent sections.

SCATTERING FROM SOIL SURFACES

When an incident electromagnetic wave impinges on an irregular surface, it induces a current on it. The waves radiated by this current are called scattered waves. To calculate surface scattering, one needs to solve the integral equation that governs this induced surface current. In general, there is no closed-form analytic solution for this integral equation. An approximate solution to it is available in Chapter 4 of Ref. 1, and a bistatic scattering coefficient was derived from it. It is shown in Chapter 5 of Ref. 1 that the classical scattering coefficients for rough surfaces under high- and low-frequency conditions, i.e., those based on the Kirchhoff and the small-perturbation approximations, are special cases of this bistatic scattering coefficient. Hence we can examine rough-surface scattering properties over the entire frequency band based on this coefficient. Only single-scatter backscattering from a randomly rough soil surface is considered here. Readers interested in bistatic scattering and multiple surface scattering are referred to Refs. 1 and 15, respectively.

Backscattering from a Randomly Rough Soil Surface

To compute backscattering from a soil surface, we need to know both the electric and geometric properties of the surface. In general, the permeability of the soil can be taken to be the same as that of air, and only the complex dielectric constant is needed. An empirical formula for the relative complex dielectric constant of soil is available in Ref. 16. It has the form

εr = [1 + (sbd/2.65)(4.7^0.65 − 1) + mv^b (εw^0.65 − 1)]^{1/0.65}    (60)

where sbd stands for the soil bulk density in g cm⁻³, mv is the volumetric soil moisture, and εw is the permittivity of water, given by

εw = 4.9 + (εw0 − 4.9)/(1 + j f Tτ)
Tτ = 1.1109/10 − 3.824T/10³ + 6.938T²/10⁵ − 5.096T³/10⁷
εw0 = 88.045 − 0.4147T + 6.295T²/10⁴ + 1.075T³/10⁵
f = frequency in GHz
T = temperature in degrees centigrade

and b = 1.09 − 0.11S + 0.18C is a parameter accounting for the percent of clay, C, and the percent of sand, S, in the soil.

For a randomly rough surface not skewed by natural forces such as the wind, it is sufficient to describe the geometry of the surface by its first- and second-order statistics: here, the surface root mean square (rms) height σ and its autocorrelation function ρ(ξ), normalized to its height variance σ². The expression for the single-scatter backscattering coefficient is

σ⁰pp(θ) = (k²/2) exp(−2k²σ² cos²θ) Σ_{n=1}^∞ (σ^{2n}/n!) |Iⁿpp|² W^{(n)}(−2k sin θ, 0)    (61)

where k is the wave number, θ is the incident angle, p stands for either vertical (ν) or horizontal (h) polarization, and

Iⁿpp = (2k cos θ)ⁿ fpp exp(−k²σ² cos²θ) + (k cos θ)ⁿ Fpp/2    (62)

In Eq. (62) the polarization-dependent coefficients, fpp and Fpp, are

fνν = 2Rν/cos θ,   fhh = −2Rh/cos θ
Fνν = 2 sin²θ (1 + Rν)² (εr − 1)(sin²θ + εr cos²θ)/(εr² cos³θ)
Fhh = −2 sin²θ (1 + Rh)² (εr − 1)/cos³θ
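The empirical dielectric model of Eq. (60), with the water-permittivity expressions above, can be sketched as follows (T in °C, f in GHz). Treating the sand and clay contents S and C as fractions and the default bulk density value are assumptions of ours.

```python
import numpy as np

def water_permittivity(f_ghz, T=20.0):
    """Debye-type water permittivity used in Eq. (60); -j loss convention."""
    eps_w0 = 88.045 - 0.4147 * T + 6.295e-4 * T**2 + 1.075e-5 * T**3
    t_tau = 1.1109e-1 - 3.824e-3 * T + 6.938e-5 * T**2 - 5.096e-7 * T**3
    return 4.9 + (eps_w0 - 4.9) / (1 + 1j * f_ghz * t_tau)

def soil_permittivity(f_ghz, mv, sand, clay, sbd=1.4, T=20.0):
    """Empirical soil dielectric model of Eq. (60) (Ref. 16).

    mv: volumetric moisture; sand, clay: assumed here to be fractions."""
    b = 1.09 - 0.11 * sand + 0.18 * clay
    eps_w = water_permittivity(f_ghz, T)
    return (1 + (sbd / 2.65) * (4.7**0.65 - 1)
            + mv**b * (eps_w**0.65 - 1))**(1 / 0.65)
```

At very low frequency the water model recovers the static permittivity of water (about 80 at 20 °C), and a moist soil yields a complex εr with the negative imaginary part used throughout this article.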

The surface roughness spectrum W^{(n)}(−2k sin θ, 0) is related to the surface autocorrelation function by

W^{(n)}(−2k sin θ, 0) = (1/2π) ∫∫ ρⁿ(ξ, ζ) e^{−j2kξ sin θ} dξ dζ    (63)

= (L²/2n) exp[−(kL sin θ)²/n]                                    if ρ(ξ) = exp(−ξ²/L²)
= L² (2kL sin θ)^{xn−1} K_{xn−1}(2kL sin θ)/[2^{xn−1} Γ(xn)]     if ρ(ξ) = (1 + ξ²/L²)^{−x}, x ≥ 1
= (L/n)² [1 + (2kL sin θ/n)²]^{−1.5}                             if ρ(ξ) = exp(−ξ/L), ξ ≥ 0    (64)

where Γ(·) is the gamma function and Kv(·) is the modified Bessel function of the second kind of order v. In the above we have assumed that the local incident angle in the Fresnel reflection coefficients Rν, Rh in fpp can be approximated by the incident angle, unless the surface correlation length is larger than a wavelength and the rms slope exceeds 0.4; in the latter case Rp(θ) → Rp(0) in fpp. This assumption leads to a restriction on the applicability of Eq. (61). The restriction is a complex function of the surface roughness parameters relative to the incident wavelength, the shape of the correlation function, and the relative permittivity εr of the surface. An examination of this problem can be found in Ref. 1 (ch. 6). Fortunately we rarely have to deal with this complication in applications, because natural soil surfaces have many roughness scales. Almost without exception there is a roughness scale smaller than the incident wavelength under consideration, which will dominate the scattering. Hence Eq. (61) is generally applicable as long as the surface rms slope is less than about 0.4. Note that when the grain size of the soil is comparable to or larger than the incident wavelength, such as at optical frequencies, the wave no longer sees a surface; it sees a collection of grains, and the scattering phenomenon changes from surface to volume.

Models for Large-Scale Roughness. If kσ is larger than 1.5, the second term in Eq. (62) can be ignored. This is because the first term in Eq. (62) has a large enough growth factor to compensate for the associated exponential decay factor, while the second term does not. Thus, for large kσ, only the first term in Eq. (62) (also known as the Kirchhoff term) remains, and the series in Eq. (61) should be rewritten in exponential form. The final form for the scattering coefficient depends on


the assumed surface height density function. Let us consider the Gaussian and a modified exponential height density function,

p(z) = c^{μ+1} |z|^μ Kμ(c|z|)/[2^μ √π Γ(μ + 0.5)]

where c = √(2ν)/σ, μ = ν − 0.5, 0.75 < ν < 1, Γ and Kμ are the gamma and modified Bessel functions, and σ is the surface rms height. In backscattering we have

σ⁰pp(θ) = |Rp(0)|²/(2σs² cos⁴θ) exp[−tan²θ/(2σs²)]         (Gaussian)
        = √3 |Rp(0)|²/(2σs² cos⁴θ) exp(−√3 tan θ/σs)       (ν = 0.75)
        = 2|Rp(0)|² sin θ/(σs³ cos⁵θ) K₁(2 tan θ/σs)        (ν = 1.0)    (65)

where σs² = 2σ²|ρ″(0)| is the variance of the surface slope and ρ″(0) is the second derivative of the normalized surface correlation at the origin. Equation (65) is independent of the specific form of the surface correlation function. It is useful for surfaces whose large-scale roughness is larger than the exploring wavelength and which lack smaller-scale roughness, so that the geometric-optics condition can be realized. This can happen for sea surfaces under low wind conditions.

Theoretical Model Behaviors

Large Roughness or High-Frequency Models. We first plot the expressions in Eq. (65) to show the differences in the backscattering behaviors of these scattering coefficients. In Fig. 4 we see that there is not much difference between the ν = 0.75 and ν = 1.0 expressions at large angles of incidence. At small angles the ν = 1.0 expression gives the highest values and the Gaussian expression the lowest. Furthermore, the angular trends for the different height density functions are different.

Figure 4. Comparisons of the backscattering coefficients in Eq. (65) when the rms slope is 0.35. Results indicate significant differences in angular trends.

Low to Moderate Roughness or Frequency Models. Next we illustrate the dependence of the surface scattering coefficient given by Eq. (61) on the type of correlation function, the roughness scales, and the dielectric constant.

Effects of Surface Correlation. To show the effect of the surface correlation function on surface scattering, we plot in Fig. 5 the backscattering coefficients corresponding to the exponential, x-power, and Gaussian correlation functions in Eq. (63). At large angles of incidence the exponential function gives the highest level and the Gaussian the lowest level among the three functions. The exponential correlation function is commonly used in theoretical models to compare with measurements. In most cases it gives better agreement than the Gaussian correlation, especially at small incident angles or for surface roughness values that fall into the low- or intermediate-frequency regions. However, it is not differentiable at the origin and does not have a slope distribution; that is, it represents a surface with 90° slopes at some points. Thus it is incompatible with theoretical model development. For the purpose of theoretical analysis, it should be viewed as an approximation to a more complicated correlation function that is differentiable at the origin. At low frequencies it is the functional form of the correlation function, not its property at the origin, that is important. We can view the use of the exponential function as a means of simplifying the model to facilitate its application.

Figure 5. (a) Effects of surface correlation on backscattering, with kσ = 0.2, kL = 5, and εr = 15. The exponential correlation function gives the highest scattering level at large angles of incidence. (b) An enlarged comparison of the effects of the surface correlation function in the small angular region (0° to 30°), with kσ = 0.2, kL = 5, and εr = 15. The angular shapes of the backscattering curves are controlled by the surface correlation function when kσ is small.

The curvatures of the three sets of backscattering curves in Fig. 5(a) are different. More details can be seen after we replot them over a smaller angular region (0° to 30°), as shown in Fig. 5(b). Here we see that the Gaussian correlation leads to a bell-shaped curve, as expected, while the 1.5-power function generates a fairly straight line and the exponential correlation function produces an exponentially shaped angular curve. The latter two curves coincide almost exactly at 0° and 30°, and the 1.5-power curve is clearly higher at in-between angles. In applications the 1.5-power function may be a good alternative to the exponential function in some cases.

Effects of kσ and kL. Next we show the effect of the surface parameter kσ, as illustrated in Fig. 6 and Fig. 7. Here the 1.5-power correlation function is used, and kL is chosen to be 4. In Fig. 6 the backscattering coefficient is seen to rise in level as kσ increases from 0.1 to 0.5. The angular trends remain almost unchanged except for a gradual narrowing of the spacing between vertical and horizontal polarizations.

Figure 6. Increase in backscattering with kσ when kσ is in the range 0 to 0.5, with kL = 4 and εr = 25.
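A sketch of Eqs. (61) and (62) for the Gaussian correlation function, whose spectrum is the first case of Eq. (64), is given below. Truncating the series at a finite number of terms and all function and parameter names are choices of ours.

```python
import math
import numpy as np

def fresnel_coeffs(theta, eps_r):
    """Fresnel reflection coefficients Rv, Rh at incident angle theta (rad)."""
    st, ct = np.sin(theta), np.cos(theta)
    stt = np.sqrt(eps_r - st**2)   # direction cosine term in the lower medium
    R_v = (eps_r * ct - stt) / (eps_r * ct + stt)
    R_h = (ct - stt) / (ct + stt)
    return R_v, R_h

def sigma0_pp(theta, k, sigma, L, eps_r, pol="vv", n_terms=40):
    """Single-scatter backscattering coefficient, Eqs. (61)-(62), using the
    Gaussian-correlation roughness spectrum (first case of Eq. (64))."""
    st, ct = np.sin(theta), np.cos(theta)
    R_v, R_h = fresnel_coeffs(theta, eps_r)
    if pol == "vv":
        f = 2 * R_v / ct
        F = (2 * st**2 * (1 + R_v)**2 * (eps_r - 1)
             * (st**2 + eps_r * ct**2) / (eps_r**2 * ct**3))
    else:
        f = -2 * R_h / ct
        F = -2 * st**2 * (1 + R_h)**2 * (eps_r - 1) / ct**3
    total = 0.0
    for n in range(1, n_terms + 1):
        I_n = ((2 * k * ct)**n * f * np.exp(-(k * sigma * ct)**2)
               + (k * ct)**n * F / 2)                           # Eq. (62)
        W_n = (L**2 / (2 * n)) * np.exp(-(k * L * st)**2 / n)   # spectrum of rho^n
        total += sigma**(2 * n) / math.factorial(n) * abs(I_n)**2 * W_n
    return 0.5 * k**2 * np.exp(-2 * (k * sigma * ct)**2) * total
```

With kσ = 0.2 and kL = 5 this reproduces the rapid angular fall-off expected of a Gaussian-correlated surface, as discussed around Fig. 5.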
This means that for kσ smaller than 0.5, its influence is primarily on the level of the backscattering curve and, to a much lesser extent, on slowing the angular trends for both vertical and horizontal polarization. However, as kσ increases further from 0.5 to 1.2, Fig. 7 shows that the angular trend of the backscattering coefficient begins to level off significantly. The level of the backscattering curve at small incident angles begins to drop after kσ reaches 0.8, while at large angles it continues to rise.

Figure 7. Further increase in kσ changes the angular shape of backscattering when kL = 4 and εr = 25. Under this condition backscattering decreases in the range 0° to 10° and increases in the range 30° to 60°.

The angular region where the backscattering curves with small kσ values cross over those with large kσ values is between 15° and 20°. In summary, the effects of kσ on backscattering vary depending on its value. As kσ increases up to around 0.5, it causes a rise in the backscattering curve over all angles of incidence, as shown in Fig. 6. Then, as kσ increases further to 0.8, it causes a gradual leveling off of the angular trend by actually lowering the backscattering at small angles of incidence. This leveling off becomes significant as kσ increases beyond 0.8. Meanwhile, the spacing between vertical and horizontal polarizations decreases with increasing kσ.

Next we consider the effects of kL variation on the backscattering coefficient in Fig. 8, with kσ fixed at 0.5. An increase in kL is seen to cause a narrowing of the spacing between vertical and horizontal polarizations. This means that the current model approaches the Kirchhoff model as kL gets large. In general, the backscattering coefficient drops off faster as kL increases. With the choice of the 1.5-power correlation function the angular trends are mostly linear.

Figure 8. Faster drop-off of backscattering with the incident angle as kL increases, with ρ = (1 + x²)^{−1.5}, kσ = 0.5, and εr = 64.

Dependence on Dielectric Constant. Finally we illustrate the dependence of the backscattering coefficient on the dielectric constant of the surface. For simplicity, the term dielectric constant will always refer to its value relative to vacuum. It is generally expected that the level of the backscattering coefficient increases with an increase in the dielectric constant and that its effect on the angular trend is negligible. This is true for a vertically polarized wave. For a horizontally polarized wave, both the level and the angular trend are affected. As shown in Fig. 9, the horizontally polarized coefficient drops off faster with the incident angle as the surface dielectric constant becomes larger, causing the spacing between the VV and HH polarizations to widen.

Figure 9. Effect of a change in the surface dielectric constant on backscattering. The calculation uses kσ = 0.125, kL = 1.4, and the 1.5-power correlation function. An increase in the surface dielectric constant causes the backscattering coefficients to increase for both vertical and horizontal polarizations and produces a faster angular drop-off for horizontal polarization.

Comparisons with Soil Measurements

In Fig. 10 we show a comparison between the surface model given by Eq. (61) and measurements from a rough soil surface reported in Ref. 23, where ground truth data were acquired by the researchers, so the model parameters were fixed. Except for the 50° point at 1.5 GHz, very good agreement is realized in levels and trends between the model predictions and the data at both 1.5 and 4.75 GHz, in VV and HH polarizations, and over an angular range from 20° to 70°.

Figure 10. Comparison between the surface model (IEM) and backscattering measurements from a known soil surface: (a) f = 1.5 GHz, σ = 1.12 cm, L = 8.4 cm, εr = 15.34 − j3.66, ρ(r) = exp[−(r/L)^1.2]; (b) f = 4.75 GHz, σ = 1.2 cm, L = 8.4 cm, εr = 15.23 − j2.12, ρ(r) = exp[−(r/L)^1.2]. Results indicate good agreement between data and model in frequency, incidence angle, and polarization.

SCATTERING FROM A VEGETATED AREA

A vegetation layer may be viewed as an inhomogeneous layer without a top boundary. In general, the scatterers within the layer are collections of leaves, stems, branches, and trunks. At frequencies around 8.6 GHz or higher, leaves are usually the dominant scatterer and attenuator beyond 20° off the vertical in the backscattering direction. To illustrate volume scattering by leaves, we consider only backscattering from a half-space of disk- and needle-shaped leaves in this section. Readers are referred to Ref. 1 (ch. 11) for scattering by combinations of leaves, branches, and trunks.

To use Eq. (56) for the volume-scattering calculation from leaves, we need the scattering amplitudes of disk- and needle-shaped leaves. A basic approach to this problem is to use the scattered-field formulation based on the volume equivalence theorem,

Es(r) = [k²(εr − 1)/4π] ∫V [exp(−jk|r − r′|)/|r − r′|] Ein dV′    (66)

where k is the wave number in air, εr is the relative permittivity of the scatterer (leaf), Ein is the field inside the scatterer, and the integration is over the volume V of the scatterer defined in terms of the primed variables. Clearly the scattered field can be found if the field inside the scatterer is known, and

to facilitate the integration, we need to express the integration variables in the local frame (the principal frame of the scatterer) and then relate the local frame to the reference frame. This setup will also allow arbitrary orientation of the scatterer relative to the reference frame, since the angular separations between the two frames can be varied. Furthermore, the leaves of a given species will have an orientation distribution, and we need to average over this distribution in order to find the scattering coefficient.

Scattering Amplitudes of Scatterers

To derive the scattering amplitudes for leaf-type scatterers, we need to allow the leaves to be arbitrarily oriented, and we want to obtain an estimate of the field inside the leaf. Three types of leaf shape are considered: elliptic disk, circular disk, and needle. Due to a lack of symmetry, the orientation of an elliptic disk is specified by three angles, while the circular-disk- and needle-shaped leaves are specified by only two.

Relation between Reference and Local Frames. To relate a reference frame (x, y, z) to a local frame (x″, y″, z″), the principal frame of a symmetric scatterer such as a needle or a circular disk, we need to specify a polar angle β and an azimuthal angle α of rotation between the coordinates. Let z″ correspond to the normal vector of the disk or to the needle axial axis. From Fig. 11 the two angles between the coordinate systems are defined by first rotating around z″ by α and then around y″ by β, yielding

| x″ |   |  cos β cos α   cos β sin α   −sin β | | x |       | x |
| y″ | = |  −sin α        cos α          0     | | y |  ≡  U | y |    (67)
| z″ |   |  sin β cos α   sin β sin α    cos β | | z |       | z |

For an elliptic disk another rotation, with respect to the z″ axis by an angle γ, defined by

| x″ |   |  cos γ   sin γ   0 | | x |        | x |
| y″ | = | −sin γ   cos γ   0 | | y |  ≡  Uγ | y |    (68)
| z″ |   |  0       0       1 | | z |        | z |

is needed, yielding the final relation after redefining the coordinates as

| x″ |      | x |          |  cos α cos β cos γ − sin α sin γ     sin α cos β cos γ + cos α sin γ    −sin β cos γ |
| y″ | = Ue | y | ,   Ue = | −cos α cos β sin γ − sin α cos γ    −sin α cos β sin γ + cos α cos γ     sin β sin γ |    (69)
| z″ |      | z |          |  cos α sin β                         sin α sin β                          cos β       |

where U and Ue are unitary matrices; their inverses are equal to their transposes. Clearly U is a special case of Ue when γ = 0.

Figure 11. Illustration of the principal frame of the scatterer relative to the reference frame.

Estimate of the Field Inside a Scatterer. For an elliptic disk, the field inside the scatterer in the local frame is related to the incident field Ei = E0 exp(−jk·r), where k = îk, î is the unit vector in the incident direction, and r is the displacement vector in the reference frame, by (17)

Elocal = diag(1/a₁, 1/a₂, 1/a₃) Ue · Ei    (70)

Converting this to the reference frame, we have

Ein = Ue⁻¹ · Elocal ≡ Ae · Ei    (71)

where Ae is the matrix of transformation relating the incident field in the reference frame to the inner field in the same frame. In Eq. (70) a vector that appears with a matrix is understood to be a column matrix, and

a₁ = 1 + (εr − 1)g₁
a₂ = 1 + (εr − 1)g₂
a₃ = 1 + (εr − 1)g₃    (72)

where the gᵢ are demagnetizing factors that vary depending on the shape of the scatterer. For an elliptic disk-shaped leaf, we have (18)

g₁ = [c(1 − e²)^{1/2}/(a e²)] [K(e, π/2) − E(e, π/2)]
g₂ = [c/(a e² (1 − e²)^{1/2})] [E(e, π/2) − (1 − e²) K(e, π/2)]
g₃ = 1 − [c/(a (1 − e²)^{1/2})] E(e, π/2)    (73)

In the above, a > b ≫ c are the semi-axes of the elliptic disk, e = [1 − (b/a)²]^{1/2},


The elliptic integrals of the first and second kinds are given by

K(e, π/2) = ∫₀^{π/2} dξ / (1 − e² sin²ξ)^½   (74)

E(e, π/2) = ∫₀^{π/2} (1 − e² sin²ξ)^½ dξ
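A short numerical sketch (illustrative parameter values, not from the article) that evaluates K and E of Eq. (74) by midpoint-rule quadrature and checks that the elliptic-disk demagnetizing factors of Eq. (73) sum to unity:

```python
import math

def K_int(e, n=20000):
    # K(e, pi/2) of Eq. (74) by the midpoint rule
    h = (math.pi/2)/n
    return sum(h/math.sqrt(1 - (e*math.sin((i + 0.5)*h))**2) for i in range(n))

def E_int(e, n=20000):
    # E(e, pi/2) of Eq. (74) by the midpoint rule
    h = (math.pi/2)/n
    return sum(h*math.sqrt(1 - (e*math.sin((i + 0.5)*h))**2) for i in range(n))

# Elliptic disk with semi-axes a > b >> c (values chosen for illustration only)
a_ax, b_ax, c_ax = 2.0, 1.0, 0.02
e = math.sqrt(1 - (b_ax/a_ax)**2)
K, E = K_int(e), E_int(e)
s = math.sqrt(1 - e**2)
g1 = (c_ax/a_ax)*s*(K - E)/e**2                 # Eq. (73)
g2 = (c_ax/a_ax)*(E - (1 - e**2)*K)/(e**2*s)    # Eq. (73)
g3 = 1 - (c_ax/a_ax)*E/s                        # Eq. (73)
```

The sum g1 + g2 + g3 = 1 is the standard constraint on demagnetizing (depolarization) factors, so it serves as a consistency check on the reconstructed expressions.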

For a circular disk-shaped leaf (a = b ≫ c), we should replace Ae by Ac, which is equal to Ae with γ = 0, and use the demagnetizing factors

g₁ = g₂ = [1/(2(m² − 1))] { [m²/(m² − 1)^½] sin⁻¹[(m² − 1)^½/m] − 1 }
g₃ = [m²/(m² − 1)] { 1 − [1/(m² − 1)^½] sin⁻¹[(m² − 1)^½/m] },   m = a/c   (75)

For a needle-shaped leaf (a = b ≪ c), we should replace Ae by An, which is equal to Ae with γ = 0, and another set of demagnetizing factors given by

g₁ = g₂ = m²/2 + [m(m² − 1)/4] ln[(m − 1)/(m + 1)]
g₃ = −(m² − 1) { 1 − (m/2) ln[(m + 1)/(m − 1)] },   m = (1 − a²/c²)^{−1/2}   (76)

The use of Eq. (72) in Eq. (66), with the phase of the incident field k·r replaced by (Ue·k)·r″, allows the integration variables to be expressed in the local frame to facilitate integration.

Scattering Amplitude of an Elliptic Disk. In this section we show the expression for the scattering amplitude, with ŝ and î denoting the unit vectors in the scattered and incident directions, respectively. For a p̂s-polarized scattered field, the amplitude portion of p̂s·Ein is p̂s·Ein = p̂s·Ae·E0. From Eq. (66) we can write the p̂s-polarized scattered field component for an elliptic disk in the far field by letting |r − r′| equal r − ŝ·r′ in the phase and equal |r| = r in the amplitude. Then Eq. (66) reduces to

p̂s·Es(r) = [k²(εr − 1)/(4π)] ∫_V {exp[−jk(r − ŝ·r′)]/|r|} (p̂s·Ein) dV
         ≈ [k²(εr − 1)/(4πr)] (p̂s·Ae·E0) exp(−jkr) ∫_V exp[jk(ŝ − î)·r′] dV
         = [k²(εr − 1)/(4πr)] (p̂s·Ae·E0) exp(−jkr) Ie
         ≡ [exp(−jkr)/r] p̂s · k²(εr − 1)(Ie Ae/4π) · E0
         ≡ [exp(−jkr)/r] p̂s · fe(ks, ki) · p̂i E0   (77)

where fe(ks, ki) is the scattering amplitude matrix for an elliptic disk, ks = kŝ, and ki = kî. The elements of this matrix, defined in accordance with vertical and horizontal polarizations, can be written as

[ fνν,  fνh ;      [k²(εr − 1)Ie/(4π)] [ ν̂s·Ae·ν̂i,   ν̂s·Ae·ĥi ;
  fhν,  fhh ]  =                        ĥs·Ae·ν̂i,   ĥs·Ae·ĥi ]   (78)

where the vertical and horizontal polarization unit vectors in Eq. (78) are chosen to agree with the θ̂ and φ̂ unit vectors of a standard spherical coordinate system and form an orthogonal set with ŝ as

ŝ  = x̂ sin θs cos φs + ŷ sin θs sin φs + ẑ cos θs
ν̂s = x̂ cos θs cos φs + ŷ cos θs sin φs − ẑ sin θs   (79)
ĥs = −x̂ sin φs + ŷ cos φs

Similarly, the unit polarization vectors associated with the incident direction are

î  = x̂ sin θi cos φi + ŷ sin θi sin φi + ẑ cos θi
ν̂i = x̂ cos θi cos φi + ŷ cos θi sin φi − ẑ sin θi   (80)
ĥi = −x̂ sin φi + ŷ cos φi

and

Ie = ∫_V exp[jk(ŝ − î)·r″] dV = 4πabc J₁(q)/q   (81)

where

q = k ( {a[Ue·(ŝ − î)]ₓ}² + {b[Ue·(ŝ − î)]_y}² )^½

Scattering Amplitude of a Circular Disk. For a circular disk-shaped leaf, the forms of Eqs. (77), (78), and (81) remain valid, but we need to replace Ae by Ac, because the definitions of the demagnetizing factors g₁, g₂, g₃ are different, and we need to set a = b in Eq. (81).

Scattering Amplitude of a Needle. For a needle-shaped leaf, the forms of Eqs. (77) and (78) remain valid, but we need to replace Ae by An, because the definitions of the demagnetizing factors g₁, g₂, g₃ are different. The corresponding expression for Ie will be called In. Its integral form is the same as Eq. (81) except that we have to evaluate it differently. Letting the needle length be L = 2c, we have

In = πa² ∫_{−L/2}^{L/2} exp(jz″qz) dz″ = [πa²/(jqz)] [exp(jqz L/2) − exp(−jqz L/2)] = (2πa²/qz) sin(qz L/2)   (82)

Theoretical Behaviors

We consider the frequency, size, and moisture dependence of the backscattering coefficient σ⁰ of circular- and needle-shaped leaves in this section. To do so, we need an estimate of the dielectric constant of leaves as a function of frequency and moisture content. The empirical formula we use here is from Ref. 19.


Permittivity of Vegetation Given GMC. When the gravimetric moisture content (GMC) is given and denoted by Mg, the nondispersive residual component of the dielectric constant is

εn = 1.7 − 0.74 Mg + 6.16 Mg²

The free-water volume fraction is

vfw = Mg (0.55 Mg − 0.076)

while the volume fraction of the bound water is

vfb = 4.64 Mg² / (1 + 7.36 Mg²)

With the above quantities known, the permittivity of vegetation is given as a function of frequency f in GHz by

ε(Mg) = εn + vfw [4.9 + 75/(1 + j f/18) − j 22.86/f] + vfb [2.9 + 55/(1 + (j f/0.18)^{0.5})]   (83)

Dependence on Moisture Content, Size, and Frequency. In Fig. 12 we see that σ⁰ increases over all incident angles as the moisture content of the circular leaves increases from 0.1 to 0.5 for both vertical and horizontal polarizations. In general, there is very little difference between horizontal and vertical polarization because we assumed a random orientation distribution for the leaves. Similar trends are expected for needle-shaped leaves. In Fig. 13(a) we show the circular leaf size dependence (radius) at 8.6 GHz and 40° incidence at a dielectric constant of 14.9 − j2.5. There is a steep rise over small sizes reminiscent of the Rayleigh region, followed by an oscillatory behavior indicating the resonant region. Finally, saturation occurs in the high-frequency region, where the length or radius of the scatterer exceeds one wavelength. In the Rayleigh region vertical polarization is smaller than horizontal, while the reverse is true in the larger size region. A similar plot versus the length of the needle-shaped leaf is shown in Fig. 13(b), where we see a similar trend without oscillations and a higher horizontal than vertical polarization level for lengths exceeding 2 cm. Similar plots of the extinction cross sections show a monotonic increase with the size of the disk in Fig. 14(a) and the length of the needle in Fig. 14(b). The increase is much faster for the disk than for the needle. For the disk-shaped leaf, horizontal polarization is seen to experience more attenuation than vertical, and the roles of the polarizations reverse for the needle-shaped leaf. In Fig. 15 we show the frequency dependence of the backscattering coefficient for the two types of leaf. Both horizontal and vertical polarizations increase with frequency, and the increase is faster for the needle-shaped leaf.

Comparisons with Measurements

The first volume scattering medium considered is a soybean canopy. It is modeled as a half-space of randomly oriented disk-shaped leaves.

Figure 12. Dependence of the backscattering coefficient at 8.6 GHz on the gravimetric moisture content of a circular disk-shaped leaf, Mg = 0.1, 0.3, and 0.5. Thickness of the leaf = 0.01 cm, and radius = 1.5 cm. Results indicate negligible difference between vertical and horizontal polarizations and a monotonic increase of σ⁰ with moisture.
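The dual-dispersion formula of Eq. (83), together with the GMC expressions above, can be evaluated directly. The sketch below is an illustration (the frequency and moisture values are arbitrary choices, and the formula is as reconstructed above); it reproduces the qualitative trend of Fig. 12 that the permittivity, and hence the backscattering, grows with moisture content.

```python
def veg_permittivity(f_ghz, mg):
    """Evaluate Eq. (83) with the GMC expressions in the text (illustrative)."""
    eps_n = 1.7 - 0.74*mg + 6.16*mg**2           # nondispersive residual component
    v_fw = mg*(0.55*mg - 0.076)                  # free-water volume fraction
    v_fb = 4.64*mg**2/(1 + 7.36*mg**2)           # bound-water volume fraction
    free = 4.9 + 75.0/(1 + 1j*f_ghz/18.0) - 1j*22.86/f_ghz
    bound = 2.9 + 55.0/(1 + (1j*f_ghz/0.18)**0.5)
    return eps_n + v_fw*free + v_fb*bound

# Permittivity at 8.6 GHz for a wet leaf (Mg = 0.5)
eps = veg_permittivity(8.6, 0.5)
```

With the exp(+jωt) sign convention used here, the lossy part appears as a negative imaginary component, matching the ε′ − jε″ notation used elsewhere in the article.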

Figure 13. Dependence of the backscattering coefficient at 8.6 GHz on (a) the radius of a randomly oriented circular disk-shaped leaf of thickness 0.01 cm and εr = 14.9 − j2.5; (b) the length of a randomly oriented needle with radius 0.12 cm and εr = 8.36 − j3.12. There is a Rayleigh region for small sizes and a saturation behavior when the length or radius of the scatterer exceeds a wavelength.


Figure 16 shows the comparisons between Eq. (56) and the data from Ref. 20. The agreement between the model and the data is very good. In Fig. 17 we show another comparison of Eq. (56) with data from deciduous trees; again very good agreement is obtained. Since the leaves of soybeans and trees are clearly different in shape, it follows that while shape should make a difference in scattering, its effect becomes negligible when we consider random distributions. Thus the volume scattering model is applicable regardless of the detailed leaf shape whenever the leaf orientation distribution is very wide. For needle-shaped vegetation we show in Fig. 18 a comparison with coniferous vegetation. Very good agreement is obtained between Eq. (56) and the data reported in Ref. 21.

Figure 14. Variation of the extinction cross section at 8.6 GHz with (a) the radius of a randomly oriented circular disk-shaped leaf of thickness 0.01 cm and εr = 14.9 − j2.5; (b) the length of a randomly oriented needle with radius 0.12 cm and εr = 8.36 − j3.12. For circular disk-shaped leaves the extinction cross section is higher for horizontal polarization, and for needle-shaped leaves vertical polarization is higher.

SCATTERING FROM SNOW-COVERED GROUND

A snow medium consists of a dense population of wet or dry ice particles in air with a volume fraction usually between 0.1 and 0.4. Within a distance of a centimeter there are several needlelike ice particles that are randomly oriented. Due to their random orientation, spherical particles with a radius around 0.5 mm have been used to model snow. Due to metamorphism, an actual snow layer may have a grain size that increases with depth. It is clear that snow is a dense medium, both spatially and electrically, in the microwave region. Recall that the classical radiative transfer formulation is for sparse media and that the phase function is the product of the average of the magnitude squared of the scattering amplitude ⟨|S|²⟩ of a single scatterer and the number density n₀. This definition of the phase function is applicable to sparse media where independent scattering occurs. For snow, the scatterers may scatter as a group in some correlated manner, and near-field interaction may have to be included.

Figure 15. Vertically and horizontally polarized backscattering coefficients from a volume of randomly oriented circular disks (VVc, HHc) and needles (VVn, HHn) plotted as a function of frequency at a moisture content of 0.4. Disk thickness = 0.01 cm and radius = 1.5 cm; needle radius = 0.17 cm and length = 1.67 cm. The level of backscattering is higher for circular disk-shaped leaves, but backscattering increases faster with frequency for needle-shaped leaves.

Figure 16. Comparison between the volume scattering model and measurements from a soybean canopy at 4.25 GHz. Leaf thickness = 0.02 cm, radius = 1.5 cm, and εr = 29.1 − j6.1. [From (20).]


For this reason we need an effective number density for correlated scatterers to account for phase coherence, and a modified scattering amplitude to include near-field interaction (4). The effective number density of scatterers is smaller than the actual number because several scatterers act coherently as one scatterer. This effect can be very significant under large volume fraction conditions or when only a few scatterers lie within the distance of one wavelength. Because of random orientation, when there are too many scatterers lying within a wavelength, the coherence among scatterers can be destroyed. Thus the most significant correction for dense media is the replacement of the number density by the effective number density.

An Effective Number Density

From Ref. 4, an effective number density n_eff was derived for spherical scatterers, assuming that the positions of these scatterers are Gaussian correlated. It has the form

n_eff = (1 − e^{−k²_{si}σ²})/d³ + (e^{−k²_{si}σ²}/d³) Σ_{m=1}^{∞} [(k²_{si}σ²)^m / m!] a(kₓ) a(k_y) a(k_z)   (84)

where

a(k_r) = [√π L/(2√m d)] exp[−k_r² L²/(4m)] Re erf[(md/L + j k_r L/2)/√m]

k_s = k(x̂ sin θs cos φs + ŷ sin θs sin φs + ẑ cos θs)
k_i = k(x̂ sin θi cos φi + ŷ sin θi sin φi + ẑ cos θi)
k_s − k_i = x̂ kₓ + ŷ k_y + ẑ k_z ≡ k_si

and σ² is the variance of a scatterer about its mean position, L is the correlation length among scatterer positions, and d is the average spacing between adjacent scatterers. For spherical scatterers, Ref. 4 has extended the Mie phase function to include the near-field interaction by not invoking the far-field approximation. Since the content of the Mie phase function is involved but well documented in the literature, the reader is referred to Refs. 4 and 16.

Figure 17. Comparison between the volume scattering model given by Eq. (56) and measurements from deciduous trees in Kansas at 8.6 GHz. Leaf thickness = 0.01 cm, radius = 1.5 cm, and εr = 14.9 − j4.9. Both data and model indicate negligible difference between horizontal and vertical polarizations in backscattering from trees.


Figure 18. Comparison of the model given by Eq. (56) with coniferous tree data. Length = 1.67 cm, radius = 0.17 cm, εr = 8.36 − j3.12, and f = 9.9 GHz. [From (21).] The agreement validates the model for coniferous vegetation.


Figure 19. Comparison between the layer model defined by Eqs. (56), (58), (59), and (61) and snow data reported by Ref. 22. The notations Vvv, Vhh stand for total volume scattering, Svv, Shh for total surface scattering, and VV, HH for total scattering by the layer for vertical and horizontal polarizations respectively. Dav, Dah denote data. Relative importance between surface and volume scattering is shown.


Figure 20. Comparison between the layer model defined by Eqs. (56), (58), (59), and (61) and snow data reported by Ref. 22. The notations Vvv, Vhh stand for total volume scattering, Svv, Shh for total surface scattering, and VV, HH for total scattering by the layer. Relative importance between surface and volume scattering is shown.

Comparison with Measurements

In this section we show an application of the layer model defined by Eqs. (56), (58), (59), and (61) and the relative contributions of the surface and volume scattering terms to the total backscattering from a snow-covered irregular ground surface. Most of the model parameters have been estimated in Ref. 22, so very little parameter selection is needed. The rms heights of the air–snow boundary and the snow–ground boundary are 0.45 cm and 0.32 cm; the correlation lengths of the snow and ground surfaces are chosen to be 0.7 cm and 1.1 cm; and the snow and ground permittivities are 1.97 − j0.007 and 4.7. We use an exponential correlation function for both surfaces. Within the snow medium, the ice particle radius and permittivity are 0.014 cm and 3.15, the snow density is 0.48 g/cm³, and the snow depth is 60 cm. Using these values, we compute backscattering at 5.3 GHz in Fig. 19 and at 9.5 GHz in Fig. 20. The results are in very good agreement with the dry snow data, which are available only for vertical polarization. Figure 19 indicates that the surface scattering contribution dominates, but volume scattering from snow makes a significant contribution to the total. This is more so at 9.5 GHz, where volume scattering accounts for about a 2 dB difference.

BIBLIOGRAPHY

1. A. K. Fung, Microwave Scattering and Emission Models and Their Applications, Norwood, MA: Artech House, 1994.
2. M. A. Karam et al., Microwave scattering model for layered vegetation, IEEE Trans. Geosci. Remote Sens., 30: 767–784, 1992.
3. A. K. Fung et al., Dense medium phase and amplitude correction theory for spatially and electrically dense media, Proc. IGARSS '95, Vol. 2, 1995, pp. 1336–1338.
4. H. T. Chuah et al., A phase matrix for a dense discrete random medium: Evaluation of volume scattering coefficient, IEEE Trans. Geosci. Remote Sens., 34: 1137–1143, 1996.


5. H. T. Chuah et al., Radar backscatter from a dense discrete random medium, IEEE Trans. Geosci. Remote Sens., 35: 892–900, 1997.
6. F. T. Ulaby, R. K. Moore, and A. K. Fung, Microwave Remote Sensing, Vol. 1, Norwood, MA: Artech House, 1981.
7. S. Chandrasekhar, Radiative Transfer, New York: Dover, 1960.
8. A. Ishimaru, Wave Propagation and Scattering in Random Media, Vol. 1, New York: Academic Press, 1978, pp. 30–33, 157–165.
9. L. Tsang and J. A. Kong, Thermal microwave emission from half space random media, Radio Sci., 11: 599–609, 1976.
10. A. Ishimaru and R. L. Cheung, Multiple scattering effects on wave propagation due to rain, Ann. Telecommun., 35: 373–378, 1980.
11. H. C. Van de Hulst, Light Scattering by Small Particles, New York: Wiley, 1957.
12. A. K. Fung and H. J. Eom, Multiple scattering and depolarization by a randomly rough Kirchhoff surface, IEEE Trans. Antennas Propag., 29: 463–471, 1981.
13. A. K. Fung and M. F. Chen, Scattering from a Rayleigh layer with an irregular interface, Radio Sci., 16: 1337–1347, 1981.
14. A. K. Fung and H. J. Eom, A theory of wave scattering from an inhomogeneous layer with an irregular interface, IEEE Trans. Antennas Propag., 29: 899–910, 1981.
15. C. Y. Hsieh et al., A further study of the IEM surface scattering model, IEEE Trans. Geosci. Remote Sens., 35: 901–909, 1997.
16. F. T. Ulaby, R. K. Moore, and A. K. Fung, Microwave Remote Sensing: Active and Passive, Vol. 3, Reading, MA: Addison-Wesley, 1986, App. E8, p. 2103.
17. J. A. Stratton, Electromagnetic Theory, New York: McGraw-Hill, 1941.
18. M. A. Karam and A. K. Fung, Leaf-shape effects in electromagnetic wave scattering from vegetation, IEEE Trans. Geosci. Remote Sens., 27: 1989.
19. F. T. Ulaby and M. A. El-Rayes, Microwave dielectric spectrum of vegetation, Part II: Dual dispersion model, IEEE Trans. Geosci. Remote Sens., 25: 550–557, 1987.
20. A. K. Fung and H. J. Eom, A scatter model for vegetation up to Ku-band, Remote Sens. Environ., 15: 185–200, 1984.
21. H. Hirosawa et al., Measurement of microwave backscatter from trees, Int. Colloq. on Spectral Signatures of Objects in Remote Sensing, Les Arcs, France, 1985.
22. J. R. Kendra, K. Sarabandi, and F. T. Ulaby, Radar measurements of snow: Experiment and analysis, IEEE Trans. Geosci. Remote Sens., 36: 864–879, 1998.
23. Y. Oh, K. Sarabandi, and F. T. Ulaby, An empirical model and an inversion technique for radar scattering from bare soil surfaces, IEEE Trans. Geosci. Remote Sens., 30: 370–381, 1992.

ADRIAN K. FUNG University of Texas at Arlington

Wiley Encyclopedia of Electrical and Electronics Engineering
Microwave Remote Sensing
Standard Article
Richard K. Moore, The University of Kansas, Lawrence, KS
Copyright © 1999 by John Wiley & Sons, Inc. DOI: 10.1002/047134608X.W3609. Article online posting date: December 27, 1999.
The sections in this article are: Radiometers; Radar Scattering; Radar Scatterometers; Radar Altimeters; Ground-Penetrating Radars; Imaging Radars; Real-Aperture Radars; Synthetic-Aperture Radars.


MICROWAVE REMOTE SENSING THEORY

Microwave remote sensing of the earth has advantages over other remote sensing techniques in that microwaves can penetrate clouds and also provide day and night coverage. Recent advances in microwave remote sensing measurements include synthetic aperture radar (SAR), imaging radar, interferometric SAR, spotlight SAR, and circular SAR for active remote sensing, and polarimetric radiometry and SAR for passive remote sensing. The emphasis of this article is on how microwaves interact with geophysical terrain such as snow, ice, soils, forests, vegetation, rocky terrain, ocean, and sea surface. The scattering effects of such media contribute to microwave measurements. The scattering effects can be divided into surface scattering and volume scattering. This article describes the analytic and numerical approaches for treating such effects.

Microwave remote sensing is a broad subject. We refer the reader to other articles in this encyclopedia for measurement techniques of antennas, radars, and radiometers, signal and image processing techniques, the molecular theory of microwave radiation of gases, etc. The scattering effects of geophysical terrain can be characterized by random rough surface scattering and volume scattering from inhomogeneities of the medium. In rough surface scattering, the rough surface has many peaks and valleys, and the height profile can be described by random processes (1–3). In volume scattering, there are many particles that interact with microwaves, and the positions of these particles are random. Such volume scattering effects are described by random distributions of scatterers (2,4–6). This article studies wave scattering by random rough surfaces and random discrete scatterers and their applications to microwave interaction with geophysical media in the context of microwave remote sensing. At microwave frequencies, the sizes of the scatterers and the rough surface heights in a geophysical terrain are comparable to microwave wavelengths. Thus, the use of the wave approach based on solutions of Maxwell's equations is essential. First, we review the basic principles of microwave interaction in active and passive remote sensing. Next, we describe vector radiative transfer theory (2,7), which treats volume scattering, and the small perturbation method for treating rough surface scattering. With the advent of modern computers and the development of computational methods, recent research in scattering problems emphasizes Monte Carlo simulations of solutions of Maxwell's equations. These consist of generating samples or realizations of rough surfaces and random discrete scatterers and then using numerical methods to solve Maxwell's equations for such boundary value problems. In the final section, we describe the results of such approaches.

BASICS OF MICROWAVE REMOTE SENSING

Active Remote Sensing

We first consider the radar equation for scattering by a conglomeration of scatterers (Fig. 1). Consider a volume V containing a random distribution of particles. The volume is illuminated by a transmitter in the direction k̂i, where k̂i is the unit vector in the direction of incident wave propagation. The scattered wave in the direction k̂s is received by the receiver. Consider a differential volume dV containing N₀ = n₀ dV number of particles,


Figure 1. Scattering by a conglomeration of scatterers. The scattering geometry for remote sensing, showing both the transmitter and the receiver.

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.


where n₀ is the number of particles per unit volume. The Poynting vector Si incident on dV is

Si = Gt(k̂i) Pt / (4π Ri²)   (1)

where Pt is the power transmitted by the transmitter, Gt(k̂i) is the gain of the transmitter in the direction k̂i, and Ri is the distance between the transmitter and dV. Let σd^(N₀)(k̂s, k̂i) denote the differential scattering cross section of the N₀ particles in dV. The physical size of dV is chosen so that

σd^(N₀)(k̂s, k̂i) = p(k̂s, k̂i) dV   (2)

which means σd^(N₀) is proportional to dV and p(k̂s, k̂i) is the differential cross section per unit volume. The measured power at the receiver due to dV is

dPr = Si σd^(N₀)(k̂s, k̂i) Ar(k̂s) / Rr²   (3)

where Rr is the distance between dV and the receiver, and Ar(k̂s) is the effective receiver area of the receiving antenna,

Ar(k̂s) = Gr(k̂s) λ² / (4π)   (4)

where Gr(k̂s) is the gain of the receiver in direction k̂s. Putting together Eqs. (1) to (4) and integrating over the volume gives the receiver power Pr as

Pr = Pt ∫_V dV [λ² Gt(k̂i) Gr(k̂s) / ((4π)² Ri² Rr²)] p(k̂s, k̂i)   (5)

Equation (5) is the radar equation for bistatic scattering of a volume of scatterers. For monostatic radar, the scattered direction is opposite to that of the incident direction, k̂s = −k̂i, and we have for Rr = Ri = R

Pr = Pt ∫_V dV [λ² [Gt(k̂i)]² / ((4π)² R⁴)] p(−k̂i, k̂i)   (6)

Equations (5) and (6) are the radar equation for a conglomeration of scatterers. In independent scattering, we assume that the scattering cross sections of particles are additive. For the case that the particles scatter independently, and assuming that the N₀ particles are identical,

σd^(N₀)(k̂s, k̂i) = N₀ σd(k̂s, k̂i)   (7)

where σd(k̂s, k̂i) is the differential cross section of one particle. From Eqs. (2) and (7) and from N₀ = n₀ dV, we have

p(k̂s, k̂i) = n₀ σd(k̂s, k̂i)   (8)

For scattering by a small sphere of relative permittivity εr and radius a, the differential cross section is

σd = k⁴ a⁶ |(εr − 1)/(εr + 2)|²

Let f = (4π/3) a³ n₀ be the fractional volume occupied by the particles; then

N₀ = V f / [(4π/3) a³]

and

N₀ σd = (3/(4π)) (V f k)(ka)³ |(εr − 1)/(εr + 2)|²

Similarly, we define

κs = scattering cross section per unit volume   (9)
κa = absorption cross section per unit volume   (10)
κe = extinction cross section per unit volume   (11)

Then

κs = ∫ dΩs p(k̂s, k̂i)   (12)

where the integration is over 4π scattered directions. Using independent scattering, we have

κs = n₀ ∫_{4π} dΩs σd(k̂s, k̂i) = n₀ σs   (13)

κa = n₀ σa   (14)

For a spherical particle with radius a and relative permittivity εr,

σa = v₀ k ε″r |3/(εr + 2)|²

where v₀ is the particle volume and ε″r is the imaginary part of εr. The extinction is the sum of scattering and absorption:

κe = κs + κa = n₀ (σs + σa)   (15)

The parameters κs, κa, and κe are also known respectively as the scattering coefficient, absorption coefficient, and extinction coefficient. Consider an intensity I, which has dimension of power per unit area, incident on a slab of thickness Δz and cross-sectional area A. Then the power extinguished by the scatterers is

ΔP = −(intensity)(extinction cross section per unit volume)(volume) = −I κe A Δz = A ΔI

Hence ΔI/Δz = −κe I, giving the solution I = I₀ e^{−κe s}, where s is the distance traveled by the wave. Thus κe represents attenuation per unit distance due to absorption and scattering.
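A numerical sketch of this attenuation law for identical small spheres (all parameter values below are illustrative assumptions, not measurements from the article): κe = n₀(σs + σa) as in Eqs. (13) to (15), with σa as given above and σs taken as the standard Rayleigh total scattering cross section of a single sphere, stated here as an assumption.

```python
import math

def rayleigh_extinction(freq_ghz, radius_m, eps_r, n0):
    """kappa_e = n0*(sigma_s + sigma_a) for identical small spheres (Eqs. 13-15)."""
    c0 = 2.99792458e8
    k = 2*math.pi*freq_ghz*1e9/c0                     # free-space wavenumber (1/m)
    y = (eps_r - 1)/(eps_r + 2)
    # Standard Rayleigh total scattering cross section (assumed form)
    sigma_s = (8*math.pi/3)*k**4*radius_m**6*abs(y)**2
    v0 = (4*math.pi/3)*radius_m**3                    # particle volume
    sigma_a = v0*k*eps_r.imag*abs(3/(eps_r + 2))**2   # absorption, as in the text
    return n0*(sigma_s + sigma_a)

# Illustrative numbers: 1 mm spheres, 1000 per cubic meter, assumed permittivity
kappa_e = rayleigh_extinction(10.0, 1e-3, 40 + 10j, 1000.0)
# One-way intensity attenuation over 1 km: I = I0*exp(-kappa_e*s)
I_ratio = math.exp(-kappa_e*1000.0)
```

For electrically small particles the absorption term usually dominates the scattering term, which is why κe is sensitive to the imaginary part of the permittivity.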

If attenuation is inhomogeneous, we have an attenuation factor of

γ = ∫ κe ds

where ds is the differential distance the wave travels. Attenuation can be included in the radar equation so that

Pr = Pt ∫_V dV [λ² Gt(k̂i) Gr(k̂s) / ((4π)² Ri² Rr²)] p(k̂s, k̂i) exp(−γi − γr)   (16)

where

γi = ∫ κe ds   (17)

is the attenuation from the transmitting antenna to dV and

γr = ∫ κe ds   (18)

is the attenuation from dV to the receiving antenna.

Particle Size Distribution. In many cases, the particles obey a size distribution n(a), so that the number of particles per unit volume with size between a and a + da is n(a) da. Thus

n₀ = ∫₀^∞ n(a) da   (19)

Within the approximation of independent scattering,

κs = ∫₀^∞ n(a) σs(a) da   (20)

where σs(a) is the scattering cross section for a particle of radius a. Also

κa = ∫₀^∞ n(a) σa(a) da   (21)

p(k̂s, k̂i) = ∫₀^∞ n(a) σd(k̂s, k̂i; a) da   (22)

For Rayleigh scattering by spheres,

p(k̂s, k̂i) = k⁴ |(εr − 1)/(εr + 2)|² |k̂s × (k̂s × êi)|² ∫₀^∞ n(a) a⁶ da   (23)

κs = (8π/3) k⁴ |(εr − 1)/(εr + 2)|² ∫₀^∞ n(a) a⁶ da   (24)

κa = (4π/3) k ε″p |3/(εp + 2)|² ∫₀^∞ n(a) a³ da   (25)

A typical example is rainfall, where the raindrops are described by a size distribution known as the Marshall–Palmer size distribution of exponential dependence, n(a) = n_D e^{−αa}, with n_D = 8 × 10⁶ m⁻⁴ and α = (8200/P^{0.21}) m⁻¹, where P is the precipitation rate in millimeters per hour.

Bistatic Scattering Coefficients

For active remote sensing of the surface of the earth, the radar equation is

Pr/Pt = [Gt/(4π r_t²)] exp(−γt) σ_A [1/(4π r_r²)] exp(−γr) [Gr λ²/(4π)]   (26)

Thus, in terms of the scattering characteristics of the surface of the earth, the quantity of interest to be calculated is σ_A. For the case of terrain and sea returns, the cross section is often normalized with respect to the area A that is illuminated by the radar. The bistatic scattering coefficient is defined as

γβα(θs, φs; θi, φi) = lim_{r→∞} 4π r² |E^s_β|² / (|E^i_α|² A cos θi)   (27)

where E^s_β denotes the β-polarization component of the scattered electric field and E^i_α is the incident field with α polarization. Given the bistatic scattering coefficient γβα, the scattering cross section is σ_A = A γβα. Notice that A cos θi is the illuminated area projected onto the plane normal to the incident direction. From Fig. 2, the incident and scattered directions k̂i and k̂s can be written as follows:

k̂i = sin θi cos φi x̂ + sin θi sin φi ŷ − cos θi ẑ   (28a)
k̂s = sin θs cos φs x̂ + sin θs sin φs ŷ + cos θs ẑ   (28b)

In the backscattering direction, θs = θi and φs = π + φi, and the monostatic (backscattering) coefficient is defined as

σβα(θi, φi) = cos θi γβα(θs = θi, φs = π + φi; θi, φi)   (29)

Figure 2. Incident and scattered directions in calculating bistatic scattering coefficients. The incident wave in the direction k̂i impinges on the target and is scattered in the direction k̂s.

Stokes Parameters

Consider a time-harmonic elliptically polarized radiation field with time dependence exp(−iωt) propagating in the k̂ direction, with complex electric field given by

E = Ev v̂ + Eh ĥ   (30)

where v̂ and ĥ denote the two orthogonal polarizations, with k̂, v̂, and ĥ following the right-hand rule. Thus

Ev(t) = Re(Ev e^{−iωt})   (31a)
Eh(t) = Re(Eh e^{−iωt})   (31b)


Let

Ev = Ev0 e^{iδv}   (32a)

Eh = Eh0 e^{iδh}   (32b)

There are three independent parameters that describe the elliptical polarization. We shall describe three ways of describing polarization.

(A) The three parameters are Ev0, Eh0, and the phase difference δ = δv − δh.

(B) For a general elliptically polarized wave, the ellipse may not be upright; it is tilted at an angle with respect to v̂ and ĥ (Fig. 3). The second description uses E0, χ, and ψ, where χ is the ellipticity angle and ψ is the orientation angle. Let a and b be the lengths of the semimajor and semiminor axes, respectively, and let LH and RH stand for left-hand and right-hand polarized, respectively. Let χ be defined so that

tan χ = b/a   if LH   (33a)

tan χ = −b/a   if RH   (33b)

Let

√(a² + b²) = E0   (34)

Then for LH,

b = E0 sin χ   (35a)

a = E0 cos χ   (35b)

and for RH,

b = −E0 sin χ   (36a)

a = E0 cos χ   (36b)

By performing a rotation of axes, it follows that

Ev = iE0 cos χ cos ψ − E0 sin χ sin ψ   (37a)

Eh = iE0 cos χ sin ψ + E0 sin χ cos ψ   (37b)

Equations (37a) and (37b) give the relation between polarization descriptions (A) and (B).

Figure 3. Elliptical polarization ellipse with semimajor axis a, semiminor axis b, ellipticity angle χ, and orientation angle ψ. The ellipse is the trace of the tip of the E vector at a given point in space as a function of time for an elliptically polarized wave.

(C) The third polarization description is that of the Stokes parameters. There are four Stokes parameters. They are

Iv = |Ev|²/η   (38a)

Ih = |Eh|²/η   (38b)

U = (2/η) Re(Ev Eh*)   (38c)

V = (2/η) Im(Ev Eh*)   (38d)

Instead of Iv and Ih, one can also define two alternative Stokes parameters. Let

I = Iv + Ih   (39a)

Q = Iv − Ih   (39b)

By substituting Eqs. (32a) and (32b) into Eqs. (38a)–(38d), we have

Iv = Ev0²/η   (40a)

Ih = Eh0²/η   (40b)

U = (2/η) Ev0 Eh0 cos δ   (40c)

V = (2/η) Ev0 Eh0 sin δ   (40d)

From Eqs. (39) and (40),

I² = Q² + U² + V²   (41)

The relation in Eq. (41) means that for elliptically polarized waves, there are only three independent parameters out of the four Stokes parameters. Also, substituting Eqs. (37a) and (37b) into Eqs. (38a)–(38d) gives

I = E0²/η   (42)

Q = I cos 2χ cos 2ψ   (43a)

U = I cos 2χ sin 2ψ   (43b)

V = I sin 2χ   (43c)
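Descriptions (B) and (C) can be checked against each other numerically. The following sketch (with arbitrary illustrative values of E0, χ, ψ, and η) builds Ev and Eh from Eqs. (37a) and (37b) and verifies Eqs. (41) and (43a)–(43c):

```python
import math

E0, chi, psi, eta = 2.0, 0.3, 0.7, 377.0  # illustrative values

# Eq. (37): field phasors from ellipticity angle chi and orientation angle psi
Ev = 1j * E0 * math.cos(chi) * math.cos(psi) - E0 * math.sin(chi) * math.sin(psi)
Eh = 1j * E0 * math.cos(chi) * math.sin(psi) + E0 * math.sin(chi) * math.cos(psi)

# Eq. (38): the four Stokes parameters
Iv = abs(Ev)**2 / eta
Ih = abs(Eh)**2 / eta
U = 2 * (Ev * Eh.conjugate()).real / eta
V = 2 * (Ev * Eh.conjugate()).imag / eta

I = Iv + Ih   # Eq. (39a)
Q = Iv - Ih   # Eq. (39b)

print(I**2 - (Q**2 + U**2 + V**2))            # Eq. (41): should be ~0
print(Q - I * math.cos(2*chi) * math.cos(2*psi),
      U - I * math.cos(2*chi) * math.sin(2*psi),
      V - I * math.sin(2*chi))                # Eqs. (43a)-(43c): should be ~0
```

This also makes the Poincaré-sphere picture concrete: (Q, U, V) computed from the fields lands at latitude 2χ and longitude 2ψ on a sphere of radius I.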

Equations (43a) to (43c) can be conveniently expressed using the Poincaré sphere (Fig. 4), with I as the radius of the sphere and Q, U, and V representing, respectively, the Cartesian axes x, y, and z. From Eqs. (43a) to (43c), it follows that 2χ is the latitude coordinate and 2ψ is the longitude coordinate. From Eqs. (43a) to (43c), Eq. (41) also follows readily. Thus, in elliptical polarization, the polarization is represented by a point on the surface of the Poincaré sphere.

Figure 4. Poincaré sphere. The north and south poles represent left-handed and right-handed circular polarization, respectively. The spherical surface represents elliptically polarized waves, and the points inside the sphere represent partially polarized waves.

Partial Polarization. For fluctuating fields, the complex phasors Ev and Eh fluctuate. For random media scattering, Ev and Eh are measured for many pixels, and their values fluctuate from pixel to pixel. In these cases, the Stokes parameters are defined with averages taken:

Iv = ⟨|Ev|²⟩/η   (44a)

Ih = ⟨|Eh|²⟩/η   (44b)

U = (2/η) Re⟨Ev Eh*⟩   (44c)

V = (2/η) Im⟨Ev Eh*⟩   (44d)

Thus,

Q² + U² + V² ≤ I²   (45)

and the polarization corresponds to a point inside the Poincaré sphere.

Passive Remote Sensing

Planck's Radiation Law. All substances at a finite temperature radiate electromagnetic energy, and this electromagnetic radiation is measured in passive remote sensing. According to quantum theory, radiation corresponds to the transition from one energy level to another. There are different kinds of transitions, including electronic, vibrational, and rotational transitions. For complicated systems of molecules with an enormous number of degrees of freedom, the spectral lines are so closely spaced that the radiation spectrum becomes effectively continuous, emitting photons of all frequencies. To derive the relation between temperature and radiated power, consider an enclosure in thermodynamic equilibrium with the radiation field it contains. The appropriate model for the radiation in the enclosure is an ideal gas of photons. Photons are governed by Bose–Einstein statistics. The procedure for finding the energy density spectrum of the radiation field consists of (a) finding the allowed modes of the enclosure, (b) finding the mean energy in each mode, and (c) finding the energy in a volume V and frequency interval dν.

We consider a dielectric medium of permittivity ε and dimensions a, b, and d. The dimensions a, b, and d are large enough that we may take the fields to be zero at the boundaries. We next count the modes of the medium. The mode condition is

ν² = (1/µε)[(m/2a)² + (n/2b)² + (l/2d)²] = νx² + νy² + νz²   (46)

where l, m, and n = 0, 1, 2, . . .. The number of modes in a frequency interval dν can be determined using Eq. (46). Each set of l, m, and n corresponds to a specific cavity mode. Thus, the volume occupied by one mode in ν space is 1/[8abd(µε)^{3/2}] = 1/[8V(µε)^{3/2}], with V the physical volume of the resonator. If a quarter-hemispherical shell has a thickness dν and radius ν, then the number of modes contained in the shell is

N(ν) dν = (4πν² dν/8) × 8V(µε)^{3/2} × 2 = 8πν² V dν (µε)^{3/2}   (47)

where the factor of 2 accounts for the existence of transverse electric (TE) and transverse magnetic (TM) modes. If there are n photons in a mode with frequency ν, then the energy is E = nhν. Using the Boltzmann probability distribution, the probability of a state with energy E is

P(E) = B e^{−E/KT}   (48)

where B is a normalization constant, K is Boltzmann's constant (1.38 × 10⁻²³ J/K), and T is the temperature in kelvin. Thus the average energy ⟨E⟩ in a mode with frequency ν is

⟨E⟩ = Σ_{n=0}^∞ E P(E) / Σ_{n=0}^∞ P(E) = Σ_{n=0}^∞ nhν e^{−nhν/KT} / Σ_{n=0}^∞ e^{−nhν/KT} = hν/(e^{hν/KT} − 1)   (49)

The amount of radiation energy per unit frequency interval and per unit volume is w(ν) = N(ν)⟨E⟩/V. Hence,

w(ν) = 8πhν³(µε)^{3/2}/(e^{hν/KT} − 1)   (50)

To compute the radiation intensity, consider a slab of area A and infinitesimal thickness dℓ. Such a volume contains radiation energy

W = 8πA dℓ (µε)^{3/2} hν³/(e^{hν/KT} − 1)   (51)

per unit frequency interval. The radiation power emerging in direction θ within solid angle dΩ is 2I cos θ A dΩ, where I is the specific intensity per polarization, and the radiation pulse lasts for a time interval dℓ√(µε)/cos θ. Thus

W = ∫ dΩ 2AI cos θ [dℓ√(µε)/cos θ] = 8πA I dℓ √(µε)   (52)

Equating Eqs. (51) and (52),

I = µε hν³/(e^{hν/KT} − 1)   (53)
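The geometric-series evaluation in Eq. (49) can be verified by truncating the sums directly; a short sketch with illustrative values:

```python
import math

h = 6.626e-34   # Planck's constant, J s
K = 1.38e-23    # Boltzmann's constant, J/K
T = 300.0       # temperature, K
nu = 3e11       # 300 GHz, where h*nu/KT is small but not negligible

x = h * nu / (K * T)
# Truncated sums over photon number n (terms decay like e^{-n x})
nmax = int(200 / x) + 1
num = sum(n * h * nu * math.exp(-n * x) for n in range(nmax))
den = sum(math.exp(-n * x) for n in range(nmax))
E_avg_sum = num / den

E_avg_closed = h * nu / (math.exp(x) - 1)   # right-hand side of Eq. (49)
print(E_avg_sum, E_avg_closed)
```

The truncation error is of order e^{-200} and therefore invisible at double precision.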

In the Rayleigh–Jeans approximation, hν/KT ≪ 1. This gives, for a medium with permeability µ and permittivity ε,

I = (KT/λ²)(µε/µ₀ε₀)   (54)

where λ = c/ν is the free-space wavelength. In free space,

I = KT/λ²   (55)

for each polarization. The specific intensity given by Eq. (55) has dimensions of power per unit area per unit frequency interval per unit solid angle (W·m⁻²·Hz⁻¹·sr⁻¹). The Rayleigh–Jeans law can be used at microwave frequencies.

Brightness Temperatures. Consider thermal emission from a half-space medium of permittivity ε₁. In passive remote sensing, the radiometer acts as a receiver of the specific intensity Iβ emitted by the medium under observation. The specific intensity is Iβ(θ₀, φ₀), where β denotes the polarization and (θ₀, φ₀) denotes the angular dependence. From Eq. (54), the specific intensity inside the medium, which is at temperature T, is
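The quality of the Rayleigh–Jeans approximation can be gauged by the ratio of the Planck intensity of Eq. (53) to Eq. (55); in free space the ratio reduces to x/(eˣ − 1) with x = hν/KT. A sketch:

```python
import math

h = 6.626e-34   # Planck's constant, J s
K = 1.38e-23    # Boltzmann's constant, J/K
T = 300.0       # temperature, K

def ratio(nu):
    """Planck intensity divided by the Rayleigh-Jeans intensity (free space).

    I_Planck = mu0*eps0*h*nu^3/(e^{h nu/KT} - 1) and I_RJ = KT/lambda^2
    = KT*nu^2*mu0*eps0, so the ratio is x/(e^x - 1) with x = h*nu/KT.
    """
    x = h * nu / (K * T)
    return x / (math.exp(x) - 1)

r_10GHz = ratio(10e9)   # deep microwave: Rayleigh-Jeans is excellent
r_1THz = ratio(1e12)    # submillimeter: the approximation starts to degrade
print(r_10GHz, r_1THz)
```

At 10 GHz and 300 K the deviation is below 0.1%, which is why the brightness-temperature formalism below can use Eq. (55) throughout the microwave band.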

I = (KT/λ²)(ε₁/ε₀)   (56)

The specific intensity has to be transmitted through the boundary. Based on energy conservation, the received emission is

Iβ(θ₀, φ₀) = (KT/λ²)[1 − r10β(θ₁)]   (57)

In Eq. (57), r10β(θ₁) denotes the reflectivity for a wave incident from medium 1 onto medium 0, and θ₁ and θ₀ are related by Snell's law. The measured specific intensity is often normalized to obtain the brightness temperature

TBβ(θ₀, φ₀) = Iβ(θ₀, φ₀) λ²/K   (58)

From Eqs. (57) and (58), we obtain for a half-space medium

TBβ(θ₀, φ₀) = T[1 − r10β(θ₁)]   (59)

It is convenient to define the emissivity eβ(θ₀, φ₀) as

eβ(θ₀, φ₀) = TBβ(θ₀, φ₀)/T   (60)

so that

eβ(θ₀, φ₀) = 1 − r10β(θ₁)   (61)
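A sketch of Eqs. (59)–(63) for a half space, using the standard Fresnel power reflectivities of a nonmagnetic medium (the permittivity below is an assumed illustration, not a value from the article):

```python
import cmath, math

def emissivities(eps_r, theta0_deg):
    """Polarized emissivities e_beta = 1 - r_01beta(theta0) of a half space,
    using the standard Fresnel power reflectivities for a nonmagnetic medium.
    eps_r is the (possibly complex) relative permittivity of the lower medium."""
    th = math.radians(theta0_deg)
    cos0 = math.cos(th)
    root = cmath.sqrt(eps_r - math.sin(th)**2)
    r_h = abs((cos0 - root) / (cos0 + root))**2            # horizontal pol.
    r_v = abs((eps_r * cos0 - root) / (eps_r * cos0 + root))**2  # vertical pol.
    return 1 - r_v, 1 - r_h

# Assumed illustrative permittivity of a moist-soil half space at 40 degrees
e_v, e_h = emissivities(15 - 3j, 40.0)
T = 290.0
TB_v, TB_h = T * e_v, T * e_h   # Eqs. (59)-(60)
print(e_v, e_h, TB_v, TB_h)
```

Below the Brewster angle the vertical reflectivity is smaller than the horizontal one, so TB_v exceeds TB_h, the polarization contrast exploited by spaceborne radiometers.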


That the emissivity is equal to one minus the reflectivity is a result of energy conservation and reciprocity. The reflectivity r10β obeys a symmetry relation that follows from reciprocity and energy conservation:

r10β(θ₁) = r01β(θ₀)   (62)

where r01β(θ₀) denotes the reflectivity for a wave incident from region 0 onto region 1. Thus, from Eqs. (61) and (62), the emissivity is

eβ(θ₀, φ₀) = 1 − r01β(θ₀)   (63)

Kirchhoff's Law. Kirchhoff's law generalizes the concept of emissivity equal to one minus reflectivity to the case where there is bistatic scattering from rough surfaces and volume inhomogeneities:

eβ(θi, φi) = 1 − (1/4π) Σ_α ∫₀^{2π} dφ ∫₀^{π/2} dθ sin θ γαβ(θ, φ; θi, φi)   (64)

The equation above calculates the emissivity from the bistatic scattering coefficient γ. It also relates active and passive remote-sensing measurements.

Emissivity of Four Stokes Parameters. In the following, we express emissivities in terms of bistatic scattering coefficients for all four Stokes parameters. The derivation is based on the fluctuation–dissipation theorem (8).

TBv(ŝo) = T[1 − (1/4π) Σ_{β=v,h} ∫₀^{2π} dφs ∫₀^{π/2} dθs sin θs γβv(ŝ, ŝob)]   (65a)

TBh(ŝo) = T[1 − (1/4π) Σ_{β=v,h} ∫₀^{2π} dφs ∫₀^{π/2} dθs sin θs γβh(ŝ, ŝob)]   (65b)

UB(ŝo) = TBv(ŝo) + TBh(ŝo) − 2T[1 − (1/4π) Σ_{β=v,h} ∫₀^{2π} dφs ∫₀^{π/2} dθs sin θs γβp(ŝ, ŝob)]
       = (T/4π) Σ_{β=v,h} ∫₀^{2π} dφs ∫₀^{π/2} dθs sin θs [2γβp(ŝ, ŝob) − γβv(ŝ, ŝob) − γβh(ŝ, ŝob)]   (65c)

VB(ŝo) = TBv(ŝo) + TBh(ŝo) − 2T[1 − (1/4π) Σ_{β=v,h} ∫₀^{2π} dφs ∫₀^{π/2} dθs sin θs γβR(ŝ, ŝob)]
       = (T/4π) Σ_{β=v,h} ∫₀^{2π} dφs ∫₀^{π/2} dθs sin θs [2γβR(ŝ, ŝob) − γβv(ŝ, ŝob) − γβh(ŝ, ŝob)]   (65d)

In Eqs. (65), γβp is the bistatic scattering coefficient of a linearly polarized incident wave with polarization at an angle of 45° with respect to the vertical and horizontal polarizations, and γβR is the bistatic scattering coefficient of an incident wave that is right-hand circularly polarized. The measurement of the third and fourth Stokes parameters in microwave thermal emission from the ocean can be used to determine the ocean wind (29,30).

VOLUME SCATTERING AND SURFACE SCATTERING APPROACHES

The subject of radiative transfer (7) is the analysis of radiation intensity in a medium that is able to absorb, emit, and scatter radiation. Radiative transfer theory was initiated by Schuster in 1905 in an attempt to explain the appearance of absorption and emission lines in stellar spectra. Our interest in radiative transfer theory lies in its application to the problem of remote sensing of scattering media. In the active and passive remote sensing of low-absorption media such as snow and ice, scattering by medium inhomogeneities plays a dominant role. Two distinct theories are used to incorporate scattering effects: wave theory and radiative transfer theory. In wave theory, one starts with Maxwell's equations, introduces the scattering and absorption characteristics of the medium, and seeks solutions for the quantities of interest, such as brightness temperatures or backscattering cross sections. We take such an approach later. Radiative transfer theory, on the other hand, starts with the radiative transfer equations that govern the propagation of energy through the scattering medium. Its advantages are that it is simple and, more importantly, that it includes multiple-scattering effects. Although transfer theory was developed on the basis of radiation intensities, it contains information about the correlation of fields (9); the mutual coherence function is related to the Fourier transform of the specific intensity. In this section, we focus on the vector radiative transfer equations, which include the polarization characteristics of electromagnetic propagation. The extinction matrix, phase matrix, and emission vector for these two types of scattering media are obtained. The goal is to study the scattering and propagation characteristics of the Stokes parameters in radiative transfer theory and to use the theory to calculate bistatic scattering cross sections and brightness temperatures.

Radiative Transfer Theory

Scalar Radiative Transfer Theory

Radiative Transfer Equation. Consider a medium consisting of a large number of particles (Fig. 5). Because of scattering, there is a specific intensity I(r, ŝ) at every point r and in every direction ŝ. We consider a small volume element dV = dA dl centered at r, with dl along the direction ŝ, and examine the differential change in the specific intensity I(ŝ) as it passes through dV. The differential change of power in direction ŝ is

dP = −I(r, ŝ) dA dΩ + I(r + dl ŝ, ŝ) dA dΩ   (66)

The volume dV contains many particles that are randomly positioned. The volume dV is much bigger than λ³, so that random phase prevails and the input–output relation of dV can be expressed in terms of intensities instead of fields. Three kinds of changes occur to I(r, ŝ) in the small volume element:

1. Extinction, which contributes a negative change
2. Emission by the particles inside the volume dV, which contributes a positive change
3. Bistatic scattering from direction ŝ′ into direction ŝ, which contributes a positive change

Figure 5. Specific intensity I(ŝ) in the direction ŝ into and out of an elemental volume. Many particles are inside the elemental volume. Each particle absorbs and scatters power, which decreases the specific intensity in direction ŝ. At the same time, the specific intensity is enhanced by the emission of the particles as well as by the energy scattered into direction ŝ from the other directions ŝ′.


For extinction, with κe the extinction cross section per unit volume of space, the differential change of power from extinction is

dP^(i) = −κe dV I(r, ŝ) dΩ   (67)

Let ε(r, ŝ) be the emission power per unit volume of space per unit solid angle per unit frequency; then

dP^(ii) = ε(r, ŝ) dV dΩ   (68)

To derive the change due to bistatic scattering, we note that p(ŝ, ŝ′) is the bistatic scattering cross section per unit volume of space. Then, since I(r, ŝ′) is the specific intensity in direction ŝ′ and exists in all directions ŝ′,

dP^(iii) = ∫_{4π} dΩ′ p(ŝ, ŝ′) I(r, ŝ′) dV dΩ   (69)

where the integration is over 4π directions. Equating the sum of Eqs. (67) to (69) to Eq. (66) gives

(dI/ds) dl dA dΩ = −κe dV I(r, ŝ) dΩ + ε(r, ŝ) dV dΩ + ∫_{4π} dΩ′ p(ŝ, ŝ′) I(r, ŝ′) dV dΩ   (70)

where dI/ds is the rate of change of I(r, ŝ) per unit distance in direction ŝ. Thus the radiative transfer equation with a thermal emission term at microwave frequencies is

(d/ds) I(r, ŝ) = −κe I(r, ŝ) + κa (KT/λ²) + ∫_{4π} dΩ′ p(ŝ, ŝ′) I(r, ŝ′)   (71)

If one uses independent scattering, then, as indicated previously, κe = n₀σt, κa = n₀σa, and p(ŝ, ŝ′) = n₀|f(ŝ, ŝ′)|².

Passive Microwave Remote Sensing of a Layer of Nonscattering Medium. Passive microwave remote sensing of the earth measures the thermal emission from the atmosphere and the earth with a receiving antenna known as a radiometer (Fig. 6). Let the atmosphere and the earth be at temperatures T and T₂, respectively. Also let κa be the absorption coefficient of the atmosphere. We make the following assumptions and observations:

1. The particles are absorptive, and absorption dominates over scattering. We thus set p(ŝ, ŝ′) = 0, so that κe = κa.

2. I(r, ŝ) = I(x, y, z, θ, φ). However, by symmetry, I is independent of x, y, and φ. Thus we let I(z, θ) be the unknown.

3. Note that I(z, θ) has 0 ≤ θ ≤ 180°. We divide it into upward- and downward-going specific intensities. For 0 < θ < π/2,

Iu(z, θ) = I(z, θ)   (72a)

Id(z, θ) = I(z, π − θ)   (72b)

Since

ŝ = sin θ cos φ x̂ + sin θ sin φ ŷ + cos θ ẑ

we have

ŝ·∇I(r, ŝ) = cos θ (dIu/dz) for Iu, and ŝ·∇I(r, ŝ) = −cos θ (dId/dz) for Id   (73)

Thus the radiative transfer equations become

cos θ (dIu/dz) = −κa Iu + κa KT/λ²   (74a)

−cos θ (dId/dz) = −κa Id + κa KT/λ²   (74b)

The boundary conditions for the radiative transfer equations are as follows. At z = 0, the top of the atmosphere,

Id(z = 0) = 0   (75)

At z = −d, the boundary separating the atmosphere and the earth surface,

Iu(z = −d) = r Id(z = −d) + (KT₂/λ²)(1 − r)   (76)

where KT₂/λ² is the blackbody specific intensity of the earth. The Fresnel reflectivity r depends on θ as well as on polarization, as noted previously. The solutions of Eqs. (74a) and (74b) can be expressed as the sum of particular and homogeneous solutions:

Iu = KT/λ² + A e^{−κa z sec θ}   (77a)

Id = KT/λ² + B e^{κa z sec θ}   (77b)

where A and B are constants to be determined. Imposing the boundary condition of Eq. (75) on Eq. (77b) gives

B = −KT/λ²

and

A = −(KT/λ²) e^{−κa d sec θ} + r (KT/λ²) e^{−κa d sec θ} (1 − e^{−κa d sec θ}) + (KT₂/λ²)(1 − r) e^{−κa d sec θ}   (78)

Figure 6. Thermal emission from the atmosphere and the earth. The emission is received by the receiving antenna of the radiometer.

Putting Eq. (78) in Eq. (77a) gives Iu. The specific intensity measured by the radiometer is Iu(z = 0):

Iu(z = 0) = (KT/λ²)(1 − e^{−κa d sec θ}) + (KT/λ²) r e^{−κa d sec θ} (1 − e^{−κa d sec θ}) + (KT₂/λ²)(1 − r) e^{−κa d sec θ}   (79)

The first term is the emission of a layer of thickness d with absorption coefficient κa. The second term is the downward emission of the layer that is reflected by the earth; it is further attenuated as it travels upward to the radiometer. The last term is the upward emission of the earth, attenuated by the atmosphere. It is convenient to normalize the measured I to a quantity with units of temperature. The brightness temperature TB is defined by

TB = (measured I)/(K/λ²) = T(1 − e^{−κa d sec θ}) + T r e^{−κa d sec θ}(1 − e^{−κa d sec θ}) + T₂(1 − r) e^{−κa d sec θ}   (80)

+ T2 (1 − r)e−κ a dsecθ Vector Radiative Transfer Equation Phase Matrix of Independent Scattering. For vector electromagnetic wave scattering, the vector radiative transfer equation has to be developed for the Stokes parameters. We first treat scattering of waves by a single particle (e.g., raindrop, ice grain, leaf, etc.). The scattering property of the particle depends on its size, shape, orientation, and dielectric properties. Consider a plane wave E = (vˆ i Evi + hˆ i Ehi )eik i ·r = eˆ i E0 eik i ·r

(81)

impinging upon the particle. In spherical coordinates kˆ i = sin θi cos ϕixˆ + sin θi sin ϕi yˆ + cos θizˆ

(82)

vˆ i = cos θi cos ϕi xˆ + cos θi sin ϕi yˆ − sin θizˆ

(83)

hˆ i = − sin ϕi xˆ + cos ϕi yˆ

(84)

In the direction kˆs, the far-field scattered wave Es will be a spherical wave and is denoted by E s = (vˆ s Evs + hˆ s Ehs )eik s ·r

(85)

kˆ s = sin θs cos ϕsxˆ + sin θs sin ϕs yˆ + cos θs zˆ

(86)

vˆ s = cos θs cos ϕs xˆ + cos θs sin ϕs yˆ − sin θs zˆ

(87)

hˆ s = − sin ϕs xˆ + cos ϕs yˆ

(88)

with

The scattered field will assume the form Es =

eikr F(θs , ϕs ; θi , ϕi ) · eˆ i E0 r

(89)

where F(s, s; i, i) is the scattering function matrix. Hence Evs Evi eikr f vv (θs , ϕs ; θi , ϕi ) f vh (θs , ϕs ; θi , ϕi ) = · r f hv (θs , ϕs ; θi , ϕi ) fhh (θs , ϕs ; θi , ϕi ) Ehs Ehi (90) with f ab (θs , ϕs ; θi , ϕi ) = aˆ s · F (θs , ϕs ; θi , ϕi ) · bˆ i

(91)

and a,b ⫽ v,h. To relate the scattered Stokes parameters to the incident Stokes parameters, we define Is =

1 L(θs , ϕs ; θi , ϕi ) · I i r2

(92)

where Is and Ii are column matrices containing the scattering and incident Stokes parameters, respectively. Iv s I Is = hs (93) Us Vs Iv i I Ii = hi (94) Ui Vi and L(s, s; i, i) is the Stokes matrix. | f vh |2 | f vv |2 | f |2 | f hh |2 hv L(θs , ϕs ; θi , ϕi ) = ∗ ∗ 2Re( f vv f hv ) 2Re( f vh f hh ) ∗ ∗ 2Im( f vv f hv ) 3Im( f vh f hh ) ∗ Re( f vh f vv ) ∗ Re( f hh f hv ) ∗ ∗ + f vh f hv ) Re( f vv f hh ∗ ∗ Im( f vv f hh + f vh f hv )

∗ −Im( f vh f vv ) ∗ −Im( f hv f hh ) ∗ ∗ −Im( f vv f hh − f vh f hv ) ∗ ∗ Re( f vv f hh − f vh f hv ) (95)
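The bookkeeping of Eqs. (90), (38), and (95) can be checked numerically: the Stokes parameters of the scattered field computed directly from the fields must match L·Ii/r². A sketch with arbitrary (hypothetical) scattering amplitudes and incident fields:

```python
eta = 377.0
# Illustrative (hypothetical) scattering amplitudes and incident phasors
fvv, fvh = 0.8 + 0.3j, 0.1 - 0.2j
fhv, fhh = -0.05 + 0.15j, 0.7 - 0.4j
Evi, Ehi = 1.0 + 0.6j, 0.3 - 0.8j
r = 100.0   # far-field distance

def stokes(Ev, Eh):
    """Eq. (38): Stokes parameters of a pair of field phasors."""
    return [abs(Ev)**2 / eta, abs(Eh)**2 / eta,
            2 * (Ev * Eh.conjugate()).real / eta,
            2 * (Ev * Eh.conjugate()).imag / eta]

# Scattered fields from Eq. (90); the common e^{ikr} phase drops out of the
# Stokes parameters, leaving only the 1/r^2 factor
Evs = (fvv * Evi + fvh * Ehi) / r
Ehs = (fhv * Evi + fhh * Ehi) / r
Is_direct = stokes(Evs, Ehs)

def R(z): return z.real
def Im(z): return z.imag

# Stokes matrix of Eq. (95)
L = [
 [abs(fvv)**2, abs(fvh)**2, R(fvv*fvh.conjugate()), -Im(fvv*fvh.conjugate())],
 [abs(fhv)**2, abs(fhh)**2, R(fhv*fhh.conjugate()), -Im(fhv*fhh.conjugate())],
 [2*R(fvv*fhv.conjugate()), 2*R(fvh*fhh.conjugate()),
  R(fvv*fhh.conjugate() + fvh*fhv.conjugate()),
  -Im(fvv*fhh.conjugate() - fvh*fhv.conjugate())],
 [2*Im(fvv*fhv.conjugate()), 2*Im(fvh*fhh.conjugate()),
  Im(fvv*fhh.conjugate() + fvh*fhv.conjugate()),
  R(fvv*fhh.conjugate() - fvh*fhv.conjugate())],
]
Ii = stokes(Evi, Ehi)
Is_matrix = [sum(L[m][n] * Ii[n] for n in range(4)) / r**2 for m in range(4)]
print(max(abs(a - b) for a, b in zip(Is_direct, Is_matrix)))  # ~0
```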

Because of the incoherent addition of Stokes parameters, the phase matrix is equal to the average of the Stokes matrix over the distribution of particles in terms of size, shape, and orientation.

Extinction Matrix. For nonspherical particles, the extinction matrix is generally nondiagonal. The extinction coefficients can be identified with the attenuation of the coherent wave, which can be calculated by using Foldy's approximation (10). Let Ev and Eh be, respectively, the vertically and horizontally polarized components of the coherent wave. Then the following coupled equations hold for the coherent field along the propagation direction (θ, φ). Let the direction of propagation be denoted by ŝ, with ŝ(θ, φ) = sin θ cos φ x̂ + sin θ sin φ ŷ + cos θ ẑ:

dEv/ds = (ik + Mvv) Ev + Mvh Eh   (96)

dEh/ds = Mhv Ev + (ik + Mhh) Eh   (97)


where s is the distance along the direction of propagation. Solving Eqs. (96) and (97) yields two characteristic waves with well-defined polarizations and attenuation rates. Thus, for propagation along any particular direction (θ, φ), there are only two attenuation rates. In Eqs. (96) and (97),

Mjl = (i2πn₀/k) ⟨fjl(θ, φ; θ, φ)⟩,   j, l = v, h   (98)

where the angular brackets denote the average over the orientation and size distribution of the particles. Using the definition of the Stokes parameters Iv, Ih, U, and V as well as Eqs. (96) and (97), differential equations can be derived for dIv/ds, dIh/ds, dU/ds, and dV/ds:

dIv/ds = 2Re(Mvv) Iv + Re(Mvh) U + Im(Mvh) V   (99)

dIh/ds = 2Re(Mhh) Ih + Re(Mhv) U − Im(Mhv) V   (100)

dU/ds = 2Re(Mhv) Iv + 2Re(Mvh) Ih + [Re(Mvv) + Re(Mhh)] U − [Im(Mvv) − Im(Mhh)] V   (101)

dV/ds = −2Im(Mhv) Iv + 2Im(Mvh) Ih + [Im(Mvv) − Im(Mhh)] U + [Re(Mvv) + Re(Mhh)] V   (102)

Identifying the extinction coefficients in radiative transfer theory as the attenuation rates in coherent wave propagation, we have the following general extinction matrix for nonspherical particles:

κe = [ −2Re Mvv    0           −Re Mvh                −Im Mvh
       0           −2Re Mhh    −Re Mhv                 Im Mhv
       −2Re Mhv    −2Re Mvh    −(Re Mvv + Re Mhh)      Im Mvv − Im Mhh
       2Im Mhv     −2Im Mvh    −(Im Mvv − Im Mhh)     −(Re Mvv + Re Mhh) ]   (103)

Figure 7. Waves reflected by a flat surface. The reflected waves have the same phase in the specular direction.
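Equations (99)–(102) follow from Eqs. (96) and (97) by differentiating the Stokes parameters; this can be checked numerically with arbitrary (hypothetical) Foldy coefficients, as in the following sketch:

```python
import math

# Illustrative (hypothetical) Foldy coefficients M_jl and field values
Mvv, Mvh = -0.02 + 0.05j, 0.003 - 0.001j
Mhv, Mhh = 0.002 + 0.004j, -0.03 + 0.06j
k, eta = 2 * math.pi, 377.0
Ev, Eh = 1.0 + 0.5j, 0.4 - 0.2j

# Eqs. (96)-(97): coherent-field derivatives
dEv = (1j * k + Mvv) * Ev + Mvh * Eh
dEh = Mhv * Ev + (1j * k + Mhh) * Eh

# Exact derivatives of the Stokes parameters from the field derivatives
dIv = 2 * (Ev.conjugate() * dEv).real / eta
dIh = 2 * (Eh.conjugate() * dEh).real / eta
dU = 2 * (dEv * Eh.conjugate() + Ev * dEh.conjugate()).real / eta
dV = 2 * (dEv * Eh.conjugate() + Ev * dEh.conjugate()).imag / eta

# Right-hand sides of Eqs. (99)-(102)
Iv, Ih = abs(Ev)**2 / eta, abs(Eh)**2 / eta
U = 2 * (Ev * Eh.conjugate()).real / eta
V = 2 * (Ev * Eh.conjugate()).imag / eta
rhs_Iv = 2*Mvv.real*Iv + Mvh.real*U + Mvh.imag*V
rhs_Ih = 2*Mhh.real*Ih + Mhv.real*U - Mhv.imag*V
rhs_U = 2*Mhv.real*Iv + 2*Mvh.real*Ih + (Mvv.real + Mhh.real)*U \
        - (Mvv.imag - Mhh.imag)*V
rhs_V = -2*Mhv.imag*Iv + 2*Mvh.imag*Ih + (Mvv.imag - Mhh.imag)*U \
        + (Mvv.real + Mhh.real)*V
print(dIv - rhs_Iv, dIh - rhs_Ih, dU - rhs_U, dV - rhs_V)  # all ~0
```

Note that the ik propagation term cancels identically in every Stokes derivative, which is why only the Mjl appear in the extinction matrix of Eq. (103).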


(103)

Emission Vector. In this section, we list the emission vector for passive remote sensing of nonspherical particles. The fluctuation–dissipation theorem is used to calculate the emission of a single nonspherical particle. Generally, all four Stokes parameters in the vector source term are nonzero and are proportional to the absorption coefficient in the backward direction (11). The emission term can be inserted into the vector radiative transfer equations, which assume the following form

where e(sˆ) is the extinction matrix and P(sˆ, sˆ⬘) is the phase matrix. κa1 (ˆs ) κ (ˆs ) r(s) ˆ = a2 (105) −κa3 (ˆs ) −κa4 (ˆs ) κa1 (ˆs ) = κe11 (ˆs ) − d [P11 (ˆs , sˆ ) + P21 (ˆs , sˆ )] (106a) (106b) κa2 (ˆs ) = κe22 (ˆs ) − d [P12 (ˆs , sˆ ) + P22 (ˆs , sˆ )] κa3 (ˆs ) = 2κe13 (ˆs ) + 2κe23 (ˆs ) − 2 d [P13 (ˆs , sˆ ) + P23 (ˆs , sˆ )] (106c) κa4 (ˆs ) = −2κe14 (ˆs ) − 2κe24 (ˆs ) + 2 d [P14 (ˆs , sˆ ) + P24 (ˆs , sˆ )] (106d) where eij and Pij are, respectively, the ij elements of e and P with i, j ⫽ 1, 2, 3, or 4. The vector radiative transfer theory has been applied extensively to microwave remote sensing problems (2,5,12). Random Rough Surface Scattering Consider a plane wave incident on a flat surface (Fig. 7). We note that the wave is specularly reflected because the specular reflected waves are in phase with each other. The reflected wave only exists in the specular direction. Imagine now that the surface is rough. It is clear from Fig. 8 that the two reflected waves have a pathlength difference of 2h cos i. This will give a phase difference of ϕ = 2kh cos θi

(107)

If h is small compared with a wavelength, then the phase difference is insignificant. However, if the phase difference is significant, the specular reflection will be reduced due to interference of the reflected waves that can partially cancel each other. The scattered wave is diffracted into other direc-

θi θi h

dI(r, sˆ ) = −κ e (ˆs ) · I(r, sˆ ) + r(ˆs )CT (r) + d P(ˆs , sˆ ) · I(r, sˆ ) ds (104)

Figure 8. Waves scattered by a rough surface. Two reflected waves have a pathlength difference of 2h cos i that lead to a phase difference of ⌬ ⫽ 2kh cos i.

224

MICROWAVE REMOTE SENSING THEORY

tions. A Rayleigh condition is defined such that the phase difference is 90⬚. Thus for h<

λ cos θi 8

(108)

the surface is regarded as smooth and if h>

λ cos θi 8

(109)

The Fourier transform then becomes ∞ 1 FL (kx ) = dxe−ik x x f L (x) 2π −∞

(116)

The two sets [Eqs. (114) and (116)] should agree for large L. Physically, the domain of the rough surface is always limited by the antenna beamwidth. For a stationary random process f (x1 ) f (x2 ) = h2C(x1 − x2 )

(117)

the surface is regarded as rough. For random rough surface, h is regarded as the root mean square (rms) height. Consider an incident wave inc(r) impinging upon a rough surface. The wave function obeys the wave equation:

where h is the rms height and C is the correlation function. Some commonly used correlation functions are Gaussian correlation function

(∇ 2 + k2 )ψ = 0

C(x) = exp(−x2 /l 2 )

(110)

The rough surface is described by a height function z ⫽ f(x, y). Two common boundary conditions are those of the Dirichlet problem and the Neumann problem. For the Dirichlet problem, at z ⫽ f(x, y) ψ =0

∂ψ/∂n = 0

(112)

where ⭸ /⭸n is the normal derivative. In this section, we illustrate the analytic techniques for scattering by such surfaces. For electromagnetic wave scattering by a one-dimensional rough surface z ⫽ f(x), the Dirichlet problem corresponds to that of a TE wave impinging upon a perfect conductor, where is the electric field. The Neumann problem corresponds to that of a TM wave impinging upon a perfect electric conductor and is the magnetic field. In the section on numerical simulations, the cases of dielectric surfaces are simulated. The simplified perfect conductor has been used in studying active remote sensing of the ocean that is reflective to electromagnetic waves. Statistics, Correlation Function, and Spectral Density. For a one-dimensional random rough surface, we let z ⫽ f(x), where f(x) is a random function of x with zero mean f (x) = 0

(113)

We define the Fourier transform F (kx ) =

1 2π

∞ −∞

dxe−ik x x f (x)

(114)

Strictly speaking, if the surface is infinite, the Fourier transform does not exist. To circumvent the difficulty one can use the Fourier Stieltjes integral (1), or one can define truncated function

f L (x) =

f (x) |x| ≤ L/2 0

and exponential correlation function C(x) = exp(−|x|/l)

|x| ≥ L/2

(115)

(119)

In Eqs. (118) and (119) l is known as the correlation length. In the spectral domain

(111)

For the Neumann problem, the boundary condition is at z ⫽ f(x, y)

(118)

F (kx ) = 0

(120)

and

h2C(x1 − x2 ) =

∞ −∞

dk1x

∞ −∞

dk2x eik 1x x 1 −ik 2x x 2 F (k1x )F ∗ (k2x ) (121)

Since the left-hand side depends only on x1 ⫺ x2, we have F (k1x )F ∗ (k2x ) = F (k1x )F (−k2x ) = δ(k1x − k2x )W (k1x ) (122) and h2C(x) =

∞ −∞

dkx eik x xW (kx )

(123)

where W(kx) is known as the spectral density. Since f(x) is real, we have used the relation. F ∗ (kx ) = F (−kx )

(124)

C(x) = C(−x)

(125)

Since

and C(x) is real, it follows that W(kx) is real and is an even function of kx. Instead of using a correlation function to describe the Gaussian random process, one can also use the spectral density. For Gaussian correlation function of Eq. (118)

h2 l k2 l 2 W (kx ) = √ exp − x 4 2 π

(126)

and for exponential correlation of Eq. (119) W (kx ) =

h2 l π (1 + k2x l 2 )

(127)
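The transform pair of Eqs. (118) and (126), and the normalization ∫ W(kx) dkx = h² implied by Eq. (123) at x = 0, can be verified numerically (h and l below are illustrative values):

```python
import math

h_rms, l = 0.01, 0.10   # rms height and correlation length (illustrative)

def W_gauss(kx):
    """Eq. (126): spectral density of the Gaussian correlation, Eq. (118)."""
    return h_rms**2 * l / (2 * math.sqrt(math.pi)) * math.exp(-kx**2 * l**2 / 4)

def W_numeric(kx, X=2.0, N=20000):
    """Direct evaluation of (1/2pi) int dx e^{-i kx x} h^2 C(x); the imaginary
    part vanishes by symmetry, so only the cosine part is integrated."""
    dx = 2 * X / N
    s = 0.0
    for i in range(N + 1):
        x = -X + i * dx
        s += h_rms**2 * math.exp(-x**2 / l**2) * math.cos(kx * x)
    return s * dx / (2 * math.pi)

err = abs(W_numeric(12.0) - W_gauss(12.0)) / W_gauss(12.0)
print(err)   # small

# Normalization check: int W(kx) dkx = h^2 C(0) = h^2
K, M = 200.0, 20000
dk = 2 * K / M
total = sum(W_gauss(-K + j * dk) for j in range(M + 1)) * dk
print(total, h_rms**2)
```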

Small Perturbation Method. The scattering of electromagnetic waves from a slightly rough surface can be studied using a perturbation method (9). It is assumed that the surface variations are much smaller than the incident wavelength and that the slopes of the rough surface are relatively small. The small perturbation method (SPM) makes use of the Rayleigh hypothesis to express the reflected and transmitted fields in terms of upward- and downward-going waves, respectively. The field amplitudes are then determined from the boundary conditions. An investigation of the validity of the Rayleigh hypothesis can be found in Ref. 13. A renormalization method can also be used to make corrections to the small perturbation method (14,15). In this section, we use the small perturbation method to carry out scattering up to second order and show that energy conservation is exactly obeyed. The incoherent wave is calculated up to first order to obtain bistatic scattering coefficients.

Dirichlet Problem for a One-Dimensional Surface. We first illustrate the method for a simple one-dimensional random rough surface with height profile z = f(x) and ⟨f(x)⟩ = 0. The scattering is a two-dimensional problem in x–z without y variation. Consider an incident wave impinging on such a surface with the Dirichlet boundary condition (Fig. 9). Let

Einc = e^{ikix x − ikiz z}   (128)

where kix = k sin θi and kiz = k cos θi. In the perturbation method, one uses the height of the random rough surface as a small parameter. We assume that kh ≪ 1, where h is the rms height. For the scattered wave, we write a perturbation series

Es = Es^(0) + Es^(1) + Es^(2) + · · ·   (129)

The boundary condition at z = f(x) is

Einc + Es = 0   (130)

The zeroth-order scattered wave is the reflection from a flat surface at z = 0. Thus

Es^(0) = −e^{ikix x + ikiz z}   (131)

We let the scattered field be represented by

Es(r) = ∫_{−∞}^{∞} dkx e^{ikx x + ikz z} Es(kx)   (132)

where kz = (k² − kx²)^{1/2}. The Rayleigh hypothesis (13) has been invoked in Eq. (132), as the scattered wave is expressed in terms of a spectrum of upward-going plane waves only. To calculate Es(kx), we match the boundary condition of Eq. (130):

e^{ikix x − ikiz f(x)} + ∫_{−∞}^{∞} dkx e^{ikx x + ikz f(x)} Es(kx) = 0   (133)

Perturbation theory consists of expanding the exponential functions exp[−ikiz f(x)] and exp[ikz f(x)] in power series. In the spectral domain, we also let

Es(kx) = Es^(0)(kx) + Es^(1)(kx) + Es^(2)(kx) + · · ·   (134)

Thus, assuming that |kz f(x)| ≪ 1,

e^{ikix x} [1 − ikiz f(x) − kiz² f²(x)/2 + · · ·] + ∫_{−∞}^{∞} dkx e^{ikx x} [1 + ikz f(x) − kz² f²(x)/2 + · · ·] [Es^(0)(kx) + Es^(1)(kx) + Es^(2)(kx) + · · ·] = 0   (135)

Balancing Eq. (135) to zeroth order gives

e^{ikix x} + ∫_{−∞}^{∞} dkx e^{ikx x} Es^(0)(kx) = 0   (136)

so that

Es^(0)(kx) = −δ(kx − kix)   (137)

If we substitute Eq. (137) into Eq. (132), we get back the zeroth-order solution in the space domain as given by Eq. (131). Balancing Eq. (135) to first order gives

−e^{ikix x} ikiz f(x) + ∫_{−∞}^{∞} dkx e^{ikx x} ikz f(x) Es^(0)(kx) = −∫_{−∞}^{∞} dkx e^{ikx x} Es^(1)(kx)   (138)

From Eq. (138), it follows that the first-order solution can be expressed in terms of the zeroth-order solution. Substituting Eq. (137) in Eq. (138) gives

∫_{−∞}^{∞} dkx e^{ikx x} Es^(1)(kx) = 2ikiz f(x) e^{ikix x} = 2ikiz ∫_{−∞}^{∞} dkx F(kx) e^{ikx x + ikix x}   (139)

The second equality is a result of using the Fourier transform of f(x) from Eq. (114). Thus

Es^(1)(kx) = 2ikiz F(kx − kix)   (140)

Figure 9. Wave scattering from a slightly rough surface. The incident and scattered directions and the rough surface profile are shown.

The result of Eq. (140) can be interpreted as follows. For the wave to be scattered from the incident direction kix into the scattered direction kx, the surface has to provide the spectral component



of kx ⫺ kix. This is characteristic of Bragg scattering. Balancing Eq. (135) to second order gives

$$-e^{ik_{ix}x}\,\frac{k_{iz}^2 f^2(x)}{2} - \int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\,\frac{k_z^2 f^2(x)}{2}\, E_s^{(0)}(k_x) + \int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\, ik_z f(x)\, E_s^{(1)}(k_x) + \int_{-\infty}^{\infty} dk_x\, e^{ik_x x}\, E_s^{(2)}(k_x) = 0 \quad (141)$$

Using Eq. (137), the first two terms in Eq. (141) cancel each other. Thus the second-order solution can be expressed in terms of the zeroth-order and first-order solutions. Substituting Eqs. (140) and (137) in Eq. (141) gives the second-order solution

$$E_s^{(2)}(k_x) = 2k_{iz}\int_{-\infty}^{\infty} dk_x'\, k_z'\, F(k_x - k_x')\, F(k_x' - k_{ix}) \quad (142)$$

where $k_z' = (k^2 - k_x'^2)^{1/2}$. Equation (142) has the following simple physical interpretation. Second-order scattering consists of scattering from the incident direction $k_{ix}$ into an intermediate direction $k_x'$, as provided by the spectral component $k_x' - k_{ix}$ of the rough surface. This is followed by another scattering from $k_x'$ into the direction $k_x$, as provided by the spectral component $k_x - k_x'$ of the rough surface. Since $k_x'$ is an arbitrary direction, an integration over all possible $k_x'$ is needed in Eq. (142).

Coherent Wave Reflection. The coherent wave is obtained by calculating the stochastic average. We note that

$$\langle E_s^{(1)}(k_x)\rangle = 0 \quad (143)$$

$$\langle E_s^{(2)}(k_x)\rangle = 2\,\delta(k_x - k_{ix})\, k_{iz}\int_{-\infty}^{\infty} dk_x'\, k_z'\, W(k_{ix} - k_x') \quad (144)$$

The Dirac delta function $\delta(k_x - k_{ix})$ in Eqs. (137) and (144) indicates that the coherent wave is in the specular direction. Substituting Eqs. (137), (143), and (144) in Eq. (132) gives the coherent field, to second order,

$$\langle E_s(\mathbf{r})\rangle = E_s^{(0)}(\mathbf{r}) + \langle E_s^{(2)}(\mathbf{r})\rangle \quad (145)$$

$$E_s^{(0)}(\mathbf{r}) = -e^{ik_{ix}x + ik_{iz}z} \quad (146)$$

$$\langle E_s^{(2)}(\mathbf{r})\rangle = e^{ik_{ix}x + ik_{iz}z}\, 2k_{iz}\int_{-\infty}^{\infty} dk_x'\, k_z'\, W(k_{ix} - k_x') \quad (147)$$

Because the two terms in Eq. (145) are opposite in sign, the result of Eq. (145) indicates that the coherent reflection is less than that of the flat-surface case.

Bistatic Scattering. To study energy transfer in scattering, we note that the incident wave has power per unit area

$$S_{inc}\cdot\hat{z} = -\frac{\cos\theta_i}{2\eta} \quad (148)$$

flowing into the rough surface, where $\eta$ is the wave impedance. The negative sign in Eq. (148) indicates that the Poynting vector has a negative $\hat{z}$ component. The power per unit area outflowing from the rough surface is

$$S_s\cdot\hat{z} = \mathrm{Re}\left[\frac{i}{2\omega\mu}\, E_s\,\frac{\partial E_s^*}{\partial z}\right] \quad (149)$$

Suppose we include up to $E_s^{(0)}E_s^{(0)*} + \langle E_s^{(1)}E_s^{(1)*}\rangle + E_s^{(0)}\langle E_s^{(2)}\rangle^* + \langle E_s^{(2)}\rangle E_s^{(0)*}$ in Eq. (149). That is, we include the intensity due to the product of first-order fields and the product of the zeroth-order field and the second-order field. Then the power per unit area outflowing from the rough surface that is associated with the coherent field, $\langle S_s\cdot\hat{z}\rangle_C$, is

$$\langle S_s\cdot\hat{z}\rangle_C = \mathrm{Re}\left\{\frac{i}{2\omega\mu}\left[E_s^{(0)}(\mathbf{r})\frac{\partial E_s^{(0)*}(\mathbf{r})}{\partial z} + E_s^{(0)}(\mathbf{r})\frac{\partial \langle E_s^{(2)}(\mathbf{r})\rangle^*}{\partial z} + \langle E_s^{(2)}(\mathbf{r})\rangle\frac{\partial E_s^{(0)*}(\mathbf{r})}{\partial z}\right]\right\} \quad (150)$$

Putting Eqs. (145) to (147) into Eq. (150) gives

$$\langle S_s\cdot\hat{z}\rangle_C = \frac{k_{iz}}{2\omega\mu}\left[1 - 4k_{iz}\int_{-\infty}^{\infty} dk_x'\, (\mathrm{Re}\, k_z')\, W(k_{ix} - k_x')\right] \quad (151)$$

Note that $W$, the spectral density, is real. Since $k_z'$ is imaginary for evanescent waves, the integrand does not contribute for $|k_x'| > k$, and the integration limits of Eq. (151) can be replaced by $-k$ to $k$. This means that evanescent waves do not contribute to the average power flow. Hence

$$\langle S_s\cdot\hat{z}\rangle_C = \frac{\cos\theta_i}{2\eta}\left[1 - 4k_{iz}\int_{-k}^{k} dk_x'\, k_z'\, W(k_{ix} - k_x')\right] \quad (152)$$

For the incoherent wave power flow, we use the first-order scattered fields. In the spectral domain, we have the incoherent power flow $\langle S_s\cdot\hat{z}\rangle_{IC}$

$$\langle S_s\cdot\hat{z}\rangle_{IC} = \mathrm{Re}\left\{\frac{1}{2\omega\mu}\int_{-\infty}^{\infty} dk_x\int_{-\infty}^{\infty} dk_x'\, e^{i(k_x - k_x')x}\, k_z'^*\, e^{i(k_z - k_z'^*)z}\,\langle E_s^{(1)}(k_x)\, E_s^{(1)*}(k_x')\rangle\right\} \quad (153)$$

Since

$$\langle E_s^{(1)}(k_x)\, E_s^{(1)*}(k_x')\rangle = 4k_{iz}^2\, W(k_x - k_{ix})\,\delta(k_x - k_x') \quad (154)$$

it then follows that

$$\langle S_s\cdot\hat{z}\rangle_{IC} = \frac{2\cos\theta_i}{\eta}\, k_{iz}\int_{-k}^{k} dk_x\, k_z\, W(k_x - k_{ix}) \quad (155)$$

Comparing Eq. (155) with Eq. (152) shows that the incoherent power flow exactly cancels the second term of Eq. (152), giving the relation

$$\langle S_s\cdot\hat{z}\rangle = \frac{\cos\theta_i}{2\eta} \quad (156)$$

which exactly obeys energy conservation. Thus if we define the incoherent wave

$$\varepsilon_s = E_s - \langle E_s\rangle \quad (157)$$

then its spectral amplitudes satisfy

$$\langle \varepsilon_s(k_x)\,\varepsilon_s^*(k_x')\rangle = I(k_x)\,\delta(k_x - k_x') \quad (158)$$

We define the power flow per unit area of the incoherent wave as

$$\langle S_s\cdot\hat{z}\rangle = \frac{1}{2\omega\mu}\int_{-k}^{k} dk_x\, k_z\, I(k_x) \quad (159)$$

Casting in terms of angular integration, we let

$$k_x = k\sin\theta_s \quad (160)$$

$$k_z = k\cos\theta_s \quad (161)$$

so that

$$\langle S_s\cdot\hat{z}\rangle = \frac{k}{2\eta}\int_{-\pi/2}^{\pi/2} d\theta_s\,\cos^2\theta_s\, I(k_x = k\sin\theta_s) \quad (162)$$

Thus if we divide Eq. (162) by the incident power per unit area of Eq. (148), we can define the incoherent bistatic scattering coefficient

$$\sigma(\theta_s) = \frac{k\cos^2\theta_s}{\cos\theta_i}\, I(k_x = k\sin\theta_s) \quad (163)$$

Note that the integration of $\sigma(\theta_s)$ over $\theta_s$ will combine with the reflected power of the coherent wave to give an answer that obeys energy conservation. For first-order scattering, $\varepsilon_s^{(1)}(k_x) = E_s^{(1)}(k_x)$, so that from Eqs. (155) and (158)

$$I(k_x) = 4k_{iz}^2\, W(k_x - k_{ix}) \quad (164)$$

and Eq. (163) assumes the form

$$\sigma(\theta_s) = 4k^3\cos^2\theta_s\cos\theta_i\, W(k\sin\theta_s - k\sin\theta_i) \quad (165)$$

The backscattering coefficient is for $\theta_s = -\theta_i$:

$$\sigma(-\theta_i) = 4k^3\cos^3\theta_i\, W(-2k\sin\theta_i) \quad (166)$$

The small perturbation method has been used for the three-dimensional scattering problem (2,4) and also for dielectric surfaces. It has been used extensively for rough surface scattering problems in soils and ocean scattering (2,12).

MONTE-CARLO SIMULATIONS OF WAVE SCATTERING FROM DENSE MEDIA AND ROUGH SURFACES

Scattering of Electromagnetic Waves from Dense Distributions of Nonspherical Particles Based on Monte Carlo Simulations

For wave propagation in a medium consisting of randomly distributed scatterers, the classical assumption is that of independent scattering, which states that the extinction rate is equal to $n_0\sigma_e$, where $n_0$ is the number of particles per unit volume and $\sigma_e$ is the extinction cross section of one particle. This classical assumption is not valid for a dense medium that contains particles occupying an appreciable fractional volume. This has been demonstrated in controlled laboratory experiments (16). Well-known analytical approximations in multiple-scattering theory include Foldy's approximation, the quasicrystalline approximation (QCA), and the quasicrystalline approximation with coherent potential (QCA-CP) (2). The last two approximations depend on the pair distribution function of particle positions, in which the Percus–Yevick (PY) approximation is used to describe the correlation of positions of particles of finite sizes (2,17). Because of the recent advent of computers and computational methods, the study of scattering by dense media has recently relied on exact solutions of Maxwell's equations through Monte Carlo simulations. Such simulations can be done by packing several thousand particles randomly in a box and then solving Maxwell's equations. The simulations are performed over many samples (realizations), and the scattering results are averaged over these realizations. The results give information about the collective scattering effects of many particles closely packed together.

Formulation. Let an incident electric field $E_{inc}(\mathbf{r})$ impinge upon $N$ randomly positioned small spheroids (Fig. 10). Spheroid $j$ is centered at $\mathbf{r}_j$ and has permittivity $\epsilon_j$, $j = 1, 2, \ldots, N$. The discrete scatterers are embedded in a background with permittivity $\epsilon$. Particle $j$ occupies the region $V_j$. Let $\epsilon_p(\mathbf{r})$ be the permittivity as a function of $\mathbf{r}$,

$$\epsilon_p(\mathbf{r}) = \begin{cases}\epsilon_j & \text{for } \mathbf{r} \text{ in } V_j\\ \epsilon & \text{for } \mathbf{r} \text{ in the background}\end{cases} \quad (167)$$

Figure 10. An incident field $E_{inc}(\mathbf{r})$ is incident upon $N$ nonoverlapping small spheroids that are randomly positioned and oriented in a volume $V$.

Thus the induced polarization is

$$P(\mathbf{r}) = \chi(\mathbf{r})\, E(\mathbf{r}) \quad (168)$$

where

$$\chi(\mathbf{r}) = \frac{\epsilon_p(\mathbf{r})}{\epsilon} - 1 \quad (169)$$

is the electric susceptibility. The total electric field can be expressed in terms of the volume integral equation as

$$E(\mathbf{r}) = E_{inc}(\mathbf{r}) + k^2\int d\mathbf{r}'\, g(\mathbf{r},\mathbf{r}')\, P(\mathbf{r}') - \nabla\int d\mathbf{r}'\, \nabla' g(\mathbf{r},\mathbf{r}')\cdot P(\mathbf{r}') \quad (170)$$
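Before moving on, note that the closed-form first-order SPM results, Eqs. (165) and (166), are straightforward to evaluate numerically. The sketch below assumes, purely for illustration, a one-dimensional Gaussian roughness spectrum $W(\kappa) = (h^2 l/2\sqrt{\pi})\exp(-\kappa^2 l^2/4)$; the spectrum choice and function names are not from the article.

```python
import numpy as np

def W_gauss(kappa, h, l):
    # Assumed 1-D Gaussian roughness spectrum with rms height h and
    # correlation length l (an illustrative choice, not from the text).
    return h**2 * l / (2.0 * np.sqrt(np.pi)) * np.exp(-(kappa * l)**2 / 4.0)

def sigma_bistatic(theta_s, theta_i, k, h, l):
    # Eq. (165): first-order SPM incoherent bistatic scattering coefficient.
    return (4.0 * k**3 * np.cos(theta_s)**2 * np.cos(theta_i)
            * W_gauss(k * np.sin(theta_s) - k * np.sin(theta_i), h, l))

def sigma_backscatter(theta_i, k, h, l):
    # Eq. (166): backscattering, the special case theta_s = -theta_i.
    return 4.0 * k**3 * np.cos(theta_i)**3 * W_gauss(-2.0 * k * np.sin(theta_i), h, l)

k = 2.0 * np.pi            # wavelength = 1
h, l = 0.05, 0.35          # rms height and correlation length in wavelengths
ti = np.radians(30.0)
# Eq. (166) is just Eq. (165) evaluated in the backscattering direction.
print(np.isclose(sigma_bistatic(-ti, ti, k, h, l), sigma_backscatter(ti, k, h, l)))
```

These parameter values match the slightly rough surface simulated later in Figs. 13 and 14, where SPM is expected to be accurate.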

where

$$g(\mathbf{r},\mathbf{r}') = \frac{e^{ik|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|} \quad (171)$$

is the scalar Green's function with the wavenumber of the background medium, $k = \omega\sqrt{\mu\epsilon}$. The induced polarization $P(\mathbf{r})$ is nonzero over the particles only. Let $P_j(\mathbf{r})$ be the polarization density inside particle $j$. Then the volume integral equation [Eq. (170)] becomes

$$E(\mathbf{r}) = E_{inc}(\mathbf{r}) + k^2\sum_{j=1}^{N}\int_{V_j} d\mathbf{r}'\, g(\mathbf{r},\mathbf{r}')\, P_j(\mathbf{r}') - \nabla\sum_{j=1}^{N}\int_{V_j} d\mathbf{r}'\, \nabla' g(\mathbf{r},\mathbf{r}')\cdot P_j(\mathbf{r}') \quad (172)$$

To solve Eq. (172) by the method of moments (18), we expand the electric field $E_j(\mathbf{r})$ inside the $j$th spheroid in a set of $N_b$ basis functions

$$E_j(\mathbf{r}) = \sum_{\alpha=1}^{N_b} a_{j\alpha}\, f_{j\alpha}(\mathbf{r}) \quad (173)$$

Here the spheroid is assumed to be small, and we choose the basis functions in Eq. (173) to be the electrostatic solution for a spheroid. Let the $j$th spheroid be centered at $\mathbf{r}_j$ with

$$\mathbf{r}_j = x_j\hat{x} + y_j\hat{y} + z_j\hat{z} \quad (174)$$

and let the symmetry axes of the spheroid be $\hat{x}_{bj}$, $\hat{y}_{bj}$, and $\hat{z}_{bj}$, with respective semiaxis lengths $a_j$, $b_j$, and $c_j$. The orientation of the symmetry axis $\hat{z}_{bj}$ is

$$\hat{z}_{bj} = \sin\beta_j\cos\alpha_j\,\hat{x} + \sin\beta_j\sin\alpha_j\,\hat{y} + \cos\beta_j\,\hat{z} \quad (175)$$

The first three normalized basis functions for the electric field are

$$f_{j1} = \hat{z}_{bj}\,\frac{1}{\sqrt{v_{0j}}} \quad (176)$$

$$f_{j2} = \hat{x}_{bj}\,\frac{1}{\sqrt{v_{0j}}} \quad (177)$$

$$f_{j3} = \hat{y}_{bj}\,\frac{1}{\sqrt{v_{0j}}} \quad (178)$$

The three basis functions are those of the dipole solutions, which are constant vectors. In Eqs. (176) to (178), $v_{0j} = 4\pi a_j^2 c_j/3$. If the particles are closely packed, the near-field interactions have large spatial variations over the size of a spheroid that may induce quadrupole fields inside the spheroid. However, the non-near-field interactions have small spatial variations over the size of a spheroid and only induce dipole fields inside the spheroid. Substituting Eq. (173) into Eq. (172), we get

$$E(\mathbf{r}) = E_{inc}(\mathbf{r}) + \sum_{j=1}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}\, q_{j\alpha}(\mathbf{r}) \quad (179)$$

where $j$ is the particle index, $\alpha$ is the basis function index (19), and

$$q_{j\alpha}(\mathbf{r}) = k^2\int_{V_j} d\mathbf{r}'\, g(\mathbf{r},\mathbf{r}')\, f_{j\alpha}(\mathbf{r}')(\epsilon_{rj}-1) - \nabla\int_{V_j} d\mathbf{r}'\, \nabla' g(\mathbf{r},\mathbf{r}')\cdot f_{j\alpha}(\mathbf{r}')(\epsilon_{rj}-1) \quad (180)$$

is the electric field induced by the polarization $P_j(\mathbf{r})$ of spheroid $j$. Of particular importance is the internal field created by $P_j(\mathbf{r})$ on itself. Because of the smallness of the spheroid, an electrostatic solution can be sought, and we have, for $\mathbf{r}$ in $V_j$,

$$q_{j\alpha}(\mathbf{r}) \cong C_{j\alpha}\, f_{j\alpha}(\mathbf{r}) \quad (181)$$

The coefficients $C_{j\alpha}$, $\alpha = 1, 2, 3$, are constants depending on particle size, shape, and permittivity. For a prolate spheroid they take the form

$$C_{j1} = -\left(\frac{\epsilon_p}{\epsilon}-1\right)(\xi_0^2-1)\left[\frac{\xi_0}{2}\ln\frac{\xi_0+1}{\xi_0-1} - 1\right]$$

$$C_{j2} = C_{j3} = -\left(\frac{\epsilon_p}{\epsilon}-1\right)\frac{\xi_0}{2}\left[\xi_0 - \frac{\xi_0^2-1}{2}\ln\frac{\xi_0+1}{\xi_0-1}\right]$$

where $\xi_0$ is the radial spheroidal coordinate of the particle surface; the bracketed quantities are the electrostatic depolarization factors along and transverse to the symmetry axis. An approximation sign is used in Eq. (181) to indicate the low-frequency approximation. Next, we apply Galerkin's method (18) to Eq. (179):

$$\int_{V_l} d\mathbf{r}\, f_{l\beta}(\mathbf{r})\cdot E(\mathbf{r}) = \int_{V_l} d\mathbf{r}\, f_{l\beta}(\mathbf{r})\cdot E_{inc}(\mathbf{r}) + \sum_{\substack{j=1\\ j\neq l}}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}\int_{V_l} d\mathbf{r}\, f_{l\beta}(\mathbf{r})\cdot q_{j\alpha}(\mathbf{r}) + \sum_{\alpha=1}^{N_b} a_{l\alpha}\int_{V_l} d\mathbf{r}\, f_{l\beta}(\mathbf{r})\cdot q_{l\alpha}(\mathbf{r}) \quad (182)$$

Since the basis functions are orthonormal, the left-hand side equals $a_{l\beta}$, and by Eq. (181) the self term equals $a_{l\beta} C_{l\beta}$. This gives

$$a_{l\beta} = \frac{1}{1 - C_{l\beta}}\left[\int_{V_l} d\mathbf{r}\, f_{l\beta}(\mathbf{r})\cdot E_{inc}(\mathbf{r}) + \sum_{\substack{j=1\\ j\neq l}}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}\int_{V_l} d\mathbf{r}\, f_{l\beta}(\mathbf{r})\cdot q_{j\alpha}(\mathbf{r})\right] \quad (183)$$

Equation (183) contains the full multiple-scattering effects among the $N$ spheroids under the small-spheroid assumption. Because of the small-spheroid assumption, only the dipole term contributes to the first term in Eq. (183), which is the polarization induced by the incident field. Thus

$$\int_{V_l} d\mathbf{r}\, f_{l\beta}(\mathbf{r})\cdot E_{inc}(\mathbf{r}) = v_{0l}\, f_{l\beta}\cdot E_{inc}(\mathbf{r}_l) \quad (184)$$
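The structure of Eq. (183) — each expansion coefficient driven by the incident field plus the fields radiated by all the other particles — can be illustrated with a minimal coupled-dipole sketch. This is a simplification, not the article's implementation: point dipoles with a scalar polarizability `alpha` stand in for the spheroid coefficients and the $C$, $q$ integrals, and a standard free-space dyadic Green's function plays the role of Eq. (188).

```python
import numpy as np

def dyadic_green(r_vec, k):
    """Free-space dyadic Green's function acting on a dipole moment
    (a standard form, cf. Eq. (188)), with the background permittivity
    set to 1 for simplicity."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    rr = np.outer(rhat, rhat)
    I = np.eye(3)
    return np.exp(1j * k * r) / (4.0 * np.pi) * (
        k**2 * (I - rr) / r + (3.0 * rr - I) * (1.0 / r**3 - 1j * k / r**2))

def solve_dipoles(pos, alpha, k, e_inc, n_iter=200):
    """Fixed-point (Jacobi) iteration on the analogue of Eq. (183):
    each dipole is driven by the incident field plus the fields of all
    the other dipoles."""
    N = len(pos)
    p = np.zeros((N, 3), dtype=complex)
    for _ in range(n_iter):
        p = np.array([alpha * (e_inc(pos[i])
                               + sum(dyadic_green(pos[i] - pos[j], k) @ p[j]
                                     for j in range(N) if j != i))
                      for i in range(N)])
    return p

k = 2.0 * np.pi
alpha = 1e-3                       # weak scatterers, so the iteration converges
pos = np.array([[0.0, 0.0, 0.0], [0.6, 0.0, 0.0], [0.0, 0.7, 0.2]])
e_inc = lambda r: np.array([0.0, 1.0, 0.0]) * np.exp(1j * k * r[2])  # cf. Eq. (190)
p = solve_dipoles(pos, alpha, k, e_inc)
```

Solving by iteration, as mentioned below for Eq. (183), works when the interparticle coupling is weak; for strong coupling a direct or Krylov solve of the same linear system would be used instead.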

After the coefficients $a_{l\beta}$, $l = 1, 2, \ldots, N$ and $\beta = 1, 2, 3$, are solved for, the far-field scattered field in the direction $(\theta_s, \varphi_s)$ is expressed as

$$E_s(\mathbf{r}) = k^2\,\frac{e^{ikr}}{4\pi r}\,(\hat{v}_s\hat{v}_s + \hat{h}_s\hat{h}_s)\cdot\sum_{j=1}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}(\epsilon_{rj}-1)\int_{V_j} d\mathbf{r}'\, e^{-i\mathbf{k}_s\cdot\mathbf{r}'}\, f_{j\alpha}(\mathbf{r}') \quad (185)$$

where $\epsilon_{rj} = \epsilon_j/\epsilon$; $\hat{k}_s = \sin\theta_s\cos\varphi_s\,\hat{x} + \sin\theta_s\sin\varphi_s\,\hat{y} + \cos\theta_s\,\hat{z}$ is the scattered direction; and $\hat{v}_s = \cos\theta_s\cos\varphi_s\,\hat{x} + \cos\theta_s\sin\varphi_s\,\hat{y} - \sin\theta_s\,\hat{z}$ and $\hat{h}_s = -\sin\varphi_s\,\hat{x} + \cos\varphi_s\,\hat{y}$ are, respectively, the vertical and horizontal polarizations. Under the small-spheroid assumption, only the dipole fields contribute to the far-field radiation in Eq. (185). Thus we have

$$E_s(\mathbf{r}) \cong k^2\,\frac{e^{ikr}}{4\pi r}\,(\hat{v}_s\hat{v}_s + \hat{h}_s\hat{h}_s)\cdot\sum_{j=1}^{N}\sum_{\alpha=1}^{N_b} a_{j\alpha}(\epsilon_{rj}-1)\, v_{0j}\, f_{j\alpha}\, e^{-i\mathbf{k}_s\cdot\mathbf{r}_j} \quad (186)$$

Numerical Simulation. In this section, we show the results of numerical simulations using $N = 2000$ spheroids and up to $f = 30\%$ volume fraction. The relative permittivity of the spheroids is 3.2, and the size parameter of the spheroids is such that $ka = 0.2$. For dipole interactions, we replace the integral in the last term of Eq. (183) as follows:

$$\int_{V_l} d\mathbf{r}\, f_{l\beta}(\mathbf{r})\cdot q_{j\alpha}(\mathbf{r}) = (\epsilon_{rj}-1)\, v_{0j}\, v_{0l}\, k^2\, f_{l\beta}\cdot\overline{\overline{G}}(\mathbf{r}_l,\mathbf{r}_j)\cdot f_{j\alpha} \quad (187)$$

where

$$\overline{\overline{G}}(\mathbf{r}_l,\mathbf{r}_j) = \left(\overline{\overline{I}} + \frac{\nabla\nabla}{k^2}\right) g(\mathbf{r},\mathbf{r}')\bigg|_{\mathbf{r}=\mathbf{r}_l,\ \mathbf{r}'=\mathbf{r}_j} \quad (188)$$

is the dyadic Green's function. In the simulations, all the spheroids are prolate and identical in size, with $c = ea$, where $e$ is the elongation ratio of the prolate spheroid. The size of the box in which the spheroids are placed is

$$V = Nv/f \quad (189)$$

where $f$ is the fractional volume and $v = 4\pi a^2 c/3$ is the volume of one spheroid. To create the situation of random phase, it is important that the size of the box be larger than a wavelength. An incident electric field

$$E_{inc}(\mathbf{r}) = \hat{y}\, e^{ikz} \quad (190)$$

is launched onto the box containing the $N$ spheroids. The matrix equation of Eq. (183) is solved by iteration. After the matrix equation is solved, the scattered field is calculated by Eq. (186). The scattered field is decomposed into vertical and horizontal polarizations,

$$E_s = E_{vs}\,\hat{v}_s + E_{hs}\,\hat{h}_s \quad (191)$$

In simulations of random discrete scatterer scattering, there is a strong component of coherent scattered field that is dependent on the shape of the box. To calculate the extinction rates and the scattering phase matrices, we need to subtract out the coherent field to get the incoherent field; it is the incoherent field that contributes to the extinction rates and the scattering phase matrices. The simulations are performed for $N_r$ realizations. We performed $N_r = 50$ realizations for this article. Let $\sigma$ be the realization index. Then the coherent scattered field is

$$\langle E_s\rangle = \frac{1}{N_r}\sum_{\sigma=1}^{N_r} E_s^\sigma \quad (192)$$

and the incoherent field is

$$\varepsilon_s^\sigma = E_s^\sigma - \langle E_s\rangle \quad (193)$$

which also can be decomposed into vertical and horizontal polarizations,

$$\varepsilon_s^\sigma = \varepsilon_{vs}^\sigma\,\hat{v}_s + \varepsilon_{hs}^\sigma\,\hat{h}_s \quad (194)$$

The averaged $N$-particle bistatic scattering cross sections are

$$\sigma_{vsN} = \frac{1}{N_r}\sum_{\sigma=1}^{N_r} |\varepsilon_{vs}^\sigma|^2 \quad (195)$$

$$\sigma_{hsN} = \frac{1}{N_r}\sum_{\sigma=1}^{N_r} |\varepsilon_{hs}^\sigma|^2 \quad (196)$$
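The coherent/incoherent splitting of Eqs. (192) to (196) amounts to subtracting the realization average before forming the squared magnitudes. A minimal sketch on synthetic data (the array names and the synthetic far fields are assumptions, not simulation output):

```python
import numpy as np

def incoherent_cross_sections(E_vs, E_hs):
    """Coherent/incoherent split of Eqs. (192)-(196). E_vs and E_hs are
    complex arrays of shape (Nr, Nangles): one row per realization."""
    coh_v = E_vs.mean(axis=0)            # Eq. (192): coherent field
    coh_h = E_hs.mean(axis=0)
    inc_v = E_vs - coh_v                 # Eq. (193): incoherent fluctuation
    inc_h = E_hs - coh_h
    sigma_v = np.mean(np.abs(inc_v)**2, axis=0)   # Eq. (195)
    sigma_h = np.mean(np.abs(inc_h)**2, axis=0)   # Eq. (196)
    return sigma_v, sigma_h

# Synthetic sampled far fields: a fixed coherent part plus random speckle.
rng = np.random.default_rng(0)
Nr, Nth = 50, 8
coherent = np.exp(1j * np.linspace(0.0, np.pi, Nth))
E_vs = coherent + 0.1 * (rng.standard_normal((Nr, Nth))
                         + 1j * rng.standard_normal((Nr, Nth)))
E_hs = 0.05 * (rng.standard_normal((Nr, Nth))
               + 1j * rng.standard_normal((Nr, Nth)))
sigma_v, sigma_h = incoherent_cross_sections(E_vs, E_hs)
```

The large deterministic `coherent` part drops out entirely, which is the point of the subtraction: only the fluctuating speckle survives into the cross sections.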

The $N$-particle bistatic scattering cross sections of Eqs. (195) and (196) contain the collective scattering effects of the particles. For the simulations, the particles are not absorptive; thus the extinction rate is the same as the scattering rate. The extinction rate is

$$\kappa_e = \frac{1}{V}\int_0^{\pi} d\theta_s\,\sin\theta_s\int_0^{2\pi} d\varphi_s\,(\sigma_{vsN} + \sigma_{hsN}) \quad (197)$$

The $1/V$ factor in Eq. (197) is due to the fact that the extinction rate for a random medium is the extinction cross section per unit volume of space, and $V$ in this case is the size of the box. The random positions of the spheroids are generated by a shuffling method facilitated by a contact function (20,21).

In Fig. 11, we illustrate the extinction coefficients, normalized by the wavenumber $k$, as a function of fractional volume. We consider the case of aligned prolate spheroids with $ka = 0.2$ and $e = 1.8$. In such a medium, a vertically polarized incident wave with the incident polarization aligned with the symmetry axis of the prolate spheroids has a higher extinction rate than a horizontally polarized incident wave; the extinction rate is polarization dependent. In the same figure, we also show the extinction rate for the case when the spheroids are randomly oriented. For random orientation, the probability density function of orientation $p(\beta, \alpha)$ is

$$p(\beta, \alpha) = \frac{\sin\beta}{4\pi}$$

for $0 \le \beta \le \pi$ and $0 \le \alpha \le 2\pi$. The $\sin\beta$ is a result of the smaller solid angle at small $\beta$. The normalization of the probability density function is such that

$$\int_0^{2\pi} d\alpha\int_0^{\pi} d\beta\, p(\beta, \alpha) = 1$$

Figure 11. Extinction rate as a function of fractional volume of particles. Relative permittivity of particles $\epsilon_r = 3.2$. For spheroids, $ka = 0.2$ and $e = 1.8$; for spheres, $ka = 0.2$. The dotted curve is for the medium with spheres; +, the medium with randomly oriented spheroids; o and x, the medium with aligned spheroids with the incident wave vertically and horizontally polarized, respectively.
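Orientations distributed as $p(\beta, \alpha) = \sin\beta/4\pi$ can be drawn by inverse-CDF sampling, with $\cos\beta$ uniform on $[-1, 1]$ and $\alpha$ uniform on $[0, 2\pi)$; a sketch (function names are illustrative):

```python
import numpy as np

def sample_orientations(n, rng):
    """Draw (beta, alpha) from p(beta, alpha) = sin(beta)/(4*pi), i.e.
    uniformly distributed symmetry-axis directions on the sphere.
    Inverse-CDF sampling: cos(beta) is uniform on [-1, 1]."""
    beta = np.arccos(1.0 - 2.0 * rng.random(n))
    alpha = 2.0 * np.pi * rng.random(n)
    return beta, alpha

rng = np.random.default_rng(1)
beta, alpha = sample_orientations(100_000, rng)
# Symmetry axes per Eq. (175); for uniform orientations their mean -> 0.
z_b = np.stack([np.sin(beta) * np.cos(alpha),
                np.sin(beta) * np.sin(alpha),
                np.cos(beta)], axis=1)
print(np.abs(z_b.mean(axis=0)).max())
```

Sampling $\beta$ uniformly instead would over-weight orientations near the poles, which is exactly what the $\sin\beta$ factor corrects for.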

The attenuation for the randomly oriented case is between those of vertically and horizontally polarized incidence for the aligned case. The extinction rates are also compared with those of a medium with spherical particles of $ka = 0.2$ and $e = 1$. The spherical case predicts a much lower attenuation than the spheroidal case, even though the medium has the same fractional volume.

Next we illustrate the scattering phase matrices. The phase matrices are bistatic scattering cross sections per unit volume of a conglomeration of particles. We consider the incident wave and polarization as given by Eq. (190). The spheroids are randomly oriented in the following illustrations. We also compare with the results of independent scattering. The independent scattering results are obtained by including only the first term inside the bracket of Eq. (183). That is,

$$a_{l\beta} = \frac{1}{1 - C_{l\beta}}\int_{V_l} d\mathbf{r}\, f_{l\beta}(\mathbf{r})\cdot E_{inc}(\mathbf{r}) \quad (198)$$

CASE 1. $\varphi_s = 0°$ and $\varphi_s = 180°$. For this case, we define the phase matrix elements

$$P_{11}(\theta_s) = \frac{\sigma_{hsN}}{V} \quad (199)$$

$$P_{21}(\theta_s) = \frac{\sigma_{vsN}}{V} \quad (200)$$

In this case, the incident polarization is perpendicular to the scattering plane formed by the incident and scattered directions. The quantities $P_{11}$ and $P_{21}$ correspond to copolarization and cross-polarization, respectively. In Figs. 12(a) and 12(b), we plot $P_{11}$ and $P_{21}$, respectively, as functions of $\theta_s$. We give the results of $\varphi_s = 0°$ and $\varphi_s = 180°$ in the same figure, with the following convention: for $\varphi_s = 0°$, the scattering angle is between 0° and 180° and is equal to $\theta_s$; for $\varphi_s = 180°$, the scattering angle is equal to $360° - \theta_s$, covering the range of scattering angles between 180° and 360°.

CASE 2. $\varphi_s = 90°$ and $\varphi_s = 270°$. For this case, we define the phase matrix elements

$$P_{12}(\theta_s) = \frac{\sigma_{hsN}}{V} \quad (201)$$

$$P_{22}(\theta_s) = \frac{\sigma_{vsN}}{V} \quad (202)$$

In this case, the incident polarization is in the scattering plane formed by the incident and scattered directions. The quantities $P_{22}$ and $P_{12}$ correspond to copolarization and cross-polarization, respectively. In Figs. 12(c) and 12(d), we plot $P_{12}$ and $P_{22}$, respectively, as functions of $\theta_s$. The same convention is used: for $\varphi_s = 90°$, the scattering angle is between 0° and 180° and is equal to $\theta_s$; for $\varphi_s = 270°$, the scattering angle is equal to $360° - \theta_s$, covering the range between 180° and 360°.

In Fig. 12, we show the results of $P_{11}$, $P_{21}$, $P_{12}$, and $P_{22}$ for a fractional volume of 30%. The results of independent scattering are also shown for comparison. The dimension of the phase matrix is bistatic cross section per unit volume, which is inverse distance; the unit is such that the wavelength is equal to unity. We note that the copolarized elements, $P_{11}$ and $P_{22}$, are smaller than those of independent scattering, whereas the cross-polarized elements, $P_{21}$ and $P_{12}$, are higher than those of independent scattering. We also note that the simulation results fluctuate because of the random-phase situation, whereas those of independent scattering are smooth curves. The fluctuations are characteristic of random scattering, as the bistatic scattering cross section per unit volume fluctuates from sample to sample. We also note that $P_{22}$ has an angular dependence that is characteristic of Rayleigh scattering.

Simulations of Scattering by Random Rough Surface

The classical analytic approaches for solving random rough surface scattering problems, based on the Kirchhoff approximation and the small perturbation method, are restricted in their domains of validity (1,2). Recently, there has been increasing interest in Monte Carlo simulations of random rough surface scattering. One method is the integral equation method, in which an integral equation is then converted to a matrix

equation by the method of moments, and the resulting equation is solved with a full matrix inversion. Many practical problems, such as near-grazing incidence or two-dimensional rough surfaces, are considered large-scale rough surface problems, for which an efficient numerical method is needed. The simulation of large-scale rough surface problems has been a subject of intensive research in recent years (22,23). Fast numerical methods have been developed to solve such problems (24–28). In this section, we shall use a standard method of simulation and illustrate the results. The numerical results yield many interesting features that are beyond the validity of classical methods.

Figure 12. Phase matrix as a function of scattering angle for $ka = 0.2$, fractional volume $f = 30\%$, elongation ratio $e = 1.8$, and relative permittivity of particles $\epsilon_r = 3.2$. The spheroids are randomly oriented. In the simulations, $N = 2000$ particles are used and the results are averaged over $N_r = 50$ realizations. (a) $P_{11}$, (b) $P_{21}$, (c) $P_{12}$, and (d) $P_{22}$. o, the dense medium results; x, the independent scattering results.

Integral Equation. We confine our attention to numerical simulations of scattering by a one-dimensional random rough surface. Consider an incident wave $\psi_{inc}(\mathbf{r})$ impinging upon a one-dimensional random rough surface with height profile $z = f(x)$. The wave function $\psi(\mathbf{r})$ above the surface is

$$\psi(\mathbf{r}) = \psi_{inc}(\mathbf{r}) + \psi_s(\mathbf{r}) \quad (203)$$

where $\psi_s(\mathbf{r})$ is the scattered wave. The wave function obeys the following surface integral equation:

$$\psi_{inc}(\mathbf{r}') + \int_S ds\, \hat{n}\cdot[\psi(\mathbf{r})\nabla g(\mathbf{r},\mathbf{r}') - g(\mathbf{r},\mathbf{r}')\nabla\psi(\mathbf{r})] = \begin{cases}\psi(\mathbf{r}'), & \mathbf{r}'\in V_0 & (204a)\\ \frac{1}{2}\psi(\mathbf{r}'), & \mathbf{r}'\in S & (204b)\\ 0, & \mathbf{r}'\in V_1 & (204c)\end{cases}$$

where

$$g(\mathbf{r},\mathbf{r}') = \frac{i}{4} H_0^{(1)}(k|\mathbf{r}-\mathbf{r}'|) \quad (205)$$

is the two-dimensional Green's function. The zero in Eq. (204c) corresponds to the extinction theorem. Note that in Eqs. (204a) and (204b), $\mathbf{r}$ is on the surface. The transmitted wave $\psi_1(\mathbf{r})$ in the lower medium satisfies

$$\int_S ds\, \hat{n}\cdot[\psi_1(\mathbf{r})\nabla g_1(\mathbf{r},\mathbf{r}') - g_1(\mathbf{r},\mathbf{r}')\nabla\psi_1(\mathbf{r})] = \begin{cases}0, & \mathbf{r}'\in V_0 & (206a)\\ -\frac{1}{2}\psi_1(\mathbf{r}'), & \mathbf{r}'\in S & (206b)\\ -\psi_1(\mathbf{r}'), & \mathbf{r}'\in V_1 & (206c)\end{cases}$$

where

$$g_1(\mathbf{r},\mathbf{r}') = \frac{i}{4} H_0^{(1)}(k_1|\mathbf{r}-\mathbf{r}'|) \quad (207)$$

is the Green's function of the lower medium. The wave functions $\psi(\mathbf{r})$ and $\psi_1(\mathbf{r})$ are related by the boundary conditions on the surface $S$. For the TE wave, the boundary conditions are

$$\psi(\mathbf{r}) = \psi_1(\mathbf{r}) \quad (208a)$$

$$\hat{n}\cdot\nabla\psi(\mathbf{r}) = \hat{n}\cdot\nabla\psi_1(\mathbf{r}) \quad (208b)$$

For the TM wave, the boundary conditions are

$$\psi(\mathbf{r}) = \psi_1(\mathbf{r}) \quad (209a)$$

$$\frac{1}{\epsilon}\,\hat{n}\cdot\nabla\psi(\mathbf{r}) = \frac{1}{\epsilon_1}\,\hat{n}\cdot\nabla\psi_1(\mathbf{r}) \quad (209b)$$
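The two-dimensional Green's functions of Eqs. (205) and (207) are available directly from standard special-function libraries; a sketch using SciPy's Hankel function (the function name `green_2d` is illustrative):

```python
import numpy as np
from scipy.special import hankel1, jv, yv

def green_2d(r, rp, k):
    """Two-dimensional free-space Green's function of Eq. (205):
    g(r, r') = (i/4) H0^(1)(k |r - r'|)."""
    d = np.linalg.norm(np.asarray(r, dtype=float) - np.asarray(rp, dtype=float))
    return 0.25j * hankel1(0, k * d)

k = 2.0 * np.pi
g = green_2d([0.0, 0.0], [1.5, 0.3], k)
# Cross-check against the definition H0^(1) = J0 + i*Y0.
d = np.hypot(1.5, 0.3)
g_check = 0.25j * (jv(0, k * d) + 1j * yv(0, k * d))
```

For large $k|\mathbf{r}-\mathbf{r}'|$ the magnitude decays like $\sqrt{2/(\pi k r)}$, which is the cylindrical-wave far-field behavior used later in Eq. (222).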

By rewriting Eq. (204b) and applying the boundary conditions to Eq. (206b), we have

$$\psi_{inc}(\mathbf{r}') + \int_S ds\,[\psi(\mathbf{r})\,\hat{n}\cdot\nabla g(\mathbf{r},\mathbf{r}') - g(\mathbf{r},\mathbf{r}')\,\hat{n}\cdot\nabla\psi(\mathbf{r})] = \frac{1}{2}\psi(\mathbf{r}') \quad (210a)$$

$$\int_S ds\,[\psi(\mathbf{r})\,\hat{n}\cdot\nabla g_1(\mathbf{r},\mathbf{r}') - g_1(\mathbf{r},\mathbf{r}')\,\rho\,\hat{n}\cdot\nabla\psi(\mathbf{r})] = -\frac{1}{2}\psi(\mathbf{r}') \quad (210b)$$

where

$$\rho = \begin{cases}1 & \text{for the TE wave}\\ \epsilon_1/\epsilon & \text{for the TM wave}\end{cases} \quad (211)$$

and $ds = [1 + (df/dx)^2]^{1/2}\, dx$. Let

$$u(x) = \sqrt{1 + (df/dx)^2}\;\hat{n}\cdot\nabla\psi(\mathbf{r})\Big|_{\mathbf{r} = [x,\, f(x)]}$$

Using the method of moments (18), we can discretize the above equations as

$$\sum_{n=1}^{N} a_{mn}\, u(x_n) + \sum_{n=1}^{N} b_{mn}\,\psi(x_n) = \psi_{inc}(x_m) \quad (212a)$$

$$\sum_{n=1}^{N} a^{(1)}_{mn}\,\rho\, u(x_n) + \sum_{n=1}^{N} b^{(1)}_{mn}\,\psi(x_n) = 0 \quad (212b)$$

where $x_m = (m - 0.5)\Delta x - L/2$, $m = 1, 2, \ldots, N$. The matrix elements $a_{mn}$, $b_{mn}$, $a^{(1)}_{mn}$, and $b^{(1)}_{mn}$ are given by

$$a_{mn} = \begin{cases}\Delta x\,\dfrac{i}{4} H_0^{(1)}(k r_{mn}), & m\neq n\\[4pt] \Delta x\,\dfrac{i}{4} H_0^{(1)}[k\,\Delta x\,\gamma_m/(2e)], & m = n\end{cases} \quad (213)$$

$$b_{mn} = \begin{cases}-\Delta x\,\dfrac{ik}{4}\,\dfrac{f'(x_n)(x_n - x_m) - [f(x_n) - f(x_m)]}{r_{mn}}\, H_1^{(1)}(k r_{mn}), & m\neq n\\[4pt] \dfrac{1}{2} - \Delta x\,\dfrac{f''(x_m)}{4\pi\gamma_m^2}, & m = n\end{cases} \quad (214)$$

$$a^{(1)}_{mn} = \begin{cases}\Delta x\,\dfrac{i}{4} H_0^{(1)}(k_1 r_{mn}), & m\neq n\\[4pt] \Delta x\,\dfrac{i}{4} H_0^{(1)}[k_1\,\Delta x\,\gamma_m/(2e)], & m = n\end{cases} \quad (215)$$

$$b^{(1)}_{mn} = \begin{cases}-\Delta x\,\dfrac{ik_1}{4}\,\dfrac{f'(x_n)(x_n - x_m) - [f(x_n) - f(x_m)]}{r_{mn}}\, H_1^{(1)}(k_1 r_{mn}), & m\neq n\\[4pt] -\dfrac{1}{2} - \Delta x\,\dfrac{f''(x_m)}{4\pi\gamma_m^2}, & m = n\end{cases} \quad (216)$$

where $r_{mn} = \{(x_n - x_m)^2 + [f(x_n) - f(x_m)]^2\}^{1/2}$, $\gamma_m = \{1 + [f'(x_m)]^2\}^{1/2}$, $e = 2.71828183\ldots$, $H_1^{(1)}$ is the first-order Hankel function of the first kind, and $f'(x_m)$ and $f''(x_m)$ represent the first and second derivatives of $f(x)$ evaluated at $x_m$, respectively.

Incident Waves and Scattered Waves. In numerical simulations, the rough surface is truncated at $x = \pm L/2$. This means that the surface current is truncated at $x = \pm L/2$, so that the surface current is forced to zero for $|x| > L/2$. If this is an abrupt change, artificial reflections from the two endpoints will occur. To avoid these artificial reflections, one common way is to taper the incident wave so that it decays to zero gradually and is exponentially small outside the domain. A way to taper the incident wave is in the spectral domain. Let

$$\psi_{inc}[x, f(x)] = \frac{g}{2\sqrt{\pi}}\int_{-\infty}^{\infty} dk_x\, e^{i(k_x x - k_z z)}\, e^{-(k_x - k_{ix})^2 g^2/4}\bigg|_{z = f(x)} \quad (217)$$

where $k_{ix} = k\sin\theta_i$, $k_z^2 = k^2 - k_x^2$, $k$ is the wavenumber of free space, and $g$ is the parameter that controls the tapering of the incident wave. The advantage of using Eq. (217) is that it obeys the wave equation exactly, because it is a spectrum of plane waves. To calculate the power impinging upon the surface, we have

$$S_{inc}\cdot\hat{z} = -\frac{1}{2\eta k}\,\mathrm{Im}\left[\psi_{inc}\,\frac{\partial\psi_{inc}^*}{\partial z}\right] \quad (218)$$

The power received is

$$P_{inc} = -\int_{-\infty}^{\infty} dx\, S_{inc}\cdot\hat{z} \quad (219)$$

By substituting Eq. (218) into Eq. (219) and integrating over $dx$, it follows readily that only propagating waves contribute to the power. Thus

$$P_{inc} = \frac{g^2}{4\eta k}\int_{-k}^{k} dk_x\, k_z\,\exp\left[-\frac{(k_x - k_{ix})^2 g^2}{2}\right] \quad (220)$$

Scattered Waves. After the surface fields $\psi(\mathbf{r})$ and $\hat{n}\cdot\nabla\psi(\mathbf{r})$ are calculated by numerical methods, we can calculate the scattered and transmitted waves using Eqs. (204a) and (206c), respectively. From Eq. (204a), the scattered wave is

$$\psi_s(\mathbf{r}') = \int_S ds\,[\psi(\mathbf{r})\,\hat{n}\cdot\nabla g(\mathbf{r},\mathbf{r}') - g(\mathbf{r},\mathbf{r}')\,\hat{n}\cdot\nabla\psi(\mathbf{r})] \quad (221)$$

For the far field, $r' \to \infty$,

$$g(\mathbf{r},\mathbf{r}') = \frac{i}{4}\sqrt{\frac{2}{\pi k r'}}\,\exp\left(-i\frac{\pi}{4}\right)\exp(ikr')\,\exp[-ik(x\sin\theta_s + z\cos\theta_s)] \quad (222)$$

Putting Eq. (222) into Eq. (221), we have

$$\psi_s(\mathbf{r}') = \frac{i}{4}\sqrt{\frac{2}{\pi k r'}}\,\exp\left(-i\frac{\pi}{4}\right)\exp(ikr')\,\psi_s^{(N)}(\theta_s) \quad (223)$$

where

$$\psi_s^{(N)}(\theta_s) = \int_{-\infty}^{\infty} dx\left[-u(x) + ik\,\psi(x)\left(\frac{df}{dx}\sin\theta_s - \cos\theta_s\right)\right]\exp[-ik(x\sin\theta_s + f(x)\cos\theta_s)] \quad (224)$$
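The tapered incident wave of Eq. (217) can be evaluated by brute-force quadrature of its plane-wave spectrum. The sketch below (propagating spectrum only; the parameter values are illustrative) shows the Gaussian-like decay of the field along the mean surface, which is what suppresses the edge currents at $x = \pm L/2$:

```python
import numpy as np

def psi_inc(x, z, k, theta_i, g, n_kx=4001):
    """Tapered incident wave of Eq. (217), by direct quadrature of the
    propagating part of the plane-wave spectrum (a sketch, not an
    optimized implementation; for a wide taper the evanescent
    components are negligible)."""
    kix = k * np.sin(theta_i)
    kx = np.linspace(-0.999 * k, 0.999 * k, n_kx)
    kz = np.sqrt(k**2 - kx**2)
    integrand = np.exp(1j * (kx * x - kz * z)) * np.exp(-(kx - kix)**2 * g**2 / 4.0)
    return g / (2.0 * np.sqrt(np.pi)) * np.sum(integrand) * (kx[1] - kx[0])

k = 2.0 * np.pi                 # wavelength = 1
theta_i = np.radians(30.0)
g = 8.0                         # taper parameter, in wavelengths (illustrative)
center = abs(psi_inc(0.0, 0.0, k, theta_i, g))
edge = abs(psi_inc(24.0, 0.0, k, theta_i, g))   # three taper widths away
```

On the plane $z = 0$ the integral reduces to a Fourier transform of the Gaussian spectrum, so the field envelope falls off roughly like $\exp(-x^2/g^2)$, while each spectral component still satisfies the wave equation exactly.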

Figure 13. Numerical results of wave scattering from a dielectric slightly rough surface and comparison with the small perturbation method (SPM) for the case of rms height of 0.05 wavelength, correlation length of 0.35 wavelength, and relative dielectric constant of 5.6 + i0.6 at an incidence angle of 30°. The numerical results are averaged over 200 realizations. The results show good agreement between the numerical method and SPM for the case of small rms height for TE wave incidence.

Figure 15. Convergence test with respect to the number of realizations for the TE case and comparison with the small perturbation method for the case of rms height of 0.3 wavelength, correlation length of 1 wavelength, and relative dielectric constant of 5.6 + i0.6 at an incidence angle of 30°. The numerical results show that many realizations are needed for convergence of the averaged bistatic scattering coefficients. Also, SPM cannot give accurate results for the moderate rms height.

The Poynting vector in the direction $\hat{k}_s$ is

$$S_s(\mathbf{r}') = -\frac{1}{2\eta k}\,\mathrm{Im}[\psi_s(\mathbf{r}')\nabla\psi_s^*(\mathbf{r}')] \quad (225)$$

The total power scattered above the surface is

$$P_s = \int_{-\pi/2}^{\pi/2} d\theta_s\, r'\, S_s(\mathbf{r}') = \frac{1}{2\eta}\,\frac{1}{16}\,\frac{2}{\pi k}\int_{-\pi/2}^{\pi/2} d\theta_s\, |\psi_s^{(N)}(\theta_s)|^2 \quad (226)$$

To define the bistatic scattering coefficient $\sigma(\theta_s)$, we let

$$\frac{P_s}{P_{inc}} = \int_{-\pi/2}^{\pi/2} d\theta_s\, \sigma(\theta_s) \quad (227)$$

Thus

$$\sigma(\theta_s, \theta_i) = \frac{|\psi_s^{(N)}(\theta_s)|^2}{4\pi g^2\displaystyle\int_{-k}^{k} dk_x\, k_z\,\exp[-(k_x - k\sin\theta_i)^2 g^2/2]} \quad (228)$$
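The spectral integral shared by Eqs. (220) and (228) is a one-line quadrature. For a wide taper (large $g$) the Gaussian collapses onto $k_x = k\sin\theta_i$ and the integral tends to $k\cos\theta_i\sqrt{2\pi}/g$, a useful sanity check. A sketch (parameter values are illustrative):

```python
import numpy as np

def spectral_power_integral(k, theta_i, g, n=20001):
    """The k_x integral appearing in Eqs. (220) and (228): the power
    carried by the propagating part of the tapered incident spectrum."""
    kx = np.linspace(-k, k, n)
    kz = np.sqrt(np.maximum(k**2 - kx**2, 0.0))
    weight = np.exp(-(kx - k * np.sin(theta_i))**2 * g**2 / 2.0)
    return np.sum(kz * weight) * (kx[1] - kx[0])

k, theta_i, g = 2.0 * np.pi, np.radians(30.0), 10.0   # illustrative values
val = spectral_power_integral(k, theta_i, g)
# Wide-taper limit: the Gaussian collapses onto kx = k*sin(theta_i).
wide = k * np.cos(theta_i) * np.sqrt(2.0 * np.pi) / g
```

In the wide-taper limit the normalization of Eq. (228) therefore reduces, up to constants, to the familiar $\cos\theta_i$ projection of the incident power onto the mean surface.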

Results of Numerical Simulations. In this section, results of numerical simulations are illustrated. First, we show the bistatic scattering coefficients of a rough surface with small rms height and slope and compare them with those from the small perturbation method. Next, we illustrate the convergence with respect to the number of realizations. After that, wave scattering from a very rough surface is calculated, so that backscattering enhancement (27) can be observed. The variations of emissivities with incidence angles and permittivities are plotted as well. Finally, backscattering coefficients at low grazing angles are shown.

In Figs. 13 and 14, we plot the numerical results averaged over 200 realizations for TE and TM waves, respectively, for the case of rms height $h = 0.05$ wavelength, correlation length 0.35 wavelength, incident angle 30°, and relative dielectric constant 5.6 + i0.6. We also show the results of the small perturbation method (SPM). The two results are in very good agreement. Because of the small height, we see a distinct angular peak in the specular direction $\theta_s = \theta_i = 30°$; the peak is due to specular reflection of the coherent wave. Because of the small slope, $\sigma(\theta_s)$ decreases rapidly away from the specular direction.

Figure 14. Numerical results of wave scattering from a dielectric slightly rough surface and comparison with the small perturbation method (SPM) for the case of rms height of 0.05 wavelength, correlation length of 0.35 wavelength, and relative dielectric constant of 5.6 + i0.6 at an incidence angle of 30°. The numerical results are averaged over 200 realizations. The results show good agreement between the numerical method and SPM for the case of small rms height for TM wave incidence.

In Fig. 15, we test the convergence with respect to the number of realizations for the TE case. We show results averaged over 1, 20, and 200 realizations, respectively. The rms height of the rough surface is 0.3 wavelength, the correlation length is 1.0 wavelength, the relative dielectric constant of the lower medium is 5.6 + i0.6, and the incident angle is 30°. For one realization, there are angular fluctuations of the bistatic scattering coefficients, which are a result of constructive and destructive interference as a function of $\theta_s$. With increasing number of realizations, the curve becomes smoother and smoother. SPM results are also shown in the figure. In this case, because of the larger rms height, the two results are different; that means we cannot use SPM for larger rms heights. In Fig. 16, we plot the results with the same parameters as in Fig. 15 for the TM case. The results indicate that there are large differences between the numerical simulation and SPM.

Figure 16. Numerical results of wave scattering from the dielectric rough surface with a moderate rms height and comparison with the small perturbation method. TM case.

In Fig. 17, the bistatic scattering coefficients are shown for a case with large rms height and slope for both TE and TM waves: rms height $h = 0.5$ wavelength, correlation length 1.0 wavelength, and relative dielectric constant 5.6 + i0.6 at an incidence angle of 10°. Backscattering enhancement is observed for both TE and TM waves.

Figure 17. Numerical results of wave scattering from a dielectric rough surface with a large rms height. Backscattering enhancement is shown for both TE and TM waves. This result indicates the importance of multiple scattering effects.

In passive remote sensing, the emissivity is an important parameter: it relates the brightness temperature to the physical temperature. The emissivity can be calculated by integrating the bistatic scattering coefficients over scattering angles and subtracting the result from unity. In Fig. 18, the variation of emissivity with incidence angle is illustrated for the case of rms height 0.3 wavelength, correlation length 1.0 wavelength, and dielectric constant 5.6 + i0.6, for the TE and TM cases, respectively. The emissivity of the TE wave decreases as the incidence angle increases; for the TM wave, the emissivity increases as the incidence angle increases.

Figure 18. The variation of emissivities with the incidence angles.

In Fig. 19, we study the case of close-to-grazing incidence by plotting the TE and TM backscattering coefficients as functions of incidence angle from 80° to 85°. The results of the SPM are also shown. Both TE and TM backscattering coefficients decrease as a function of incidence angle, and there are large differences between the numerical simulations and the SPM.

Figure 19. The variation of backscattering coefficients as a function of incidence angle from 80° to 85° and comparison with the small perturbation method.
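The emissivity computation described above is a simple quadrature once $\sigma(\theta_s)$ is available from the simulation; a sketch with a hypothetical, purely illustrative bistatic pattern standing in for the simulated coefficients:

```python
import numpy as np

def emissivity(theta_s, sigma_s):
    """Emissivity as described in the text: one minus the integral of
    the bistatic scattering coefficient over scattering angles."""
    # Trapezoidal rule over the sampled bistatic pattern.
    reflected = np.sum(0.5 * (sigma_s[1:] + sigma_s[:-1]) * np.diff(theta_s))
    return 1.0 - reflected

# Hypothetical smooth bistatic pattern peaked near the specular direction,
# standing in for the sigma(theta_s) produced by the Monte Carlo simulation.
theta_s = np.linspace(-np.pi / 2.0, np.pi / 2.0, 721)
sigma_s = 0.3 * np.exp(-((theta_s - np.radians(30.0)) / 0.2)**2)
e_val = emissivity(theta_s, sigma_s)
```

The more power the surface scatters back into the upper half-space, the lower the emissivity, which is the trend behind the TE curve of Fig. 18.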

MICROWAVE SWITCHES


LEUNG TSANG
QIN LI
University of Washington

MICROWAVE RESONATORS. See CAVITY RESONATORS.

Wiley Encyclopedia of Electrical and Electronics Engineering
Microwave Remote Sensing Theory
Standard Article
Leung Tsang and Qin Li, University of Washington, Seattle, WA
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W3615
Article Online Posting Date: December 27, 1999
Abstract | Full Text: HTML PDF (726K)

Abstract. The sections in this article are: Basics of Microwave Remote Sensing; Volume Scattering and Surface Scattering Approaches; Monte-Carlo Simulations of Wave Scattering from Dense Media and Rough Surfaces.

About Wiley InterScience | About Wiley | Privacy | Terms & Conditions. Copyright © 1999-2008 John Wiley & Sons, Inc. All Rights Reserved.


Wiley Encyclopedia of Electrical and Electronics Engineering
Oceanic Remote Sensing
Standard Article
William J. Emery, University of Colorado, Boulder, CO
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W3614
Article Online Posting Date: December 27, 1999
Abstract | Full Text: HTML PDF (160K)

Abstract. The sections in this article are: The Electromagnetic Spectrum; Visible Wavelengths; Sensing Ocean Color; Imaging Sea Ice and Ice Motion; The Thermal Infrared; Estimating Sea Surface Temperature; Split Window Methods; Global Maps and Data Availability; Relationship to Ocean Currents: SST Motion; Skin SST Versus Bulk SST and Heat Exchange; Presently Available Sensors and Future Plans.


Filtering out Clouds; Various Methods and their Accuracies; Passive Microwave; Wind Speed; Atmospheric Water Vapor; Rainfall; Merging the Passive Microwave with the Optical Data; Active Microwave; Radar Altimeters; Scatterometer Winds; Synthetic Aperture Radar (SAR); Summary.


OCEANIC REMOTE SENSING


The field of satellite remote sensing is a scientific discipline that has matured significantly since its birth in the 1960s. The mission of the early Earth-orbiting satellites was to provide information for weather forecasting, and many of the ocean applications of remote sensing grew out of the use of weather satellite data. This continues to be true today, but there have also been a number of satellite missions dedicated to the study of the ocean. Unfortunately, one of these dedicated missions, SEASAT, stopped transmitting after only 90 days of operation (reportedly because of a major power failure) instead of providing data for the 2 to 3 years planned. During this brief period, however, SEASAT proved the utility of many first-time microwave satellite sensors. Only very recently have similar sensors been deployed on a variety of spacecraft.

We have chosen to organize the material in this article by sensor wavelength, pointing out the specific applications of each wavelength band. Since many of the applications can be separated by wavelength, this division allows us to address different applications as we address the individual bands. We will also discuss situations where more than one band is needed for a given retrieval; in most cases the additional channel information is used to correct the parameter estimated in the other channel.

THE ELECTROMAGNETIC SPECTRUM

The electromagnetic (EM) spectrum quantifies electromagnetic energy as a function of wavenumber or wavelength (Fig. 1). The diagram gives the names of the spectral bands, the physical mechanisms leading to the creation of the EM signal, the transmission of this energy through a standard atmosphere, and the principal techniques used for remote sensing at these wavelengths. Looking at the visible, near-infrared, thermal infrared, and microwave wavelengths, it is notable that the atmospheric transmission reaches a maximum in parts of these bands. Known as "windows," these are the frequencies used for sensing the thermal energy emitted from the Earth's surface. From this spectrum we can see where certain parameters should be sensed.
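The spectrum in Fig. 1 is indexed by both frequency and wavelength, which are related by λ = c/f. A minimal sketch of that conversion (the function name and sample frequencies are illustrative, not taken from the figure):

```python
C = 2.998e8  # speed of light in free space, m/s

def wavelength_m(freq_hz):
    """Return the free-space wavelength in meters for a frequency in Hz."""
    return C / freq_hz

# Example frequencies from the microwave portion of the spectrum
for name, f in [("37 GHz microwave channel", 37e9),
                ("22 GHz water vapor channel", 22e9)]:
    print(f"{name}: {wavelength_m(f) * 1e3:.2f} mm")
```

The same relation locates any sensor channel on the wavelength axis of Fig. 1.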


Figure 1. Electromagnetic spectrum and remote sensing measurements.

VISIBLE WAVELENGTHS

The visible bands on operational satellites have been designed to depict the weather through the distribution and motion of clouds. Most weather satellite visible channels are broadband, averaging over the visible bands. The first operational narrowband visible color sensor was the multispectral scanner (MSS) that flew on the Landsat series of satellites, with channels at 0.5 to 0.6 µm, 0.6 to 0.7 µm, 0.7 to 0.8 µm, and 0.8 to 1.1 µm. Unfortunately, the lack of frequent repeat coverage by the MSS made it impossible to monitor changes in ocean color that would reflect biological activity. The first dedicated sensor for monitoring and mapping ocean color was the coastal zone color scanner (CZCS) that flew on NIMBUS 7. With a much larger swath width (~1636 km) than the MSS, the CZCS overlapped at the equator on consecutive orbits, thus providing global coverage. The CZCS operated from 1978 until 1986. It was hoped that the United States would soon fly a new ocean color instrument, SeaWiFS, but it has unfortunately been delayed for many years. A new Japanese instrument with ocean color channels, the ocean color and thermal sensor (OCTS), is now flying.

SENSING OCEAN COLOR

Considerable data processing is required to convert ocean color radiances into relevant biological properties. One of the biggest tasks is correcting for the atmosphere. The possible terms are depicted in Fig. 2: (a) light rays that upwell from below the sea surface and refract at the surface toward the sensor within its instantaneous field of view (IFOV), contributing to Lw, the water-leaving radiance; (b) the portion of the rays in (a) contributing to Lw that actually reach the sensor; (c) rays of Lw that are scattered by the atmosphere; (d) sun rays that reflect into the sensor, called "sun glitter"; (e) rays scattered in the atmosphere before reflecting at the sea surface, called "sky glitter"; (f) "glitter" rays scattered out of the sensor IFOV; (g) "glitter" rays that reach the sensor; (h) sun rays scattered by the atmosphere into the sensor; (i) rays scattered toward the sensor after earlier atmospheric scattering; (j) upwelling radiation that emerges from the water outside the sensor IFOV and is scattered into the sensor IFOV; and (k) rays scattered to the sensor after first being reflected at the sea surface. Since only the rays in (a) are desired, corrections must be developed for the remaining components (1,2).

Figure 2. Terms in the calculation of ocean color.

The next step is to relate the ocean color measurements to water quality measurements. We therefore need to know how these parameters affect the optical properties of the ocean and what the spectral characteristics of the different constituents are. To accomplish the latter, we need a reference for pure seawater; water containing phytoplankton has much more complex spectral characteristics. The dissolved organic matter associated with decayed vegetation is known as "yellow substance" or "gelbstoff," a name referring to the fact that its absorption spectrum shows a minimum at yellow wavelengths. The complexity of computing all of the terms related to biological activity from only visible color channels is reduced by dividing all waters into two categories: (1) Case 1 waters are seas whose optical properties are dominated by phytoplankton and their degradation products only, and (2) Case 2 waters have non-chlorophyll-related sediments or yellow substance instead of, or in addition to, phytoplankton.

IMAGING SEA ICE AND ICE MOTION

Satellite imagery has provided the polar research community with important information on the space/time changes of the sea ice that dominates the polar world. Prior to the advent of polar-orbiting satellites there was no source of global weather information that could be used by polar climatologists. Most people believe the polar regions to be cloud-covered 90% of the time; if this were truly the case, there would be little point in discussing the use of optical satellite data for mapping sea ice near the poles. The true cloud cover is actually about 75% to 78%, but it must be remembered that clouds move and do not persistently cover the same region. Thus we can expect any one polar area to be free of clouds at least 20% of the time. This being the case, it is possible to produce a clear image using temporal compositing. The composite is taken over some period of time (10 days, 2 weeks, etc.) and is computed using the maximum of the various channel values. These values can be converted to sea ice concentrations (3).

An important application is the computation of sea ice motion from successive visible/infrared imagery from the advanced very high resolution radiometer (AVHRR) and from special sensor microwave imager (SSM/I) data. The basic idea is that features in the first image will move to another location in the second image, which can be found by locating the maximum cross-correlation (MCC) as the second image is shifted relative to the first. The difference in locations allows one to compute the ice movements (4,5). Using passive microwave imagery from the 85.5 GHz channel on the SSM/I, with its 12.5 km spatial resolution, comprehensive all-weather ice motions can be computed from these data. An example for the Southern Ocean is shown in Fig. 3. A full 7 years of these motion fields can be found at http://polarbear.colorado.edu.

THE THERMAL INFRARED

At wavelengths above 10 µm we are sensing radiation emitted by the surface as a function of its temperature, as described by Planck's law. Thermal infrared radiation is used to sense meteorological phenomena at night, when there is no sunlight. This application is so important that in the AVHRR the infrared channel data are reversed so that cold clouds (which would otherwise appear dark because of their low temperatures) appear white, similar to their representation in the visible channel. Since clouds block the infrared radiation emitted from the surface, it is very important that each image be corrected for cloud cover. This is done in two ways: (a) the clouds are identified and subtracted from the image, and (b) the infrared sea surface temperatures (SSTs) are composited using the maximum temperature (lower temperatures are discarded) over a period of time. Since partial cloud in an image pixel will lower the temperature sensed, compositing on the maximum temperature minimizes the effects of the clouds. Thus, satellite SST maps are computed over a period of time, usually 10 days or two weeks. This compositing on the maximum temperature also helps to reduce the effect of atmospheric water vapor, which attenuates the infrared signal from the ocean's surface.

ESTIMATING SEA SURFACE TEMPERATURE

The earliest SST maps computed from satellite data used the 8 km spatial resolution scanning radiometer (SR) that flew on early National Oceanic and Atmospheric Administration (NOAA) polar-orbiting satellites. Global data from this instrument were used to compute both global and regional maps of SST using optimum interpolation. These analog temperature measurements were later replaced with a fully digital system of much higher spatial resolution (1 km) in the present-day AVHRR, and the SST algorithm was changed to take advantage of the channels available with the AVHRR. In the almost two decades since the first launch of an AVHRR, it has remained the main source of SST information on a global basis.

The first step in computing SST is an algorithm for converting the infrared pixel values to temperature. Unlike the visible channels, the infrared channels of the AVHRR are equipped with a system to calibrate the measurements during their collection by the sensor. To compensate for sensor drift, on each scan the sensor views two separate blackbodies with measured temperatures as well as deep space. These three temperatures are used to calibrate the pixel values and turn them into temperatures.
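The three-reference-view calibration described above can be sketched as a linear fit of sensor counts to known reference temperatures. This is a deliberate simplification (operational AVHRR calibration converts counts to radiance and then inverts the Planck function, with nonlinearity corrections), and the count and temperature values below are invented for illustration.

```python
import numpy as np

def calibrate(counts, ref_counts, ref_temps):
    """Fit temperature = gain * counts + offset through the reference
    views (two onboard blackbodies plus deep space) and apply the fit
    to the image pixel counts."""
    gain, offset = np.polyfit(ref_counts, ref_temps, 1)
    return gain * np.asarray(counts, dtype=float) + offset

# Invented reference views: cold space (~3 K) and two warm blackbodies.
# In thermal channels, higher counts correspond to colder scenes.
ref_counts = np.array([990.0, 400.0, 350.0])
ref_temps = np.array([3.0, 290.0, 310.0])  # kelvin

pixels = np.array([500.0, 600.0, 700.0])
print(calibrate(pixels, ref_counts, ref_temps))
```

A least-squares fit through all three points absorbs small scan-to-scan drift in gain and offset.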

Figure 3. Mean (1988–1994) sea ice motion for the Southern Ocean.
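The MCC displacement search used to derive motion fields like the one in Fig. 3 can be sketched as a brute-force correlation search. This is a simplified illustration, not the operational algorithm of refs. 4 and 5: the search radius, the wraparound shifting via np.roll, and the synthetic test field are all assumptions.

```python
import numpy as np

def mcc_displacement(img1, img2, max_shift=5):
    """Estimate the (dy, dx) displacement of features between two
    co-registered images by maximizing the normalized cross-correlation
    of img1 against shifted versions of img2."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img2, -dy, axis=0), -dx, axis=1)
            corr = np.corrcoef(img1.ravel(), shifted.ravel())[0, 1]
            if corr > best:
                best, best_shift = corr, (dy, dx)
    return best_shift

# Synthetic check: shift a random field by (2, 3) pixels and recover it
rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64))
moved = np.roll(np.roll(field, 2, axis=0), 3, axis=1)
print(mcc_displacement(field, moved))  # expected (2, 3)
```

Dividing the displacement by the time between images converts the pixel shift to a velocity.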

SPLIT WINDOW METHODS

Dual-channel methods take advantage of the fact that the atmosphere attenuates infrared energy differently in two adjacent channels. This approach is generally referred to as the "split window," since it divides the infrared water vapor window into two parts. The channels most frequently used are the 11 µm and 12 µm bands. Our spectrum (Fig. 1) shows that the 12 µm channel experiences greater atmospheric attenuation than the 11 µm channel. The formulation of the SST using both of these channels is

SST = aT4 + b(T4 − T5)   (1)

where T4 and T5 are the 11 µm and 12 µm channel brightness temperatures. In some cases a third channel is used as well; for the AVHRR this is channel 3 (~3.7 µm), which must first be corrected for reflected radiation. The coefficients are usually found by comparison with SST measured by coincident drifting and moored buoys. The most widely known split-window algorithm is the multichannel SST (MCSST) (6), which uses Eq. (1) with a = 1.01345 and b = 2.659762 and an additional term to account for solar zenith angle. Other algorithms have been developed as improvements on the MCSST (7), but many comparisons have shown the MCSST to produce values as accurate and reliable as any of these newer algorithms.
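A minimal sketch of Eq. (1) with the a and b coefficients quoted above. The solar zenith angle term mentioned in the text is omitted because its form is not given here, and the brightness temperatures are invented for illustration.

```python
def mcsst(t4, t5, a=1.01345, b=2.659762):
    """Split-window SST from Eq. (1): SST = a*T4 + b*(T4 - T5),
    where T4 and T5 are the 11 um and 12 um brightness temperatures.
    The MCSST solar zenith angle term is omitted in this sketch."""
    return a * t4 + b * (t4 - t5)

# Invented brightness temperatures (kelvin): the 12 um channel reads
# slightly colder because water vapor attenuates it more strongly.
print(mcsst(288.0, 286.5))
```

The larger the T4 − T5 difference, the larger the water vapor correction applied by the b term.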

GLOBAL MAPS AND DATA AVAILABILITY

The MCSST has been processed into global maps of SST. These maps are available at the Distributed Active Archive Center (DAAC) of the Jet Propulsion Laboratory. They can be found, along with other oceanographic data, at http://podaac.www.jpl.nasa.gov/.

RELATIONSHIP TO OCEAN CURRENTS: SST MOTION

One important application of satellite SST mapping is the computation of surface currents. The assumption must be made that feature displacements by surface currents dominate all other changes in SST. Under this assumption, two sequential images of the same general area are used to find the displacement (as with sea ice motion) that makes the features match up over time (8,9). By overlapping these areas, maps of ocean surface currents can be made in much the same way that sea ice motion was tracked.

SKIN SST VERSUS BULK SST AND HEAT EXCHANGE

Infrared satellite sensors can only "see" thermal radiation which, due to the high emissivity of seawater, is emitted from the sub-millimeter-thick "skin" of the ocean. This temperature cannot be measured in situ by ships or buoys, since touching the ocean's skin layer destroys it. The skin SST can be measured using radiometers from ships, and it is hoped that in the future there will be a change to computing skin SST from infrared satellite data. It is the temperature difference between the skin SST and the subsurface or "bulk" SST that controls the exchange of heat between the ocean and the atmosphere.

PRESENTLY AVAILABLE SENSORS AND FUTURE PLANS

At present the instrument used to compute SST is the AVHRR, which has been flying in one of two forms since 1978. A change in 1982 added another channel to the AVHRR, making it possible to compute split-window SSTs. In 1998 a new sensor called MODIS (moderate-resolution imaging spectroradiometer) will be launched with a number of new channels in the thermal infrared, which should afford new opportunities for computing an even more accurate skin SST. It is clear, however, that the thermal infrared channels will remain the primary means of computing SST from space.

FILTERING OUT CLOUDS

One of the most important processing steps in using either infrared or visible data is the detection and removal of cloud cover. This is usually done in two ways: (1) some method is used to sense clouds and remove them from the image, and (2) a temporal compositing technique is used to suppress any residual clouds in the image. Even with both of these techniques there are usually some cloud-contaminated pixels in the composite images.

VARIOUS METHODS AND THEIR ACCURACIES

There is a wide variety of techniques for the detection and removal of clouds. Most common are the threshold methods, where clouds are considered to have high or low values relative to the targets of interest. Fixed thresholds based on the scene histograms are the most common of these methods. Dynamic thresholds are also used, where the threshold value is computed from the histogram during the cloud removal process. Another procedure is to use the "spatial coherence" of the noncloud pixels. Statistical classification methods such as maximum likelihood are often used. Frequently, methods are combined, with one procedure used to remove most of the clouds and a second used to remove the remainder.

PASSIVE MICROWAVE

One of the great changes in remote sensing since 1980 has been the increased application of passive microwave imagery to many fields of geoscience. This was motivated by a series of operational passive microwave instruments. The first was the scanning multichannel microwave radiometer (SMMR), which was followed by the special sensor microwave/imager (SSM/I). The SMMR was first carried by SEASAT but, fortunately, after the demise of SEASAT another SMMR deployed on NIMBUS 7 operated until 1986. Many of the geophysical algorithms were developed with SMMR data. These algorithms have been further explored with the SSM/I, first launched in June 1987; subsequent instruments continue to operate today.

WIND SPEED

The emitted microwave radiation at the ocean's surface is affected by the roughness of the sea surface, which is correlated with the near-surface wind speed. Atmospheric attenuation of the 37 GHz radiation propagating from the sea surface is very small, except when a significant amount of rain in the atmosphere scatters the 37 GHz signal. The Wentz wind speed algorithm relates the wind speed at 19.5 m height to the 37 GHz brightness temperatures, which are computed from the SSM/I 37 GHz horizontally and vertically polarized radiance measurements. Corrections are made in the 37 GHz data for the emission of the sea surface and for atmospheric scattering conditions. The wind speeds computed by Wentz (10) are then referenced to a 10 m height, as is traditional in meteorology. Comparisons between SSM/I-inferred wind speeds and wind speeds measured at moored buoys indicate that the SSM/I wind speeds are accurate to 2 m/s.

ATMOSPHERIC WATER VAPOR

The SSM/I has a number of channels that can be used to sense atmospheric moisture (11). Of these, the 22 GHz channel is the most sensitive to water vapor. Retrievals of atmospheric moisture using this channel alone are accurate to 0.145 to 0.17 g/cm². Global water vapor fields computed from SSM/I data demonstrate how this capability can be used to map global patterns of atmospheric moisture, something that has not been possible with radiosonde measurements.

RAINFALL

Precipitation is the primary contributor to atmospheric attenuation at microwave wavelengths. This attenuation results from both absorption and scattering by hydrometeors. The magnitude of these processes depends upon wavelength, drop size distribution, and precipitation layer thickness. For light rain we can neglect the effect of multiple scattering, and the attenuation can be computed from statistical considerations. For heavier rainfall we must include multiple scattering by considering the full drop size distribution.

MERGING THE PASSIVE MICROWAVE WITH THE OPTICAL DATA

One of the most useful applications of passive microwave data is as a correction for the optical-wavelength parameter retrievals. For example, one of the biggest problems in the infrared estimation of SST is the correction for atmospheric moisture. Since the SSM/I senses water vapor, it can be used to correct SST estimates (3), improving the accuracy of the estimated SST to 0.25°C.
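The two-step cloud filtering described earlier (threshold masking, then temporal compositing on the maximum temperature) can be sketched per pixel over an image stack. The threshold value and the tiny 2 × 2 images are illustrative assumptions, not operational settings.

```python
import numpy as np

def composite_sst(images, cloud_threshold=270.0):
    """Composite a time series of SST images (kelvin) by (1) masking
    pixels colder than a fixed threshold as cloud and (2) keeping the
    per-pixel maximum of the remaining values, which suppresses
    residual cloud contamination."""
    stack = np.array(images, dtype=float)
    stack[stack < cloud_threshold] = np.nan  # step 1: threshold mask
    return np.nanmax(stack, axis=0)          # step 2: maximum composite

# Two invented 2x2 SST scenes; 250 K and 260 K pixels represent cloud
imgs = [np.array([[285.0, 250.0], [288.0, 287.0]]),
        np.array([[286.0, 284.0], [260.0, 286.5]])]
print(composite_sst(imgs))
```

Because partially cloudy pixels read cold, taking the per-pixel maximum over the stack preferentially keeps the clear-sky observations.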


ACTIVE MICROWAVE

One of the greatest benefits of the short-lived SEASAT satellite was the proof that both active and passive microwave sensors could measure quantities of real interest to oceanographers. The brief 90 days of data clearly demonstrated how the all-weather microwave instruments could observe the earth's surface. The three most important instruments were the radar altimeter, the synthetic aperture radar (SAR), and the scatterometer (SCAT). The first measures the height of the sea surface above a reference level, while the scatterometer measures the wind stress over the ocean. The SAR images the ocean's surface but also has very important applications in polar regions (mapping sea ice) and over land, where its signal is related to the vegetation.

RADAR ALTIMETERS

After SEASAT there were no altimeters in space until 1985, when the US Navy launched GEOSAT, designed to map the earth's gravity field for naval operations. After it completed this mission, the Navy was persuaded to place the satellite in the SEASAT orbit to collect data useful for oceanographic studies. Two years of very useful data were collected and formed the basis for a number of studies. Subsequently, a joint altimeter mission between France and the US National Aeronautics and Space Administration (NASA), called TOPEX/Poseidon (TP), was launched in 1992 and became the most successful altimeter ever. Also in operation during this period is the European Remote Sensing (ERS) satellite altimeter, of which there are now two (ERS-1, ERS-2). It is common to merge the TP data, with its 10-day repeat cycle, with the ERS data and their longer repeat cycles. The altimeter also accurately measures wind speed and significant wave height from the radar backscatter.

SCATTEROMETER WINDS

Another capability demonstrated by SEASAT was the mapping of wind stress over the ocean with the scatterometer. Since it is very difficult to get information on winds over the ocean, this measurement capability is extremely important. Moreover, scatterometer winds are all-weather retrievals, making it possible to map ocean winds regardless of weather. After SEASAT, the first scatterometers were again those on ERS-1 and ERS-2. More recently, a NASA scatterometer (NSCAT) was launched on the Japanese ADEOS platform. There are also plans to launch future scatterometers on non-US spacecraft.

SYNTHETIC APERTURE RADAR (SAR)

The high-spatial-resolution images produced by the SEASAT SAR were very tantalizing to oceanographers, who hoped to use SAR for a variety of applications. The data gap caused by the loss of SEASAT delayed many of those expectations. There continues to be no US SAR in operation, and these data are now available from the European satellites ERS-1 and ERS-2 as well as from the RADARSAT satellite launched and operated by Canada. Early in the SEASAT period it was demonstrated that SAR imagery could be used to image directional wave spectra and to sense internal waves. SAR images also contain expressions of oceanographic fronts (thermal or saline), but the primary application has remained the mapping of sea ice in all weather conditions. Microwaves are not blocked by clouds, making it possible to sense the polar ice surface regardless of weather, and the high spatial resolution of SAR makes it possible to resolve details not possible with optical systems.

SUMMARY

Oceanography requires sampling over large geographic regions, which is time-consuming and expensive for in situ measurement platforms. Satellite remote sensing offers a cost-effective method of sampling these large oceanic regions. The only difficulty is in establishing exactly what the satellites are sensing and how accurately the quantities are being sensed. We continue to develop better instruments, with better signal-to-noise ratios, that can better resolve the parameters of interest. It is certain that satellite data will continue to be important for future oceanographic studies.

BIBLIOGRAPHY

1. H. R. Gordon, Removal of atmospheric effects from satellite imagery of the oceans, Appl. Opt., 17: 1631–1636.
2. H. R. Gordon, A preliminary assessment of the Nimbus-7 CZCS atmospheric correction algorithm in a horizontally inhomogeneous atmosphere, in J. F. R. Gower (ed.), Oceanography from Space, New York: Plenum, 1981, pp. 257–265.
3. W. J. Emery, C. W. Fowler, and J. Maslanik, Arctic sea ice concentrations from special sensor microwave imager and advanced very high resolution radiometer satellite data, J. Geophys. Res., 99: 18329–18342, 1994.
4. W. J. Emery, C. W. Fowler, and J. Maslanik, Satellite remote sensing of ice motion, in Oceanographic Applications of Remote Sensing, Boca Raton, FL: CRC Press, 1994, pp. 367–379.
5. R. N. Ninnis, W. J. Emery, and M. J. Collins, Automated extraction of sea ice motion from AVHRR imagery, J. Geophys. Res., 91: 10725–10734, 1986.
6. E. P. McClain, W. G. Pichel, and C. C. Walton, Comparative performance of AVHRR-based multichannel sea surface temperatures, J. Geophys. Res., 90: 11587–11601, 1985.
7. C. C. Walton, Nonlinear multichannel algorithms for estimating sea surface temperature with AVHRR satellite data, J. Appl. Meteor., 27: 115–124, 1988.
8. W. J. Emery et al., An objective procedure to compute advection from sequential infrared satellite images, J. Geophys. Res., 91 (color issue): 12865–12879, 1986.
9. W. J. Emery, C. W. Fowler, and C. A. Clayson, Satellite image derived Gulf Stream currents, J. Atmos. Oceanic Technol., 9: 285–304, 1992.
10. F. J. Wentz, Measurement of oceanic wind vector using satellite microwave radiometers, IEEE Trans. Geosci. Remote Sens., 30: 960–972, 1992.
11. P. Schluessel and W. J. Emery, Atmospheric water vapor over oceans from SSM/I measurements, Int. J. Remote Sens., 11: 753–766, 1989.

WILLIAM J. EMERY
University of Colorado

file:///N|/000000/0WILEY%20ENCYCLOPEDIA%20OF%20ELECTRICAL%2...S%20ENGINEERING/25. Geoscience and Remote Sensing/W3612.htm

}{{}}

●

HOME ●

ABOUT US ●

CONTACT US ●

HELP

Home / Engineering / Electrical and Electronics Engineering

Wiley Encyclopedia of Electrical and Electronics Engineering Remote Sensing by Radar Standard Article Jakob J. van Zyl1 and Yunjin Kim1 1California Institute of Technology, Pasadena, CA Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3612 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (704K)


The sections in this article are: Radar Principles, Real Aperture Radar, Synthetic Aperture Radar, Advanced SAR Techniques, and Nonimaging Radars.


REMOTE SENSING BY RADAR

Radar remote sensing instruments acquire data useful for geophysical investigations by measuring electromagnetic interactions with natural objects. Examples of radar remote sensing instruments include synthetic aperture radars (SARs), scatterometers, altimeters, radar sounders, and meteorological radars such as cloud and rain radars. The main advantage of radar instruments is their ability to penetrate clouds, rain, tree canopies, and even dry soil surfaces, depending on the operating frequency. In addition, since a remote sensing radar is an active instrument, it can operate day and night by providing its own illumination. Imaging remote sensing radars such as SAR produce high-resolution (from submeter to a few tens of meters) images of surfaces. Geophysical information can be derived from these high-resolution images by using proper postprocessing techniques. Scatterometers measure the backscattering cross section accurately in order to characterize surface properties such as roughness. Altimeters are used to obtain accurate surface height maps by measuring the round-trip time delay from a radar sensor to the surface. Radar sounders can image underground material variations by penetrating deeply into the ground. Unlike surveillance radars, remote sensing radars require accurate calibration in order for the data to be useful for scientific applications. In this article, we start with the basic principles of remote sensing radars. Then, we discuss the details of imaging radars and their applications. In order to complete the remote sensing radar discussion, we briefly examine nonimaging radars such as scatterometers, altimeters, radar sounders, and meteorological radars. For more information on these types of radars, the interested reader is referred to other articles in this encyclopedia. We also provide extensive references for each radar for readers who need an in-depth description of a particular radar.
RADAR PRINCIPLES

We start our discussion with the principles necessary to understand the radar remote sensing instruments described in the later part of this article. For more detailed discussions, readers are referred to Refs. 1–3.


Figure 1. The basic components of a radar system: in the flight hardware, a waveform generator, frequency up-conversion, power amplifier, and transmit antenna form the transmit chain, while a receive antenna, low-noise amplifier, frequency down-conversion, digitizer, and data storage form the receive chain; a ground data processing system (data storage and computer) produces the radar image. A pulse of energy is transmitted from the radar system antenna, and after a time delay an echo is received and recorded. The recorded radar echoes are later processed into images. The flight electronics are carried on the radar platform, either an aircraft or a spacecraft. Image processing is usually done in a ground facility.

Radar Operation A radar transmits an electromagnetic signal and receives and records the echo reflected from the illuminated terrain. Hence, a radar is an active remote sensing instrument since it provides its own illumination. The basic radar operation is illustrated by Fig. 1. A desired signal waveform, commonly a modulated pulse, is generated by a waveform generator. After proper frequency upconversion and high-power amplification, the radar signal is transmitted from an antenna. The reflected echo is received by the antenna, and it is amplified and down-converted to video frequencies for digitization. The digitized data are either stored in a data recorder for later ground data processing or processed by an on-board data processor. Since remote sensing radars usually image large areas, they are commonly operated from either an airborne or a spaceborne platform.
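The round-trip timing at the heart of this operation can be sketched numerically. The following Python snippet is illustrative only and is not part of the original article; the 800 km range is an assumed example value.

```python
# Minimal sketch of pulse-echo timing: a radar infers target range from
# the round-trip delay of its transmitted pulse. Illustrative values only.

C = 3.0e8  # speed of light (m/s)

def round_trip_delay(target_range_m: float) -> float:
    """Delay between pulse transmission and echo reception (two-way path)."""
    return 2.0 * target_range_m / C

def range_from_delay(delay_s: float) -> float:
    """Invert the measured delay back to range; the factor 2 accounts for
    the two-way propagation."""
    return C * delay_s / 2.0

r = 800e3  # assumed example: a spaceborne radar 800 km from the surface
dt = round_trip_delay(r)
print(f"delay = {dt * 1e3:.3f} ms, recovered range = {range_from_delay(dt) / 1e3:.0f} km")
```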

Basic Principles of Radar Imaging

Imaging radars generate surface images very similar to visible and infrared images. However, the principle behind the image generation is fundamentally different in the two cases. Visible and infrared sensors use a lens or mirror system to project the radiation from the scene onto a two-dimensional array of detectors, which could be an electronic array or a film using chemical processes. The two-dimensionality can also be achieved by using scanning systems. This imaging approach conserves the angular relationships between two targets and their images, as shown in Fig. 2.

Figure 2. Optical imaging systems preserve the angular relationship between objects in the image.

Imaging radars instead use the time delay between the echoes that are backscattered from different surface elements to separate them in the range (cross-track) dimension, and they use the angular size (in the case of the real-aperture radar) or the Doppler history (in the case of the synthetic-aperture radar) to separate surface pixels in the azimuth (along-track) dimension.

The imaging radar sensor uses an antenna which illuminates the surface to one side of the flight track. Usually, the antenna has a fan beam which illuminates a highly elongated elliptical-shaped area on the surface, as shown in Fig. 3. The illuminated area across track defines the image swath. Within the illumination beam, the radar sensor transmits a very short effective pulse of electromagnetic energy. Echoes from surface points farther away along the cross-track coordinate are received at proportionally later times (Fig. 3). Thus, by dividing the receive time into increments of equal time bins, the surface can be subdivided into a series of range bins. The width in the along-track direction of each range bin is equal to the antenna footprint along the track, x_a. As the platform moves, the sets of range bins are covered sequentially, thus allowing strip mapping of the surface line by line. This is comparable to strip mapping with a pushbroom imaging system using a line array in the visible and infrared part of the electromagnetic spectrum. The brightness associated with each image pixel in the radar image is proportional to the echo power contained within the corresponding time bin. As we will see later, the different types of imaging radars really differ in the way in which the azimuth resolution is achieved.

Figure 3. Radar imaging geometry and definition of terms.

The look angle is defined as the angle between the vertical direction and the radar beam at the radar platform, while the incidence angle is defined as the angle between the vertical direction and the illuminating radar beam at the surface. When surface curvature effects are neglected and the surface is flat, the look angle is equal to the incidence angle at the surface. In the case of spaceborne systems, surface curvature must be taken into account, which leads to an incidence angle that is always larger than the look angle (3) for flat surfaces. If topography is present (i.e., the surface is not flat), the local incidence angle may vary from radar image pixel to pixel.

Resolution

The resolution is defined as the surface separation between the two closest features that can still be resolved in the final image. First, consider two point targets that are separated in the range direction by x_r. The corresponding echoes will be separated by a time difference Δt equal to

Δt = 2 x_r sin θ / c   (1)

where c is the speed of light, the factor 2 is included to account for the signal round-trip propagation, and the angle θ in Eq. (1) is the incidence angle. The two features can be discriminated if the leading edge of the pulse returned from the second object is received later than the trailing edge of the pulse received from the first feature. Therefore, the smallest discriminable time difference in the radar receiver is equal to the effective pulse length τ. Thus,

2 x_r sin θ / c = τ  ⇒  x_r = c τ / (2 sin θ)   (2)

In other words, the range resolution is equal to half the footprint of the radar pulse on the surface. Sometimes the effective pulse length is described in terms of the system bandwidth B. To a good approximation, we have

τ = 1/B   (3)

The sin θ term in the denominator of Eq. (2) means that the ground range resolution of an imaging radar is a strong function of the look angle at which the radar is operated. To illustrate, a signal with a bandwidth B = 20 MHz (i.e., effective pulse length τ = 50 ns) provides a range resolution of 22 m for θ = 20°, while a signal bandwidth B = 50 MHz (τ = 20 ns) provides a range resolution of 4.3 m at θ = 45°.

In the azimuth direction, without further data processing, the resolution x_a is equal to the beam footprint on the surface, which is defined by the azimuth beamwidth θ_a of the radar antenna:

θ_a = λ/L   (4)

from which it follows that

x_a = h θ_a / cos θ = λ h / (L cos θ)   (5)

where L is the azimuth antenna length and h is the altitude of the radar above the surface being imaged. To illustrate, for h = 800 km, λ = 23 cm, L = 12 m, and θ = 20°, we obtain x_a = 16 km. Even if λ is as short as 2 cm and h is as low as 200 km, x_a will still be equal to about 360 m, which is considered a relatively low resolution, even for remote sensing. This has led to very limited use of the real-aperture technique for surface imaging, especially from space. Equation (5) is also directly applicable to optical imagers. However, because of the small value of λ (about 1 μm), resolutions of a few meters can be achieved from orbital altitudes with an aperture only a few tens of centimeters in size.

When the radar antenna travels along the line of flight, two point targets at different angles from the flight track have different Doppler frequencies. Using this Doppler frequency spread, one can obtain a higher resolution in the along-track direction. As shown in the synthetic aperture radar section, the along-track resolution can be as small as half the antenna length in the along-track direction. This method is often called Doppler beam sharpening.
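As a numerical cross-check of Eqs. (2) through (5), the short Python sketch below (illustrative, not part of the original article) reproduces the example values quoted above.

```python
# Evaluate the range resolution of Eq. (2) with tau = 1/B from Eq. (3),
# and the real-aperture azimuth resolution of Eq. (5).
import math

C = 3.0e8  # speed of light (m/s)

def range_resolution(bandwidth_hz: float, incidence_deg: float) -> float:
    """x_r = c*tau / (2 sin(theta)), with tau = 1/B."""
    tau = 1.0 / bandwidth_hz
    return C * tau / (2.0 * math.sin(math.radians(incidence_deg)))

def real_aperture_azimuth_resolution(wavelength_m: float, altitude_m: float,
                                     antenna_len_m: float, look_deg: float) -> float:
    """x_a = lambda*h / (L cos(theta))."""
    return wavelength_m * altitude_m / (antenna_len_m * math.cos(math.radians(look_deg)))

print(round(range_resolution(20e6, 20.0), 1))   # ~21.9 m (article: 22 m)
print(round(range_resolution(50e6, 45.0), 2))   # ~4.24 m (article: 4.3 m)
print(round(real_aperture_azimuth_resolution(0.23, 800e3, 12.0, 20.0)))  # ~16 km
```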

Radar Equation

One of the key factors that determine the quality of the radar imagery is the corresponding signal-to-noise ratio (SNR). This is the equivalent of the brightness of a scene being photographed with a camera versus the sensitivity of the film or detector. Here, we consider the effect of thermal noise on the sensitivity of radar imaging systems.

Let P_t be the sensor-generated peak power transmitted out of the antenna. One function of the antenna is to focus the radiated energy into a small solid angle directed toward the area being imaged. This focusing effect is described by the antenna gain G, which is equal to the ratio of the total solid angle over the solid angle formed by the antenna beam:

G = 4π/(θ_r θ_a) = 4πLW/λ² = 4πA/λ²   (6)

where L is the antenna length in the flight-track direction, W is the antenna length in the cross-track direction, and A is the antenna area. The radiated wave propagates spherically away from the antenna toward the surface. Thus the power density P_i per unit area incident on the illuminated surface is

P_i = P_t G/(4πR²)   (7)

The backscattered power P_s from an illuminated surface area s is given by

P_s = P_i s σ₀   (8)

where σ₀ is the surface normalized backscattering cross section, which represents the efficiency of the surface in re-emitting back toward the sensor some of the energy incident on it. It is similar to the surface albedo at visible wavelengths. The backscattered energy propagates spherically back toward the sensor. The power density P_c at the antenna is then

P_c = P_s/(4πR²)   (9)

and the total received power is equal to the power intercepted by the antenna:

P_r = P_c A   (10)

or

P_r = [P_t G_t/(4πR²)] [λ² G_r/(4πR)²] s σ₀   (11)

In Eq. (11) we explicitly show that the transmit and receive antennas may have different gains. This is important for the more advanced SAR techniques like polarimetry, where antennas with different polarizations may be used during transmission and reception.

In addition to the target echo, the received signal also contains noise, which results from the fact that all objects at temperatures higher than absolute zero emit radiation across the whole electromagnetic spectrum. The noise component that is within the spectral bandwidth B of the sensor is passed through with the signal. The thermal noise power is given by

P_N = kTB   (12)

where k is Boltzmann's constant (k = 1.38 × 10⁻²³ J/K) and T is the total equivalent noise temperature. The resulting SNR is then

SNR = P_r/P_N   (13)

One common way of characterizing an imaging radar sensor is to determine the surface backscatter cross section σ_N that gives an SNR = 1. This is called the noise-equivalent backscatter cross section. It defines the weakest surface return that can be detected, and therefore the range of surface units that can be imaged.

Backscattering Cross Section and Calibration Devices

The normalized backscattering cross section represents the reflectivity of an illuminated area in the backscattering direction. A higher backscattering cross section means that the area reflects the incident radar signal more strongly. It is mathematically defined as

σ₀ = lim_{R, A_i → ∞} (4πR²/A_i) (E_s E_s*)/(E_i E_i*)   (14)

where A_i is the illuminated area and E_s and E_i are the scattered and incident electric fields, respectively.

In order to calibrate the radar data, active and/or passive calibration devices are commonly used. By far the most commonly used passive calibration device is the trihedral corner reflector, which consists of three triangular panels bolted together to form right angles with respect to each other. The maximum radar cross section (RCS) of a trihedral corner reflector is given by

RCS = 4πa⁴/(3λ²)   (15)

where a is the long-side triangle length of the trihedral corner reflector. This reflector has about a 40° half-power beamwidth, which makes the corner reflector response relatively insensitive to position errors. In addition, these devices are easily deployed in the field; and since they require no power to operate, they can be used unattended in remote locations under most weather conditions.

Signal Modulation

A pulsed radar determines the range by measuring the round-trip time of a transmitted pulse signal. In designing the signal pattern for a radar sensor, there is usually a strong requirement to have as much energy as possible in each pulse in order to enhance the SNR. This can be done by increasing the peak power or by using a longer pulse. However, particularly in the case of spaceborne sensors, the peak power is usually strongly limited by the available power devices. On the other hand, an increased pulse length (i.e., smaller bandwidth) leads to a worse range resolution [see Eq. (2)]. This dilemma is usually resolved by using modulated pulses, which have the property of a wide bandwidth even when the pulse is very long. One such modulation scheme is linear frequency modulation, or chirp. In a chirp, the signal frequency within the pulse is changed linearly as a function of time. If the frequency is changed linearly from f₀ to f₀ + Δf, the effective bandwidth is equal to

B = |(f₀ + Δf) − f₀| = |Δf|   (16)

which is independent of the pulse length. Thus a pulse with long duration (i.e., high energy) and wide bandwidth (i.e., high range resolution) can be constructed. The instantaneous frequency for such a signal is given by

f(t) = f₀ + (B/τ′) t   for −τ′/2 ≤ t ≤ τ′/2   (17)

and the corresponding signal amplitude is

A(t) ∝ cos[2π ∫ f(t) dt] = cos[2π(f₀ t + B t²/(2τ′))]   (18)

Note that the instantaneous frequency is the derivative of the instantaneous phase. A pulse signal such as Eq. (18) has a physical pulse length τ′ and a bandwidth B. The product τ′B is known as the time-bandwidth product of the radar system. In typical radar systems, time-bandwidth products of several hundred are used.

At first glance it may seem that a pulse of the form of Eq. (18) cannot be used to separate targets that are closer than the projected physical length of the pulse. It is indeed true that the echoes from two neighboring targets which are separated in the range direction by much less than the physical length of the signal pulse will overlap in time. If the modulated pulse, and therefore the echoes, had a constant frequency, it would not be possible to resolve the two targets. However, if the frequency is modulated as described in Eq. (18), the echoes from the two targets will have different frequencies at any instant of time and can therefore be separated by frequency filtering. In actual radar systems, a matched filter is used to compress the returns from the different targets. It can be shown (3) that the effective pulse length of the compressed pulse is given by Eq. (3). Therefore, the achievable range resolution using a modulated pulse of the kind given by Eq. (18) is a function of the chirp bandwidth, not the physical pulse length. In typical spaceborne and airborne SAR systems, physical pulse lengths of several tens of microseconds are used, while bandwidths of several tens of megahertz are no longer uncommon for spaceborne systems, and several hundreds of megahertz are common in airborne systems.
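The pulse-compression behavior described above can be demonstrated numerically. The Python sketch below (illustrative only; all parameters are assumed, and NumPy is used for the correlation) builds a 10 μs baseband chirp with a 20 MHz bandwidth and shows that the matched-filter output is compressed to roughly 1/B, independent of the physical pulse length.

```python
# A long linear-FM (chirp) pulse is compressed by matched filtering to an
# effective length of about 1/B, regardless of its physical duration.
import numpy as np

fs = 200e6   # sample rate (Hz), assumed
tau = 10e-6  # physical pulse length: 10 us
B = 20e6     # chirp bandwidth: 20 MHz -> compressed length ~ 1/B = 50 ns
t = np.arange(int(tau * fs)) / fs - tau / 2

# Baseband chirp: instantaneous frequency sweeps from -B/2 to +B/2
chirp = np.exp(1j * np.pi * (B / tau) * t**2)

# Matched filtering is correlation with the pulse itself
compressed = np.abs(np.correlate(chirp, chirp, mode="full"))

# Measure the half-power (-3 dB) width of the compressed peak
peak = compressed.max()
above = np.where(compressed >= peak / np.sqrt(2))[0]
width_s = (above[-1] - above[0] + 1) / fs

print(f"time-bandwidth product : {tau * B:.0f}")
print(f"physical pulse length  : {tau * 1e6:.1f} us")
print(f"compressed -3 dB width : {width_s * 1e9:.0f} ns (about 1/B = {1e9 / B:.0f} ns)")
```

The compressed width comes out near 1/B = 50 ns, more than two orders of magnitude shorter than the 10 μs physical pulse, which is the point of the time-bandwidth product discussion above.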

REAL APERTURE RADAR

The real-aperture imaging radar sensor uses an antenna which illuminates the surface to one side of the flight track. Usually, the antenna has a fan beam which illuminates a highly elongated elliptical-shaped area on the surface, as shown in Fig. 3. As mentioned before, the illuminated area across track defines the image swath. For an antenna of width W operating at a wavelength λ, the beam angular width in the range plane is given by

θ_r ≈ λ/W   (19)

and the resulting surface footprint or swath S is given by

S ≈ h θ_r/cos²θ = λh/(W cos²θ)   (20)

where h is the sensor height above the surface, θ is the angle from the center of the illumination beam to the vertical (known as the look angle at the center of the swath), and θ_r is assumed to be very small. To illustrate, for λ = 27 cm, h = 800 km, θ = 20°, and W = 2.1 m, the resulting swath width is about 100 km.

As shown before, the main disadvantage of the real-aperture radar technique is the relatively poor azimuth resolution that can be achieved from space. From aircraft altitudes, however, reasonable azimuth resolutions can be achieved if higher frequencies (typically X band or higher) are used. For this reason, real-aperture radars are not commonly used anymore.

SYNTHETIC APERTURE RADAR

Synthetic aperture radar refers to a particular implementation of an imaging radar system that utilizes the movement of the radar platform and specialized signal processing to generate high-resolution images. Prior to the discovery of the synthetic aperture radar principle, imaging radars operated using the real-aperture principle and were known as side-looking airborne radars (SLARs). Carl Wiley of the Goodyear Aircraft Corp. is generally credited as the first person to describe the use of Doppler frequency analysis of signals from a moving coherent radar to improve along-track resolution. He noted that two targets at different along-track positions will be at different angles relative to the aircraft velocity vector, resulting in different Doppler frequencies. Therefore, targets can be separated in the along-track direction on the basis of their different Doppler frequencies. This technique was originally known as Doppler beam sharpening but later became known as synthetic aperture radar (SAR). The reader interested in a discussion of the history of SAR, both airborne and spaceborne, is referred to an excellent discussion in Chap. 1 of Ref. 3.

In this section we discuss the principles of radar imaging using synthetic-aperture techniques, the resulting image projections, distortions, tonal properties, and environmental effects on the images. We have attempted to give simple explanations of the different imaging radar concepts. For more detailed mathematical analysis the reader is referred to specialized texts such as Refs. 1–3.

Synthetic Aperture Radar Principle

As discussed in the previous section, a real-aperture radar cannot achieve high azimuth resolution from an orbital platform. In order to achieve high resolution from any altitude, the synthetic-aperture technique is used. This technique uses successive radar echoes acquired at neighboring locations along the flight line to synthesize an equivalent very long antenna, which provides a very narrow beamwidth and thus a high image resolution. In this section we explain SAR using two different approaches, namely, the synthetic array approach and the Doppler synthesis approach, which lead to the same results. We will also discuss uses of linear frequency

modulation (chirp) as well as some limitations and degradations that are inherent to the SAR technique. Our discussion closely follows that of Ref. 1. The range resolution and radar equation derived previously for a real-aperture radar are still valid here. The main difference between real and synthetic aperture radars is in the way in which the azimuth resolution is achieved.

Synthetic Array Approach. The synthetic array approach explains the SAR technique by comparing it to a long array of antennas and the resulting increase in resolution relative to the resolution achieved with one of the array elements. Consider a linear array of antennas consisting of N elements (Fig. 4). The contribution of the nth element to the total far-field electric field E in the direction β is proportional to

E_n ∝ a_n e^{iφ_n} e^{−ik d_n sin β}   (21)

where a_n and φ_n are the amplitude and phase of the signal radiated from the nth element, d_n is the distance of the nth element from the array center, and k = 2π/λ. The total electric field is given by

E(β) ∝ Σ_n a_n e^{iφ_n} e^{−ik d_n sin β}   (22)

If all the radiators are identical in amplitude and phase and are equally spaced with a separation d, then

E(β) ∝ a e^{iφ} Σ_n e^{−inkd sin β}   (23)

Figure 4. Geometry of a linear array of antenna elements.

This is the vector sum of N equal vectors separated by a phase Ψ = kd sin β. The resulting vector in Eq. (23) has the following properties:

• For β = 0, Ψ = 0 and all vectors add together, leading to a maximum for E.
• As β increases, the elementary vectors spread and lead to a decrease in the magnitude of E.
• For β such that NΨ = 2π, the vectors are spread all around a circle, leading to a sum equal to zero.

Thus, the radiation pattern has a null for

Nkd sin β = 2π  ⇒  β = sin⁻¹[2π/(Nkd)] = sin⁻¹(λ/D)   (24)

where D = Nd is the total physical length of the array. From the above it is seen that an array of total length D = Nd has a beamwidth equal to the one for a continuous antenna of physical size D. This is achieved by adding the signals from each element in the array coherently, that is, in amplitude and phase. The fine structure of the antenna pattern depends on the exact spacing of the array. Close spacing of the array elements is required to avoid grating effects.

In a conventional array, the signals from the different elements are combined with a network of waveguides or cables leading to a single transmitter and receiver. Another approach is to connect each element to its own transmitter and receiver; the signals are coherently recorded and added later using a separate processor. A third approach can be used if the scene is quasi-static: a single transmitter/receiver/antenna element is moved from one array position to the next. At each location a signal is transmitted and the echo is recorded coherently. The echoes are then added in a separate processor or computer. A stable oscillator is used as a reference to ensure coherency as the single element moves along the array line. This last configuration is used in a SAR, where a single antenna element serves to synthesize a large aperture.

Referring to Fig. 5, it is clear that if the antenna beamwidth is equal to θ_a = λ/L, the maximum possible synthetic aperture length that would allow us to observe a point is given by

L′ = R θ_a   (25)

Figure 5. The width of the antenna beam in the azimuth direction defines the length of the synthetic aperture.

This synthetic array will have a beamwidth θ_s equal to

θ_s = λ/(2L′) = λ/(2Rθ_a) = L/(2R)   (26)

The factor 2 is included to account for the fact that the 3 dB (half-power) beamwidth is narrower in a radar configuration where the antenna pattern is involved twice, once each at transmission and reception. The corresponding surface along-track resolution of the synthetic array is

x_a = R θ_s = L/2   (27)

This result shows that the azimuth (or along-track) surface resolution is equal to half the size of the physical antenna and is independent of the distance between the sensor and the surface. At first glance, this result seems most unusual: it shows that a smaller antenna gives better resolution. This can be explained in the following way. The smaller the physical antenna is, the larger its footprint. This allows a longer observation time for each point on the surface; that is, a longer array can be synthesized. This longer synthetic array allows a finer synthetic beam and surface resolution. Similarly, if the range between the sensor and the surface increases, the physical footprint increases, leading to a longer synthetic array and finer angular resolution, which counterbalances the increase in range.

As the synthetic array gets larger, it becomes necessary to compensate for the slight changes in geometry as a point is observed (Fig. 6). It should also be taken into account that the distance between the sensor and the target varies with position in the array. Thus, an additional phase shift needs to be added in the processor to the echo received at location x_i, equal to

φ_i = 2k(R₀ − R_i) = 4π(R₀ − R_i)/λ   (28)

where R₀ is the range at closest approach to the point being imaged. In order to focus at a different point, a different set of phase-shift corrections needs to be used. However, because this is done at a later time in the processor, optimum focusing can be achieved for each and every point in the scene. SAR imaging systems that fully apply these corrections are called focused.

Figure 6. For large synthetic arrays, one has to compensate during ground processing for the change in geometry between the antenna elements and the point being imaged.

In order to keep the processing simple, one can shorten the synthetic array length and use only a fraction of the maximum possible length. In addition, the same phase-shift correction can then be applied to all the echoes. This leads to constructive addition if the partial array length L′ is such that φ_i ≤ π/4, or

2k[√(R² + (L′/2)²) − R] ≤ π/4   (29)

For large ranges, this reduces to

L′ ≤ √(λR/2)   (30)

and the achievable azimuth resolution is

x_a ≥ √(2λR)   (31)

This is called an unfocused SAR configuration, where the azimuth resolution is somewhat degraded relative to the fully focused one but still better than that of a real-aperture radar. The advantage of the unfocused SAR is that the processing is fairly simple compared to that of a fully focused SAR. To illustrate, for a 12 m antenna, a wavelength of 3 cm, and a platform altitude of 800 km, the azimuth resolutions will be 6 m, about 220 m, and 2000 m for the cases of a fully focused SAR, an unfocused SAR, and a real-aperture radar, respectively.

Doppler Synthesis Approach. Another way to explain the synthetic-aperture technique is to examine the Doppler shifts of the radar signals. As the radar sensor moves relative to the target being illuminated, the backscattered echo is shifted in frequency by the Doppler effect. This Doppler frequency is equal to

f_D = 2 (v/c) f₀ cos Ψ = 2 (v/λ) sin θ_t   (32)

where f₀ is the transmitted signal frequency, Ψ is the angle between the velocity vector v = v v̂ and the sensor-target line, and θ_t = π/2 − Ψ. As the target moves through the beam, the angle θ_t varies from +θ_a/2 to −θ_a/2, where θ_a is the antenna azimuth beamwidth. Thus, a well-defined Doppler history is associated with every target.

Figure 7. Doppler history for two targets separated in the azimuth direction.

Figure 7 shows such a history

for two neighboring targets P and P′ located at the same range but at different azimuth positions. The Doppler shift varies from +f_D to −f_D, where

f_D = 2 (v/λ) sin(θ_a/2)   (33)

When θ_a ≪ 1, Eq. (33) can be written as

f_D = 2 (v/λ)(θ_a/2) = 2 (v/λ)(λ/2L) = v/L   (34)

The instantaneous Doppler shift is given by

f_D(t) = 2 (v/λ) θ_t ≈ 2 v² t cos θ/(λh)   (35)

where t = 0 corresponds to the time when the target is exactly at 90° to the flight track. Thus, the Doppler histories for the two points P and P′ are identical except for a time displacement equal to

Δt = PP′/v   (36)

It is this time displacement that allows the separation of the echoes from each of the targets. The along-track (azimuth) resolution x_a is equal to the smallest separation PP′ that leads to a time separation Δt that is measurable with the imaging sensor. It can be shown that this time separation is equal to the inverse of the total Doppler bandwidth B_D = 2f_D. In a qualitative way, it can be stated that a large B_D gives a longer Doppler history that can be better matched to a template, which allows a better determination of the zero-Doppler crossing time. Thus, the azimuth resolution is given by

x_a = (PP′)_min = v Δt_min = v/(2 f_D) = v (L/2v) = L/2   (37)

which is the same as the result derived using the synthetic array approach [see Eq. (27)].

As mentioned earlier, the imaging radar transmits a series of pulsed electromagnetic waves. Thus, the Doppler history from a point P is not measured continuously but is sampled on a repetitive basis. In order to get an accurate record of the Doppler history, the Nyquist sampling criterion requires sampling at least at twice the highest frequency in the Doppler shift. Thus, the pulse repetition frequency (PRF) must satisfy

PRF ≥ 2 f_D = 2v/L   (38)

In other terms, the above equation means that at least one sample (i.e., one pulse) should be taken every time the sensor moves by half an antenna length. The corresponding aspect in the synthetic array approach is that the array elements should be close enough to each other to give a reasonably filled total aperture and avoid significant grating effects. To illustrate, for a spaceborne imaging system moving at a speed of 7 km/s and using an antenna 10 m in length, the corresponding minimum PRF is 1.4 kHz.

Signal Fading and Speckle

A close examination of a synthetic-aperture radar image shows that the brightness variation is not smooth but has a granular texture which is called speckle (Fig. 8). Even for an imaged scene with a constant backscatter property, the image will have statistical variations of the brightness on a pixel-by-pixel basis, but a constant mean over many pixels. This effect is identical to what is observed when a scene is viewed optically under laser illumination. It is a result of the coherent nature (or very narrow spectral width) of the illuminating signal.

Figure 8. The granular texture shown in this image acquired by the NASA/JPL AIRSAR system is known as speckle. Speckle is a consequence of the coherent nature in which a synthetic aperture radar acquires images.

To explain this effect in a simple way, consider a scene which is completely black except for two identical bright targets separated by a distance d. The received signal V at the radar is given by

V = V₀ e^{−i2kr₁} + V₀ e^{−i2kr₂}   (39)

and assuming that d ≪ r₀ (for spaceborne radars, the pixel size is typically on the order of tens of meters, while the range is typically several hundred kilometers), we obtain

V = V₀ e^{−i2kr₀} (e^{−ikd sin θ} + e^{+ikd sin θ})  ⇒  |V| = 2|V₀ cos(kd sin θ)|   (40)

which shows that, depending on the exact location of the sensor, a significantly different signal value would be measured. If we now consider an image pixel consisting of a very large number of point targets, the resulting coherent superposition of all the patterns leads to a noise-like signal. Rigorous mathematical analysis shows that the resulting signal has well-defined statistical properties (1–3). The measured signal amplitude has a Rayleigh distribution, and the signal power has an exponential distribution (2). In order to narrow the width of these distributions (i.e., reduce the brightness fluctuations), successive signals or neighboring pixels can be averaged incoherently. This leads to a more accurate radiometric measurement (and a more pleasing image) at the expense of a degradation in image resolution. Another approach to reducing speckle is to combine images acquired at neighboring frequencies. In this case the exact interference patterns lead to independent signals with the same statistical properties, and incoherent averaging then results in a smoothing effect. In fact, this is the reason why a scene illuminated with white light does not show speckled image behavior. In most imaging SARs, the smoothing is done by averaging the brightness of neighboring pixels in azimuth, or range, or both. The number of pixels averaged is called the number of looks N. It can be shown (1) that the signal standard deviation S_N is related to the mean signal power P̄ by

S_N = P̄/√N

(41)
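Equation (41) can be checked with a short simulation. The sketch below is an illustration only (sample count and seed are arbitrary choices): over a uniform scene, single-look intensity is exponentially distributed, and incoherent averaging of N looks drives the normalized standard deviation toward 1/√N.

```python
import math
import random

# Minimal numerical sketch of Eq. (41): single-look SAR intensity over a
# uniform scene is exponentially distributed; incoherently averaging N
# looks reduces the normalized standard deviation to about 1/sqrt(N).
# Sample count and seed are arbitrary illustration choices.

random.seed(42)
pixels = [random.expovariate(1.0) for _ in range(160_000)]

def std_over_mean(values):
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return math.sqrt(var) / m

for n_looks in (1, 4, 16, 32):
    # Incoherently average groups of n_looks neighboring samples.
    looks = [sum(pixels[i:i + n_looks]) / n_looks
             for i in range(0, len(pixels), n_looks)]
    print(f"N={n_looks:2d}: std/mean = {std_over_mean(looks):.3f}  "
          f"(1/sqrt(N) = {1 / math.sqrt(n_looks):.3f})")
```

The measured ratios track 1/√N closely, illustrating why the radiometric improvement per added look diminishes as N grows.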

The larger the number of looks N, the better the quality of the image from the radiometric point of view; however, this degrades the spatial resolution of the image. It should be noted that for N larger than about 25, a large increase in N leads to only a small decrease in the signal fluctuation. This small improvement in radiometric resolution should be traded off against the accompanying large loss in spatial resolution. For example, if one were to average 10 resolution cells in a four-look image, the speckle noise would be reduced to about 0.5 dB, but the image resolution would be degraded by an order of magnitude. Whether this loss in resolution is worth the reduction in speckle noise depends on both the aim of the investigation and the kind of scene imaged. Figure 9 shows the effect of multilook averaging: the same image as Fig. 8, acquired by the NASA/JPL AIRSAR system, is displayed at 1, 4, 16, and 32 looks. This figure clearly illustrates the smoothing effect, as well as the decrease in resolution resulting from the multilook process. In one early survey of geologists by Ford (4), the results showed that even though the optimum number of looks depended on the scene type and resolution, the majority of respondents preferred two-look images. However, that survey dealt with images that had rather poor resolution to begin with, and one may well find that with today's higher-resolution systems, analysts will ask for a larger number of looks.

Ambiguities and Anomalies

Radar images can contain a number of anomalies that result from the way imaging radars generate the image. Some of these are similar to what is encountered in optical systems, such as blurring due to defocusing or scene motion; others, such as range ambiguities, are unique to radar systems. This section addresses the anomalies most commonly encountered in radar images. As mentioned earlier (see Fig. 3), a radar images a surface by recording the echoes line by line with successive pulses. The leading edge of each echo corresponds to the near edge of

Figure 9. The effects of speckle can be reduced by incoherently averaging pixels in a radar image, a process known as multilooking. Shown here is the same image processed as a single look (the basic radar image), 4 looks, 16 looks, and 32 looks. Note the reduction in granular texture as the number of looks increases; note also that the resolution of the image decreases as the number of looks increases. Some features, such as those in the largest dark patch, may be completely masked by the speckle noise.

the image scene, and the tail end of the echo corresponds to the far edge of the scene. The length of the echo (i.e., the swath width of the scene covered) is determined by the antenna beamwidth and the size of the data window. The exact timing of the echo reception depends on the range between the sensor and the surface being imaged. If the timing of the pulses or the extent of the echo is such that the leading edge of one echo overlaps the tail end of the previous one, then the far edge of the scene is folded over the near edge of the scene.


This is called range ambiguity. Referring to Fig. 10, the temporal extent of the echo is equal to

T_e ≈ (2R/c) θ_r tan θ = (2hλ sin θ)/(cW cos²θ)    (42)

This time extent should be shorter than the time separating two pulses (i.e., 1/PRF). Thus, we must have

PRF < (cW cos²θ)/(2hλ sin θ)    (43)

In addition, the sensor parameters, specifically the PRF, should be selected such that the echo falls completely within an interpulse period; that is, no echoes should be received during the time that a pulse is being transmitted. The above equation gives an upper limit for the PRF.

Figure 10. Temporal extent of radar echoes. If the timing of the pulses or the temporal extent of the echoes is such that the leading edge of one echo overlaps the trailing edge of the previous one, the far edge of the scene will be folded over the near edge, a phenomenon known as range ambiguity.

Another kind of ambiguity present in SAR imagery also results from the fact that the target's return in the azimuth direction is sampled at the PRF. This means that the azimuth spectrum of the target return repeats itself in the frequency domain at multiples of the PRF. In general, the azimuth spectrum is not band-limited; instead, the spectrum is weighted by the antenna pattern in the azimuth direction. This means that parts of the azimuth spectrum may be aliased, and high-frequency data will actually appear in the low-frequency part of the spectrum. In actual images, these azimuth ambiguities appear as ghost images of a target repeated at some distance in the azimuth direction, as shown in Fig. 11. To reduce the azimuth ambiguities, the PRF of a SAR has to exceed the lower limit given by Eq. (33).

Figure 11. Azimuth ambiguities result when the radar pulse repetition frequency is too low to sample the azimuth spectrum of the data adequately. In that case, the edges of the azimuth spectrum fold over themselves, creating ghost images as shown in this figure. The top image was adequately sampled and processed, while the bottom one clearly shows the ghost images due to the azimuth ambiguities. The data were acquired with the NASA/JPL AIRSAR system, and a portion of Death Valley in California is shown.

In order to reduce both range and azimuth ambiguities, the PRF must therefore satisfy the conditions expressed by both Eqs. (33) and (43); that is,

(cW cos²θ)/(2hλ sin θ) > 2v/L    (44)

from which we derive a lower limit for the antenna size:

LW > (4vhλ sin θ)/(c cos²θ)    (45)
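The PRF window defined by Eqs. (38) and (43), and the antenna-size bound of Eq. (45), can be evaluated numerically. The sketch below uses illustrative assumed system values (they do not describe any particular radar):

```python
import math

# Sketch: evaluate the PRF window implied by Eqs. (38) and (43) and the
# minimum antenna area of Eq. (45). All system values below are
# illustrative assumptions, not parameters of any particular radar.

c = 3.0e8                     # speed of light, m/s
v = 7.0e3                     # platform speed, m/s
L, W = 10.0, 2.0              # antenna length (azimuth) and width (elevation), m
h = 800.0e3                   # platform altitude, m
lam = 0.24                    # radar wavelength (L-band), m
theta = math.radians(35.0)    # incidence angle

prf_min = 2.0 * v / L                                                        # Eq. (38)
prf_max = c * W * math.cos(theta) ** 2 / (2.0 * h * lam * math.sin(theta))   # Eq. (43)
min_area = 4.0 * v * h * lam * math.sin(theta) / (c * math.cos(theta) ** 2)  # Eq. (45)

print(f"PRF lower bound (azimuth): {prf_min:.0f} Hz")
print(f"PRF upper bound (range):   {prf_max:.0f} Hz")
print(f"Minimum antenna area LW:   {min_area:.1f} m^2 (chosen LW = {L * W:.1f} m^2)")
```

For these assumed values the admissible PRF window runs from 1.4 kHz up to roughly 1.8 kHz, and the chosen 20 m² antenna satisfies the Eq. (45) bound of about 15 m²; shrinking the antenna or raising the altitude quickly closes the window.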

Another type of artifact in radar images arises when a very bright surface target is surrounded by a dark area. As the image is being formed, some spillover from the bright target, called sidelobes, although weak, can exceed the background and become visible, as shown in Fig. 12. This type of artifact is not unique to radar systems; sidelobes are also common in optical systems, where they are known as the sidelobes of the point spread function. The difference is that in optical systems the sidelobe characteristics are determined by the characteristics of the imaging optics (i.e., the hardware), whereas in the case of a SAR they are determined by the characteristics of the processing filters. In the radar case, the sidelobes may therefore be reduced by suitable weighting of the signal spectra during matched-filter compression; the equivalent procedure in optical systems is apodization of the telescope aperture.

Figure 12. Sidelobes from the bright target, indicated by arrows in this image, mask out the return from the dark area surrounding the target. The characteristics of the sidelobes are determined mainly by the characteristics of the radar processing filters.

The vast majority of these artifacts and ambiguities can be avoided with proper selection of the sensor and processor parameters. However, the interpreter should be aware of their occurrence, because in some situations they might be difficult, if not impossible, to suppress.

Geometric Effects and Projections

The time-delay/Doppler-history basis of SAR image generation leads to an image projection different from that of optical sensors. Even though at first look radar images seem very similar to optical images, close examination quickly shows that geometric shapes and patterns are projected in a different fashion by the two sensors. This difference is particularly acute in rugged terrain. If the topography is known, a radar image can be reprojected into a format identical to that of an optical image, thus allowing image pixel registration. In extremely rugged terrain, however, the nature of the radar image projection leads to distortions that sometimes cannot be corrected.

In the radar image, two neighboring pixels in the range dimension correspond to two areas in the scene with slightly different ranges to the sensor. This has the effect of projecting the scene in a cylindrical geometry onto the image plane, which leads to distortions as shown in Fig. 13. Areas that slope toward the sensor appear shorter in the image, while areas that slope away from the sensor appear longer than horizontal areas; this effect is called foreshortening. In the extreme case where the slope is larger than the incidence angle, layover occurs: a hill looks as if it is projected over the region in front of it. Layover cannot be corrected and can only be avoided by having an incidence angle at the surface larger than any expected surface slope. When the slope facing away from the radar is steep enough that the radar waves do not illuminate it, shadowing occurs and the area on that slope is not imaged. Note that in radar images, shadowing is always directed away from the sensor flight line and does not depend on the time of data acquisition or the sun's position in the sky. Shadowing can be beneficial for highlighting surface morphologic patterns. Figure 14 contains some examples of foreshortening and shadowing.

Figure 13. Radar images are cylindrical projections of the scene onto the image plane, leading to characteristic distortions: b′ appears closer than a′ in the radar image (layover); d′ and e′ are closer together in the radar image (foreshortening); and the slope from h to i is not illuminated by the radar (radar shadow). Refer to the text for a more detailed discussion.

ADVANCED SAR TECHNIQUES

The field of synthetic aperture radar has changed dramatically over the years, especially over the past decade, with the operational introduction of advanced radar techniques such as polarimetry and interferometry. While both of these techniques had been demonstrated much earlier, radar polarimetry only became an operational research tool with the introduction of the NASA/JPL AIRSAR system in the early 1980s, and it reached a climax with the two SIR-C/X-SAR flights on board

Figure 14. This NASA/JPL AIRSAR image shows examples of foreshortening and shadowing. Note that since the radar provides its own illumination, radar shadowing is a function of the radar look direction and does not depend on the sun angle. This image was illuminated from the left.


the space shuttle Endeavour in April and October 1994. Radar interferometry received a tremendous boost when the airborne TOPSAR system was introduced in 1991 by NASA/JPL, and it progressed even further when data from the European Space Agency ERS-1 radar satellite became routinely available in 1991.

SAR Polarimetry

Radar polarimetry is covered in detail in a different article in this encyclopedia; we therefore only summarize the technique here for completeness, and the reader is referred to that article for the mathematical details. Electromagnetic wave propagation is a vector phenomenon; that is, all electromagnetic waves can be expressed as complex vectors. Plane electromagnetic waves can be represented by two-dimensional complex vectors, as can spherical waves when the observation point is sufficiently far removed from the source. Therefore, if one observes a wave transmitted by a radar antenna at a large distance from the antenna (in the far field of the antenna), the radiated electromagnetic wave can be adequately described by a two-dimensional complex vector. If this radiated wave is scattered by an object, and one observes the wave in the far field of the scatterer, the scattered wave can again be adequately described by a two-dimensional complex vector. In this abstract sense, one can consider the scatterer as a mathematical operator that takes one two-dimensional complex vector (the wave impinging upon the object) and changes it into another two-dimensional vector (the scattered wave). Mathematically, therefore, a scatterer can be characterized by a complex 2 × 2 scattering matrix, which is a function of the radar frequency and the viewing geometry. Once the complete scattering matrix is known and calibrated, one can synthesize the radar cross section for any arbitrary combination of transmit and receive polarizations.
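The idea of polarization synthesis can be illustrated with a minimal sketch: the received voltage for transmit Jones vector pt and receive vector pr is V = pr^T S pt, and the synthesized cross section is proportional to |V|². The odd- and even-bounce matrices below are idealized textbook-style examples assumed for illustration, not calibrated measurements.

```python
# Sketch of polarization synthesis from a 2x2 scattering matrix S:
# the synthesized power for transmit polarization pt and receive
# polarization pr is |pr^T S pt|^2. The matrices below are idealized
# examples (assumed), not measured data.

def synthesize(S, pt, pr):
    """Return |pr^T S pt|^2 for a 2x2 matrix S and 2-element Jones vectors."""
    v = sum(pr[i] * sum(S[i][j] * pt[j] for j in range(2)) for i in range(2))
    return abs(v) ** 2

H, V = (1.0, 0.0), (0.0, 1.0)
D45 = (2 ** -0.5, 2 ** -0.5)             # 45-degree linear polarization
odd_bounce = [[1.0, 0.0], [0.0, 1.0]]    # sphere/trihedral-like (odd bounce)
even_bounce = [[1.0, 0.0], [0.0, -1.0]]  # dihedral-like (even bounce)

for name, S in (("odd bounce", odd_bounce), ("even bounce", even_bounce)):
    print(f"{name}: HH={synthesize(S, H, H):.2f}  VV={synthesize(S, V, V):.2f}  "
          f"HV={synthesize(S, H, V):.2f}  45/45={synthesize(S, D45, D45):.2f}")
```

The two mechanisms have identical HH and VV power but opposite copolarized phase, so the synthesized 45°/45° response separates them: the odd-bounce target returns full power while the even-bounce target returns none.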
Figure 15 shows a number of such synthesized images for the San Francisco Bay area in California. The data were acquired with the NASA/JPL AIRSAR system. The typical implementation of a radar polarimeter involves transmitting a wave of one polarization and receiving echoes in two orthogonal polarizations simultaneously. This is followed by transmitting a wave with a second polarization, and again receiving echoes with both polarizations simultaneously. In this way, all four elements of the scattering matrix are measured. This implementation means that the transmitter is in slightly different positions when measuring the two columns of the scattering matrix, but this distance is typically small compared to a synthetic aperture and therefore does not lead to a significant decorrelation of the signals. The NASA/JPL AIRSAR system pioneered this implementation for SAR systems (5), and the same implementation was used in the SIR-C part of the SIR-C/X-SAR radars (6). The past few years have seen relatively little advance in the development of hardware for polarimetric SAR systems; newer implementations are simply using more advanced technology to implement the same basic hardware configurations as the initial systems. Significant advances were made, however, in the field of analysis and application of polarimetric SAR data.
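The two-pulse measurement cycle described above can be sketched as follows. The "scene" matrix is a hypothetical example, and the model deliberately ignores noise, platform motion between pulses, and system distortion:

```python
# Sketch of the standard polarimeter timing: transmit H and record both
# receive channels (first column of S), then transmit V and record both
# (second column). The scene matrix is a hypothetical example; noise,
# platform motion between pulses, and channel distortions are ignored.

def measure(scene, transmit):
    """Return the (H, V) receive-channel voltages for one transmitted pulse."""
    rx_h = scene[0][0] * transmit[0] + scene[0][1] * transmit[1]
    rx_v = scene[1][0] * transmit[0] + scene[1][1] * transmit[1]
    return rx_h, rx_v

scene = [[1.0, 0.05], [0.05, -0.9]]   # assumed ground-truth scattering matrix

s_hh, s_vh = measure(scene, (1.0, 0.0))   # pulse 1: transmit H, receive H and V
s_hv, s_vv = measure(scene, (0.0, 1.0))   # pulse 2: transmit V, receive H and V

S = [[s_hh, s_hv], [s_vh, s_vv]]
print(S)   # reassembles the scene's scattering matrix
```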


Polarimetric SAR Calibration. Many of the advances made in analyzing polarimetric SAR data result directly from the greater availability of calibrated data. Unlike the case of single-channel radars, where only the radar cross section needs to be calibrated, polarimetric calibration usually involves four steps: cross-talk removal, phase calibration, channel-imbalance compensation, and absolute radiometric calibration (7). Cross-talk removal refers to correcting (mostly) the cross-polarized elements of the scattering matrix for the effects of system cross-talk, which couples part of the copolarized returns into the cross-polarized channel. Phase calibration refers to correcting the copolarized phase difference for uncompensated path-length differences in the transmit and receive chains, while channel-imbalance compensation refers to balancing the copolarized and cross-polarized returns for uncompensated gain differences in the two transmit and receive chains. Finally, absolute radiometric calibration involves using some kind of reference calibration source to determine the overall system gain, relating received power levels to normalized radar cross section. While most of the polarimetric calibration algorithms currently in use were published several years ago (7–11), several groups are still actively pursuing improved calibration techniques and algorithms. The earlier algorithms are reviewed in Refs. 12 and 13, while Ref. 14 provides a comprehensive review of SAR calibration in general. Some of these earlier algorithms are now routinely used to calibrate polarimetric SAR data operationally, as for example in the NASA/JPL AIRSAR and SIR-C processors (15).

Example Applications of Polarimetric SAR Data. The availability of calibrated polarimetric SAR data allowed research to move from the qualitative interpretation of SAR images to the quantitative analysis of the data.
This sparked significant progress in the classification of polarimetric SAR images, led to improved models of scattering by different types of terrain, and allowed the development of algorithms to invert polarimetric SAR data for geophysical parameters such as forest biomass, surface roughness, and soil moisture.

Classification of Earth Terrain. Many earth-science studies require information about the spatial distribution of land-cover types, as well as the change in land cover and land use with time. In addition, it is increasingly recognized that the inversion of SAR data for geophysical parameters involves an initial step of segmenting the image into different terrain classes, followed by inversion using the algorithm appropriate for each particular terrain class. Polarimetric SAR systems, capable of providing high-resolution images under all weather conditions, day or night, provide a valuable data source for classifying terrain into different land-cover types. Two main approaches are used to classify images into land-cover types: (1) maximum-likelihood classifiers based on Bayesian statistical analysis and (2) knowledge-based techniques designed to identify the dominant scattering mechanism. Some of the earlier studies in Bayesian classification focused on quantifying the increased accuracy gained from using all the polarimetric information. References 16 and 17 showed that the classification accuracy is significantly increased when the complete polarimetric information is used, compared to that achieved with single-channel SAR data. These earlier classifiers assumed equal a priori probabilities for all classes and modeled the SAR amplitudes as circular


Figure 15. Radar polarimetry allows one to synthesize images at any polarization combination. This set of images of San Francisco, California, was synthesized from a single set of measurements acquired by the NASA/JPL AIRSAR system. Note the differential change in brightness between the city (the bright area) and Golden Gate Park, the dark rectangular area in the middle of the images. This differential change is due to a difference in scattering mechanism. The city area is dominated by a double reflection from the streets to the buildings and back to the radar, while the park area exhibits much more diffuse scattering.

Gaussian distributions, which means that the textural variations in radar backscatter are not considered to be significant enough to be included in the classification scheme. Reference 18 extended the Bayesian classification to allow different a priori probabilities for different classes. Their method first classifies the image into classes assuming equal a priori probabilities, and then it iteratively changes the a priori probabilities for subsequent classifications based on the local results of previous classification runs. Significant improvement in classification accuracy is obtained with only a few iterations. More accurate results are obtained using a more rigorous maximum a posteriori (MAP) classifier where the a priori distribution of image classes is modeled as a Markov random field and the optimization of the image classes is done over the whole image instead of on a pixel-by-pixel basis (19). In a

subsequent work, the MAP classifier was extended to the case of multifrequency polarimetric radar data (20). The MAP classifier was used in Ref. 21 to map forest types in the Alaskan boreal forest. In this study, five vegetation types (white spruce, balsam poplar, black spruce, alder/willow shrubs, and bog/fen/nonforest) were separated with accuracies ranging from 62% to 90%, depending on which frequencies and polarizations were used. Knowledge-based classifiers are implemented based on the determination of dominant scattering mechanisms, through an understanding of the physics of the scattering process as well as experience gained from extensive experimental measurements (22). One of the earliest examples of such a knowledge-based classifier was published in Ref. 23. In this unsupervised classification, knowledge of the physics of the


scattering process was used to classify images into three classes: odd numbers of reflections, even numbers of reflections, and diffuse scattering. The odd- and even-numbers-of-reflections classes are separated on the basis of the copolarized phase difference, while the diffuse-scattering class is identified by a high cross-polarized return and a low correlation between the copolarized channels. While no direct attempt was made to identify each class with a particular terrain type, it was noted that in most cases odd numbers of reflections corresponded to bare surfaces or open water, even numbers of reflections usually indicated urban areas or sparse forests (sometimes with understory flooding present), and diffuse scattering was usually identified with vegetated areas. As such, all vegetated areas are lumped into one class, restricting the application of the results. Reference 22 extended this idea and developed a level 1 classifier that segments images into four classes: tall vegetation (trees), short vegetation, urban surfaces, and bare surfaces. First the urban areas are separated from the rest by using the L-band copolarized phase difference and the image texture at C-band. Then areas containing tall vegetation are identified using the L-band cross-polarized return. Finally, the C-band cross-polarized return and the L-band texture are used to separate the areas containing short vegetation from those with bare surfaces. Accuracies better than 90% are reported for this classification scheme when applied to two different images acquired in Michigan. Another example of a knowledge-based classification is reported in Ref. 24. In this study, a decision-tree classifier is used to classify images of the Amazonian floodplain near Manaus, Brazil, into five classes (water, clearing, macrophyte, nonflooded forest, and flooded forest) based on polarimetric scattering properties. Accuracies better than 90% are again reported.

Geophysical Parameter Estimation.
One of the most active areas of research in polarimetric SAR involves estimating geophysical parameters directly from the radar data through model inversion. Space does not permit a full discussion of recent work, so this section gives only a brief summary, with emphasis on vegetated areas. Many electromagnetic models exist to predict scattering from vegetated areas (25–34), and this remains an area of active research. Much of the work is aimed at estimating forest biomass (35–39). Earlier works correlated polarimetric SAR backscatter with total above-ground biomass (35,36) and suggested that the backscatter saturates at a biomass level that scales with frequency, a result also predicted by theoretical models. This led some investigators to conclude that these saturation levels define the upper limits for accurate estimation of biomass (40), arguing for the use of low-frequency radars for monitoring forest biomass (41). More recent work suggests that some spectral gradients and polarization ratios do not saturate as quickly and may therefore be used to extend the range of biomass levels for which accurate inversions can be obtained (37). Reference 41 showed that inversion results are most accurate for monospecies forests and that accuracies decrease for less homogeneous forests, concluding that the accuracy of radar estimates of biomass is likely to increase if structural differences between forest types are accounted for during the inversion of the radar data. Such an integrated approach to the retrieval of forest biophysical characteristics is reported in Refs. 42 and 43. These studies first segment images into different forest structural types, and then use algorithms appropriate for each structural type in the inversion. Furthermore, Ref. 43 estimates the total biomass by first using the radar data to estimate tree basal area, height, and crown biomass. The tree basal area and height are then used in allometric equations to estimate the trunk biomass. The total biomass, which is the sum of the trunk and crown biomass values, is shown to be accurately related to allometric total biomass levels up to 25 kg/m², while Ref. 44 estimates that biomass levels as high as 34 kg/m² to 40 kg/m² could be estimated with an accuracy of 15% to 25% using multipolarization C-, L-, and P-band SAR data.

Research in retrieving geophysical parameters from nonvegetated areas is also active, although fewer groups are involved. One of the earliest algorithms to infer soil moisture and surface roughness for bare surfaces was published in Ref. 45. This algorithm uses polarization ratios to separate the effects of surface roughness and soil moisture on the radar backscatter, and an accuracy of 4% for soil moisture is reported. More recently, Dubois et al. (46) reported a slightly different algorithm based only on the copolarized backscatter measured at L-band. Their results, using data from scatterometers, airborne SARs, and spaceborne SARs (SIR-C), show an accuracy of 4.2% when inferring soil moisture over bare surfaces. Reference 47 reported an algorithm to measure snow wetness and demonstrated accuracies of 2.5%.

SAR Interferometry

SAR interferometry refers to a class of techniques in which additional information is extracted from SAR images acquired from different vantage points or at different times. Various implementations allow different types of information to be extracted.
For example, if two SAR images are acquired from slightly different viewing geometries, information about the topography of the surface can be inferred. If images are taken at slightly different times, a map of surface velocities can be produced. Finally, if sets of interferometric images are combined, subtle changes in the scene can be measured with extremely high accuracy. In this section we first discuss so-called cross-track interferometers, used for the measurement of surface topography. This is followed by a discussion of along-track interferometers, used to measure surface velocity. The section ends with a discussion of differential interferometry, used to measure surface changes and deformation.

Radar Interferometry for Measuring Topography. SAR interferometry was first demonstrated by Graham (48), who produced a pattern of nulls or interference fringes by vectorially adding the signals received from two SAR antennas, one physically situated above the other. Later, Zebker and Goldstein (49) demonstrated that these interference fringes can be formed after SAR processing of the individual images if both the amplitude and the phase of the radar images are preserved during the processing. The basic principles of interferometry can be explained using the geometry shown in Fig. 16. Using the law of cosines on the triangle formed by the two antennas and the point being imaged, it follows that

(R + δR)² = R² + B² − 2BR cos(π/2 − θ + α)    (46)


Figure 16. Basic interferometric radar geometry. The path-length difference between the signals measured at each of the two antennas is a function of the elevation of the scatterer.

where R is the slant range to the point being imaged from the reference antenna, δR is the path-length difference between the two antennas, B is the physical interferometric baseline length, θ is the look angle to the point being imaged, and α is the baseline tilt angle with respect to the horizontal. From Eq. (46) one can solve for the path-length difference δR. If we assume that R ≫ B (a very good assumption for most interferometers), one finds that

δR ≈ −B sin(θ − α)    (47)

The radar system does not measure the path-length difference explicitly, however. Instead, what is measured is an interferometric phase difference that is related to the path-length difference through

φ = (2πa/λ) δR = −(2πa/λ) B sin(θ − α)    (48)

where a = 1 for the case where signals are transmitted from one antenna and received through both antennas simultaneously, and a = 2 for the case where the signal is alternately transmitted and received through each of the two antennas in turn. The radar wavelength is denoted by λ. From Fig. 16, it also follows that the elevation of the point being imaged is given by

z(y) = h − R cos θ    (49)

with h denoting the height of the imaging reference antenna above the reference plane with respect to which elevations are quoted. From Eq. (48) one can infer the actual radar look angle from the measured interferometric phase as

θ = α − sin⁻¹[λφ/(2πaB)]    (50)

Using Eqs. (49) and (50), one can now express the inferred elevation in terms of system parameters and measurables as

z(y) = h − R cos{α − sin⁻¹[λφ/(2πaB)]}    (51)

This expression is the fundamental IFSAR equation for the broadside imaging geometry.

SAR interferometers for the measurement of topography can be implemented in one of two ways. In single-pass interferometry, the system measures the two images at the same time through two different antennas, usually arranged one above the other; the physical separation of the antennas is referred to as the baseline of the interferometer. In repeat-track interferometry, the two images are acquired by imaging the scene at two different times using two different viewing geometries. So far all single-pass interferometers have been implemented using airborne SARs (49–51). The Shuttle Radar Topography Mission (SRTM), a joint project between the United States National Imagery and Mapping Agency (NIMA) and the National Aeronautics and Space Administration (NASA), will be the first spaceborne implementation of a single-pass interferometer (52). Scheduled for launch in 1999, SRTM will use modified hardware from the C-band radar of the SIR-C system, with a 62-m-long boom and a second antenna to form a single-pass interferometer. The SRTM mission will acquire digital topographic data of the globe between 60° north and south latitudes during one 11-day shuttle mission. SRTM will also acquire interferometric data using modified hardware from the X-band part of the SIR-C/X-SAR system; the swaths of the X-band system, however, are not wide enough to provide global coverage during the mission.

Much of the SAR interferometry research has gone into understanding the various error sources and how to correct their effects during and after processing. As a first step, careful motion compensation must be performed during processing to correct for the actual deviation of the aircraft platform from a straight trajectory (53). As mentioned before, the single-look SAR processor must preserve both the amplitude and the phase of the images.
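The chain of Eqs. (48) to (51) can be exercised numerically. The sketch below forward-models the unwrapped interferometric phase of a scatterer at a known look angle and then inverts it for elevation; all system values are illustrative assumptions (loosely SRTM-like), and the phase is taken as already unwrapped and calibrated.

```python
import math

# Sketch: forward-model the interferometric phase with Eq. (48), then
# invert it for elevation with Eqs. (50) and (51). System values are
# illustrative assumptions; the phase is assumed unwrapped and calibrated.

lam = 0.056                  # wavelength, m (C-band)
B = 62.0                     # baseline length, m
alpha = math.radians(45.0)   # baseline tilt angle
h = 240.0e3                  # antenna height above the reference plane, m
R = 310.0e3                  # slant range, m
a = 1                        # one antenna transmits, both receive

def elevation(phi):
    """z(y) from the interferometric phase phi, per Eqs. (50) and (51)."""
    theta = alpha - math.asin(lam * phi / (2.0 * math.pi * a * B))
    return h - R * math.cos(theta)

theta_true = math.radians(40.0)
phi = -(2.0 * math.pi * a / lam) * B * math.sin(theta_true - alpha)  # Eq. (48)
print(f"unwrapped phase: {phi:.1f} rad")
print(f"inverted z: {elevation(phi):.1f} m   direct z [Eq. (49)]: "
      f"{h - R * math.cos(theta_true):.1f} m")
```

The inverted elevation reproduces the direct Eq. (49) value exactly, and the phase comes out at several hundred radians, which is why the raw measurement (known only modulo 2π) must be unwrapped before inversion.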
After single-look processing, the images are carefully co-registered to maximize the correlation between the images. The so-called interferogram is formed by subtracting the phase in one image from that in the other on a pixel-by-pixel basis. The interferometric SAR technique is better understood by briefly reviewing the difference between traditional and interferometric SAR processing. In traditional (noninterferometric) SAR processing, it is assumed that the imaged pixel is located at the intersection of the Doppler cone (centered on the velocity vector), the range sphere (centered at the antenna), and an assumed reference plane, as shown in Fig. 17. Since the Doppler cone has its apex at the center of the range sphere, and its axis of symmetry is aligned with the velocity vector, it follows that all points on the intersection of the Doppler cone and the range sphere lie in a plane orthogonal to the velocity vector. The additional information provided by cross-track interferometry is that the imaged point also has to lie on the cone described by a constant phase, which means that one no longer has to assume an arbitrary reference plane. This cone of equal phase has its axis of symmetry aligned with the interferometer baseline and also has its apex at the center of the range sphere. It then follows that the imaged point lies at the intersection of the Doppler cone, the range sphere, and the equal phase cone, as shown in Fig. 18. It should be pointed out that in actual interferometric SAR processors, the two images acquired by the two interferometric antennas are

REMOTE SENSING BY RADAR

Figure 17. In traditional (noninterferometric) SAR processing, the scatterer is assumed to be located at the intersection of the Doppler cone, the range sphere, and some assumed reference plane.

actually processed individually using the traditional SAR processing assumptions. The resulting interferometric phase then represents the elevation with respect to the reference plane assumed during the SAR processing. This phase is then used to find the actual intersection of the range sphere, the Doppler cone, and the phase cone in three dimensions. Once the images are processed and combined, the measured phase must be unwrapped. During this procedure, the measured phase, which only varies between 0° and 360°, must be

Figure 18. Interferometric radars acquire all the information required to reconstruct the position of a scatterer in three dimensions. The scatterer is located at the intersection of the Doppler cone, the range sphere, and the interferometric phase cone.


unwrapped to retrieve the original phase by adding or subtracting multiples of 360°. The earliest phase unwrapping routine was published by Goldstein et al. (54). In this algorithm, areas where the phase will be discontinuous due to layover or poor SNRs are identified by branch cuts, and the phase unwrapping routine is implemented such that branch cuts are not crossed when unwrapping the phases. Phase unwrapping remains one of the most active areas of research, and many algorithms remain under development.

Even after the phases have been unwrapped, the absolute phase is still not known. This absolute phase is required to produce a height map that is calibrated in the absolute sense. One way to estimate this absolute phase is to use ground control points with known elevations in the scene. However, this human intervention severely limits the ease with which interferometry can be used operationally. Madsen et al. (53) reported a method by which the radar data are used to estimate this absolute phase. The method breaks the radar bandwidth up into upper and lower halves, and then it uses the differential interferogram formed by subtracting the upper-half-spectrum interferogram from the lower-half-spectrum interferogram to form an equivalent low-frequency interferometer to estimate the absolute phase. Unfortunately, this algorithm is not robust enough in practice to fully automate interferometric processing. This is one area where significant research is needed if the full potential of automated SAR interferometry is to be realized.

Absolute phase determination is followed by height reconstruction. Once the elevations in the scene are known, the entire digital elevation map can be geometrically rectified. Reference 53 reported accuracies ranging between 2.2 m root mean square (rms) for flat terrain and 5.5 m rms for terrain with significant relief for the NASA/JPL TOPSAR interferometer.
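The add-or-subtract-multiples-of-360° step is easy to illustrate in one dimension. The sketch below is a naive unwrapper; real two-dimensional algorithms such as Goldstein's must additionally respect branch cuts:

```python
import math

def unwrap_1d(phases):
    """Naive 1-D phase unwrapping: for each sample, add the multiple
    of 2*pi that minimizes the jump from the previous unwrapped
    sample. Wrapped input is assumed to lie in (-pi, pi]."""
    out = [phases[0]]
    for p in phases[1:]:
        k = round((out[-1] - p) / (2 * math.pi))
        out.append(p + 2 * math.pi * k)
    return out

# A steadily increasing phase ramp, observed modulo 2*pi.
true = [0.5 * i for i in range(10)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true]
unwrapped = unwrap_1d(wrapped)
assert all(abs(u - t) < 1e-9 for u, t in zip(unwrapped, true))
```

This only works when true phase changes between neighboring samples stay below half a cycle, which is exactly why layover and low-SNR regions (where that assumption fails) must be fenced off by branch cuts in two dimensions.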
An alternative way to form the interferometric baseline is to use a single-channel radar to image the same scene from slightly different viewing geometries. This technique, known as repeat-track interferometry, has been applied mostly to spaceborne data, starting with data collected with the L-band SEASAT SAR (54–59). Other investigators used data from the L-band SIR-B (60), the C-band ERS-1 radar (61,62), and more recently the L-band SIR-C (63) and the X-band X-SAR (64). Repeat-track interferometry has also been demonstrated using airborne SAR systems (65). Two main problems limit the usefulness of repeat-track interferometry. The first is that, unlike the case of single-pass interferometry, the baseline of the repeat-track interferometer is not known accurately enough to infer accurate elevation information from the interferogram. Reference 62 shows how the baseline can be estimated using ground control points in the image. The second problem is due to differences in scattering and propagation that result from the fact that the two images forming the interferogram are acquired at different times. One result is temporal decorrelation, which is worst at the higher frequencies (58). For example, C-band images of most vegetated areas decorrelate significantly over as short a time as 1 day. This problem, more than any other, limits the use of the current operational spaceborne single-channel SARs for topographic mapping, and it has led to proposals for dedicated interferometric SAR missions to map the entire globe (66,67).



Along-Track Interferometry. In some cases, the temporal change between interferometric images contains much information. One such case is the mapping of ocean surface movement. In this case, the interferometer is implemented in such a way that one antenna images the scene a short time before the second antenna, preferably using the same viewing geometry. Reference 68 described such an implementation in which one antenna is mounted forward of the other on the body of the NASA DC-8 aircraft. In a later work, Ref. 69 measured ocean currents with a velocity resolution of 5 m/s to 10 m/s. Along-track interferometry was used by Refs. 70 and 71 to estimate ocean surface current velocity and wavenumber spectra. This technique was also applied to the measurement of ship-generated internal wave velocities by Ref. 72. In addition to measuring ocean surface velocities, Carande (73) reported a dual-baseline implementation, realized by alternately transmitting out of the front and aft antennas, to measure ocean coherence time. He estimated typical ocean coherence times at L-band to be about 0.1 s. Shemer and Marom (74) proposed a method to measure ocean coherence time using only a model for the coherence time and one interferometric SAR observation.

Differential Interferometry. One of the most exciting applications of radar interferometry is implemented by subtracting two interferometric pairs separated in time from each other to form a so-called differential interferogram. In this way, surface deformation can be measured with unprecedented accuracy. This technique was first demonstrated by Gabriel et al. (75), who used SEASAT data to measure millimeter-scale ground motion in agricultural fields. Since then this technique has been applied to measure centimeter- to meter-scale co-seismic displacements (76–81) and to measure centimeter-scale volcanic deflation (82).
The added information provided by high-spatial-resolution co-seismic deformation maps was shown to provide insight into the slip mechanism that would not be attainable from the seismic record alone (79,80). Differential SAR interferometry has also led to spectacular applications in polar ice sheet research by providing information on ice deformation and surface topography at an unprecedented level of spatial detail. Goldstein et al. (83) observed ice stream motion and tidal flexure of the Rutford Glacier in Antarctica with a precision of 1 mm per day and summarized the key advantages of using SAR interferometry for glacier studies. Joughin (84) studied the separability of ice motion and surface topography in Greenland and compared the results with both radar and laser altimetry. Rignot et al. (85) estimated the precision of the SAR-derived velocities using a network of in situ velocities and demonstrated, along with Joughin et al. (86), the practicality of using SAR interferometry across all the different melting regimes of the Greenland Ice Sheet. Large-scale application of these techniques is expected to yield significant improvements in our knowledge of the dynamics, mass balance, and stability of the world's major ice masses.

One confounding factor in the identification of surface deformation in differential interferograms is changing atmospheric conditions. In observing the earth, radar signals propagate through the atmosphere, which introduces additional phase shifts that are not accounted for in the standard geometrical equations describing radar interferometry. Spatially varying patterns of atmospheric water vapor change the local index of refraction, which, in turn, introduces spatially varying phase shifts in the individual interferograms. Since the two (or more) interferograms are acquired at different times, the temporal change in water vapor introduces a signal that can be on the same order of magnitude as that expected from surface deformation, as discussed by Goldstein (87). Another limitation of the technique is temporal decorrelation. Changes in the surface properties may lead to complete decorrelation of the images and no detectable deformation signature (78). Current research is only beginning to realize the full potential of radar interferometry. Even though some significant problems still have to be solved before this technique becomes fully operational, the next few years will undoubtedly see an explosion in the interest in and use of radar interferometry data.
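The bookkeeping behind a differential interferogram can be sketched compactly: difference two unwrapped interferograms of the same scene, then scale phase to displacement. The λ/(4π) factor assumes repeat-pass geometry, where the two-way path gives one full phase cycle per half wavelength of line-of-sight motion; the numbers below are synthetic:

```python
import math

def los_displacement(dphi, wavelength):
    """Convert differential interferometric phase (radians) to
    line-of-sight displacement; the factor 4*pi reflects the
    two-way (repeat-pass) propagation path."""
    return wavelength * dphi / (4 * math.pi)

# Two unwrapped interferograms of the same scene; their difference
# isolates deformation that occurred between the acquisitions.
igram_before = [0.10, 0.20, 0.15]
igram_after = [0.10, 0.20 + math.pi, 0.15]
diff = [a - b for a, b in zip(igram_after, igram_before)]

# At C-band (~5.6 cm wavelength), a half cycle of differential phase
# corresponds to a quarter wavelength (~1.4 cm) of motion.
d = [los_displacement(p, 0.056) for p in diff]
assert abs(d[1] - 0.056 / 4) < 1e-12
```

The same scaling shows why atmospheric water vapor is so troublesome: a phase artifact of only a fraction of a cycle already maps into millimeters to centimeters of apparent deformation.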

NONIMAGING RADARS

Scatterometers

Scatterometers measure the surface backscattering cross section precisely in order to relate the measurement to geophysical quantities such as the ocean wind speed and the soil moisture (88). Since spatial resolution is not a very important parameter, it is usually sacrificed for better accuracy in the backscattering cross-section measurement. The typical resolution of a scatterometer is several tens of kilometers. Since the backscattering cross section depends strongly on the surface roughness and the dielectric constant, scatterometer measurements are used to determine these surface properties. For example, since ocean surface roughness is related to the ocean wind speed, scatterometers can be used to measure the ocean wind speed indirectly. Over land, backscattering cross-section measurements can be used to estimate the surface moisture content on a global scale. However, since the scatterometer resolution cell is on the order of several kilometers, the retrieved information is not as useful as that derived from a high-resolution imaging radar.

As wind blows over the ocean surface, the first waves generated by the coupling between the wind and the ocean surface are 1.7 cm waves. As the wind continues to blow, energy is transferred to other parts of the surface spectrum and the surface waves start to grow. Since the ocean surface has a large dielectric constant in the microwave spectrum, the backscattering is mainly related to the surface roughness. Therefore, it is reasonable to believe that the backscattering cross section is sensitive to the wind speed via the surface roughness (89). Since surface roughness at the scale of the radar wavelength strongly influences the backscattering cross section, the scatterometer wavelength should be at the centimeter scale in order to derive the surface wind speed. Specifically, the surface roughness scale (Λ) responsible for backscattering is related to the radar wavelength by

Λ = λ/(2 sin θ)    (52)

where θ is the incidence angle. This wavelength Λ is also known as the Bragg wavelength, which represents a resonance scale of the scattering surface.
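Assuming the Bragg relation of Eq. (52), Λ = λ/(2 sin θ), the resonant roughness scale follows directly; the Ku-band wavelength used below is an illustrative value:

```python
import math

def bragg_wavelength(radar_wavelength, incidence_deg):
    """Eq. (52): surface roughness scale responsible for resonant
    (Bragg) backscattering at a given radar wavelength and
    incidence angle."""
    return radar_wavelength / (2 * math.sin(math.radians(incidence_deg)))

# A Ku-band scatterometer (~2.1 cm wavelength) at 30 degrees
# incidence responds to ~2.1 cm surface waves, since sin(30) = 0.5.
assert abs(bragg_wavelength(0.021, 30.0) - 0.021) < 1e-12
```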

As discussed in the signal fading and speckle section, radar return measurements are contaminated by speckle noise. In order to measure the backscattering cross section accurately, a large number of independent observations must be averaged (90). This can be done in the frequency domain or the time domain. In scatterometry, a commonly adopted parameter for the backscattering cross-section measurement accuracy is Kp, defined as

Kp = √(var{σ0meas}) / σ0    (53)

which is the normalized standard deviation of the measured backscattering cross section (91). To obtain an accurate measurement, Kp must be minimized. Once the received power is measured, the backscattering cross section can be determined by using the radar equation. In this process, the noise power is also estimated and subtracted from the received power. Then, this backscattering cross section is related to the wind vector via a geophysical model function (92). In general, a model function can be written as

σ0 = F(U, f, θ, α, p, . . .)    (54)

where U is the wind speed, f is the radar frequency, θ is the incidence angle, α is the azimuth angle, and p denotes the radar signal polarization. Due to a lack of rigorous theoretical models, empirical models have been used for scatterometry applications. The precise form of the model function is still debated and is currently the subject of intense study. Figure 19 shows a schematic of the model functions. As an example of a model function, Wentz et al. (93,94) have used SEASAT data to derive a Ku-band geophysical model function known as SASS-2. From a geophysical model function, one can observe that σ0 is a function of the radar azimuth angle relative to the


Figure 20. Backscattering cross section in terms of the radar azimuth angle relative to the wind direction. Note that σ0 in the upwind direction is slightly higher than σ0 in the downwind direction.

wind direction. Figure 20 shows the double sinusoidal relationship (92). That is, σ0 is maximum at the upwind (α = 0°) and downwind (α = 180°) directions, while it is minimum near the crosswind directions (α = 90° and 270°). As can be seen from Fig. 20, σ0 in the upwind direction is slightly higher than σ0 in the downwind direction. In principle, a unique wind vector can be determined due to this small asymmetry. However, extremely accurate measurements are required to detect this small difference. It is clear that more than one σ0 measurement must be made at different azimuth angles to determine the wind direction. In order to explain the wind direction determination technique, we use a simple model given by

σ0 = AU^γ (1 + a cos α + b cos 2α)    (55)

where A, a, b, and γ are empirically determined for the wind speed U measured at a reference altitude (usually 19.5 m above the ocean surface). As can be seen from Eq. (55), two measurements provide the wind speed U and the wind direction with a fourfold ambiguity; therefore, additional measurements are needed to remove the ambiguity. Otherwise, auxiliary meteorological information is required to select the correct wind direction from among the ambiguous vectors (95). Spaceborne scatterometers are capable of measuring global wind vectors over the oceans for use in studying upper ocean circulation, tropospheric dynamics, and air–sea interaction. Examples of spaceborne scatterometers are SASS (the SEASAT scatterometer), the ERS-1 scatterometer (96,97), and NSCAT. Their radar parameters are shown in Table 1. To estimate a wind vector, multiple collocated σ0 measurements from different azimuth angles are required. Hence, the antenna subsystem is the most important component in the

Figure 19. Schematic scatterometer model function. Using this geophysical model function, backscattering measurements are related to wind speed.

Table 1. Spaceborne Scatterometer Parameters

                        SASS        ERS-1      NSCAT
Frequency               14.6 GHz    5.3 GHz    14 GHz
Spatial resolution      50 km       50 km      25, 50 km
Swath width             500 km      500 km     600 km
Number of antennas      4           3          6
Polarization            VV, HH      VV         VV, HH
Orbit altitude          800 km      785 km     820 km
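The fourfold ambiguity implied by the simple model of Eq. (55) can be demonstrated numerically: fit the wind speed to one σ0 look, then score candidate directions against a second look. The coefficient values below are invented for this sketch, not taken from any operational model function such as SASS-2:

```python
import math

# Illustrative coefficients for the simple model of Eq. (55); the
# values of A, a, b, gamma are invented, not from a real model.
A, a, b, gamma = 0.01, 0.1, 0.2, 1.5

def g(alpha_deg):
    """Directional modulation 1 + a cos(alpha) + b cos(2 alpha)."""
    al = math.radians(alpha_deg)
    return 1 + a * math.cos(al) + b * math.cos(2 * al)

def sigma0(U, alpha_deg):
    """Eq. (55): sigma0 = A * U**gamma * directional modulation."""
    return A * U ** gamma * g(alpha_deg)

# Two collocated sigma0 looks of the same ocean cell.
true_U, true_dir = 10.0, 40.0
az_looks = [0.0, 90.0]
meas = [sigma0(true_U, true_dir - az) for az in az_looks]

def residual(d):
    """Fit U to the first look for direction d; score the second."""
    U_fit = (meas[0] / (A * g(d - az_looks[0]))) ** (1 / gamma)
    return abs(sigma0(U_fit, d - az_looks[1]) - meas[1])

res = {d: residual(d) for d in range(360)}
best = min(res, key=res.get)
near = [d for d in res if res[d] < 0.05 * meas[1]]
assert best == 40      # the true direction fits best ...
assert len(near) >= 4  # ... but several other directions fit nearly as well
```

The several near-zero residuals are the ambiguous wind vectors mentioned above; a third or fourth azimuth look, or auxiliary meteorological data, is what breaks the tie in practice.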



scatterometer design. Multiple fan-beam and scanning spot-beam antennas are widely used configurations. The next-generation scatterometer, known as SeaWinds, implements a scanning pencil-beam instrument in order to avoid the difficulties of accommodating a traditional fan-beam spaceborne scatterometer. The RF and digital subsystems of a scatterometer are similar to those of other radars, except for a noise source for onboard calibration. Both internal and external calibration devices are required for a scatterometer to provide accurate backscattering cross-section measurements. The resolution along the flight track can be improved by applying the Doppler filtering technique to the received echo. That is, if the echo is passed through a bandpass filter with a center frequency fD and a bandwidth Δf, then the surface resolution Δx can be improved to

Δx = h Δf (2v/λ)² / [(2v/λ)² − fD²]^(3/2)    (56)

where h is the platform altitude and v is the platform velocity (88).

Altimeters

A radar altimeter (88) measures the distance between the sensor and the surface in the nadir direction to derive a topographic map of the surface. The ranging accuracy of a spaceborne radar altimeter is a few tens of centimeters. Even though an altimeter can measure the land surface topography, the resulting topographic map is not very useful, since the resolution of a radar altimeter is on the order of a few kilometers. However, this is satisfactory for oceanographic applications, since high-resolution measurements are not required. The geoid is the equipotential surface that corresponds to the mean sea level. Since the geoid is the static component of the ocean surface topography, it can be derived by averaging repetitive measurements over a long time period. The spatial variation in the geoid provides information on different geoscientific parameters. For example, large-scale variations (~1000 km) in the geoid are related to processes occurring deep in the Earth's interior. A radar altimeter transmits a short pulse in the nadir direction and measures the round-trip time (T) accurately. Hence, the distance (H) from the sensor to the surface can be calculated from

H = vT/2    (57)

Here, v is the speed of the radar wave in the propagating medium. The height error (δH) can be written in terms of the velocity error (δv) and the timing error (δT) as

δH = Tδv/2 + vδT/2    (58)
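Equations (57) and (58) can be checked numerically; in the small sketch below, the 800 km altitude and 1 ns timing error are arbitrary illustrative values:

```python
# Eqs. (57) and (58): altimeter height from the round-trip time, and
# the first-order height error from velocity and timing errors. The
# 800 km altitude and 1 ns timing error are illustrative values.
def height(v, T):
    return v * T / 2

def height_error(v, T, dv, dT):
    return T * dv / 2 + v * dT / 2

c = 299792458.0          # free-space propagation speed, m/s
T = 2 * 800e3 / c        # round-trip time for an 800 km altitude
assert abs(height(c, T) - 800e3) < 1e-3
# A 1 ns timing error alone corresponds to ~15 cm of height error.
assert abs(height_error(c, T, 0.0, 1e-9) - 0.15) < 0.01
```

The numbers make the error budget concrete: reaching decimeter-level ranging accuracy requires subnanosecond effective timing and careful correction of the propagation speed.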

The velocity error results from the refractive index variation due to the ionosphere and atmosphere. The timing error is mainly related to the finite signal bandwidth and the clock accuracy on the spacecraft. In addition, small-scale roughness variation due to surface elevation causes an electromagnetic bias (98). This bias is about 1% of the significant wave height (SWH). These errors must be estimated and corrected to achieve the required height accuracy.

Figure 21. Beam-limited and pulse-limited altimeter footprints.

The altimeter resolution can be determined by either the radar beamwidth or the pulse length. If the beam footprint is smaller than the pulse footprint, the altimeter is called beam-limited. Otherwise, it is pulse-limited (see Fig. 21). The beam-limited footprint is given by λh/L, while the pulse-limited footprint is 2√(cτh), where h is the altitude, L is the antenna length, and τ is the pulse length. The altimeter mean return waveform W(t) (99,100) can be written as

W(t) = F(t) ∗ q(t) ∗ p(t)    (59)

where F(t) is the flat surface impulse response including radar antenna beamwidth and pointing angle effects, q(t) is the surface height probability density function, and p(t) is the radar system impulse response. Here, the symbol ∗ denotes the convolution operator. As illustrated in Fig. 22, the return pulse shape changes for different surface roughnesses. As an example of radar altimeters, we briefly describe the TOPEX radar altimeter (101), which has been used to measure the sea level precisely. The resulting rms height accuracy of a single-pass measurement is 4.7 cm (102). This information is used to study the circulation of the world's oceans. The TOPEX altimeter is a dual-frequency (5.3 GHz and 13.6 GHz) radar, in order to allow retrieval of the ionospheric delay of the radar

Figure 22. Altimeter return pulse shape for different surface roughnesses.


signal, since the ionosphere is a dispersive medium. The TOPEX microwave radiometer measures the sea surface emissivity at three frequencies (18 GHz, 21 GHz, and 37 GHz) to estimate the total water vapor content. In addition, the satellite carries a Global Positioning System (GPS) receiver for precise satellite tracking. All these measurements are used to produce high-accuracy altimeter data. Recent research topics related to radar altimetry can be found in Ref. 102.

Radar Sounders

Radar sounders are used to image subsurface features by measuring reflections from dielectric constant variations. For example, an ice-sounding radar can measure the ice thickness by detecting the ice–ocean boundary (103). In order to penetrate the subsurface, a long-wavelength radar is desired. Various radar sounding techniques are well summarized in Ref. 104. In order to image subsurface features, a radar signal must penetrate to the target depth with a satisfactory SNR. Like other radars, a subsurface radar should have an adequate bandwidth for sufficient resolution to detect buried objects or other dielectric discontinuities. For a ground-penetrating radar (105), a probing antenna must be designed for efficient coupling of electromagnetic radiation into the ground. The depth resolution can be obtained by using techniques similar to those described in the previous sections. However, the physical distance must be estimated from the slant range information and the real part of the medium refractive index. In order to enhance the horizontal resolution, one can use the synthetic aperture technique. However, the matched filter construction is very difficult, since the medium dielectric constant is usually inhomogeneous and unknown. The most important quantity in the design of a subsurface radar is the medium loss, which determines the penetration depth. For a ground subsurface radar, it is advantageous to average many samples or to increase the effective pulse length to enhance the SNR.
Polarimetric information is also important when buried objects are long and thin, since strong backscattering is produced by a linearly polarized signal parallel to the long axis. For an airborne (106) or a spaceborne (107) radar sounder, subsurface returns must be separated from unwanted surface returns. Since a surface return is usually much stronger than a subsurface return, the radar must be designed for extremely low sidelobes. A surface return can be an ambiguous signal if it is at the same range as a subsurface return (surface ambiguity). This problem becomes more serious as the altitude of the radar becomes higher or the antenna gain becomes lower. Clearly, future research is required to overcome this difficulty. In addition, when the medium loss is large, the radar must have a large dynamic range to detect the small subsurface return. As an example of orbiting radar sounders, we briefly describe the Apollo 17 lunar sounder radar (107). The objectives of the sounder experiment were to detect subsurface geological structures and to generate a lunar surface profile. Since lunar soil and rock exhibit less attenuation due to the absence of free water, one may expect deeper subsurface penetration than is observed on Earth. This sounder, operating at three frequencies (5 MHz, 15 MHz, and 150 MHz), was also used to generate a lunar surface profile using the strong surface return.


Cloud Radar

Most meteorological radars (108) operate at centimeter wavelengths in order to avoid significant attenuation by precipitation. However, cloud radars operate at millimeter wavelengths, since clouds cannot be observed easily by using conventional centimeter-wavelength radars (109). In order to minimize absorption, the radar frequency must lie in one of the spectral windows whose center frequencies are 35 GHz, 100 GHz, and 150 GHz. The first millimeter-wave radar observations of clouds were done in the 35 GHz window (110). Benefiting from the technology development at 94 GHz, a 94 GHz spaceborne cloud radar has been proposed, since even higher radar reflectivity is expected at a shorter wavelength. For a cloud radar, high dynamic range and low sidelobes are required to detect the weak signal scattered from a cloud in the presence of a large surface return. For a pulsed radar, the received power (Pr) can be written as

Pr = Pt G² λ² V η e^(−2α) / [(4π)³ r⁴]    (60)

where Pt is the peak transmit power, G is the antenna gain, λ is the wavelength, V is the resolution volume, η is the volumetric radar cross section, α is the one-way loss, and r is the range (111). If the cloud particles are much smaller than the radar wavelength, the Rayleigh scattering approximation gives the radar reflectivity (Z) as

Z = ηλ⁴ / (π⁵ |K|²)    (61)

where K = (n² − 1)/(n² + 2) and n is the complex refractive index of a particle. The radial velocity (vr) is measured by using the Doppler frequency (fd) as

vr = fd λ/2    (62)
Cloud radar measurements provide radar reflectivity and radial velocity profiles as functions of altitude. Recent airborne cloud radars (112,113) can measure polarimetric reflectivity, which can provide additional information such as the linear depolarization ratio (LDR). Using these parameters, a cloud region classification (ice, cloud droplets, mixed-phase hydrometeors, rain, and insects) can be achieved (111). If multiwavelength measurements are made, it may be possible to estimate the drop size distribution.

Rain Radar

The accurate measurement of rainfall is an important factor in understanding the global water and energy cycle. Rain radars measure the rain reflectivity, which can be used to estimate parameters related to rainfall using inversion algorithms (114). As an example of rain radars, one of the Tropical Rainfall Measuring Mission (TRMM) (115) instruments is a single-frequency (13.8 GHz), cross-track scanning radar for precipitation measurement. The satellite altitude is 350 km and the scanning swath is 220 km. The range and surface horizontal resolutions are 250 m and 4 km, respectively. Using TRMM data, rain profiles can be estimated (114).



The operating frequency (13.8 GHz) was selected by considering both antenna size and attenuation. At this frequency, the antenna size does not have to be too large, and the attenuation is small enough to measure rainfall near the surface. As an airborne rain radar, the National Aeronautics and Space Administration and the Jet Propulsion Laboratory developed an airborne rain-mapping radar (ARMAR) that flies on the NASA DC-8 aircraft (116). ARMAR operates with the TRMM frequency and geometry in order to explore the issues related to the TRMM rain radar. Due to the downward-looking geometry, it is possible that the surface clutter return may obscure the return from precipitation. Even for an antenna looking off-nadir, the nadir return can be at the same range as precipitation. In order to overcome these difficulties, both the antenna sidelobes and the pulse compression sidelobes must be low enough (~ −60 dB relative to the peak return) to detect precipitation. In order to measure rain reflectivity accurately, it is necessary to calibrate the radar precisely. Radar measurements such as the reflectivity, the H and V polarization phase difference, and the differential reflectivity (HH and VV) can be used to estimate the rain rate and rain profile.
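A simple illustration of such a reflectivity-to-rain-rate inversion is an empirical Z–R power law; the classic Marshall–Palmer coefficients (a = 200, b = 1.6, with Z in mm⁶/m³ and R in mm/h) are used here only as an example, since operational TRMM algorithms are considerably more elaborate:

```python
def rain_rate(Z_mm6_m3, a=200.0, b=1.6):
    """Invert an empirical Z-R power law, Z = a * R**b, for the rain
    rate R in mm/h. The default a, b are the classic Marshall-Palmer
    values, used here purely as an illustration."""
    return (Z_mm6_m3 / a) ** (1.0 / b)

# A reflectivity of 200 * 10**1.6 mm^6/m^3 (about 39 dBZ) maps to a
# rain rate of 10 mm/h under this power law.
assert abs(rain_rate(200.0 * 10.0 ** 1.6) - 10.0) < 1e-6
```

In practice, attenuation correction and the dual-polarization measurements listed above are folded into the inversion before any such power law is applied.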

BIBLIOGRAPHY

1. C. Elachi, Introduction to the Physics and Techniques of Remote Sensing, New York: Wiley, 1987.
2. F. T. Ulaby, R. K. Moore, and A. K. Fung, Microwave Remote Sensing: Active and Passive, Vol. 2, Radar Remote Sensing and Surface Scattering and Emission Theory, Dedham, MA: Artech House, 1982.
3. J. C. Curlander and R. N. McDonough, Synthetic Aperture Radar Systems and Signal Processing, New York: Wiley, 1991.
4. J. P. Ford, Resolution versus speckle relative to geologic interpretability of spaceborne radar images: A survey of user preferences, IEEE Trans. Geosci. Remote Sens., GRS-20: 434–444, 1982.
5. H. A. Zebker, J. J. van Zyl, and D. N. Held, Imaging radar polarimetry from wave synthesis, J. Geophys. Res., 92: 683–701, 1987.
6. R. L. Jordan, B. L. Huneycutt, and M. Werner, The SIR-C/X-SAR synthetic aperture radar system, IEEE Trans. Geosci. Remote Sens., GRS-33: 829–839, 1995.
7. J. J. van Zyl, A technique to calibrate polarimetric radar images using only image parameters and trihedral corner reflectors, IEEE Trans. Geosci. Remote Sens., GRS-28: 337–348, 1990.
8. H. A. Zebker and Y. L. Lou, Phase calibration of imaging radar polarimeter Stokes matrices, IEEE Trans. Geosci. Remote Sens., GRS-28: 246–252, 1990.
9. A. L. Gray et al., Synthetic aperture radar calibration using reference reflectors, IEEE Trans. Geosci. Remote Sens., GRS-28: 374–383, 1990.
10. A. Freeman, Y. Shen, and C. L. Werner, Polarimetric SAR calibration experiment using active radar calibrators, IEEE Trans. Geosci. Remote Sens., GRS-28: 224–240, 1990.
11. J. D. Klein and A. Freeman, Quadpolarisation SAR calibration using target reciprocity, J. Electromagn. Waves Appl., 5: 735–751, 1991.
12. H. A. Zebker et al., Calibrated imaging radar polarimetry: Technique, examples, and applications, IEEE Trans. Geosci. Remote Sens., GRS-29: 942–961, 1991.
13. A. Freeman et al., Calibration of Stokes and scattering matrix format polarimetric SAR data, IEEE Trans. Geosci. Remote Sens., GRS-30: 531–539, 1992.
14. A. Freeman, SAR calibration: An overview, IEEE Trans. Geosci. Remote Sens., GRS-30: 1107–1121, 1992.
15. A. Freeman et al., SIR-C data quality and calibration results, IEEE Trans. Geosci. Remote Sens., GRS-33: 848–857, 1995.
16. J. A. Kong et al., Identification of earth terrain cover using the optimum polarimetric classifier, J. Electromagn. Waves Appl., 2: 171–194, 1988.
17. H. H. Lim et al., Classification of earth terrain using polarimetric synthetic aperture radar images, J. Geophys. Res., 94: 7049–7057, 1989.
18. J. J. van Zyl and C. F. Burnette, Bayesian classification of polarimetric SAR images using adaptive a priori probabilities, Int. J. Remote Sens., 13: 835–840, 1992.
19. E. Rignot and R. Chellappa, Segmentation of polarimetric synthetic aperture radar data, IEEE Trans. Image Process., 1: 281–300, 1992.
20. E. Rignot and R. Chellappa, Maximum a posteriori classification of multifrequency, multilook, synthetic aperture radar intensity data, J. Opt. Soc. Amer. A, 10: 573–582, 1993.
21. E. J. M. Rignot et al., Mapping of forest types in Alaskan boreal forests using SAR imagery, IEEE Trans. Geosci. Remote Sens., GRS-32: 1051–1059, 1994.
22. L. E. Pierce et al., Knowledge-based classification of polarimetric SAR images, IEEE Trans. Geosci. Remote Sens., GRS-32: 1081–1086, 1994.
23. J. J. van Zyl, Unsupervised classification of scattering behavior using radar polarimetry data, IEEE Trans. Geosci. Remote Sens., GRS-27: 36–45, 1989.
24. L. L. Hess et al., Delineation of inundated area and vegetation along the Amazon floodplain with the SIR-C synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., GRS-33: 896–904, 1995.
25. R. H. Lang and J. S. Sidhu, Electromagnetic backscattering from a layer of vegetation, IEEE Trans. Geosci. Remote Sens., GRS-21: 62–71, 1983.
26. J. A. Richards, G. Sun, and D. S. Simonett, L-band radar backscatter modeling of forest stands, IEEE Trans. Geosci. Remote Sens., GRS-25: 487–498, 1987.
27. S. L. Durden, J. J. van Zyl, and H. A. Zebker, Modeling and observation of the radar polarization signature of forested areas, IEEE Trans. Geosci. Remote Sens., GRS-27: 290–301, 1989.
28. F. T. Ulaby et al., Michigan microwave canopy scattering model, Int. J. Remote Sens., 11: 1223–1253, 1990.
29. N. Chauhan and R. Lang, Radar modeling of a boreal forest, IEEE Trans. Geosci. Remote Sens., GRS-29: 627–638, 1991.
30. G. Sun, D. S. Simonett, and A. H. Strahler, A radar backscatter model for discontinuous coniferous forest canopies, IEEE Trans. Geosci. Remote Sens., GRS-29: 639–650, 1991.
31. S. H. Yueh et al., Branching model for vegetation, IEEE Trans. Geosci. Remote Sens., GRS-30: 390–402, 1992.
32. Y. Wang, J. Day, and G. Sun, Santa Barbara microwave backscattering model for woodlands, Int. J. Remote Sens., 14: 1146–1154, 1993.
33. C. C. Hsu et al., Radiative transfer theory for polarimetric remote sensing of pine forest at P-band, Int. J. Remote Sens., 14: 2943–2954, 1994.
34. R. H. Lang et al., Modeling P-band SAR returns from a red pine stand, Remote Sens. Environ., 47: 132–141, 1994.
35. M. C. Dobson et al., Dependence of radar backscatter on conifer forest biomass, IEEE Trans. Geosci. Remote Sens., GRS-30: 412–415, 1992.
36. T. LeToan et al., Relating forest biomass to SAR data, IEEE Trans. Geosci. Remote Sens., GRS-30: 403–411, 1992.
37. K. J. Ranson and G. Sun, Mapping biomass of a northern forest using multifrequency SAR data, IEEE Trans. Geosci. Remote Sens., GRS-32: 388–396, 1994.
38. A. Beaudoin et al., Retrieval of forest biomass from SAR data, Int. J. Remote Sens., 15: 2777–2796, 1994.
39. E. Rignot et al., Radar estimates of aboveground biomass in boreal forests of interior Alaska, IEEE Trans. Geosci. Remote Sens., GRS-32: 1117–1124, 1994.
40. M. L. Imhoff, Radar backscatter and biomass saturation: Ramifications for global biomass inventory, IEEE Trans. Geosci. Remote Sens., GRS-33: 511–518, 1995.
41. E. J. Rignot, R. Zimmerman, and J. J. van Zyl, Spaceborne applications of P-band imaging radars for measuring forest biomass, IEEE Trans. Geosci. Remote Sens., GRS-33: 1162–1169, 1995.
42. K. J. Ranson, S. Saatchi, and G. Sun, Boreal forest ecosystem characterization with SIR-C/X-SAR, IEEE Trans. Geosci. Remote Sens., GRS-33: 867–876, 1995.
43. M. C. Dobson et al., Estimation of forest biophysical characteristics in northern Michigan with SIR-C/X-SAR, IEEE Trans. Geosci. Remote Sens., GRS-33: 877–895, 1995.
44. E. S. Kasischke, N. L. Christensen, and L. L. Bourgeau-Chavez, Correlating radar backscatter with components of biomass in loblolly pine forests, IEEE Trans. Geosci. Remote Sens., GRS-33: 643–659, 1995.
45. Y. Oh, K. Sarabandi, and F. T. Ulaby, An empirical model and an inversion technique for radar scattering from bare soil surfaces, IEEE Trans. Geosci. Remote Sens., GRS-30: 370–381, 1992.
46. P. C. Dubois, J. J. van Zyl, and T. Engman, Measuring soil moisture with imaging radars, IEEE Trans. Geosci. Remote Sens., GRS-33: 915–926, 1995.
47. J. Shi and J. Dozier, Inferring snow wetness using C-band data from SIR-C's polarimetric synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., GRS-33: 905–914, 1995.
48. L. C.
Graham, Synthetic interferometer radar for topographic mapping, Proc. IEEE, 62: 763–768, 1974. 49. H. Zebker and R. Goldstein, Topographic mapping from interferometric SAR observations, J. Geophys. Res., 91: 4993–4999, 1986. 50. H. A. Zebker et al., The TOPSAR interferometric radar topographic mapping instrument, IEEE Trans. Geosci. Remote Sens., GRS-30: 933–940, 1992. 51. N. P. Faller and E. H. Meier, First results with the airborne single-pass DO-SAR interferometer, IEEE Trans. Geosci. Remote Sens., GRS-33: 1230–1237, 1995. 52. J. E. Hilland et al., Future NASA spaceborne missions, Proc. 16th Digital Avionics Syst. Conf., Irvine, CA, 1997. 53. S. N. Madsen, H. A. Zebker, and J. Martin, Topographic mapping using radar interferometry: Processing techniques, IEEE Trans. Geosci. Remote Sens., GRS-31: 246–256, 1993. 54. R. M. Goldstein, H. A. Zebker, and C. Werner, Satellite radar interferometry: Two-dimensional phase unwrapping, Radio Sci., 23: 713–720, 1988. 55. F. K. Li and R. M. Goldstein, Studies of multibaseline spaceborne interferometric synthetic aperture radars, IEEE Trans. Geosci. Remote Sens., GRS-28: 88–97, 1990. 56. C. Prati and F. Rocca, Limits to the resolution of elevation maps from stereo sar images, Int. J. Remote Sens., 11: 2215–2235, 1990.

487

57. C. Prati et al., Seismic migration for SAR focusing: Interferometrical applications, IEEE Trans. Geosci. Remote Sens., GRS28: 627–640, 1990. 58. H. A. Zebker and J. Villasenor, Decorrelation in interferometric radar echoes, IEEE Trans. Geosci. Remote Sens., GRS-30: 950– 959, 1992. 59. C. Prati and F. Rocca, Improving slant range resolution with multiple SAR surveys, IEEE Trans. Aerosp. Electron. Syst., 29: 135–144, 1993. 60. A. K. Gabriel and R. M. Goldstein, Crossed orbit interferometry: Theory and experimental results from SIR-B, Int. J. Remote Sens., 9: 857–872, 1988. 61. F. Gatelli et al., The wavenumber shift in SAR interferometry, IEEE Trans. Geosci. Remote Sens., GRS-32: 855–865, 1994. 62. H. A. Zebker et al., Accuracy of topographic maps derived from ERS-1 interferometric radar, IEEE Trans. Geosci. Remote Sens., GRS-32: 823–836, 1994. 63. E. R. Stofan et al., Overview of results of spaceborne imaging radar-C, X-band synthetic aperture radar (SIR-C/X-SAR), IEEE Trans. Geosci. Remote Sens., GRS-23: 817–828, 1995. 64. J. Moreira et al., X-SAR interferometry: First results, IEEE Trans. Geosci. Remote Sens., GRS-33: 950–956, 1995. 65. A. L. Gray and P. J. Farris-Manning, Repeat-pass interferometry with airborne synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., GRS-31: 180–191, 1993. 66. A. Moccia and S. Vetrella, A tethered interferometric synthetic aperture radar (SAR) for a topographic mission, IEEE Trans. Geosci. Remote Sens., GRS-31: 103–109, 1992. 67. H. A. Zebker et al., Mapping the world’s topography using radar interferometry: The TOPSAT mission, Proc. IEEE, 82: 1774– 1786, 1994. 68. R. M. Goldstein and H. A. Zebker, Interferometric radar measurements of ocean surface currents, Nature, 328: 707–709, 1987. 69. R. M. Goldstein, T. P. Barnett, and H. A. Zebker, Remote sensing of ocean currents, Science, 246: 1282–1285, 1989. 70. M. 
Marom et al., Remote sensing of ocean wave spectra by interferometric synthetic aperture radar, Nature, 345: 793–795, 1990. 71. M. Marom, L. Shemer, and E. B. Thronton, Energy density directional spectra of nearshore wave field measured by interferometric synthetic aperture radar, J. Geophys. Res., 96: 22125– 22134, 1991. 72. D. R. Thompson and J. R. Jensen, Synthetic aperture radar interferometry applied to ship-generated internal waves in the 1989 Loch Linnhe experiment, J. Geophys. Res., 98: 10259– 10269, 1993. 73. R. E. Carande, Estimating ocean coherence time using dualbaseline interferometric synthetic aperture radar, IEEE Trans. Geosci. Remote Sens., GRS-32: 846–854, 1994. 74. L. Shemer and M. Marom, Estimates of ocean coherence time by interferometric SAR, Int. J. Remote Sens., 14: 3021–3029, 1993. 75. A. K. Gabriel, R. M. Goldstein, and H. A. Zebker, Mapping small elevation changes over large areas: Differential radar interferometry, J. Geophys. Res., 94: 9183–9191, 1989. 76. D. Massonnet et al., The displacement field of the Landers earthquake mapped by radar interferometry, Nature, 364: 138– 142, 1993. 77. D. Massonnet et al., Radar interferometric mapping of deformation in the year after the Landers earthquake, Nature, 369: 227–230, 1994. 78. H. A. Zebker et al., On the derivation of coseismic displacement fields using differential radar interferometry: The Landers earthquake, J. Geophys. Res., 99: 19617–19634, 1994.

488

REMOTE SENSING GEOMETRIC CORRECTIONS

79. G. Peltzer, K. Hudnut, and K. Feigl, Analysis of coseismic displacement gradients using radar interferometry: New insights into the Landers earthquake, J. Geophys. Res., 99: 21971– 21981, 1994. 80. G. Peltzer and P. Rosen, Surface displacement of the 17 May 1993 Eureka Valley, California, earthquake observed by SAR interferometry, Science, 268: 1333–1336, 1995. 81. D. Massonnet, P. Briole, and A. Arnaud, Deflation of Mount Etna monitored by spaceborne radar interferometry, Nature, 375: 567–570, 1995. 82. D. Massonnet and K. Freigl, Satellite radar interferometric map of the coseismic deformation field of the M ⫹ 6.1 Eureka Valley, California earthquake of May 17, 1993, Geophys. Res. Lett., 22: 1541–1544, 1995. 83. R. M. Goldstein et al., Satellite radar interferometry for monitoring ice sheet motion: Application to an Antarctic ice stream, Science, 262: 1525–1530,1993. 84. I. R. Joughin, Estimation of ice-sheet topography and motion using interferometric synthetic aperture radar, Ph.D. thesis, Univ. of Washington, Seattle, 1995. 85. E. Rignot, K. Jezek, and H. G. Sohn, Ice flow dynamics of the Greenlandice sheet from SAR interferometry, Geophys. Res. Lett., 22: 575–578, 1995. 86. I. R. Joughin, D. P. Winebrenner, and M. A. Fahnestock, Observations of ice-sheet motion in Greenland using satellite radar interferometry, Geophys. Res. Lett., 22: 571–574, 1995. 87. R. M. Goldstein, Atmospheric limitations to repeat-track radar interferometry, Geophys. Res. Lett., 22: 2517–1520, 1995. 88. C. Elachi, Spaceborne Radar Remote Sensing: Applications and Techniques, New York: IEEE Press, 1988. 89. M. A. Donelan and W. J. Pierson, Radar scattering and equilibrium ranges in wind-generated waves with application to scatterometry, J. Geophys. Res., 92: 4971–5029, 1987. 90. W. J. Plant, The variance of the normalized radar cross section of the sea, J. Geophys. Res., 96: 20643–20654, 1991. 91. C. Y. Chi and F. K. 
Li, A comparative study of several wind estimation algorithms for spaceborne scatterometers, IEEE Trans. Geosci. Remote Sens., 26: 115–121, 1988. 92. F. M. Naderi, M. H. Freilich, and D. G. Long, Spaceborne radar measurement of wind velocity over the ocean—an overview of the NSCAT scatterometer system, Proc. IEEE, 79: 850–866, 1991. 93. F. J. Wentz, S. Peteherych, and L. A. Thomas, A model function for ocean radar cross sections at 14.6 GHz, J. Geophys. Res., 89: 3689–3704, 1984. 94. F. J. Wentz, L. A. Mattox, and S. Peteherych, New algorithms for microwave measurements of ocean winds: Applications to SEASAT and the special sensor microwave imager, J. Geophys. Res., 91: 2289–2307, 1986. 95. M. G. Wurtele et al., Wind direction alies removal studies of SEASAT scatterometer-derived wind fields, J. Geophys. Res., 87: 3365–3377, 1982. 96. C. L. Rufenach, J. J. Bates, and S. Tosini, ERS-1 scatterometer measurements—Part I: The relationsihp between radar cross section and buoy wind in two oceanic regions, IEEE Trans. Geosci. Remote Sens., 36: 603–622, 1998. 97. C. L. Rufenach, ERS-1 scatterometer measurements—Part I: An algorithm for ocean-surface wind retrieval including light winds, IEEE Trans. Geosci. Remote Sens., 36: 623–635, 1998. 98. E. Rodriguez, Altimetry for non-Gaussian oceans: Height biases and estimation of parameters, J. Geophys. Res., 93: 14107– 14120, 1988. 99. G. S. Brown, The average impulse response of a rough surface and its applications, IEEE Trans. Antennas Propag., AP-25: 67– 74, 1977.

100. D. B. Chelton, E. J. Walsh, and J. L. MacArthur, Pulse compression and sea level tracking in satellite altimetry, J. Atmos. Oceanic Technol., 6: 407–438, 1989. 101. A. R. Zieger et al., NASA radar altimeter for the TOPEX/POSEIDON project, Proc. IEEE, 79: 810–826, 1991. 102. TOPEX/POSEIDON: Geophysical evaluation, J. Geophys. Res., 99: 1994. 103. R. Bindschadler et al., Surface velocity and mass balance of ice streams D and E, West Antarctica, J. Glaciol., 42: 461–475, 1996. 104. D. J. Daniels, D. J. Gunton, and H. F. Scott, Introduction to subsurface radar, IEE Proc., Part F, 135: 278–320, 1988. 105. Special issue on ground penetrating radar, J. Appl. Geophys., 33: 1995. 106. T. S. Chuah, Design and development of a coherent radar depth sounder for measurement of Greenland ice sheet thickness, Radar Systems and Remote Sensing Laboratory RSL Technical Report 10470-5, Univ. of Kansas Center for Res., 1997. 107. L. J. Porcello et al., The Apollo lunar sounder radar system, Proc. IEEE, 62: 769–783, 1974. 108. R. J. Doviak and D. S. Zrnic, Doppler Radar and Weather Observations, Orlando, FL: Academic Press, 1984. 109. R. M. Lhermitte, Cloud and precipitation remote sensing at 94 GHz, IEEE Trans. Geosci. Remote Sens., 26: 256–270, 1997. 110. P. V. Hobbs et al., Evaluation of a 35 GHz radar for cloud physics research, J. Atmos. Oceanic Technol., 2: 35–48, 1985. 111. S. P. Lohmeier et al., Classification of particles in stratiform clouds using the 33 and 95 Ghz polarimetric cloud profiling radar system (CPRS), IEEE Trans. Geosci. Remote Sens., 35: 256– 270, 1997. 112. A. L. Pazmany et al., An airborne 95 GHz dual polarized radar for cloud studies, IEEE Trans. Geosci. Remote Sens., 32: 731– 739, 1994. 113. G. A. Sadowy et al., The NASA DC-8 airborne cloud radar: Design and preliminary results, Proc. IGARSS ’97, 1997, pp. 1466– 1469. 114. Z. S. Haddad et al., The TRMM ‘day-1’ radar/radiometer combined rain-profiling algorithm, J. Meteorol. Soc. Jpn., 75: 799— 809, 1997. 
115. J. Simpson, R. F. Adler, and G. R. North, A proposed tropical rainfall measuring mission (TRMM) satellite, Bull. Amer. Meteorol. Soc., 69: 278—295, 1988. 116. S. L. Druden et al., ARMAR: An airborne rain-mapping radar, J. Atmos. Oceanic Technol., 11: 727—737, 1994.

JAKOB J. VAN ZYL
YUNJIN KIM
California Institute of Technology

Wiley Encyclopedia of Electrical and Electronics Engineering

Remote Sensing Geometric Corrections
Standard Article
Jose F. Moreno, University of Valencia, Valencia, Spain
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W3605
Article Online Posting Date: December 27, 1999


Abstract

The sections in this article are:
Spatial Resolution
General Parametric Approach for Image Geocoding
Polynomial Approximations
Automatic Registration Techniques
The Need of Ground Control Points for Accurate Registration
Cartographic Projections
The Problem of Topographic Distortions
Resampling
Calibration and Removal of Technical Anomalies
Spatial Mosaicking
Multitemporal Compositing


Operational Processing and Data Volume Considerations

Keywords: calibration; distortion; global positioning system; ground control points; mosaicking; orbital mechanics; preprocessing; projection; rectification; registration; resampling; resolution



REMOTE SENSING GEOMETRIC CORRECTIONS

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.

Satellite remote-sensing data have become an essential tool in many applications, mainly environmental monitoring. Although remote-sensing techniques have been applied in many different fields, operational use is still far from being achieved. New applications and increased possibilities are expected in the coming years due to the availability of a new series of advanced sensors with improved capabilities. Apart from the need for a better understanding of the remotely sensed signals, most of the problems in using remote-sensing data have been due to inadequate data processing. Data-processing aspects become essential for actually obtaining useful information from remote-sensing data and for


deriving usable products. Within these data-processing aspects, geometric corrections play an essential role. Multitemporal analyses are required for monitoring changes and evolution, while multisensor data are typically needed to solve a particular problem. Integration of the different data, acquired under different viewing geometries and with different geometric characteristics (resolution, pixel shape, sensor spatial response function, interpixel overlaps, etc.), requires careful processing to avoid misleading results. The article is structured as follows. After an introduction of the topic and of the driving concept of spatial resolution, a method is presented to account for the geometric distortions introduced into remote-sensing data by orbital motion, scanning, and panoramic viewing. Although the relevant details depend very much on each particular sensor, or even on each particular acquisition mode of a sensor, a general parametric approach is presented that can potentially be applied to any sensor, in both airborne and spaceborne cases, with minimal changes of input parameters. Given the many sensors and systems available for a given application, a general, consistent method to process the data is preferable to a specific approach for each new sensor. After a step-by-step description of the approach, some traditionally used approximations are briefly presented (polynomial distortion models, linear cross-correlation), and then the need for an external cartographic reference is discussed, addressing the problems arising from the use of many different cartographic projections and the difficulties in modeling the Earth mathematically as a simple geometric figure. The problems introduced by topographic structure are then discussed.
Once the image-to-map geometric transformation model is defined, the resampling of the image to a different pixel size and shape is discussed, pointing out difficulties with classical approaches and new algorithms based on image restoration. Other external factors that must also be taken into account, such as calibration and the removal of technical anomalies (noise), are then considered as additional problems. Finally, the generation of mosaics representing a large geographical area by compositing several or many individual images, and multitemporal compositing for monitoring changes, are discussed, with a final comment about the challenges posed by the large amount of data to be processed with current or near-future sensors in the context of operational use of the data in practical applications. To clarify some of these aspects, most of the examples given correspond to NOAA (National Oceanic and Atmospheric Administration) AVHRR (Advanced Very High Resolution Radiometer) data, one of the most widely used types of remote-sensing data and a good example of how geometric distortions affect remote-sensing images and of how methods can be applied to correct such distortions. Because the approaches are general and parametric, similar methods have been applied to a wide range of sensors, from very-high-resolution hyperspectral airborne data (1) to low-resolution data from meteorological satellites (2).

SPATIAL RESOLUTION

A critical aspect to be considered before any processing of remote-sensing data is spatial resolution. Currently available systems can produce images with resolutions ranging from a few centimeters (for very-high-resolution airborne sensors) to about tens of kilometers (for low-resolution satellite passive microwave sensors). Obviously, the techniques, approaches, and limitations are quite different for each resolution range. However, to solve a particular problem it is often necessary to merge data from different sensors with different resolutions, and then problems appear in handling the varying resolutions. An additional problem arises from the fact that high-spatial-resolution instruments generally have low temporal resolution, while medium-spatial-resolution sensors have greater temporal resolution, since the limiting factor is the total amount of data to be transmitted or stored on board.

The concept of spatial resolution is well defined, especially in optoelectronic systems. The use of this concept in remote sensing is, however, not always clear and is often confusing. For one of the sensors described in more detail in the literature, the multispectral scanner (MSS) on board Landsat, the resolution has been defined in terms of the geometric instantaneous field of view (IFOV), with values ranging from 73.4 m to 81 m depending on the analysis criteria. The IFOV response function is another criterion, giving values between 66 m and 135 m for the corresponding derived resolution. Other criteria are based on the concept of effective resolution, giving a range between 87 m and 125 m. Still other criteria, which take into account atmospheric spatial blurring and feature discrimination, assign resolutions ranging from 220 m to 500 m, depending on the final application. An aspect distinct from spatial resolution is pixel spacing in the image. The correspondence between resolution and pixel spacing is sometimes unclear, unless a complete spatial characterization of the sensor is available for the actual flight configuration. In fact, the spatial resolution of a sensor is a combination of the optical point spread function, the sampling, and the downstream electronics.

In the case of synthetic-aperture radar (SAR) data, the concept of spatial resolution is also somewhat confusing, since the resolution depends on the way the original raw data are processed. Typical available systems can provide a resolution of only a few meters. However, this resolution cannot be exploited in regular applications due to the presence of noise (or undesired signals). Two approaches are followed: spatial averaging and multilooking, which reduce the spatial resolution by increasing the pixel spacing, and local filtering, which reduces the spatial resolution while keeping the same pixel spacing. In both cases, the local filtering or averaging is performed over a window determined from the local level of noise. If the statistical properties of the data are such that the local entropy can be determined, optimal resolutions can be established. Otherwise, the effective resolution can be critical for meeting the requirements of the selected applications. Spatial-resolution considerations play a significant role in SAR data processing, especially in approaches based on statistical analysis.

GENERAL PARAMETRIC APPROACH FOR IMAGE GEOCODING

Many different approaches have been described in the literature for geometric registration or geocoding of remotely sensed data. Some of them are quite sensor-specific. Owing to the currently available technologies for global positioning, a quite general approach can be adopted, valid for satellites as well as aircraft and for most sensors, including optical and


SAR (2,3). First, the platform trajectory is determined; second, the attitude angles of the platform are calculated instantaneously; third, the viewing geometry of each particular sensor is related to the platform orientation, and the instantaneous viewing direction is derived, parametrized as a function of universal time t. Each point over the surface can be observed only at a given time t (Fig. 1). After the time t is derived, by solving a system of coupled nonlinear equations, the exact viewing geometry can be obtained for each point over the surface (map) or, alternatively, for each pixel in the image.
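The essence of this procedure can be illustrated with a deliberately simplified model: a flat Earth, a straight trajectory at constant speed, and a cross-track scanner, with all numeric values invented for the example (a sketch of the idea, not any real sensor geometry). The observation time t is found by solving f(t) = 0 numerically, here by bisection, and then determines the line and pixel coordinates:

```python
import numpy as np

# Toy flat-earth model; every constant here is an illustrative assumption.
V = 6.5e3          # ground-track speed, m/s
H = 800e3          # platform altitude, m
LINE_PERIOD = 0.1  # time between scan lines, s
IFOV = 1e-3        # angular sample spacing, rad

def observation_time(x_ground):
    """Solve f(t) = x_sat(t) - x_ground = 0 by bisection: the 1-D analogue
    of iterating on universal time t in the full parametric system."""
    f = lambda t: V * t - x_ground
    lo, hi = 0.0, 1e4
    for _ in range(60):                 # interval shrinks by 2^60
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def image_coordinates(x_ground, y_ground):
    """Map a ground point to (line, pixel) image coordinates."""
    t = observation_time(x_ground)
    line = t / LINE_PERIOD                  # observation time -> line number
    pixel = np.arctan2(y_ground, H) / IFOV  # viewing angle -> pixel number
    return line, pixel

print(image_coordinates(65000.0, 8000.0))
```

In the real problem f(t) couples orbit, attitude, and scan models, so the root solve is genuinely iterative, but the structure is the same: observation time fixes the line, and the instantaneous viewing direction fixes the pixel.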

Table 1. NASA NORAD Two-Line Orbital Element Set Format, Detailing the Orbital Information Available Daily for All Orbiting Satellites. For Many of Them, Additional, More Detailed Information Is Also Available from Other Specific Sources.

Line 1:
  01–01  Line number of element data
  03–07  Satellite number
  10–18  International designator (last two digits of launch year + launch number of the year + piece of launch)
  19–20  Last two digits of epoch year
  21–32  Julian day and fractional portion of the day
  34–43  First time derivative of the mean motion (or ballistic coefficient)
  45–52  Second time derivative of the mean motion
  54–61  Drag term, General Perturbation-4 (GP4) perturbation theory (or radiation pressure coefficient)
  63–63  Ephemeris type
  65–68  Element number

Line 2:
  01–01  Line number of element data
  03–07  Satellite number
  09–16  Inclination (degrees)
  18–25  Right ascension of the ascending node (degrees)
  27–33  Eccentricity
  35–42  Argument of perigee (degrees)
  44–51  Mean anomaly (degrees)
  53–63  Mean motion (revolutions per day)
  64–68  Revolution number at epoch (revolutions)

Additional numbers are added at the end of each line as a checksum. Example (ERS-2):
1 23560U 95021A 97188.14082342 .00000503 00000-0 20289-3 0 4065
2 23560 98.5461 262.6711 0001003 102.2708 257.8609 14.32248108115670

Figure 1. (a) Variables involved in the definition of local observation geometry: platform trajectory O ≡ (x(t), y(t), z(t)) and orientation [α(t), β(t), γ(t)] are functions of time t. Location P, with normal n, can only be observed at some time t′, which is determined by numerical solution of the system of parametric equations in t for the whole set of variables involved. (b) Steps in the geometrical processing of satellite remote-sensing data: satellite orbital information (ephemeris data) links the celestial (inertial) reference frame, the Earth-center reference frame, the orbital reference frame, and the attitude reference frame, connecting image coordinates (line, pixel) with geographical coordinates (x, y, z) or (h, ϑ, ϕ) through direct and inverse modeling.

Platform Trajectory Determination

The platform trajectory can be derived by different methods, and typically becomes known quite accurately for non-real-time data processing and less accurately for real-time applications.

Satellite Sensors: Orbital Mechanics. Most remote-sensing satellites are placed in quasi-polar orbits, because of the preferable observation repeatability, i.e., repeatable solar positions, and mainly because of their helio-synchronicity capabilities (4–6). Two types of satellite orbits must be distinguished: some satellites are kept in a given, predefined orbit by means of constant operations that compensate for deviations due to perturbations in the Earth's gravitational field; other satellites are allowed to drift as a consequence of gravitational and other (such as atmospheric drag) perturbations. In both cases, daily routine monitoring of satellite positions allows determination of instantaneous positions with relative accuracy (a few kilometers) on a global basis. Several sources, mostly military, are available for navigating satellite data by means of actual ephemeris information. NORAD [currently U.S. Space Command (USSC)] two-line orbital element (TLE) sets (also called NASA Prediction Bulletins) for all orbiting satellites, together with some documentation and software, are available via the World Wide Web and anonymous file transfer protocol from several sources, and they are updated almost daily. Each satellite is identified by a 22-character name (NORAD SATCAT name). A brief description of the format is included in Table 1 for reference; for more detailed information see Ref. 7. Other sources are also available. However, it is very important to point out that each ephemeris data set [TLE, Television Infrared Observation Satellite (TIROS) Bulletin United States (TBUS), etc.] is derived by means of a particular reduction of observations to some specific model (7–9). For each ephemeris type, the corresponding orbital extrapolation model has to be used in order to obtain results consistent with the model used to derive the orbital elements. Ephemeris types are not interchangeable; the use of orbital information with the wrong orbit extrapolation model is one of the major sources of problems in some current processing algorithms.

Due to recent advances in the global positioning system (GPS), very significant progress in the geometric aspects of satellite data processing is becoming possible, mainly due to the improved capabilities in positioning observation platforms. GPS (10) was developed by the U.S. Department of Defense as an all-weather, satellite-based system for military use, but it is now also available for civilian use, including scientific and commercial applications. The current GPS space segment includes 24 satellites, each traveling in a 12-h circular orbit 20,200 km above the Earth. The satellites are positioned so that about six are observable nearly all the time from any point on the Earth. There is also a GPS control segment of ground-based stations that perform satellite monitoring and command functions. The GPS satellites transmit ranging codes on two radio-frequency carriers at L-band frequencies, allowing the locations of the receivers (the user segment) to be determined with an accuracy of about 16 m in a stand-alone, real-time mode. By using corrections sent from another GPS receiver at a known location, a relative-position accuracy of 1 m can be obtained. Subcentimeter accuracies can be achieved by postprocessing, through simultaneous analysis of the dual-band carrier-phase data received from all GPS satellites by a global network of GPS receivers.
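The fixed-column layout of Table 1 makes TLE decoding a matter of string slicing. A minimal Python sketch (the dictionary keys are our own labels, only a subset of the fields is extracted, and the ERS-2 record is the example from Table 1 with its spacing restored to the standard 69-column layout):

```python
def parse_tle(line1, line2):
    """Extract a subset of TLE fields. Column ranges follow Table 1
    (1-based, inclusive), so a range a-b maps to the slice [a-1:b]."""
    return {
        "satellite_number": int(line1[2:7]),              # cols 03-07
        "international_designator": line1[9:18].strip(),  # cols 10-18
        "epoch_year": int(line1[18:20]),                  # cols 19-20
        "epoch_day": float(line1[20:32]),                 # cols 21-32
        "mean_motion_dot": float(line1[33:43]),           # cols 34-43
        "inclination_deg": float(line2[8:16]),            # cols 09-16
        "raan_deg": float(line2[17:25]),                  # cols 18-25
        "eccentricity": float("0." + line2[26:33]),       # cols 27-33, implied decimal point
        "arg_perigee_deg": float(line2[34:42]),           # cols 35-42
        "mean_anomaly_deg": float(line2[43:51]),          # cols 44-51
        "mean_motion_rev_day": float(line2[52:63]),       # cols 53-63
        "rev_at_epoch": int(line2[63:68]),                # cols 64-68
    }

# ERS-2 example from Table 1 (standard 69-column spacing restored)
l1 = "1 23560U 95021A   97188.14082342  .00000503  00000-0  20289-3 0  4065"
l2 = "2 23560  98.5461 262.6711 0001003 102.2708 257.8609 14.32248108115670"
elems = parse_tle(l1, l2)
print(elems["inclination_deg"], elems["mean_motion_rev_day"])
```

Note that extracting the fields is the easy part; as stressed above, the elements are only meaningful when propagated with the general-perturbation model they were fitted against.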
The use of GPS techniques in the processing of remote-sensing data is not limited to satellite positioning; they are also used to identify surface reference points and so increase the accuracy of automatic registration. For more recent satellites, such as the European Earth Remote Sensing satellites (ERS-1/2), a product called the precise orbit is provided by ESA in a specific format (after refinement of models plus observations with laser tracking data), with a nominal radial accuracy below 1 m. Other systems, like SPOT and TOPEX/Poseidon, use a combined system of satellite laser ranging and a dual-Doppler tracking system (DORIS) that allows determination of the satellite's position to within a few centimeters of the Earth's center. With the new GPS systems, accuracies in satellite positioning within 2.5 cm have been demonstrated. When all these techniques become fully operational on all new platforms, geometric processing of the data they acquire will be much easier and more accurate.

Airborne Sensors. The same positioning capabilities apply not only to satellite systems but also to airborne sensors. In the airborne case, geometric distortions are critical due to changes in aircraft flight patterns, especially for low-altitude flights. Moreover, no simplified orbital modeling is possible here; the trajectory must be kept in the form of (x, y, z) coordinates and the local trajectory reconstructed by means of polynomials in time t.
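As an illustration of the airborne case, the sketch below fits a low-order polynomial in time t to each coordinate of a sampled trajectory and evaluates the reconstructed position at an arbitrary time. The GPS fixes are synthetic values invented for the example; in practice they would come from the onboard GPS/INS log:

```python
import numpy as np

# Hypothetical GPS fixes along an aircraft track (synthetic values):
# times in seconds, positions (x, y, z) in meters.
t = np.linspace(0.0, 10.0, 21)
xyz = np.stack([1000.0 + 80.0 * t,             # along-track motion
                200.0 + 2.0 * t + 0.5 * t**2,  # slow cross-track drift
                3000.0 - 1.0 * t], axis=1)     # gentle descent

# Reconstruct the local trajectory as polynomials in time t,
# one low-order fit per coordinate.
coeffs = [np.polyfit(t, xyz[:, k], deg=3) for k in range(3)]

def trajectory(ti):
    """Reconstructed platform position at an arbitrary time ti."""
    return np.array([np.polyval(c, ti) for c in coeffs])

print(trajectory(4.3))
```

Fitting per coordinate keeps the model linear in the unknowns; for long or maneuvering flight lines the fit would be done piecewise over short time windows rather than globally.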


Platform Orientation and Sensor Geometry Once the platform trajectory is determined, a reference basis on each point of the trajectory must be defined. For instance, one axis can be chosen in the direction of the instantaneous velocity vector, and another axis can be normal to the plane defined by the velocity vector and the local geocentric or geodetic (see Ref. 11) direction, with the third axis completing the orthogonal reference. In this reference frame, each sensor on board the platform has specific viewing direction vectors, which are sensor-dependent, but are firmly linked to the platform orientation. A convenient way of optimizing computations is to reduce formulation to a given vector u that defines the local viewing direction. This vector changes with time. If selected properly, we can guarantee that variations in u can be described as small-angle rotations, the so-called attitude variations. If we rotate the vector u an angle around the direction given by a vector w, the components of u change according to w, σ )u u = (w w · u )w w + cos σ (w w × u ) × w + sin σ (w w × u) u = R(w (1) with u⬘ the transformed vector. The transformation R(w, ) can also be expressed as a three-axis rotation of angles 1 (pitch), 2 (roll), and 3 (yaw), around the three unitary vectors that define the instantaneous local orbital reference frame, as in the classical modeling of attitude angle effects w, σ ) = R(ee 3 , σ3 )R(ee2 , σ2 )R(ee 1 , σ1 ) R(w

(2)

Since the attitude angles are always very small, the order of the rotations is not very important. Moreover, Eq. (1) reduces, to first order, to

u′ = u + σ (w × u)

(3)
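A quick numerical check that the first-order form (3) approximates the full rotation (1) for small attitude angles can be sketched as follows (numpy is assumed; the vectors and the 1 mrad angle are illustrative values, not from the article):

```python
import numpy as np

def rotate(w, sigma, u):
    """Full rotation of u by angle sigma about unit vector w, as in Eq. (1)."""
    w, u = np.asarray(w, float), np.asarray(u, float)
    return (np.dot(w, u) * w
            + np.cos(sigma) * np.cross(np.cross(w, u), w)
            + np.sin(sigma) * np.cross(w, u))

def rotate_small(w, sigma, u):
    """First-order (small-angle) form, as in Eq. (3)."""
    return np.asarray(u, float) + sigma * np.cross(w, u)

w = np.array([0.0, 0.0, 1.0])   # rotation (e.g., yaw) axis
u = np.array([1.0, 0.0, 0.0])   # viewing direction
sigma = 1e-3                    # attitude angles are tiny (radians)

exact = rotate(w, sigma, u)           # -> (cos sigma, sin sigma, 0)
approx = rotate_small(w, sigma, u)    # differs from exact only at O(sigma^2)
```

The discrepancy between the two forms scales as σ², which is why the order of the three rotations in Eq. (2) is unimportant for realistic attitude angles.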

Parameterization of w and σ by means of three functions of time t (three, because w is unitary) gives a very adequate description of platform attitude variations; see Refs. 3 and 12 and references therein.

Determination of Instantaneous Surface Observation Geometry

Since all intervening variables are parameterized as functions of time t, the resulting approach requires solving a system of nonlinear equations in parametric form (13), with universal time (UT) as the iteration parameter. Once the observation time is obtained by solving the corresponding equations, the satellite position, platform orientation, and sensor angles can be determined immediately, which allows direct derivation of the exact pixel number in the image. The observation time also directly determines the line number of the image (shifts of the whole image data are sometimes necessary because of satellite internal clock drifts). Moreover, the observation time allows derivation of the instantaneous solar position, which makes exact pixel-by-pixel illumination corrections possible; this is critical for optical data, especially for mosaicking and multitemporal composites. In the case of SAR data the general parameterization is similar, but because of the way the synthetic image is generated, other concepts appear in the parameterization of SAR image


REMOTE SENSING GEOMETRIC CORRECTIONS

geometric corrections (slant range, Doppler frequency). Because of the way the image is generated, motion compensation schemes applied during the processing of the raw data are preferable to a posteriori geometric corrections of the resulting (flat) slant-range image, especially if topography plays a significant role. The approach just described is called inverse mapping, because the observation geometry (line, column) is derived for each ground point. An alternative is the so-called direct mapping, in which each image point is mapped into the surface cartographic projection. The first approach can result in oversampling, while the second can result in gaps. The second approach is typically more efficient and is useful if no topographic information is available; however, for adequate resampling procedures the inverse mapping is more appropriate.

POLYNOMIAL APPROXIMATIONS

Under some circumstances (small, almost linear distortions), the general image transformation functions

x′ = f(x, y),

y′ = g(x, y)

(4)

can be approximated as simple polynomials

x′ ≈ a₀ + a₁x + a₂y + a₃x² + a₄y² + a₅xy + ···
y′ ≈ b₀ + b₁x + b₂y + b₃x² + b₄y² + b₅xy + ···

(5)
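The first-order (affine) terms of Eq. (5) are usually estimated by least squares from a set of matched points. A minimal sketch, assuming numpy and invented control-point coordinates (the synthetic "truth" distortion is illustrative only):

```python
import numpy as np

# Hypothetical control points: (x, y) in the raw image, (xp, yp) in the reference.
x  = np.array([10.0, 200.0, 15.0, 180.0, 100.0, 60.0])
y  = np.array([12.0, 20.0, 210.0, 190.0, 105.0, 160.0])
# Synthetic truth: an affine distortion, i.e., the first-order terms of Eq. (5).
xp = 3.0 + 1.01 * x + 0.02 * y
yp = -2.0 + 0.015 * x + 0.99 * y

# Design matrix for x' = a0 + a1*x + a2*y (and likewise for y').
A = np.column_stack([np.ones_like(x), x, y])
a, *_ = np.linalg.lstsq(A, xp, rcond=None)   # recovers a0, a1, a2
b, *_ = np.linalg.lstsq(A, yp, rcond=None)   # recovers b0, b1, b2
```

Higher-order terms are added by appending x², y², xy, ... columns to the design matrix, subject to the numerical caveats discussed below.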

Note that this is nothing more than an approximation, useful only if few terms need to be retained for a given accuracy requirement; it cannot be considered a general geometric correction procedure, even if a high-degree polynomial is used. Typical distortions cannot be described by polynomials, and numerical problems arise for high-degree polynomials (a ninth-degree polynomial is close to the limit for double-precision calculations).

AUTOMATIC REGISTRATION TECHNIQUES

When the amount of data does not allow detailed processing of each single image, automatic registration techniques are used. The most widely used approach is based on linear cross-correlation, i.e., it is applicable only to almost-linear distortions. The essential idea is to define the transformation between two images in the form

I(x, y) → I(x + Δx, y + Δy) ↔ I′(x′, y′)

(6)

The linear parameters Δx and Δy are determined by iteratively maximizing the correlation between I(x, y) and I′(x′, y′). The procedure is accelerated by reducing the range of Δx, Δy and by working in a small window. The method is applicable only for small (linear) distortions, and it is typically used as a second step in the refinement of automatic procedures based on stand-alone geometrical models (using ephemeris or attitude data as the only input, without any reference point), by cross-registration to a reference image, or for images with very small geometric distortions (close-nadir viewing cameras with optical or electronic compensation of attitude deviation effects).
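An exhaustive search over integer shifts maximizing the correlation of Eq. (6) can be sketched as follows (numpy is assumed; the `best_shift` helper, the window size, and the random test image are all illustrative, and a real implementation would refine to subpixel shifts):

```python
import numpy as np

def best_shift(ref, img, max_shift=3):
    """Search integer (dx, dy) maximizing the correlation between ref
    and the shifted img, within a small search window (illustrative)."""
    best, best_dxdy = -np.inf, (0, 0)
    n = ref.shape[0]
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dx, axis=1), dy, axis=0)
            # Correlate only the central window to limit wrap-around effects.
            a = ref[max_shift:n - max_shift, max_shift:n - max_shift].ravel()
            b = shifted[max_shift:n - max_shift, max_shift:n - max_shift].ravel()
            c = np.corrcoef(a, b)[0, 1]
            if c > best:
                best, best_dxdy = c, (dx, dy)
    return best_dxdy

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
img = np.roll(ref, (1, -2), axis=(0, 1))   # ref shifted down 1 row, left 2 columns
shift = best_shift(ref, img)               # recovers the displacement
```

Restricting the search to a small window around the predicted position, as the text notes, is what makes the procedure affordable.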

More general transformations, such as

I(x, y) → I(λ(x + Δx), μ(y + Δy))

(7)

are possible, but extremely time-consuming and impractical unless λ and μ can be considered constant across the whole image.

THE NEED FOR GROUND CONTROL POINTS FOR ACCURATE REGISTRATION

Unfortunately, completely automated techniques still cannot provide accurate registration. Although the platform trajectory can be known very accurately, orientation angles and attitude changes cannot yet be known with the necessary accuracy, so that to achieve subpixel registration among different images, some reference points [ground control points (GCPs)] have to be identified and used in a refinement procedure. Satellite clock drifts and assumptions in the orbital model are additional sources of error that need to be compensated. The refinement procedures allow slight changes in the orientation directions of the platform or sensor, redefined by means of GCPs, and/or in the platform trajectory, to compensate for the errors observed when "nominal" values are used first. In other cases the refinement procedure is actually a second geometric correction (based on a polynomial transformation) applied after the image has been precorrected by a nominal full geometrical model. But the identification of GCPs in the image and on maps is not easy. The use of GPS for identification of GCPs has increased the accuracy of GCP techniques tremendously. Automatic correlation techniques can be applied only over boundaries (coasts, lakes, rivers), but the identification of GCPs in the image allows the use of single-point features in a very precise way. Still, the major problem in using GCPs is the lack of operationality, since the method necessarily requires the intervention of an operator, which not only slows the whole procedure but can also introduce subjective bias due to the different skills of each operator. Unfortunately, accurate registration still requires GCPs. The spectacular advances in platform positioning techniques in the past few years now make it possible to reduce the number of GCPs to an absolute minimum.
Determination of the instantaneous attitude angles will also soon be possible by means of differential GPS techniques, with different receivers located at the edges of the platform, but the accuracy is still insufficient for precise positioning, especially for very-high-resolution data. However, the major problems that make the use of GCPs necessary are the deficiencies in the cartographic modeling of the earth and the lack, in many cases, of elevation information for each GCP.

CARTOGRAPHIC PROJECTIONS

When images have to be transformed from the original acquisition geometry (typically not useful for most applications) to some cartographic reference, a specific map projection has to be chosen. The local maps available for each area define the type of cartographic projection, which differs from country to country. It is often difficult to identify the best projection for a given problem. To avoid such problems, in many cases the data are simply projected onto a latitude–longitude grid, where the relative latitude–longitude scale factor is compensated to make the resulting pixel (in kilometers over the surface) almost square. This is possible only for some reference latitude, which is chosen as the latitude of the central point in the reference area. One advantage of this projection, apart from its simplicity, is that most applications require computation of derived quantities that are given as functions of latitude and longitude, so such computations are easy on a latitude–longitude grid. For cartographic applications, however, local references are needed. Fortunately, computational tools are available that allow transfer of data from any cartographic projection to any other by means of mathematical transformations. The problem is then the resampling of the data. Each time the geometry of the data is changed, resampling is needed, with the unavoidable loss of some information and the introduction of interpolation artifacts. Single-step procedures from the original acquisition geometry to the final cartographic product, using a single resampling of the data, are always preferable.
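The latitude–longitude scale compensation mentioned above amounts to stretching the longitude step by 1/cos(latitude) at the reference latitude. A minimal spherical-earth sketch (the function name and the 6371 km radius are illustrative assumptions, not from the article):

```python
import math

def lonlat_steps(lat_ref_deg, pixel_km, earth_radius_km=6371.0):
    """Latitude/longitude grid steps (degrees) giving approximately square
    pixels of side pixel_km at the reference latitude (spherical sketch)."""
    dlat = math.degrees(pixel_km / earth_radius_km)
    dlon = dlat / math.cos(math.radians(lat_ref_deg))
    return dlat, dlon

dlat, dlon = lonlat_steps(45.0, 1.0)   # 1 km pixels, reference latitude 45 deg
```

Away from the reference latitude the pixels gradually lose squareness, which is why this simple grid is only adequate over limited areas.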

THE PROBLEM OF TOPOGRAPHIC DISTORTIONS

Two effects have to be taken into account. On the one hand, given a sensor altitude over a reference surface (typically the earth ellipsoid), the effect of varying target altitude over the same reference surface is to introduce a horizontal displacement ΔX [see Fig. 2(a)] with additional geometric distortion, plus a change in the sensor–target distance, relevant for radiometric corrections and with significant effects in radar data processing. On the other hand, the change from a flat to a rugged surface alters the effective illumination angle and the effective scattering area, and adds reflections from adjacent slopes that perturb the desired signal from the target [Fig. 2(b)].

Figure 2. Effects introduced by topography in the geometric processing of remote-sensing data. (a) Geometric distortions due to relief for off-nadir viewing geometry, including the horizontal displacement ΔX of apparent positions, the change in sensor–target distance D, and the change in the local illumination angle α′ with respect to the nominal illumination angle α for a horizontal surface. (b) Radiometric distortions due to changes in illumination angle and viewed area, as well as additional reflections from adjacent slopes, for topography (bottom) as compared with a flat surface (top).

The first effect is purely geometric and can easily be accounted for, provided enough information about the earth's topographic structure (a digital terrain model) is available. The second is much more difficult to correct, and only first-order effects are usually compensated in correction approaches. The parametric geometric model allows direct incorporation of topographic effects, especially in the inverse mapping approach. Several techniques have been suggested to include (at least first-order) topographic corrections in polynomial models; these are applicable only in areas with small topographic distortions, as for close-nadir viewing or limited altitude changes across the area. For more general cases, a full 3-D geometrical model is required to account for the geometric projection of objects onto the plane perpendicular to the viewing direction. Rigorous modeling of the radiometric effects of topography is quite difficult: at least the local slope and orientation, plus the local horizon line, have to be determined for each point in the image. Each viewing/illumination condition determines a changing geometry in the resulting scene. Although most studies consider very simple approaches to describe radiation exchanges in rugged terrain, other approaches take full advantage of computer graphics technologies for a realistic description of the intervening effects.
Ways to speed up the calculations while still keeping a realistic description of the major intervening effects have been developed (14), but a proper description of the effects introduced by topography, suitable for correction/normalization of the data for such perturbations, remains an open issue.
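The purely geometric first effect of Fig. 2(a) can be illustrated with a flat-earth sketch: for a target of height h over the reference surface viewed at an off-nadir angle θ, the apparent position is displaced horizontally by roughly ΔX = h·tan θ. This simplified relation and the example numbers are illustrative assumptions, not taken from the article:

```python
import math

def horizontal_displacement(h_target, view_angle_deg):
    """Horizontal displacement (same units as h_target) of the apparent
    position of a target of height h_target over the reference surface,
    for an off-nadir viewing angle; flat-earth sketch: dX = h * tan(angle)."""
    return h_target * math.tan(math.radians(view_angle_deg))

dx = horizontal_displacement(500.0, 30.0)   # e.g., 500 m of relief, 30 deg off-nadir
```

Even this crude estimate shows why a digital terrain model is indispensable for off-nadir sensors: a few hundred meters of relief displaces the apparent position by a comparable distance on the ground.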

RESAMPLING

Once the remote-sensing data can be located geographically or registered over a spatial database, resampling techniques become the next critical issue. The simplest method is the nearest-neighbor technique, which simply assigns to each new pixel the value of the closest pixel in the original image, without any recalculation. But many more advanced techniques have been developed (15–20). Assign a value of 1.0 to the central processing unit (CPU) time of the nearest-neighbor algorithm, considered as the basic procedure for a typical geometric processing chain including registration, UTM projection, and resampling. The relative increase in CPU time required by different interpolation algorithms applied in the resampling is then only a few percent for "standard" interpolation approaches: bilinear (1.02), cubic convolution (1.07), cubic B-spline (1.10). However, if sensor-specific optimum-interpolation approaches are used (see Refs. 18 and 20), the CPU time increases drastically: analytical optimum interpolation (7.7), fully numerical optimum interpolation (360.5). Even if the more sophisticated processing achieves more accurate results, an increase in CPU time by a factor of 360 is not acceptable for operational use; analytical simplifications are more practical and still give reasonable accuracies (18). Resampling considerations become critical only

for multisensor studies, especially when data of very different spatial resolutions have to be merged. In all optimum-interpolation methods, the basic idea is not to increase the apparent resolution but to provide interpolated values for the new pixels resulting from geometrical transformations. Another type of resampling is specifically oriented to creating "super-resolution" products by enhancement of the high-frequency content of the image. This is possible with the help of data at two or more resolutions. Some recent approaches use many different low-resolution images of a given area, each acquired under a slightly different viewing geometry (with only partial overlaps among the IFOVs of the different images, or images shifted with respect to each other by a small fraction of the IFOV). In this way high-resolution information is reconstructed by iterative processing of the multiple views (21,22). Super-resolution resampling techniques have been used intensively to increase the usefulness of low-resolution passive microwave or scatterometer data, for which the spatial resolution is typically very poor but many views are available for each area. Although these techniques are still in early stages of development, they appear to be really successful only in areas with highly contrasted spatial substructures.

CALIBRATION AND REMOVAL OF TECHNICAL ANOMALIES

Several sensor-specific technical anomalies have to be considered in the geometric processing (23). Calibration typically refers to radiometric calibration; in the case of SAR data, it covers not only radiometric but also some geometric aspects. Correct interpretation of SAR data requires deconvolution of the signal from artifacts due to the antenna gain pattern, which are directly related to local variations in the incidence angle for each observed area. Calibration also means intersensor normalization.
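Returning to the interpolation approaches compared under Resampling, the two cheapest kernels (nearest neighbor and bilinear) can be sketched for a single output pixel as follows. This is an illustrative numpy sketch, with invented function names and a toy 2 × 2 image; operational code would vectorize over the whole output grid and handle image borders:

```python
import numpy as np

def nearest(img, xf, yf):
    """Nearest neighbor: take the closest original pixel, no recalculation."""
    return img[int(round(yf)), int(round(xf))]

def bilinear(img, xf, yf):
    """Bilinear: weight the four surrounding pixels by fractional distance."""
    x0, y0 = int(np.floor(xf)), int(np.floor(yf))
    tx, ty = xf - x0, yf - y0
    return ((1 - tx) * (1 - ty) * img[y0, x0] + tx * (1 - ty) * img[y0, x0 + 1]
            + (1 - tx) * ty * img[y0 + 1, x0] + tx * ty * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
v_nn = nearest(img, 0.4, 0.4)    # snaps to pixel (0, 0)
v_bl = bilinear(img, 0.5, 0.5)   # averages the four corners
```

The extra arithmetic of the bilinear kernel is what produces the small (about 2%) CPU-time increment quoted above relative to nearest neighbor.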
Only a few sensors acquire images on a strictly pixel-by-pixel basis [the AVHRR (Advanced Very High Resolution Radiometer) is a typical example]. In the case of the Landsat Thematic Mapper (TM), 16 lines are acquired simultaneously by an array of detectors. In the case of the Satellite Pour l'Observation de la Terre (SPOT), each whole line is acquired simultaneously by charge-coupled device (CCD) arrays. Electrooptical multidetectors, with CCD technology and advanced optical-fiber devices, will be the common technology for sensors in the near future. Since the behavior of the multiple detectors composing the final image is not identical, intersensor normalization or recalibration is strictly required to avoid artifacts in the image (vertical strips of different intensity, horizontal striping, and irregular spatial noise). Additional problems are due to the scanning devices, especially for dual-scan systems (forward and reverse) such as Landsat TM. Nonlinear figure-8-shaped distortions in the line geometry must be optically compensated, but the local geometric distortions that result in the images are quite difficult to remove.

SPATIAL MOSAICKING

Satellite data acquisitions are typically made along strips of limited width. The width varies from hundreds of meters, for very-high-resolution systems, to thousands of kilometers, for low-resolution systems. The reason for this variation is mainly the limited capability of data transmission from satellites to Earth: since the total data volume is limited, an increase in spatial coverage comes at the expense of a reduction in spatial resolution. For many applications, mainly those requiring high-resolution data, a single strip is not enough to cover the study area, and several images have to be "mosaicked" to make a single image of the area. A large image area is defined and all pixels are set to zero; then each single image is referenced over the large frame background. Where two single images overlap, a decision has to be made about how to combine both pixels to define the unique value in the mosaic. Accurate geometric registration of each single image forming the mosaic is not enough to make the mosaic look like a single image. The single images are acquired under different viewing geometries, and illumination corrections are needed to avoid artifacts at the boundaries between them. Since the images are acquired at different times, motion or changes in targets (e.g., clouds) can result in discontinuities. Simple image-processing techniques (local histogram equalization plus local cross-correlation and linear composites across overlaps) are often used to improve the appearance. However, physically based methods are preferred to compensate for the perturbing effects, especially if the data are to be used in numerical studies or as input to physical models after the mosaic images have been produced.

Figure 3. Schematic diagram illustrating the whole processing chain for NOAA AVHRR data (orbital data and extrapolation, radiometric calibration, cloud screening and classification, atmospheric and emissivity corrections, geometric correction and resampling over a digital elevation model, topographic normalization, derivation of surface temperature, NDVI, and albedo, multitemporal composites, and database management). Similar processing schemes are used for most remote-sensing data systems. SGP = simplified general perturbation; NDVI = normalized difference vegetation index; wv = water vapor; aeros = aerosols.

MULTITEMPORAL COMPOSITING

Monitoring the surface condition by remote sensing implies the use of multitemporal data. Obviously, the geometric registration among all the multidate images must be within one pixel for the composites to be meaningful. A critical issue is the necessity of accounting for the illumination dependence of the measured data.
It is therefore necessary during the geometric processing to keep track not only of the observation angles but also of the illumination angles, which are absolutely critical in the case of passive optical data. In the case of SAR data this is no longer a serious limitation. This illumination independence of SAR data, together with its cloud-transparency properties, is one of the two major reasons for the superior capabilities of SAR remote sensing over optical systems for operational all-weather monitoring. However, proper corrections of antenna-gain-pattern effects, which vary with the local incidence angle, are needed, especially over topographically structured areas, to avoid artifacts due simply to changing illumination conditions in multitemporal studies.

OPERATIONAL PROCESSING AND DATA VOLUME CONSIDERATIONS

Since remote-sensing data are used in practical applications, operational constraints limit the use of sophisticated algorithms, which in many cases require an unacceptable amount of computer time or memory management. Most of the considerations noted previously are highly limited by the amount of data to be processed, which points to the need for simplified algorithms and numerically optimized techniques. Although computer technology has seen tremendous improvements in processing speed and memory capability, the increase in the amount of

remote-sensing data to be processed, together with the growing sophistication of the processing algorithms, means that limitations remain. The optimum compromise between accuracy requirements and practicality should be found for each particular application by optimizing the codes and by advanced memory management in the processing facilities. AVHRR data have become a typical example, partly because of the many applications of these data, owing to their low cost and global availability, and partly because of the peculiar characteristics of the system, with highly nonlinear geometric distortions due to the panoramic view and circular scanning. Figure 3 indicates the many steps involved in the whole AVHRR data-processing scheme; most processing schemes for other sensors or systems follow similar steps. The developments in AVHRR data processing (3,12) are a good example of how improvements in data-processing techniques can drastically increase the usefulness of data in many new potential applications. However, the future is really challenging. The Earth Observing System (EOS) platforms will provide data at a rate of 13.125 Mbyte/s for the first EOS platform and slightly higher rates for the later series. Similar or even higher rates are expected for other systems, especially those using active sensors such as SAR. These data rates represent a real challenge for current computational algorithms and hardware technologies (24).
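As a quick check of the scale implied by the quoted EOS rate, the daily volume follows from simple arithmetic (the gigabyte conversion by 1024 is an illustrative convention):

```python
# Rough daily volume implied by the quoted 13.125 Mbyte/s EOS data rate.
rate_mb_s = 13.125
seconds_per_day = 24 * 3600
daily_gb = rate_mb_s * seconds_per_day / 1024.0   # on the order of 1.1 Tbyte/day
```

Sustaining roughly a terabyte per day per platform is what makes algorithmic simplification and careful memory management unavoidable in operational facilities.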

BIBLIOGRAPHY

1. P. Meyer, A parametric approach for the geocoding of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data in rugged terrain, Remote Sens. Environ., 49: 118–130, 1994.
2. H. De Groof, G. De Grandi, and A. J. Sieber, Geometric rectification and geocoding of JPL's AIRSAR data over hilly terrain, Proc. 3rd Airborne Synthetic Aperture Radar Workshop, J. J. van Zyl (ed.), NASA JPL Pub. 91-30, 1991, pp. 195–204.
3. J. Moreno and J. Meliá, A method for accurate geometric correction of NOAA AVHRR HRPT data, IEEE Trans. Geosci. Remote Sens., 31: 204–226, 1993.
4. R. R. Bate, D. D. Mueller, and J. E. White, Fundamentals of Astrodynamics, New York: Dover, 1971.
5. K. I. Duck and J. C. King, Orbital mechanics for remote sensing, in R. N. Colwell (ed.), Manual of Remote Sensing, 2nd ed., Falls Church, VA: American Society of Photogrammetry, 1983, vol. I, chap. 16, pp. 699–717.
6. A. E. Roy, Orbital Motion, 3rd ed., Bristol: Hilger, 1988.
7. F. R. Hoots and R. L. Roehrich, Models for propagation of NORAD element sets, Spacetrack Rep. No. 3, NORAD, Aerospace Defense Command, Peterson AFB, CO, 1980.
8. D. Brouwer, Solution of the problem of artificial satellite theory without drag, Astron. J., 64: 378–397, 1959.
9. P. R. Escobal, Methods of Orbit Determination, New York: Wiley, 1965.
10. E. D. Kaplan (ed.), Understanding GPS: Principles and Applications, Norwood, MA: Artech House, 1996.
11. J. Morrison and S. Pines, The reduction from geocentric to geodetic coordinates, Astron. J., 66: 15–16, 1961.
12. G. W. Rosborough, D. G. Baldwin, and W. J. Emery, Precise AVHRR image navigation, IEEE Trans. Geosci. Remote Sens., 32: 644–657, 1994.

13. W. H. Press et al., Numerical Recipes, Cambridge, UK: Cambridge Univ. Press, 1986.
14. J. Dozier and J. Frew, Rapid calculation of terrain parameters for radiation modeling from digital elevation data, IEEE Trans. Geosci. Remote Sens., 28: 963–969, 1990.
15. R. Bernstein, Digital image processing of Earth observation sensor data, IBM J. Res. Develop., 20: 40–57, 1976.
16. R. Bernstein et al., Image geometry and rectification, in R. N. Colwell (ed.), Manual of Remote Sensing, 2nd ed., Falls Church, VA: American Society of Photogrammetry, vol. I, chap. 21, pp. 873–922, 1983.
17. R. G. Keys, Cubic convolution interpolation for digital image processing, IEEE Trans. Acoust. Speech Signal Process., 29: 1153–1160, 1981.
18. J. Moreno and J. Meliá, An optimum interpolation method applied to the resampling of NOAA AVHRR data, IEEE Trans. Geosci. Remote Sens., 32: 131–151, 1994.
19. S. K. Park and R. A. Schowengerdt, Image reconstruction by parametric cubic convolution, Comput. Vision Graphics Image Process., 23: 258–272, 1983.
20. G. A. Poe, Optimum interpolation of imaging microwave radiometer data, IEEE Trans. Geosci. Remote Sens., 28: 800–810, 1990.
21. D. Baldwin, W. Emery, and P. Cheeseman, Higher resolution Earth surface features from repeat moderate resolution satellite imagery, IEEE Trans. Geosci. Remote Sens., 36: 244–255, 1998.
22. P. Cheeseman et al., Super resolved surface reconstruction from multiple images, NASA-AMES Tech. Rep. FIA-94-12, 1994.
23. P. N. Slater, Remote Sensing: Optics and Optical Systems, Reading, MA: Addison-Wesley, 1980.
24. M. Halem, Scientific computing challenges arising from spaceborne observations, Proc. IEEE, 77: 1061–1091, 1989.

JOSE F. MORENO University of Valencia

RENDERING (COMPUTER GRAPHICS). See GLOBAL ILLUMINATION.

RENEWABLE ENERGY. See OCEAN THERMAL ENERGY CONVERSION.

RENEWABLE SYSTEMS. See REPAIRABLE SYSTEMS.



Wiley Encyclopedia of Electrical and Electronics Engineering Visible and Infrared Remote Sensing Standard Article William J. Emery1 1University of Colorado, Boulder, CO Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W3616 Article Online Posting Date: December 27, 1999 Abstract | Full Text: HTML PDF (753K)


Abstract. The sections in this article are: Development of Meteorological Satellites; The AVHRR; Data Ingest, Processing, and Distribution; NOAA's Comprehensive Large Array-data Stewardship System (CLASS); National Polar-orbiting Operational Satellite System (NPOESS); NASA's MODerate resolution Imaging Spectroradiometer (MODIS); The Geostationary Operational Environmental Satellite (GOES); Development of Earth Resources Technology Satellites; The French Spot Satellites; Summary.


VISIBLE AND INFRARED REMOTE SENSING

The earliest satellite remote sensing information was supplied by video-style cameras on polar-orbiting satellites. Measuring an integral over a wide range of visible wavelengths, these cameras provided a new perspective on the study of the Earth, making it possible to view from space features that had previously been studied only from the Earth's surface itself. One of the most obvious applications of these satellite data was monitoring clouds and their motion as indicators of changes in weather patterns. This application to meteorology was in fact the primary driving force in the early development of satellite remote sensing. As we matured from polar-orbiting to geostationary satellites, our ability to monitor atmospheric conditions improved greatly, resulting in modern systems that give us very-high-resolution images of atmospheric pattern changes in time and space. Early (1960s) weather satellites were spin-stabilized, with their axis and camera pointed at the Earth over the United States, meaning that the satellite could view the Earth only at the latitudes of North America. Today almost everyone on Earth is accustomed to seeing frequent satellite images collected by geostationary satellites as part of their local weather forecast. This is a very dramatic change that has taken place in slightly over 30 years. As the meteorological satellites improved, it became apparent that satellite data were useful for a great many other disciplines. Some of these other applications used data from the meteorological satellites directly, while others awaited new sensors and satellite systems. During those developmental times the National Aeronautics and Space Administration (NASA) sponsored the NIMBUS program, in which a common satellite "bus" was used to carry into space various instruments requiring testing and evaluation.
This excellent program ran for about 20 years and was amazingly successful in flying and demonstrating the capabilities of a variety of sensors. One of its greatest successes was the start of the land-surface remote sensing satellite series known as the LANDSAT series. The first few of these, starting in 1972, actually used the NIMBUS platform as the spacecraft. Starting with LANDSAT-4 and -5, a new bus was built just for this satellite series. The operation of LANDSAT was privatized in the 1980s, and unfortunately the privately developed LANDSAT-6 failed on launch. LANDSAT-7 was planned to launch in 1998; once again a NASA satellite will usher in a new phase in the study of land-surface remote sensing images. Land-surface sensing satellites have not been exclusively American. Also designed to measure over the same wavelengths as LANDSAT was the French Système Pour l'Observation de la Terre (SPOT) satellite system. The series began with the operation of SPOT-1 on February 22, 1986, and continues today. Companies have been set up to process and sell the SPOT satellite data products.

DEVELOPMENT OF METEOROLOGICAL SATELLITES NASA’s Television Infrared Observation Satellite (TIROS1) (1), launched on 1 April 1960, gave us our ﬁrst systematic

images of Earth from space. Its single television camera was aligned with the axis of this spin-stabilized satellite, which meant that it could point at the Earth only for a limited time each orbit (which naturally collected pictures for the latitudes of North America). This experimental satellite series eventually carried a variety of sensors, evolving as technology and experience increased. Working together, NASA and the Environmental Science Services Administration (ESSA), merged into the National Oceanic and Atmospheric Administration (NOAA) at the latter's formation in 1970, stimulated improved designs. TIROS-1 through TIROS-X contained simple television cameras, while four of the ten satellites also included infrared sensors. One interesting development was the change in location of the camera from the spin axis of the satellite to pointing outward from this central axis. The satellite axis was also turned 90° so that its side, rather than the central axis, pointed toward the Earth. Called the "wheel" satellite, this new arrangement allowed the camera to collect a series of circular images of the Earth, which when mosaicked together provided the first global view of the Earth's weather systems from space. Using this wheel concept, cooperation between NASA and ESSA initiated the TIROS Operational System (TOS), first launched in 1966. Odd-numbered satellites carried improved Vidicon cameras and data storage/replay systems that provided global meteorological data; even-numbered satellites provided direct-readout Automatic Picture Transmission (APT) video to low-cost VHF receiving stations. APT, now derived from Advanced Very High Resolution Radiometer (AVHRR) imagery, is still provided to thousands of simple stations in schools, on ships, and elsewhere worldwide. Nine wheel satellites, called ESSA-1 through ESSA-9, were launched between 1966 and 1969.
The 1970s saw the Improved TOS (ITOS), which combined APT and global data collection/recording in each satellite. The major improvement was the use of guidance systems developed for ballistic missiles, which made it possible to stabilize the spacecraft about all three axes. A single camera could thus be aimed continuously at the Earth, eliminating the need to assemble a series of circular images to map the world's weather. ITOS also introduced day/night acquisitions and a new series of Scanning Radiometers (SRs), which offered vastly improved data. Later ITOS satellites carried the Very High Resolution Radiometer (VHRR). As part of international weather data exchange, NOAA introduced the direct reception of VHRR data, at no charge, by ground stations built by an increasing number of users, beginning in 1972. ITOS-1 and NOAA-1, launched in 1970, were transition satellites of the ITOS series, whereas NOAA-2 through NOAA-5, launched in 1972-1976, carried the VHRR instrument.

The latest generation of this series has been operational since 1978. TIROS-N (for TIROS-NOAA) and NOAA-7 through the latest NOAA-12/NOAA-14 include the AVHRR, discussed in the following section. The major advance introduced with this satellite series was the shift from an analog data relay to a fully digital system: the data are now digitized onboard the spacecraft before being transmitted to Earth. The size and weight of the satellite have also changed, from under 300 kg with the ESSA series of satellites to over 1200 kg with the TIROS-N satellites.

J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 2007 John Wiley & Sons, Inc.

Figure 1. Advanced TIROS-N satellite.

There has been one change in the TIROS-N series, and we now have the Advanced TIROS-N (A-TIROS-N), shown in Fig. 1, along with the advanced AVHRR, the AVHRR/2. The primary difference in the AVHRR/2 is the addition of a second thermal infrared band to help correct for water vapor attenuation when computing sea surface temperature. The two polar orbiters flying at the time this was first written were NOAA-12 (morning orbit) and NOAA-14 (afternoon orbit). Usually the even-numbered satellites fly in the morning and the odd-numbered satellites fly in the afternoon; because of the premature failure of NOAA-13, it was necessary to put NOAA-14 in the afternoon orbit. A comparison between NOAA-12 and NOAA-14 is given in Table 1. We are now (2006) flying NOAA-18, with one satellite (NOAA-N') left to be launched in this series.

THE AVHRR

Throughout the developmental process, NOAA has followed a philosophy of meeting operational requirements with instruments whose potential has been proven in space. Predecessor instruments were flown on experimental satellites before they were accepted and implemented on operational monitoring satellites. These instruments were redesigned to meet both the scientific and technical requirements of the mission; the goal of the redesign was to improve the reliability of the instrument and the quality of the data without changing the previously proven measurement concepts (2). This philosophy brings both benefits and challenges to the user. The benefits are relative reliability, conservative technology, and continuity of access to and application of the data compared with other satellite systems. The challenges include desires to use the system beyond its original design, which could have been advanced more rapidly only at considerably more cost and/or risk to the continuity of the data characteristics.
The challenges also include conflicting desires by users for greater support for their own particular scientific disciplines, with more advanced sensors and more sophisticated customer support (while often also desiring even lower-cost imagery), from NOAA.

AVHRR's ancestors were the Scanning Radiometers (SRs), first orbited on ITOS-1 in 1970. These early SRs had a relatively low spatial resolution (8 km) and fairly low radiometric fidelity. The VHRR was the first improvement over the SR and for a while flew simultaneously with it. Later the VHRR was replaced by the AVHRR, which combined the high-resolution and monitoring functions. There are two early series of AVHRR instruments. Built by ITT Aerospace/Optical Division in the mid-1970s, the AVHRR/1 is a four-channel, filter-wheel spectrometer/radiometer, while the AVHRR/2, built in the early 1980s, is identical except for the addition of a second longwave channel (5). The AVHRR instrument is made up of five modules: the scanner module, the electronics module, the radiant cooler, the optical system, and the base plate. Schwalb (3, 4) and ITT (5) provide detailed descriptions of AVHRR hardware. Starting in November 1998, a new version called the AVHRR/3 (Table 2) was flown on NOAA-15. In this modification of the sensor, the mid-wavelength infrared channel is switched from approximately 1.6 µm during the day to the previous 3.7 µm at night; the sensor thus switches this channel approximately every 45 min during its roughly 90 min orbit. This switch was introduced to provide new snow and ice detection during the day without a major change in the data downlink format. AVHRR channels 1 and 2 are calibrated before launch and designed to provide a direct, quasi-linear conversion between the 10-bit digital numbers and albedo. In addition,


Table 2. AVHRR/3 Channel Characteristics

Band           Wavelength (µm)   Nadir Resol. (km)   Samples per Scan   Typical Use
Band 1 (VIS)   0.58 to 0.68      1.1                 2048               Daytime cloud and surface mapping
Band 2 (NIR)   0.725 to 1.1      1.1                 2048               Land-water boundaries
Band 3A (NIR)  1.58 to 1.64      1.1                 2048               Snow and ice detection
Band 3B (MIR)  3.55 to 3.93      1.1                 2048               Night cloud mapping, sea surface temperature
Band 4 (TIR)   10.3 to 11.3      1.1                 2048               Night cloud mapping, sea surface temperature
Band 5 (TIR)   11.5 to 12.5      1.1                 2048               Sea surface temperature

the thermal channels are designed and calibrated before launch as well as in space (using an internal blackbody at a measured temperature and a view of cold space as a near-zero-radiance reference) to provide a direct, quasi-linear conversion between digital numbers and temperature in degrees Celsius. As the thermal infrared channels were optimized for measuring the skin temperature of the sea surface, their ranges are approximately −25°C to +49°C for channel 3, −100°C to +57°C for channel 4, and −105°C to +50°C for channel 5 for a typical NOAA-11 scene.
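The two-point scheme described above can be sketched as follows. The radiation constants and the Planck inversion are standard physics; the counts, blackbody temperature, and channel wavenumber are illustrative stand-ins, not flight calibration coefficients.

```python
# Hedged sketch of AVHRR-style two-point thermal calibration: an onboard
# blackbody at a known temperature and a deep-space view (taken as ~zero
# radiance) fix a linear mapping from 10-bit counts to radiance; brightness
# temperature then follows from the inverse Planck function at the channel's
# central wavenumber. All numeric inputs below are illustrative.
import math

C1 = 1.1910427e-5   # mW/(m^2 sr cm^-4), first radiation constant
C2 = 1.4387752      # cm K, second radiation constant

def planck_radiance(temp_k, wavenumber):
    """Blackbody radiance at temp_k (K) for the given wavenumber (cm^-1)."""
    return C1 * wavenumber**3 / (math.exp(C2 * wavenumber / temp_k) - 1.0)

def brightness_temp(radiance, wavenumber):
    """Invert the Planck function to recover temperature (K) from radiance."""
    return C2 * wavenumber / math.log(1.0 + C1 * wavenumber**3 / radiance)

def calibrate(dn_scene, dn_space, dn_blackbody, t_blackbody, wavenumber):
    """Two-point linear DN -> radiance -> brightness temperature."""
    rad_bb = planck_radiance(t_blackbody, wavenumber)
    slope = rad_bb / (dn_blackbody - dn_space)   # space radiance taken as ~0
    radiance = slope * (dn_scene - dn_space)
    return brightness_temp(radiance, wavenumber)

# Hypothetical counts for a channel-4-like band near 925 cm^-1
t = calibrate(dn_scene=520, dn_space=990, dn_blackbody=400,
              t_blackbody=290.0, wavenumber=925.0)  # roughly 276 K here
```

Note that thermal counts fall as radiance rises, so the slope is negative; the same two-point form handles either sign.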

DATA INGEST, PROCESSING, AND DISTRIBUTION

There are four classes of AVHRR data: (1) High Resolution Picture Transmission (HRPT) data are full-resolution (1 km) data received directly in real time by ground stations; (2) Global Area Coverage (GAC) data are sampled onboard to represent a 4.4 km pixel, allowing daily global coverage to be systematically stored and played back to NOAA ground stations at Wallops Island, Virginia, and Fairbanks, Alaska, and a station at Lannion, France, operated by the Centre National d'Études Spatiales (CNES); (3) Local Area Coverage (LAC) data are 1 km data recorded onboard for later replay to the NOAA ground stations; and (4) Automatic Picture Transmission (APT) is an analog derivative of HRPT data transmitted at lower resolution and higher power for low-cost VHF ground stations. Kidwell (6) provides a handbook for users of AVHRR data. Special acquisitions of LAC data may be requested by anyone (7). HRPT, LAC, and GAC data are received by the three stations just mentioned and processed at NOAA facilities in Suitland, Maryland. In addition, relatively low-cost direct-readout stations can be set up to read the continuously broadcast HRPT data. Further information about these data, and about NOAA's online cataloging and ordering system, can be obtained from:

National Oceanic and Atmospheric Administration
National Environmental Satellite, Data, and Information Service (NESDIS)
National Climatic Data Center (NCDC)
Satellite Data Services Division (SDSD)
Princeton Executive Square, Suite 100
Washington, DC 20233
Tel: (301) 763-8400
FAX: (301) 763-8443
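The GAC reduction in item (2) is usually described as averaging four adjacent 1.1 km samples and skipping the fifth along each scan line, processing only every third line; that is how four 1.1 km samples become the nominal ~4 km GAC pixel. A minimal sketch under that description (array sizes illustrative, not the onboard implementation):

```python
# Hedged sketch of AVHRR Global Area Coverage (GAC) on-board reduction as it
# is commonly described: average 4 adjacent full-resolution samples, skip the
# 5th, and process only every third scan line. With 2048 samples per line,
# this yields 409 GAC pixels per processed line.

def gac_reduce(scan_lines):
    """scan_lines: list of per-line lists of full-resolution samples."""
    reduced = []
    for line in scan_lines[::3]:                  # keep every third scan line
        out = []
        for i in range(0, len(line) - 4, 5):      # 5-sample groups across scan
            out.append(sum(line[i:i + 4]) / 4.0)  # average 4, skip the 5th
        reduced.append(out)
    return reduced
```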

Users of large quantities of data may be able to obtain them more rapidly from NOAA by making the appropriate individual arrangements with:

Chief, NOAA/NESDIS Interactive Processing Branch (E/SP22)
Room 510, World Weather Building
Washington, DC 20233
Tel: (301) 763-8142


NOAA'S COMPREHENSIVE LARGE ARRAY-DATA STEWARDSHIP SYSTEM (CLASS)

What was discussed earlier as the Satellite Active Archive (SAA) has been subsumed by NOAA's new and comprehensive CLASS system. CLASS is a web-based archive and distribution system for NOAA's environmental data. It is NOAA's premier online facility for distributing NOAA and U.S. Department of Defense (DoD) operational environmental satellite data (geostationary and polar: GOES, POES, and DMSP) and derived data products. CLASS is evolving to support additional satellite data streams, such as MetOp, EOS/MODIS, NPP, and NPOESS; NOAA's in situ environmental sensors, such as NEXRAD, USCRN, COOP/NERON, and oceanographic sensors and buoys; and geophysical and solar environmental data. CLASS is in its second year of a major 10-year growth program, adding new data sets and functionality to support a broader user base. To learn more about CLASS, visit the online system at http://www.class.noaa.gov.

CLASS Goals

The CLASS project is being conducted in support of the NESDIS mission to acquire, archive, and disseminate environmental data. NESDIS has been acquiring these data for more than 30 years from a variety of in situ and remote sensing observing systems throughout NOAA and from a number of its partners. NESDIS foresees significant growth in both the data volume and the user population for these data and has therefore initiated this effort to evolve current technologies to meet future needs. The long-term goal for CLASS is the stewardship of all environmental data archived at the NOAA National Data Centers (NNDC). The initial objective for CLASS is to support specifically the following campaigns:

- NOAA and Department of Defense (DoD) Polar-orbiting Operational Environmental Satellites (POES) and Defense Meteorological Satellite Program (DMSP)
- NOAA Geostationary Operational Environmental Satellites (GOES)
- National Aeronautics and Space Administration (NASA) Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS)
- National Polar-orbiting Operational Environmental Satellite System (NPOESS)
- The NPOESS Preparatory Project (NPP)


- EUMETSAT Meteorological Operational Satellite (MetOp) Program
- NOAA NEXt generation weather RADAR (NEXRAD) Program

The development of CLASS is expected to be a long-term, evolutionary process, as current and new campaigns are incorporated into the CLASS architecture. This master project management plan defines project characteristics that are expected to be applicable over the life of the project. However, as conditions change over time, this plan will be updated as necessary to provide current and relevant project management guidance. The goal of CLASS is to provide a single portal for access to NOAA environmental data, some of which is stored in CLASS and some of which is available from other archives. The major processes required to meet this goal that are in scope for CLASS are:

- Ingest of environmental data from CLASS data providers
- Extraction and recording of metadata describing the data stored in CLASS
- Archiving data
- Browse and search capability to assist users in finding data
- Distribution of CLASS data in response to user requests
- Identification and location of environmental data that is not stored within CLASS, and connection with the owning system
- Charging for data, as appropriate (see the out-of-scope note below)
- Operational support processes: 24 × 7 availability, disaster recovery, help desk/user support

While the capability of charging for media delivery of data is a requirement for CLASS, the development of an e-commerce system to support financial transactions is out of scope. CLASS will interface with another system, the NESDIS e-Commerce System (NeS), for financial transactions; definition and implementation of the CLASS side of that interface is in scope for this project.

Location/Organization

CLASS is being developed and operated under the direction of the NESDIS Office of Systems Development (OSD), having been under the direction of the NESDIS Chief Information Officer (CIO) during the period 2001-2003 (through Release 2.0). CLASS is being developed by NOAA and contractor personnel associated with the OSD at the CLASS-MD site in Suitland, MD, and the CLASS-WV site in Fairmont, WV; the National Climatic Data Center (NCDC) in Asheville, NC; and the National Geophysical Data Center (NGDC) in Boulder, CO. The operational system is currently located at the NOAA facility in Suitland, MD, with a second operational facility at the NCDC in Asheville, NC.

The project management is conducted by the CLASS Project Management Team (CPMT), chaired by the OSD Project Manager, with representatives from each government and contractor team participating in CLASS development and operations. Section 2.2 describes the organizational structure for the CLASS project.

Data

Data stored in CLASS include the following categories:

- Environmental data ingested and archived by CLASS; currently this includes data for each of the campaigns listed in Section 1.1 and certain product and in situ data
- Descriptive data received with the environmental data, used to support browsing and searching the CLASS archive
- Descriptive data maintained by CLASS to support searching for data maintained in other (non-CLASS) repositories
- Operational data required to support the general operation of the system and not related to environmental data (e.g., user information, system parameters)

System Overview

An overview of the CLASS system is shown in Fig. 2, which is a flow chart for the system. The system functions are to ingest satellite data and derived data products; create browse products for user-selected products and store them online; create netCDF files from selected product data; store some files in permanent cache and others in temporary cache; and archive all data in a robotic storage system. In terms of user services, CLASS is to set up the user profile; initiate catalog searches; display search results (catalog data, browse images, data set coverage maps); take orders; report order status; and allow users to visualize and download product-data netCDF files. Visualization is done at the user interface in response to requests for browse images of the selected data set. The primary value of these browse images is to determine whether the right area was selected and whether or not an image contains an abundance of cloud cover. The system delivers the URL of the browse image to the user's computer for display. Once it is determined that the data granule is to be ordered, the order is confirmed by the user, and the order module locates the data set in terms of data type, time range, geographic range, or other criteria. The data files are located in the robotic storage and retrieved for processing. The requested file is generated from the storage file and placed in temporary cache for user retrieval, where it is kept for several days. These files can be transferred by standard FTP for further analysis and storage by the users. The system notifies users by email when the data files are ready for transfer. It is also possible to subscribe to regular deliveries of certain types and amounts of data, and to place "bulk orders" on CLASS.

Figure 2. CLASS System Overview Diagram.

The system command unit oversees the operation of CLASS and monitors its activities. There is a backup system to ensure continued operation and secure archival of all the data in CLASS.

NATIONAL POLAR-ORBITING OPERATIONAL ENVIRONMENTAL SATELLITE SYSTEM (NPOESS)

On May 15, 1994, the U.S. President issued a Decision Directive that the three separate polar-orbiting satellite programs operated by NOAA, the DoD, and NASA should be converged into a single program of polar-orbiting satellites. An Integrated Program Office (IPO), staffed by NOAA, NASA, and DoD personnel contributed by each agency, was formed and located in Silver Spring, Maryland. Although this is a NOAA program, the procurement of the system followed DoD lines. Representatives of these agencies met and defined a list of Environmental Data Records (EDRs) that the sensors on NPOESS must be able to measure and provide; with each EDR, the IPO also specified an accuracy and a precision. The primary sensors were then defined by the IPO, and requests for proposals were initiated. The instruments were initially contracted separately by the government. The primary sensor is the Visible/Infrared Imaging Radiometer Suite (VIIRS), which was to take over the functions of the AVHRR for visible and infrared measurements. In addition, VIIRS was to include the ocean color bands of NASA's MODerate resolution Imaging Spectroradiometer (MODIS) instead of the broad visible band of the AVHRR. The NPOESS satellite (Fig. 3) is a relatively large satellite and will carry a number of instruments. It will be preceded by the NPOESS Preparatory Project (NPP), a NASA satellite that will carry three of the future NPOESS instruments: the VIIRS just mentioned, the Cross-track Infrared Sounder (CrIS), and the Advanced Technology Microwave Sounder (ATMS).

Figure 3. NPOESS satellite.

These latter two instruments exploit the infrared and passive microwave portions of the spectrum to profile the lower atmosphere, providing information critical for operational weather forecasting. The VIIRS will have 21 channels ranging from 412 nm to 11,450 nm. The visible channels are quite narrow (20 nm) to allow the computation of ocean color indications of chlorophyll and to sense land surface vegetation accurately. A schematic for the VIIRS instrument is presented in Fig. 4, which shows the processing from a photon received by the instrument to the end product.


A prime contractor was selected to build the NPOESS satellites and the corresponding ground system; the award went to Northrop Grumman together with Raytheon Corp. It was decided to have the prime contractor also administer the subcontracts to the selected instrument vendors. The VIIRS had been awarded to Raytheon Santa Barbara Research Systems, the CrIS to ITT in Fort Wayne, Indiana, and the ATMS to Aerospace Corp., which was later taken over by Northrop Grumman as well. During its development phase this program dramatically overran its budget and finally crossed the threshold at which a DoD program is required by Congress to undergo a review; NOAA and NASA agreed to this review. As a result, NPOESS has been dramatically scaled back, with the cancellation of some planned instruments and a reduction from three to only two satellites operating at the same time. The launch of the first NPOESS satellite was also delayed from 2009 to 2012.

NASA'S MODERATE RESOLUTION IMAGING SPECTRORADIOMETER (MODIS)

MODIS was the primary imager selected for NASA's Earth Observing System (EOS). Awarded to Raytheon's Santa Barbara laboratory, it became a precursor for the NPOESS VIIRS instrument. First launched on NASA's TERRA satellite on December 18, 1999, MODIS began collecting data on February 24, 2000. The MODIS instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 µm to 14.4 µm. The responses are custom tailored to the individual needs of the user community and provide exceptionally low out-of-band response. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km.

Figure 4. VIIRS schematic operation description.

A ±55-degree scanning pattern at the EOS orbit altitude of 705 km achieves a 2330 km swath and provides global coverage every one to two days. The Scan Mirror Assembly uses a continuously rotating double-sided scan mirror to scan ±55 degrees and is driven by a motor encoder built to operate at 100 percent duty cycle throughout the 6-year instrument design life. The optical system consists of a two-mirror off-axis afocal telescope, which directs energy to four refractive objective assemblies, one each for the VIS, NIR, SWIR/MWIR, and LWIR spectral regions, covering a total spectral range of 0.4 µm to 14.4 µm. A high-performance passive radiative cooler provides cooling to 83 K for the 20 infrared spectral bands on two HgCdTe Focal Plane Assemblies (FPAs). Novel photodiode-silicon readout technology for the visible and near infrared provides high quantum efficiency and low-noise readout with exceptional dynamic range. Analog programmable gain and offset and FPA clock and bias electronics are located near the FPAs in two dedicated electronics modules, the Space-viewing Analog Module (SAM) and the Forward-viewing Analog Module (FAM). A third module, the Main Electronics Module (MEM), provides power, control systems, command and telemetry, and calibration electronics. The system also includes four onboard calibrators as well as a view to space: a Solar Diffuser (SD), a v-groove Blackbody (BB), a Spectroradiometric Calibration Assembly (SRCA), and a Solar Diffuser Stability Monitor (SDSM). A second MODIS instrument was launched on the AQUA satellite on May 4, 2002, giving us both a morning (TERRA) and an afternoon (AQUA) Earth view with MODIS instruments. The afternoon orbit of AQUA is one in which a whole series of satellites is operated; it has become known as the A-Train.
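The quoted 2330 km swath follows from the scan geometry alone: the law of sines in the triangle formed by the Earth's center, the satellite, and the viewed ground point gives the Earth-central angle subtended by half the swath. The mean Earth radius below is an assumed constant; this is a back-of-the-envelope check, not MODIS processing code.

```python
# Spherical-Earth check of the MODIS swath for a +/-55 degree cross-track
# scan from a 705 km orbit, via the law of sines in the triangle
# (Earth center, satellite, ground point).
import math

R_EARTH = 6371.0  # km, mean Earth radius (assumed)

def swath_width(scan_half_angle_deg, altitude_km):
    """Ground swath (km) for a cross-track scanner over a spherical Earth."""
    theta = math.radians(scan_half_angle_deg)
    r = R_EARTH + altitude_km
    # Angle at the ground point: take the obtuse solution, since the line of
    # sight meets the near side of the Earth; the central half-angle follows
    # because the three triangle angles sum to pi.
    ground_angle = math.pi - math.asin(r * math.sin(theta) / R_EARTH)
    central_half_angle = math.pi - theta - ground_angle
    return 2.0 * R_EARTH * central_half_angle

# swath_width(55.0, 705.0) comes out near the quoted 2330 km
```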


THE GEOSTATIONARY OPERATIONAL ENVIRONMENTAL SATELLITE (GOES)

To monitor changes in the atmosphere related to the Earth's weather accurately, it is necessary to sample much more frequently than is possible with a polar-orbiting satellite. Geostationary satellite sensors were therefore created to make rapid sampling from geostationary orbit possible. The much greater altitude of geostationary orbit (36,000 km versus roughly 800 km for polar orbits) requires much more sensitive instruments than were flown earlier in polar orbit. In addition, because the early geostationary satellites were spin-stabilized, the sensors had to operate on a spinning satellite, so each scan could view the Earth only for the period set by the rotation of the spacecraft. The solution came in the form of what was originally called the spin scan camera and later the spin scan radiometer. The key to the success of this unit was that it used the spin of the satellite both to sweep the telescope across the Earth's surface and to increment the scan mirror vertically. Called the Visible Infrared Spin Scan Radiometer (VISSR, Fig. 5), it would scan the surface of the Earth from north to south or vice versa. This basic system continued up to the early 1990s, when a new generation of Geostationary Operational Environmental Satellites (GOES) was deployed. Using control systems similar to those used in the polar orbiters, the new GOES are three-axis stabilized and maintain their orientation to the Earth (Fig. 6). This orientation makes it possible to "stare" at the Earth, increasing the amount of radiative energy available for Earth surface sensing and so yielding higher spatial resolutions. The constant orientation also means that the satellite is constantly heated at one end and cooled at the other, so a complex system is required to maintain thermal equilibrium. The GOES Imager (Fig. 7) is a multichannel instrument designed to sense radiant and solar-reflected energy from sampled areas of the Earth. The multielement spectral channels simultaneously sweep east-west and west-east along a north-to-south path by means of a two-axis mirror scan system. The instrument can produce full-Earth disk images, sector images that contain the edges of the Earth, and various sizes of area scans completely enclosed within the Earth scene using a new flexible scan system. Scan selection permits rapid continuous viewing of local areas for monitoring of mesoscale (regional) phenomena and accurate wind determination. The GOES system produces a large number of primary data products. They include

Figure 5. GOES spinner spacecraft.

Figure 6. Three-axis stabilized GOES.

- Basic day/night cloud imagery and low-level cloud and fog imagery,
- Upper and lower tropospheric water vapor imagery,
- Observations of land surface temperature data with strong diurnal variation,
- Sea surface temperature data,
- Winds from cloud motions at several levels and hourly cloud-top heights and amounts,
- Albedo and infrared radiation flux to space, important for climate monitoring and climate model validation,
- Detection and monitoring of forest fires resulting from natural and/or human-made causes and monitoring of smoke plumes,
- Precipitation estimates,
- Total column ozone concentration (potential data product), and
- Relatively accurate estimates of total outgoing longwave radiation flux (potential data product).

These data products are summarized in Table 3. They enable users to monitor severe storms accurately, to determine winds from cloud motion, and, when combined with data from conventional meteorological sensors, to produce improved short-term weather forecasts.

Figure 7. The GOES Imager.

The major operational use of 1 km resolution visible and 4 km resolution infrared multispectral imagery is to provide early warnings of threatening weather. Forecasting the location of probable severe convective storms and the landfall position of tropical cyclones and hurricanes is heavily dependent upon GOES infrared and visible pictures. The quantitative temperature, moisture, and wind measurements are useful for isolating areas of potential storm development. GOES data products are used by a wide variety of operational and research centers. The National Weather Service's (NWS's) extensive use of multispectral imagery provides early warnings of threatening weather and is central to its weather monitoring and short-term forecast function. Most nations in the western hemisphere depend on GOES imagery for their routine weather forecast functions as well as other regional applications. GOES data products are also used by commercial weather users, universities, the Department of Defense, and the global research community, particularly the International Satellite Cloud Climatology Project, through which the world's cloud cover is monitored for the purpose of detecting change in the Earth's climate. Users of GOES data products are also found in the air and ground traffic control, ship navigation, agriculture, and space services sectors. The GOES system serves a region covering the central and eastern Pacific Ocean; North, Central, and South America; and the central and western Atlantic Ocean. Pacific coverage includes Hawaii and the Gulf of Alaska. This is accomplished by two satellites, GOES West located at 135° west longitude and GOES East at 75° west longitude. A common ground station, the Command and Data Acquisition (CDA) station located at Wallops, Virginia, supports the interface to both satellites. The NOAA Satellite Operations Control Center (SOCC), in Suitland, Maryland, provides spacecraft scheduling, health and safety monitoring, and engineering analyses.
Delivery of products involves ground processing of the raw instrument data for radiometric calibration and Earth location information, and retransmission to the satellite for relay to the data user community. The processed data are received at the control center and disseminated to the NWS's National Meteorological Center (NMC), Camp Springs, Maryland, and to NWS forecast offices, including the National Hurricane Center, Miami, Florida, and the National Severe Storms Forecast Center, Kansas City, Missouri. Processed data are also received by Department of Defense installations, universities, and numerous private commercial users. An example infrared (IR) image is shown in Fig. 8 for GOES-9, located in the eastern position.

Figure 8. Full disk GOES-9 IR image on June 19, 1995.

GOES Generations

We are now in the second generation of the three-axis stabilized GOES series of satellites, which will take us into the next decade, and we are planning for their follow-on in 2012. This will be the GOES-R program, which will see a dramatic improvement in the capability of the GOES Imager. The new Advanced Baseline Imager (ABI), already awarded to ITT, will have channels similar to those of MODIS and the future VIIRS instruments. Even though it is in geostationary orbit, the ABI will have capabilities closer to MODIS, including multiple infrared channels and narrow visible channels for ocean color and land surface vegetation studies. In addition, the ABI will be able to rapidly scan not only the entire hemisphere that it views but also, more rapidly, the central U.S. and, even more quickly, smaller "mesoscale" regions where the development of severe weather is anticipated. Recently, due to anticipated cost and complexity, the future GOES-R sounder, known as the Hyperspectral Environmental Sounder (HES), has been cancelled, at least for the first couple of GOES-R satellites; instead, there is an expectation that the present generation of GOES sounding instruments will be carried on these satellites.

DEVELOPMENT OF EARTH RESOURCES TECHNOLOGY SATELLITES

The first satellite in the Earth Resources Technology Satellite (ERTS) program was launched in 1972 and designated ERTS-1. In 1975 the program was redesignated the LANDSAT program to emphasize its primary area of interest, land resources. The mission of LANDSAT is to provide repetitive acquisition of high-resolution multispectral data of the Earth's surface on a global basis. As mentioned earlier, the first three Earth Resources Technology Satellites (ERTS-1, -2, and -3) were actually modified NIMBUS satellites with a sensor suite focused on sensing the land surface. These were later referred to as LANDSAT-1, -2, and -3, and all three greatly outlasted their design life of just one year.
They carried a four-channel Multispectral Scanner (MSS), a three-camera Return Beam Vidicon (RBV), a data collection system (DCS), and two video tape recorders. The MSS operated over the following spectral intervals: band 4 (0.5 to 0.6 µm), band 5 (0.6 to 0.7 µm), band 6 (0.7 to 0.8 µm), and band 7 (0.8 to 1.1 µm). Three independent cameras, making up the RBV, covered three spectral bands: blue-green (0.47 to 0.575 µm), yellow-red (0.58 to 0.68 µm), and near-infrared (0.69 to 0.83 µm). Both systems viewed a ground scene of approximately 185 km by 185 km with a ground resolution of about 80 m. On LANDSAT-1, the RBV was turned off because of a malfunction. LANDSAT-3 added a fifth channel in the thermal infrared (10.4 to 12.6 µm) to the MSS. The LANDSAT satellites are in a sun-synchronous, near-polar orbit with a ground swath 185 km (115 mi) wide. LANDSAT-4 and -5 were inclined 98° and had an orbital cycle of 16 days with an equatorial crossing at 9:45 A.M. local time. The altitude of both LANDSAT-4 and -5 is 705 km (437 mi). Working together, LANDSAT-4 and -5 offer repeat coverage of any location every 8 days. At the equator, the ground track separation is 172 km, with a 7.6% overlap between adjacent swaths. This overlap gradually increases toward the poles, reaching 54% at 60° latitude. The data from both instruments were transmitted directly to a ground receiving station when the satellite was within range. During the periods when the satellite was not in view of a U.S.-owned and -operated ground station, the satellites were turned on only according to a predetermined schedule. The satellites did not carry enough tape-recording capacity to record very many MSS images, which motivated the international science community to receive the images directly from these satellites and make them available to interested parties. This practice led to the establishment of LANDSAT receiving stations all over the world. At present, these stations must pay an annual license fee to the EOSAT Corporation in order to receive and use LANDSAT data in their own area of recording coverage. LANDSAT-4 and -5 were built as purpose-specific satellites for their applications, dropping the NIMBUS bus along the way.
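The track-spacing and overlap figures above can be checked with a simple model: the 16-day repeat cycle lays down 233 distinct ground tracks, dividing the equator into intervals of about 172 km, and for a near-polar orbit the spacing shrinks roughly as the cosine of latitude while the 185 km swath stays fixed. This crude model gives about 7% overlap at the equator (close to the quoted 7.6%) and about 54% at 60° latitude; it is a sketch, not the exact orbital geometry.

```python
# Hedged check of the quoted LANDSAT-4/-5 coverage figures using a simple
# cos(latitude) model for adjacent ground-track spacing.
import math

EQUATOR_CIRCUMFERENCE_KM = 40075.0
TRACKS_PER_CYCLE = 233           # distinct ground tracks in the 16-day cycle
SWATH_KM = 185.0

def track_spacing(latitude_deg):
    """Approximate spacing (km) between adjacent ground tracks at a latitude."""
    equator_spacing = EQUATOR_CIRCUMFERENCE_KM / TRACKS_PER_CYCLE  # ~172 km
    return equator_spacing * math.cos(math.radians(latitude_deg))

def sidelap(latitude_deg):
    """Fraction of the swath shared by adjacent passes at a latitude."""
    return (SWATH_KM - track_spacing(latitude_deg)) / SWATH_KM
```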
Not only was the spacecraft a new system, but a new scanner also had been developed for these satellites. Called the thematic mapper (TM), the instrument was designed to give very specific signatures for various types of resource-related surface features. The TM has seven spectral bands covering four regions of the electromagnetic spectrum: bands 1 to 3 in the visible (0.45 µm to 0.69 µm), band 4 in the near-infrared (0.76 µm to 0.90 µm), bands 5 and 7 in the shortwave infrared (1.55 µm to 1.75 µm and 2.08 µm to 2.35 µm), and band 6 in the thermal infrared (10.4 µm to 12.5 µm). There was also a physical difference between the TM and the MSS in that the MSS scanned only in one direction, whereas the TM scanned and collected data in two different directions. Also, in the TM the target energy fell almost directly on the detector face, whereas the MSS energy had to travel through fiber optics before reaching the detector, making it easier for the TM to sense changes in the target. A summary of all LANDSAT satellites is given in Table 4, which shows that only one satellite was operating in 1997, and then on a reduced scale. The loss of LANDSAT-6 has severely curtailed research efforts with data from these satellites.
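The overlap figures quoted above follow from simple geometry: the swath width is fixed at 185 km while the separation between adjacent ground tracks shrinks roughly as the cosine of latitude. A minimal sketch of that cosine model (an approximation for illustration, not the mission specification) reproduces the 54% figure at 60° latitude and comes close to the 7.6% equatorial value:

```python
import math

SWATH_KM = 185.0   # LANDSAT ground swath width
SEP_EQ_KM = 172.0  # ground-track separation at the equator

def overlap_fraction(lat_deg):
    """Sidelap between adjacent swaths, assuming the track
    separation shrinks as cos(latitude)."""
    sep = SEP_EQ_KM * math.cos(math.radians(lat_deg))
    return (SWATH_KM - sep) / SWATH_KM

print(round(overlap_fraction(0) * 100, 1))   # ~7.0%, near the quoted 7.6%
print(round(overlap_fraction(60) * 100, 1))  # ~53.5%, near the quoted 54%
```

The small discrepancy at the equator reflects the flat-geometry simplification; the agreement at 60° shows why high-latitude scenes enjoy much denser repeat coverage.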


THE FRENCH SPOT SATELLITES

A program also designed to measure in the same wavelength domain as LANDSAT was the French Système Pour l'Observation de la Terre (SPOT) satellite system. The series began with the operation of SPOT-1 on February 22, 1986, which carried the multimission platform that could be modified for future payloads. Operational activities of SPOT-1 ceased at the end of 1990; the satellite was reactivated on March 20, 1992 and stopped tracking on August 2, 1993. SPOT-1 was put into a sun-synchronous orbit with a repeat cycle of 26 days (5 days with pointing capability). The SPOT-1 payload consisted of two High Resolution Visible (HRV) scanners, pointable CCD pushbroom scanners with off-nadir viewing of ±27° (in steps of 0.6°), or about 460 km to either side of the ground track. These scanners can be operated in two modes: multispectral and panchromatic. In the panchromatic mode, the sensor was capable of a 10 m spatial resolution; in the multispectral mode, the resolution was 20 m. In both configurations, the swath width was 60 km. The multispectral bands were 0.50 µm to 0.59 µm, 0.61 µm to 0.68 µm, and 0.79 µm to 0.89 µm; the panchromatic band was 0.51 µm to 0.73 µm. This configuration was retained for SPOT-2, which was made operational on January 21, 1990. The next satellite, SPOT-3, was launched on September 26, 1993 and made operational on May 24, 1994. On November 13, 1996, the satellite entered safehold mode and was no longer operational. Since then, SPOT-2 has continued to operate. In 1997 both SPOT-1 and SPOT-2 operated simultaneously, providing greatly improved coverage. The SPOT-3 payload consisted of two HRVs (the same as on SPOT-1), POAM II (Polar Ozone and Aerosol Measurement), and DORIS (Doppler Orbitography and Radio-positioning Integrated by Satellite). As an example of this class of satellites, the 2 m × 2 m × 4.5 m SPOT-1 weighed 1907 kg and had a solar array spanning 8.14 m and capable of producing 1 kW.
The orbital altitude at the equator was 822 km, and the orbital period was 101.4 min. The on-board tape recorders were capable of 22 min of data collection. The SPOT orbit is sun-synchronous and nearly circular. The altitude is again about 822 km, the inclination is 98°, and there are 14 and 5/26 orbits in each Earth day, giving a repeat cycle of 26 days during which the satellite completes 369 orbits. Because the valid comparison of images of a given location acquired on different dates depends on the similarity of the conditions of illumination, the orbital plane must form a constant angle relative to the sun direction. This is achieved by ensuring that the satellite overflies any given point at the same local solar time, which in turn requires that the orbit be sun-synchronous (descending node at 10:30 A.M.).
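The repeat-cycle numbers above are internally consistent and easy to check: 14 and 5/26 orbits per day over 26 days must give a whole number of orbits, which is exactly what makes the ground track repeat.

```python
from fractions import Fraction

orbits_per_day = Fraction(14) + Fraction(5, 26)  # 14 and 5/26 orbits per day
repeat_days = 26

total_orbits = orbits_per_day * repeat_days
print(total_orbits)          # 369 orbits per 26-day repeat cycle

period_min = 24 * 60 / float(orbits_per_day)
print(round(period_min, 1))  # ~101.5 min, close to the quoted 101.4 min
```

Using exact fractions makes the whole-number property explicit: 14 × 26 + 5 = 369 orbits, after which the ground track closes on itself.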


Figure 9. SPOT’s stereo viewing capability.

A big change is planned for SPOT-4, which will carry two HRV Infrared (HRVIR) radiometers. The spatial resolutions and swath widths are unchanged from earlier systems. The spectral bands change slightly: 0.50 µm to 0.59 µm, 0.61 µm to 0.68 µm, 0.79 µm to 0.89 µm, and 1.58 µm to 1.75 µm, with the 10 m mode using the 0.61 µm to 0.68 µm band in place of the earlier panchromatic band. In addition, SPOT-4 will carry the Vegetation Monitoring Instrument (VMI), which has a resolution of 1 km and a swath width of 2,000 km. Its sensor bands are 0.43 µm to 0.47 µm, 0.50 µm to 0.59 µm, 0.61 µm to 0.68 µm, 0.79 µm to 0.89 µm, and 1.58 µm to 1.75 µm. A SPOT-5 is planned for late 1999 and will likely carry the same sensor array as SPOT-4. One of the unique capabilities of the SPOT series of satellites is their ability to produce stereo imagery for making three-dimensional images of ground subjects. This ability is achieved by pointing the sensor in a desired direction (Fig. 9). SPOT's oblique viewing capacity makes it possible to produce stereo pairs by combining two images of the same area acquired on different dates and at different angles, exploiting the parallax thus created. A base/height (B/H) ratio of 1 can be obtained for viewing angles of 24° to the east and to the west. For a stereo pair comprising a vertical view and one acquired at 27°, a B/H of 0.5 is obtained. Stereo pairs are mainly used for stereo-plotting, topographic mapping, and automatic stereo-correlation, from which Digital Elevation Models (DEMs) can be derived directly without the need for maps. SPOT satellites can transmit image data to the ground in two ways, depending on whether or not the spacecraft is within range of a receiving station. As the satellite proceeds along its orbit, four situations arise concerning imagery acquisition and image data transmission to the ground.

1. The satellite is within range of a Direct Receiving Station (DRS), so imagery can be down-linked in real time provided both satellite and DRS are suitably programmed.
The DRS locations are shown in Fig. 10.

2. The satellite is not within range of a SPOT DRS. Programmed acquisitions are executed, and the image data are stored on the on-board recorders.

3. The satellite is within range of a main receiving station (Kiruna or Toulouse). It can thus be programmed either to down-link image data in real time or to play back the on-board recorders and transmit image data recorded earlier during the same orbital revolution.

4. The rest of the time, the satellite is on standby, ready to acquire imagery in accordance with uplinked commands.
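The base/height ratios quoted for SPOT stereo pairs can be approximated from the viewing angles alone. A minimal flat-Earth sketch (ignoring Earth curvature and orbit geometry, which the real system must account for) uses B/H = tan(θ1) + tan(θ2):

```python
import math

def base_to_height(theta1_deg, theta2_deg):
    """Base-to-height ratio for two viewing angles, using the
    flat-Earth approximation B/H = tan(theta1) + tan(theta2)."""
    return (math.tan(math.radians(theta1_deg))
            + math.tan(math.radians(theta2_deg)))

print(round(base_to_height(24, 24), 2))  # ~0.89, close to the quoted B/H of 1
print(round(base_to_height(0, 27), 2))   # ~0.51, close to the quoted B/H of 0.5
```

The residual difference for the symmetric 24° case comes from the curvature and altitude terms this sketch omits.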

Figure 10. SPOT Direct Receiving Stations.

Applications of Satellite Imagery

In this section we present some of the applications of visible, near-infrared, and thermal infrared satellite data. This review of satellite data applications can by no means be comprehensive; instead we present examples of the types of applications that can be made with these satellite images. Our review naturally represents the experience of the author, and it is not intended to neglect other important applications; it is simply easier to present those applications most familiar to the author.

Visible and Near-Infrared Applications

Meteorological Applications. The very first use of satellite data was the depiction of changes in cloud patterns associated with atmospheric systems. The first Earth views were restricted by a spin-stabilized satellite to a limited latitude band. This was extended to daily coverage of the entire Earth when the sensor was oriented to point out from the axis of rotation, which was changed to be perpendicular to the Earth's surface. These global maps were a collage of different synoptic conditions collected over 24 h. Even though these global world views were new and unique, meteorologists really wanted a time series of their areas of interest, which led to the development of the GOES satellites with their ability to sample a full hemisphere every half hour (Fig. 5). For limited locations, the sampling frequency can be increased to images every few minutes. Initially the satellite images themselves were the primary sources of information for weather forecasters. Cloud images were interpreted together with atmospheric pressure maps and some early numerical model results to create the local forecast. The ability to follow the movement of cloud features in time greatly enhanced the understanding of the evolution of various atmospheric phenomena.
Later, a system called the VISSR Atmospheric Sounder (VAS) was developed to profile the atmospheric temperature from geostationary orbit. Similar to TOVS, the VAS used infrared channels to estimate atmospheric temperature profiles. The advent of the three-axis stabilized GOES satellites ushered in a new era of atmospheric profiling with the new GOES sounder. These sounding profiles are assimilated into the numerical models that do the prediction. Because it is much easier to define the relationship between the numerical models and the satellite temperature profiles, the profiles are now primary sources of forecast information. The images continue to have value, particularly

for the study of poorly understood weather phenomena. The ability to sample rapidly in time has made the new GOES satellites very important for scientific studies that are trying to understand the formation of severe weather and its causes.

Vegetation Mapping. There are many land surface applications that benefit from combining near-infrared sensing with visible sensor data. For this reason we treat it together with the purely visible channels, which are widely exploited for many different applications. It is not possible to treat each application individually here, so we make a few comments about some of the more important applications of these near-infrared data. One of the most popular combinations of AVHRR data uses the visible channel (0.53 µm to 0.68 µm) together with the near-infrared (NIR; 0.71 µm to 1.1 µm) to form the Normalized Difference Vegetation Index (NDVI), which is defined as

NDVI = (NIR − VIS) / (NIR + VIS)    (1)

This vegetation index combines the visible and near-infrared reflectances to form a difference that, normalized by their sum, ranges between +1 and −1. The index indicates how well the vegetation is growing rather than the amount of biomass present. The visible channel responds to the greening up of the vegetation as it emerges, either as a new plant or as one that has lain dormant. With SPOT and LANDSAT, it is possible to view this change directly in the green radiation band. For other satellites such as the NOAA-AVHRR, the visible channel is so broad that it covers most of the visible wavelengths; we can still use the formulation in Eq. (1) to compute the NDVI because the visible channel is centered in the red. The NIR channel responds to the mesophyll structure in the leaves, which is a function of the amount of water held in the leaves. This is an important sensing mechanism because the leaves go from brown to green early in the growing season. Later on, their greenness saturates, and the leaves do not get any greener. During this time, the leaves indicate the condition of the plant as they reflect the NIR wavelengths back up to the satellite. In reality, the NDVI does not span the full range from −1 to +1 but instead ranges from low negative values up to 0.7 or 0.8. For display as a satellite image, the NDVI must be converted back to a raster image, which requires scaling from the NDVI value to an 8-bit raster display value (which limits the dynamic range). As an example we present in Fig. 11 the NDVI for the state of Kansas, which is predominantly wheat-growing country. Here we show a series of years all using the same color scale, so that changes in color are related to changes in the health of the vegetation. The substantial variability between individual years can clearly be seen even in these small images. Note that the color scale has been created to be brown when the NDVI is low and yellow to red when the vegetation is very healthy.
Also, the scaled NDVI is never greater than 220, even though it has a theoretical maximum of 255 (the 8-bit equivalent of +1), nor does it go below 120 (a scaled value of 0 would correspond to −1). These NDVI images have been constructed from a series of daily AVHRR images using channels 1 and 2. Because

Figure 11. The NDVI for Kansas for 6 years 1982–1987.

clouds will obscure the surface in either of these channels, it is necessary to eliminate the cloud cover. This is done in a series of steps. First, a cloud classifier or locator program is run to determine which features are clouds. Those pixels clearly identified as clouds are set to a uniform value (usually 0 or 255) in the individual image. The non-cloud pixels of the individual images are then composited to construct a single image that covers a number of days. Compositing of the NDVI is done by retaining the maximum value wherever it occurs, overwriting any values that are not the maximum. Clouds have the effect of lowering the NDVI, so this type of maximum compositing further reduces the effects of clouds. Realize, however, that even this NDVI composite is not totally free from cloud effects: clouds smaller than the AVHRR pixel size will still enter the calculation, although the maximum compositing minimizes the effects of these subpixel clouds as well. Other sources of error in the NDVI calculation are the effects of atmospheric aerosols and possible errors in geolocation of the pixels. The latter can dramatically affect the NDVI, particularly in areas of strong inhomogeneity, and it is often difficult to detect because the changes appear as normal temporal changes in the NDVI. The aerosol influence is also difficult to recognize and to correct for. There are few direct measurements of atmospheric aerosols to use for correcting the satellite-derived NDVI. Unfortunately, the satellite technique for estimating aerosols is also based on a combination of the visible and near-infrared channels, the very data used to construct the NDVI. Cases where the aerosol effect becomes large are often associated with volcanic eruptions that introduce ash into the upper atmosphere, changing the visible and near-infrared signals.
These events can last for weeks and even months, depending on the strength and location of the ash plume. Very strong eruptions with a marked ash component have produced ash plumes that circled the globe before the ash diffused sufficiently to no longer influence the NDVI computation. There is considerable discussion on how to interpret the NDVI and how to relate the satellite-derived index to climate and terrestrial phenomena (15, 16). However, there is little doubt that the NDVI, used carefully, provides a useful space-based means of monitoring vegetation, land cover, and climate. The index has been produced and utilized globally (6) and regionally (16–20).
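The NDVI computation, its 8-bit scaling for display, and the maximum-value compositing described above can be sketched in a few lines of NumPy. The arrays and cloud flags here are invented for illustration; cloud-flagged pixels are marked NaN so that the maximum is taken only over clear observations:

```python
import numpy as np

def ndvi(vis, nir):
    """Normalized Difference Vegetation Index from visible (red)
    and near-infrared reflectances; values fall in [-1, +1]."""
    return (nir - vis) / (nir + vis)

def to_8bit(x):
    """Scale NDVI in [-1, +1] onto an 8-bit display value (0..255)."""
    return np.round((np.asarray(x) + 1.0) * 127.5).astype(np.uint8)

# Three invented daily NDVI images; NaN marks cloud-flagged pixels.
day1 = np.array([[0.60, np.nan], [0.20, 0.55]])
day2 = np.array([[0.65, 0.40], [np.nan, 0.50]])
day3 = np.array([[0.30, 0.35], [0.25, np.nan]])

# Maximum-value compositing: keep the largest NDVI seen at each pixel.
# np.fmax ignores NaN, so cloud-flagged pixels fill in from clear days.
composite = np.fmax(np.fmax(day1, day2), day3)
print(composite)           # largest (least cloud-depressed) value per pixel
print(to_8bit(composite))  # scaled for raster display
```

Because clouds can only depress the NDVI, taking the per-pixel maximum over the stack both removes flagged clouds and damps residual subpixel contamination, exactly the logic of the compositing described above.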


Figure 12. Sea surface temperature (◦ C) from infrared AVHRR data.

Thermal Infrared Applications

Sea Surface Temperature (SST). Initially, thermal infrared imagery was collected to provide information on the atmosphere when the Earth was not in sunlight. Later it became clear that there were many nonmeteorological applications of these thermal infrared images. One well-known application is the computation of sea surface temperature. SST was traditionally computed from the reports of ships at sea and, later, autonomous drifting buoys; the satellite infrared data provided a substantial increase in the area covered at any specific time. Cloud cover was still a major problem because it obscured the infrared signal from the ocean. In addition, atmospheric moisture in the form of water vapor significantly altered the SST signal. In spite of these limitations, the satellite SST data so greatly increased the time/space coverage of SST measurements that they completely changed the way SST was routinely calculated. Prior to the advent of satellite data, SST measurements were limited to the major shipping routes and the limited areas where drifting buoys were deployed. Later the distribution of buoy data increased, and buoys became a major source of in situ SST data used to "calibrate" the thermal infrared satellite SST data. In this calibration, the buoy data are used as "ground truth," and the satellite SST algorithm is adjusted to fit the buoy in situ SST measurements. Even with satellite data, global SST fields are not usually complete on a daily basis, and a compositing procedure is needed to create a global SST map (Fig. 12). This is a two-week composite of the Multi-Channel (MC) SST (21) with a 40 km spatial resolution. The satellite data used to generate this SST map were the 4 km spatial resolution Global Area Coverage from the NOAA AVHRR. Each image was first navigated to locate the pixels accurately. Clouds were then filtered from each of the individual scenes used in the composite.
In computing the composite SST, the maximum temperature has been retained because residual clouds will always lower the temperatures. This is much the same logic as is used with the maximum NDVI composite. The resultant field has been further smoothed with a Laplacian filter to reduce noise in the spatial signal. Notice the white patches that indicate cloud cover even after the compositing procedure. The striped appearance of Hudson Bay is due to an error in the smoothing program. The main features of the SST field are the tropical core of high SST, the coldest temperatures in the polar regions, and the highly variable regions at the boundaries of these

Figure 13. SST for May 1986 from the AVHRR with coincident buoy tracks (white).

fields. In the tropics, much of the spatial variability is caused by waves propagating in the equatorial waveguide. It is notable that strong warm currents such as the Kuroshio and the Gulf Stream are not readily apparent as patches of warm SST. This is likely the result of the low spatial resolution and the Laplacian smoothing; the strong western boundary currents have simply been smoothed out of the map. It is difficult to judge just how much credibility to give to individual features. One would like to say something about various current systems, but most known currents do not appear on this map. Furthermore, heating and cooling can have very dramatic effects, particularly on the tropical warm SSTs. In the southern polar region, it is tempting to suggest that the dark blue band represents the polar front with its meanders and eddy formation. Still, in Fig. 12 this dark blue band extends much farther north than is usually thought to be the case for the polar front. Another suggestion of the expression of ocean currents in the SST pattern of the southern ocean can be seen in Fig. 13, where a month-long satellite SST similar to that in Fig. 12 has been computed and the coincident tracks of drifting buoys have been superimposed. Note how well the buoys appear to follow the SST isotherms, particularly in the area around Cape Horn and into the Indian Ocean. Buoy trajectories just north of the light-blue area exhibit north and south excursions that are matched by the frontal features in the SST map. Because these are regions occupied by the Antarctic Circumpolar Current (ACC), it is realistic to find such apparently large velocities in the area. These fronts are known to be locations of the strong currents that together make up the ACC. This is also the case south of the Great Australian Bight, where the buoy tracks appear to follow the Subantarctic Front to the south of the light blue band. The continuation of strong currents is emphasized by the long trajectories for this monthly period.
This is particularly apparent when these trajectories are compared with the monthly trajectories located to the north of the light-blue area. Comparing the buoy locations also emphasizes that coverage by drifting buoys alone is meager and that any SST map made without the satellite data would be forced to interpolate between widely separated data points. For this particular comparison, it should be noted that there was a specific program to drop buoys in the southern ocean. When ship tracks are added, the data coverage increases, but only along shipping routes. There are many areas that ships do not visit and where there are usually no drifting-buoy-measured SSTs. Thus satellite data must be used to

compute a global SST with any hope of decent coverage. There are many problems involved with combining infrared satellite SSTs with in situ measured SSTs. In situ measurement of SST is not a simple task, and there is the potential for errors both in archived in situ SSTs and in satellite SST ground-truthing procedures that use in situ SST. Heating or cooling of the sea surface occurs as a result of the air-sea fluxes, horizontal and vertical heat advection, and mixing within the ocean. It is the nature of the air-sea fluxes of heat to complicate the measurement of SST in the ocean. The heat exchange associated with three of the four components of the net heat flux (the sensible heat flux, the latent heat flux, and the longwave heat flux) occurs at the air-sea interface. The heat exchange associated with the fourth component, the shortwave radiation, occurs over the upper few tens of meters of the water column. Typically, the sum of the three surface fluxes is negative (the ocean gives heat to the atmosphere), and the sea surface is cooled relative to temperatures just a few millimeters below the surface. During daylight hours, the shortwave radiation penetrates and warms the water where it is absorbed. The absorption is an exponential function of depth: red light is absorbed more rapidly (e-folding depth of approximately 3 m) than blue-green light (e-folding depth of roughly 15 m), with the exact rate of absorption at each wavelength depending on water clarity. Heating caused by solar insolation can produce a shallow, warm, near-surface layer meters to tens of meters in thickness. The surface is still cooled by evaporation, longwave radiation, and sensible heat flux; but the "cooler" surface temperature can, because it is the surface of the layer warmed by the sun, be greater than a temperature from below a shallow diurnal mixed layer [Fig. 14(a)].
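The exponential absorption described above, I(z) = I0 exp(−z/d) with an e-folding depth d that depends on wavelength, can be tabulated directly. This is a minimal sketch using the e-folding depths quoted in the text (about 3 m for red, 15 m for blue-green); real profiles vary with water clarity:

```python
import math

def remaining_fraction(depth_m, efold_m):
    """Fraction of surface irradiance remaining at a given depth
    for an exponential absorption profile I(z) = I0 * exp(-z / d)."""
    return math.exp(-depth_m / efold_m)

# e-folding depths from the text: ~3 m (red), ~15 m (blue-green)
for depth in (1, 3, 10, 30):
    red = remaining_fraction(depth, 3.0)
    blue_green = remaining_fraction(depth, 15.0)
    print(f"{depth:>2} m: red {red:.2f}, blue-green {blue_green:.2f}")
```

The table shows why the red component deposits its heat in the top few meters while blue-green light penetrates an order of magnitude deeper, producing the warm near-surface layer discussed above.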
During the day, the strong vertical temperature gradients within and at the base of the region that experiences diurnal warming make the measurement of sea surface temperature from engine intakes, temperature sensors attached to buoys, and other fixed-depth instruments difficult. At night, this shallow warm layer cools, and the continued surface cooling results in a skin temperature below that found a few centimeters deeper [Fig. 14(b)]. As discussed in Schlüssel et al. (22), the difference between the skin SST and the bulk SST, 2 m below the surface, ranges from −1.0°C to 1.0°C over a 6-week period with a mean difference of 0.1°C to 0.2°C (positive values indicate a cool skin temperature). Attempts to make in situ measurements of SST fall into two classes. Traditionally, measurements were made from ships with "bucket samplers"; these buckets sample a temperature in the upper meter of the ocean. Because of its ease of operation, shipboard SST measurement shifted to the reporting of "ship-injection" cooling water temperature collected from intakes ranging from 1 m to 5 m below the sea surface. A study by Saur (23) indicated that heating in the engine room resulted in a bias of about +1.5°C in these injection temperatures. Such problems with ship SSTs have led many people to the exclusive use of SST data from moored and drifting buoys. These systems sample temperature at a variety of depths ranging from 0.5 m to 1.5 m, depending on the hull shape and behavior in the wave field. Moored buoys with large reserve buoyancy


Figure 14. Ideal near-surface temperature proﬁles.

do not oscillate as much vertically as do the drifting buoys, which are often designed to minimize wind drag. Moored buoys provide temperature time series at a point, although the buoy does move up and down as it travels within its "watch circle." All these in situ SST measurements sample a near-surface temperature that is commonly referred to as the bulk SST. This bulk SST has been used in the common "bulk formulas" for the computation of the surface fluxes of heat and moisture. Depending on the surface wind, wave, and solar heating conditions, the bulk temperature can differ greatly from the skin SST. It is the temperature of this thin molecular boundary layer (24) that radiates out and can be detected by infrared sensors. The sharp temperature gradient (Fig. 14) in this molecular sublayer is always present at surface wind speeds up to 10 m/s (25). At higher wind speeds, the skin layer is destroyed by breaking waves. Earlier studies (25, 26) have shown, however, that the skin layer reestablishes itself within 10 s to 12 s after the cessation of the destructive force. Direct measurement of the skin temperature is not possible because any intrusion will destroy the molecular layer. Even drifting buoys cannot measure the temperature of this radiative skin layer because contact with the skin destroys it for the duration of the contact. Ship-mounted infrared radiometers have been used (22, 27) to observe the ocean's skin temperature without the atmospheric attenuation of the SST signal suffered by satellite-borne infrared sensors. In order to overcome the many errors inherent in this type of measurement, a reference bucket calibration system was developed. In this system, the bucket is continuously refreshed with sea water to ensure that no skin layer develops. The temperature of the reference bucket is continuously monitored,


and the radiometer alternately views the sea surface and the bucket, providing regular calibration of the radiometer. This calibration not only includes an absolute reference to overcome any drift of the radiometer blackbody references or instrument electronics but also provides a calibration of the radiometer for the effects of the nonblackness of the sea surface, the contributions of reflected radiation from sky and clouds, and the possible contamination of the entrance optics by sea spray. Monitoring the bucket temperatures with platinum resistance thermometers, accurate to 0.0125°C, made it possible to measure the skin temperature to an accuracy of 0.05°C. Other investigators (28) have relied on internal reference temperatures for the calibration of the skin SST radiometer, using pre- and post-cruise calibrations to ensure the lack of drift in the reference values. To account for the nonblackness of the sea surface and reflected sky radiation, the radiometer was periodically turned to point skyward during a variety of conditions. The resulting correction was an increase in the measured radiometric SST by an average of 0.25°C. This rather large average value emphasizes the need for a continuous reference calibrator for ship skin radiometer measurements of SST. All previously used radiometers have employed a single channel. Because the satellite radiometers that these ship measurements are designed to calibrate have more than a single channel, it would be useful if the shipborne radiometer also had multiple thermal infrared channels. Unlike the satellite measurements, where the different thermal channels are intended to be used for atmospheric water vapor attenuation correction, the multichannel shipboard radiometer would provide redundant measures of the emitted surface radiation and a better source of calibration information for the satellite SST.

Sea Surface Temperature Patterns.
Some applications of satellite SST estimates do not require computing a precise SST; rather, it is the SST pattern that is important, not the SST magnitude. Much as was seen in Fig. 13, where the drifting buoy trajectories matched the shapes of the temperature contours, the pattern indicates the current structure that we are interested in. This is most clearly the case for a western boundary current such as the Gulf Stream (Fig. 15), where the dark red indicates the core of the current that splits off into meanders and then eddies. In this image, we see three cold eddies that have separated from the Gulf Stream. The westernmost two are clearly still part of the Gulf Stream, but the eddy to the east appears to have separated, as marked by its colder (green) center. Note the meander to the northwest where colder slope water has been entrained from the north. This is the source water of the eddies that separate to the south. Different studies have used the shape of the Gulf Stream core to estimate the long-term behavior of the current system. A line that separates the cold waters to the north from those of the Gulf Stream can be drawn. This is done every 2 weeks, and the resulting lines are then analyzed for the time/space character of the Gulf Stream. Other studies have examined the interaction between the eddies and coastal bottom topography. Warm rings that

Figure 15. Sea surface temperature of the Gulf Stream region.

separate to the north eventually reach the continental shelf break, where strong upwelling that feeds the local fauna occurs. These warm eddies are known to be locations of good fishing conditions, and some groups sell SST maps as a service to fishermen.

Combining Visible and Thermal Infrared Data

Snow Cover Estimation. The process of routinely estimating seasonal snow cover and converting it to snow water equivalence (SWE) for water resource management is currently labor intensive, expensive, and inaccurate because of the inability of in situ measurements to cover all the spatial variations in a large study area. It is attractive to use satellite data to monitor the snow cover because of the unique synoptic view of the satellite (29). If data from operational weather satellites can be used to estimate the annual snowpack and annual SWE reliably, seasonal snowpack assessments can be improved, and the water supply for the coming summer season can be better predicted. We introduce a new method for using satellite data from the Advanced Very High Resolution Radiometer to estimate the snow cover and the SWE from a series of images composited over time to remove cloud contamination. A pseudo-channel 6 is introduced that combines two AVHRR channels to discriminate snow from cloud objectively. Daily snow retrievals in individual AVHRR images are composited over one-week intervals to produce a clear image of the snow cover for that week. This technique for producing snow cover maps routinely from AVHRR imagery differs from that used operationally (30). Our April 1990 snow cover was based on a composite of 7 to 14 different images, whereas the operational estimates are derived from a relatively limited number of AVHRR images. A key to estimating SWE from a snow cover map derived from satellite data is a knowledge of the relationship between snow cover, snow depth, and the resultant SWE.
A long (25 year) time series of historical snow pack measurements from Colorado SNOTEL and SNOW COURSE

sites was used to develop correlations between snow depth and elevation in the snow covered regions. It is well known (31) that above the tree line, or in areas where trees are so sparse that they provide little protection from the wind, there is no clear relationship between SWE and elevation. Instead, wind moves much of the snow after it falls, and other weather variables, like solar radiation, cause snow ablation to be larger in exposed areas. In most of the basins that we studied, less than 15% of the ground area is above tree line. Thus, we ignored this effect and assumed that SNOTEL and SNOW COURSE measurements provided good estimates of the snowpack at the elevations in each drainage basin. Colorado was separated into seven river drainage basins, and the relationship between snow depth and elevation for each of these basins was calculated. The lowest correlations were not for those areas with the greatest elevations but rather for those areas with the poorest coverage in terms of ground measurement sites of snow depth. In one case, it was necessary to eliminate some sites in order to improve the correlations. These sites were located in the same region and were apparently not representative. The AVHRR snow cover maps were merged with a digital terrain map (DTM) of Colorado, and the snow elevations were computed from this merged set. Using linear regressions between snow depth and elevation from the historical data, these snow elevations were converted into SWE for each individual drainage basin. In each basin, river run-off was estimated from the gauged measurements of each river. There were large differences between individual basin gauge values (cumulated from all river gauges in the individual basins) and with the SWE estimates from the satellite and DTM data. These differences appeared to cancel out when the entire state was considered. 
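The regression-based conversion from elevation to SWE described above can be sketched with a simple linear fit. The station values, cell elevations, and cell area below are invented for illustration; the real analysis used 25 years of SNOTEL and SNOW COURSE records per basin:

```python
import numpy as np

# Hypothetical SNOTEL-style observations for one drainage basin:
# site elevation (m) and early-April snow water equivalence (cm).
elev_m = np.array([2400.0, 2700.0, 2900.0, 3100.0, 3300.0])
swe_cm = np.array([8.0, 18.0, 25.0, 33.0, 40.0])

# Linear regression of SWE against elevation for this basin.
slope, intercept = np.polyfit(elev_m, swe_cm, 1)

# Apply the regression to the elevation of each snow-covered DTM
# cell in the basin (a made-up handful of cells here), clipping
# negative predictions to zero.
cell_elev_m = np.array([2500.0, 2800.0, 3200.0])
cell_swe_cm = np.clip(slope * cell_elev_m + intercept, 0.0, None)

# Basin total scales each per-cell estimate by the cell area.
cell_area_km2 = 1.1
basin_swe = cell_swe_cm.sum() * cell_area_km2
print(slope, intercept)
print(cell_swe_cm)
```

Summing such per-cell estimates basin by basin, and then statewide, mirrors the comparison against gauged run-off described in the text.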
There are transmountain diversions (human-made tunnels) that routinely shift water from one basin to another to satisfy demand. The only valid comparison was therefore to lump the seven basins together and make an estimate for the state as a whole. This statewide estimate of SWE from the satellite imagery was found to be within 11.2% of the combined river run-off from the gauged rivers (taken from the maximum in 1990) when the first two weeks of April 1990 were used to make the composite image of snow coverage.

In order to determine how to use AVHRR data to assess winter snowpack conditions, it was necessary to decide when during the year such an assessment should be made. Because we are interested only in the long-term snowpack and not in the short-term transient snow, a long time series of snowpack measurements and multiple consecutive days of spatial snow coverage (from AVHRR) are considered. We analyzed 25 years of data from the US Department of Agriculture Soil Conservation Service (SCS) based on automated SNOTEL stations and manual SNOW COURSE stations across the state of Colorado. The SNOTEL stations measure snow depth by monitoring snow pillows, collapsible bags filled with antifreeze solution. The SNOW COURSE data are seasonal measurements of snow depth and water equivalence made manually at bimonthly intervals during the snow season. Each of the SNOTEL stations is manually verified once each year. The SNOTEL measurements are recorded each day but are transmitted from these sites to a central location only on the first and fifteenth of each month. These values are published yearly in the Colorado Annual Data Summary, and digital values are available from the SCS West National Technical Center computer in Portland, Oregon.

The 25 years of SNOTEL data were averaged to find the mean annual SWE. The steplike character of this curve reflects the fact that it is built from biweekly observation averages. There is a linear increase from October to the maximum at the beginning of April, after which there is a sharp decrease during the melt season of late May through July. We therefore selected the first part of April as the time of maximum snow extent and hence the time when we should estimate the annual snow cover from the AVHRR imagery.

Snow Cover from AVHRR Imagery.

Snow Image Calculation. The primary advantage of satellite imagery for snow mapping is the synoptic areal coverage provided by a single satellite image. This eliminates the changes over time and space that take place while collecting SNOW COURSE measurements. Satellite images also provide much greater spatial resolution than is possible even by combining SNOW COURSE measurements with automated SNOTEL sites. The disadvantage of satellite imagery is that snow and clouds appear similar in the visible and thermal infrared images. Both are bright reflective targets with very similar values in the AVHRR visible channel 1 (0.58 µm to 0.635 µm). Clouds and snow cover also have similar temperatures, resulting in similar thermal infrared (channels 3, 4, and 5) signatures. To discriminate between cloud and snow, we introduce a fictitious channel 6 that is a combination of channels 3 (3.7 µm to 3.935 µm) and 4 (10.3 µm to 11.35 µm). We define channel 6 as

Chan 6 = Chan 3 − Chan 4

(2)

Because channel 3 contains both thermally emitted and reflected radiation, differencing channels 3 and 4 removes the thermal portion of the channel 3 response. In channel 6, clouds should therefore appear much "brighter" than the snow pixels. The channel 6 image is combined with the channel 1 visible image to determine clearly which areas of the visible image are clouds. Both clouds and snow are bright in channel 1, whereas clouds appear brighter than snow cover in the synthetic channel 6 image. Where bright values in channel 1 coincide with bright features in the channel 6 image, they are confirmed as cloud cover and masked out of the channel 1 image. The remaining bright values in channel 1 then represent the snow cover. This procedure is very different from the supervised classification procedure used by Baumgartner et al. (32) and Baumgartner (29), in which it is possible to define "pure snow cover pixels" based on in situ measurements (33). With our procedure it is not possible to assess quantitatively the degree to which a "snow cover" pixel is occupied by snow, but we believe that the snow signatures in channels 1 and "6" indicate that the pixel is more than 75% snow covered. Most
of the pixels will exceed that fraction of snow cover, with most of them at 100%. No in situ data were collected to analyze which of the snow pixels were 100% covered and which were less. This is an area for future study, needed to better establish the accuracy of this type of remote sensing for mapping snow cover.

The procedure just described was used to create "snow images" for each day in the month of April 1990. In general, at least one AVHRR image per day was available for analysis. In some cases there were two images (morning and afternoon satellite passes), and a correction was applied to account for differences in solar azimuth angles. Snow and clouds both reflect light in the visible part of the spectrum, but radiation with wavelengths > 1.2 µm is strongly absorbed by snow, and a snow-cloud discrimination channel centered at 1.6 µm is planned for later versions of the AVHRR. In the thermal infrared, both snow and clouds absorb strongly between 10 µm and 12 µm, but at 3.7 µm clouds reflect more light than snow; hence the virtual channel 6 value will be greater for clouds. The problem that arises with high cirrus clouds is that they have a high ice crystal content and an albedo similar to snow. For this reason it was necessary to do some additional processing to remove cloud residuals.

Cloud Residue Removal. After this mapping procedure was applied to each image, two different types of cloud-versus-snow discrimination errors remained. First, some snow pixels may have been mistakenly eliminated as clouds. Second, some clouds improperly identified as snow will remain. These errors are partially accounted for by producing a composite image over time. Because clouds move over short time periods while snow changes much more slowly, it is possible to retrieve those portions of individual images that are identified as snow and then composite them over a period of time.
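The channel-1/channel-6 cloud masking described above can be sketched as follows. The pixel values and both thresholds are illustrative assumptions; the article does not give numeric cutoffs, and operational values would come from calibrated imagery.

```python
import numpy as np

# Hypothetical 2x2 AVHRR scene: channel 1 albedo, and channels 3 and 4
# as brightness temperatures (K). Values are invented for illustration.
chan1 = np.array([[0.9, 0.85], [0.80, 0.1]])    # visible: bright = snow or cloud
chan3 = np.array([[295., 270.], [262., 280.]])  # 3.7 um: thermal + reflected
chan4 = np.array([[265., 268.], [260., 279.]])  # 11 um: thermal only

# Pseudo channel 6 (Eq. 2): the reflected 3.7 um component, large for clouds.
chan6 = chan3 - chan4

bright = chan1 > 0.7               # bright in the visible: snow or cloud
cloud = bright & (chan6 > 10.0)    # bright AND reflective at 3.7 um: cloud
snow = bright & ~cloud             # remaining bright pixels: snow cover
```

The `snow` mask here is the "snow image" produced for each day; the `cloud` mask is what gets masked out of channel 1.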
Thus bright features that change rapidly in time (between two images) are identified as clouds rather than snow even if they have passed the earlier filter process. In this way, we compensate for those image pixels that are clouds incorrectly identified as snow. Only those pixels that are identified as snow in more than one image are retained as snow cover. This temporal composite has an additional benefit. Because we are interested in the snow cover that contributes to the annual basin run-off, we are not interested in the transient snow cover that may exist right after a snowfall. We are instead interested in the continuing seasonal snow cover found above the snowline (approximately 9,000 ft), which is the primary contributor to the spring watershed in Colorado. Our temporal image composites will eliminate the possible contribution from short-term transient snow cover.

Composite Snow Images. For our study, week-long periods were used to composite the snow cover retrievals. Separate composite "snow images" were computed for the first and second weeks of April 1990, as well as a composite for both weeks. In these images, the bright white areas are the snow-covered regions and clearly depict the mountain ridges. The images seem to indicate heavier snow cover for the first week than for the second. This was caused by fresh snowfall early in the first week that then settled (melting, compacting, etc.) into snowpack in the second week. During this first week, the transient snow cover leads to a false estimate of the overall snowpack. The composite for both weeks is likely to be more representative of the total snowpack, as these transient conditions are smoothed out over the longer time period.

A technique was introduced to distinguish objectively between clouds and snow cover in multichannel AVHRR images. Introducing a pseudo-channel 6 as the difference between infrared channels 3 and 4, it was possible to discriminate snow cover from clouds by using channel 6 together with the visible image of channel 1. Both clouds and snow cover appear bright in the visible channel, whereas clouds are brighter in channel 6, so it is possible to identify those portions of the channel 1 image that represent clouds. These are masked off, and the remaining bright portions of the channel 1 image are determined to be snow cover. Using a relationship between elevation and snow water equivalent developed for each major river basin from 25 years of historical in situ measurements, the satellite snow cover estimates for April 1990 were converted to a snow water equivalent. These values were then compared with the gauged values of river run-off and found to agree within 11% for the first two weeks of April when averaged over the entire state of Colorado. It was not possible to use this analysis for individual basins because water was diverted from one basin to another according to need. In addition, the individual weeks in April did not perform as well, primarily because of a strong snowfall early in the first week. These results suggest that it is possible to enhance snow cover assessments for Colorado using satellite remote sensing.
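The temporal compositing rule above (retain only pixels flagged as snow in more than one image, since clouds move between passes while seasonal snow persists) can be sketched as follows; the three daily masks are invented for illustration.

```python
import numpy as np

# Stack of daily boolean snow masks from the single-image classifier
# (hypothetical 3-day, 2x2 example).
masks = np.array([
    [[True, False], [True, False]],   # day 1
    [[True, True],  [False, False]],  # day 2
    [[True, False], [True, True]],    # day 3
])

# Retain only pixels flagged as snow in more than one image: persistent
# seasonal snow survives, while moving clouds and transient snow drop out.
composite_snow = masks.sum(axis=0) >= 2
```

Over a week-long window the same rule also suppresses short-term transient snow that melts within a day or two of falling.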
Even though this approach will not replace the in situ measurements, it may be possible to reduce the number of expensive SNOW COURSE measurements or perhaps to improve the present snowpack assessments without any additional in situ measurements.

Ice Motion. It is possible to use a sequence of ice images to estimate ice motion. Visible and infrared imagery can be used here in addition to passive microwave imagery, which will not be discussed. The basic requirement is excellent geolocation for each of the images; any misregistration results in an error in the estimated ice motion. For visible and infrared images there is the usual requirement that the image be cloud free, which almost never happens in the polar regions, where cloud cover is more prevalent than in other locations. Because of this extreme cloudiness, we use a different compositing technique. Instead of compositing the brightness values of the individual images, we composite only those areas that are clear enough to calculate the ice velocity vector. Thus we seek areas that are clear in both the first and second images, compute the ice velocity vector there, and then composite the vectors from each pair of images over a period of time, much as we did before for the individual images.

Ice motion can be clearly seen in the two images in Fig. 16, which cover the area between Greenland and Spitsbergen known as Fram Strait. In this area, ice from the Arctic Basin flows as in a funnel through the strait, after which it continues to flow south. The strong shears present in this current regime are reflected in the very distorted images of the sea ice. We note that such cloud-free clarity is extremely unusual, and one cannot expect to view this area this clearly on any single occasion.

Figure 16. (a) Near-infrared image of sea ice in Fram Strait on April 21, 1986. (b) Near-infrared image of sea ice in Fram Strait on April 22, 1986.

The technique used to detect the ice motion is the maximum cross correlation (MCC) technique (34), which uses the cross correlation between the first and second images. To find the motion between the images, a small "template" window from the first image is compared against a larger search window in the second image. The goal is to find the location at which the cross correlation between the template and search windows is a maximum. A velocity vector is then drawn from the center of the template window to this maximum correlation position. When search windows are overlapped, a densely populated velocity vector field is produced. This procedure can either be repeated for every image pair or, as above, the clear portions of the images can be used to compute vectors that are themselves then composited to form a larger vector field. It is more attractive, however, to use the MCC approach to calculate the ice motion from the passive microwave images of the Special Sensor Microwave Imager (SSM/I), which has been done for both hemispheres by Emery et al. (35). Thanks to the insensitivity of the passive microwave channels to atmospheric constituents such as water vapor, it is possible to view the ice surface regardless of cloud cover or atmospheric moisture. The only drawback is a decrease in spatial resolution, from 1 km with the AVHRR to at best 12.5 km with the 85.5 GHz channel of the SSM/I; lower-frequency SSM/I channels are correspondingly coarser, up to about 25 km at 37 GHz.

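The MCC template matching used for ice motion can be sketched as below. This is a minimal sketch, not the implementation of Ninnis et al. (34): the window sizes, search radius, and the synthetic shifted-floe example are all assumptions made for illustration.

```python
import numpy as np

def mcc_displacement(img1, img2, ty, tx, tsize, search):
    """Maximum cross correlation (MCC): take a small template from img1
    at (ty, tx) and find the offset within +/-search pixels at which it
    best matches img2. The offset is the ice displacement in pixels."""
    tpl = img1[ty:ty + tsize, tx:tx + tsize].astype(float)
    tpl -= tpl.mean()
    best_corr, best = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = ty + dy, tx + dx
            if y < 0 or x < 0 or y + tsize > img2.shape[0] or x + tsize > img2.shape[1]:
                continue  # candidate window falls outside the image
            win = img2[y:y + tsize, x:x + tsize].astype(float)
            win -= win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
            if denom == 0:
                continue  # featureless window, correlation undefined
            corr = (tpl * win).sum() / denom
            if corr > best_corr:
                best_corr, best = corr, (dy, dx)
    return best  # (dy, dx) displacement

# Synthetic check: a textured "floe" shifted 1 pixel down and 2 right.
img1 = np.zeros((12, 12))
img1[3:8, 3:8] = np.arange(25.0).reshape(5, 5)
img2 = np.roll(np.roll(img1, 1, axis=0), 2, axis=1)
dy, dx = mcc_displacement(img1, img2, ty=3, tx=3, tsize=5, search=3)  # (1, 2)
```

Repeating this over overlapping templates yields the dense velocity vector field described in the text; dividing the displacement by the time between images gives velocity.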
Forest Fire Detection and Monitoring

Another application that uses both the visible and thermal infrared channels is forest fire detection and monitoring. The vast forested areas of most continents require constant observation to spot fires when they first start and to deploy firefighting personnel and equipment to control and extinguish them. A variety of systems are used today, including ground-based electrical systems that detect lightning strikes reaching the ground, and observation towers from which forest observers can see large parts of the forest. These are supplemented with aerial surveillance, which also requires human observers. All these methods are labor intensive and cannot possibly cover the entire area where forest fires occur. By comparison, satellite-based methods are, or at least can be made, fairly automated, requiring human intervention mainly in responding to a site suspected of being on fire. Because of the potential for false fire identification, additional confirmation is usually needed before resources are deployed to fight a fire. It may be that, after considerable experience, it will become clear which satellite image signatures attend definite fires, reducing the number of fires that require additional confirmation.

There are two fundamental signatures of fires in weather satellite imagery. The first is the "hot spot," which is the active fire itself (Fig. 17). It appears in the thermal infrared image as an extremely hot group of pixels; the fire must be fairly large to fill the 1 km pixels of the AVHRR. The infrequent coverage of the SPOT and LANDSAT satellites makes them less than optimal for rapid detection and monitoring. The frequent coverage of GOES is available only at 1 km resolution in the visible channel; the lower resolution in the thermal infrared makes it impossible to map the hot spot unless it is larger than about 4 km.
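A hot-spot test in this spirit can be sketched as follows. The article gives no numeric criteria, so the brightness temperatures and both thresholds below are illustrative assumptions; a common trick is to require both an absolute 3.7 µm temperature and a large 3.7 µm minus 11 µm difference, since sun-warmed surfaces heat both channels roughly together while subpixel fires spike channel 3.

```python
import numpy as np

# Hypothetical brightness temperatures (K) for AVHRR channel 3 (3.7 um)
# and channel 4 (11 um) over a 2x2 scene; one pixel contains a fire.
t37 = np.array([[300., 345.], [310., 298.]])
t11 = np.array([[299., 305.], [300., 297.]])

# Flag pixels that are extremely hot at 3.7 um AND much hotter there
# than at 11 um (thresholds are illustrative, not operational values).
hot_spot = (t37 > 320.0) & ((t37 - t11) > 15.0)
```

The resulting mask marks candidate fire pixels that would still need the additional confirmation discussed above before resources were deployed.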
Figure 17. Thermal infrared (top) and 3.7 µm (bottom) images from the 1988 Yellowstone fires.

One of the real problems in finding the hot spot is that the smoke plume often obscures the actively burning part of the fire. But this smoke plume can also be used as a supplemental source of information on the fire itself. Here the problem is distinguishing the smoke plume from the ambient cloud cover. In the visible channel, smoke and clouds look quite similar, and it is often
very difficult to discriminate between the two. The thermal infrared image provides some additional information because cloud should be much colder than the smoke plume. The shape and position of the smoke plume can also be used to infer something about the intensity of the fire and its position. All this information can be pooled to develop a description of the fire even before it is seen on the ground.

The major shortcoming of this approach is the need for more frequent coverage than is available from polar-orbiting sensors. With the present two polar-orbiter satellite system, it is only possible to view the fire every 4 h to 6 h, which is enough time for the fire to change substantially in intensity, direction, etc. GOES gives the required temporal resolution with its half-hourly imagery, but at lower spatial resolution in the thermal infrared channels. This is mostly the effect of sampling from a much higher altitude for GOES (geostationary, about 36,000 km) as compared with ∼800 km for the AVHRR. For the present, it is best to combine the visible GOES imagery with the AVHRR images to construct a more complete picture of the fire and how it is progressing over time. This information can be used to augment (rather than replace) the ground-based fire detection and monitoring systems. In regions where there are no fire-monitoring stations, the satellite data may be the primary source of information for both detecting and monitoring the fire; there they will guide the deployment of firefighting resources.

Although much research still needs to be done on the detection and monitoring of forest fires with satellite data, it is possible to construct a system that supplies the pertinent agencies with important information at the outset of fires, on their location, and selected information about fire intensity and progression in space and time.
This system can be largely automated, with no need for operator intervention until it comes to analyzing image features to evaluate the effect of cloud cover or perhaps to estimating which way the fire will progress. This is an important benefit of the satellite system when compared with any land-based system.

SUMMARY

There are a great many other applications of visible and infrared satellite imagery beyond those presented in this article. In addition, new applications of these data are being discovered all the time, and it is difficult to stay abreast of the different processes being sensed by the satellite, or at least being derived from the satellite data. One area of particular potential is merging traditional visible and infrared satellite imagery with data from passive and active microwave sensors. These different bands are very complementary, with the microwave offering all-weather viewing at a substantially reduced spatial resolution. The optical channels, on the other hand, cannot provide uniform or comprehensive coverage because of cloud cover, but the temporal sampling is excellent. Thus we need to develop indices that take advantage of both types of data to yield new information about Earth's surface processes.

Another important research focus will be to find the best ways to integrate satellite data with model data. The assimilation of satellite data is a promising development in weather forecasting and should be equally beneficial in studies of the Earth's surface. By merging models and satellite data, we are able to infer something about the fundamental processes influencing the Earth as sensed by the satellite.

BIBLIOGRAPHY

1. L. J. Allison and E. A. Neil, Final Report on the TIROS 1 Meteorological Satellite System, NASA Tech. Rep. R-131, Goddard Space Flight Center, Greenbelt, MD, 1962.
2. J. C. Barnes and M. D. Smallwood, TIROS-N Series Direct Readout Services Users Guide, Washington, DC: National Oceanic and Atmospheric Administration, 1982.
3. A. Schwalb, The TIROS-N/NOAA A-G satellite series, NOAA Tech. Memo. NESS 95, NOAA, Washington, DC, 1978.
4. A. Schwalb, Modified version of the TIROS N/NOAA A-G Satellite Series (NOAA E-J)—Advanced TIROS N (ATN), NOAA Tech. Memo. NESS 116, Washington, DC, 1982.
5. ITT, AVHRR/2 Advanced Very High Resolution Radiometer Technical Description, prepared by ITT Aerospace/Optical Division, Ft. Wayne, IN, for NASA Goddard Space Flight Center, Greenbelt, MD, under NASA Contract No. NAS526771, 1982.
6. K. B. Kidwell (ed.), Global Vegetation Index Users Guide, Washington, DC: NOAA/NESDIS/NCDC/SDSD, 1990.
7. M. Weaks, LAC scheduling, Proc. North Amer. Polar Orbiter Users Group, 1st Meet., NOAA National Geophysical Data Center, Boulder, CO, 1987, pp. 63–70.
8. G. R. Rosborough, D. Baldwin, and W. J. Emery, Precise AVHRR image navigation, IEEE Trans. Geosci. Remote Sens., 32: 644–657, 1994.
9. D. Baldwin and W. J. Emery, AVHRR image navigation, Ann. Glaciology, 17: 414–420, 1993.
10. D. Baldwin, W. Emery, and P. Cheeseman, Higher resolution earth surface features from repeat moderate resolution satellite imagery, IEEE Trans. Geosci. Remote Sens., in press.
11. W. J. Emery and M. Ikeda, A comparison of geometric correction methods for AVHRR imagery, Can. J. Remote Sens., 10: 46–56, 1984.
12. F. M. Wong, A unified approach to the geometric rectification of remotely sensed imagery, Univ. British Columbia, Tech. Rep. 84-6, 1984.

13. D. Ho and A. Asem, NOAA AVHRR image referencing, Int. J. Remote Sens., 7: 895–904, 1986.
14. L. Fusco, K. Muirhead, and G. Tobiss, Earthnet's coordination scheme for AVHRR data, Int. J. Remote Sens., 10: 625–636, 1989.
15. C. J. Tucker et al., Remote sensing of total dry matter accumulation in winter wheat, Remote Sens. Environ., 13: 461, 1981.
16. C. J. Tucker and P. J. Sellers, Satellite remote sensing of primary production, Int. J. Remote Sens., 7: 139, 1986.
17. C. J. Tucker et al., Satellite remote sensing of total herbaceous biomass production in the Senegalese Sahel: 1980–1984, Remote Sens. Environ., 17: 233–249, 1985.
18. S. N. Goward et al., Comparison of North and South American biomass from AVHRR observations, Geocarto International, 1: 27–39, 1987.
19. S. N. Goward et al., Normalized difference vegetation index measurements from the Advanced Very High Resolution Radiometer, Remote Sens. Environ., 35: 257–277, 1991.
20. C. Sakamoto et al., Application of NOAA polar orbiter data for operational agricultural assessment, Proc. North Amer. NOAA Polar Orbiter Users Group, 1st Meet., NOAA National Geophysical Data Center, Boulder, CO, 1987, pp. 134–160.
21. E. P. McClain, W. G. Pichel, and C. C. Walton, Comparative performance of AVHRR-based multichannel sea surface temperatures, J. Geophys. Res., 90: 11587–11601, 1985.
22. P. Schluessel et al., On the skin-bulk temperature difference and its impact on satellite remote sensing of sea surface temperature, J. Geophys. Res., 95: 13341–13356, 1990.
23. J. F. T. Saur, A study of the quality of sea water temperatures reported in logs of ships weather observations, J. Appl. Meteor., 2: 417–425, 1963.
24. R. Saunders, The temperature at the ocean-air interface, J. Atmos. Sci., 24: 269–273, 1967.
25. E. Clauss, H. Hinzpeter, and J. Mueller-Glewe, Messungen der Temperaturstruktur im Wasser an der Grenzfläche Ozean-Atmosphäre, Meteor Forschungsergeb., Reihe B, 5: 90–94, 1970.
26. G. Ewing and E. D. McAlister, On the thermal boundary layer of the ocean, Science, 131: 1374–1376, 1960.
27. H. Grassl and H. Hinzpeter, The cool skin of the ocean, GATE Rep. 14, 1, pp. 229–236, WMO/ICSU, Geneva, 1975.
28. P. A. Coppin et al., Simultaneous observations of sea surface temperature in the western equatorial Pacific ocean by bulk, radiative and satellite methods, J. Geophys. Res., suppl., 96: 3401–3409, 1991.
29. M. F. Baumgartner, Snow cover mapping and snowmelt runoff simulations on microcomputers, Remote Sens. Earth's Environ., Proc. Summer School, Alpbach, Austria, 1989.
30. T. R. Carroll et al., Operational mapping of snow cover in the United States and Canada using airborne and satellite data, Proc. 1989 Int. Geosci. Remote Sens. Symp. / 12th Canad. Symp. Remote Sens., Vancouver, B.C., Canada, 1989.
31. K. Elder, J. Dozier, and J. Michaelsen, Snow accumulation and distribution in an alpine watershed, Water Resources Res., 27: 1541–1552, 1991.
32. M. F. Baumgartner, K. Seidel, and J. Martinec, Toward snowmelt runoff forecast based on multisensor remote-sensing information, IEEE Trans. Geosci. Remote Sens., GE-25: 746–750, 1987.
33. A. Rango, Assessment of remote sensing input into hydrologic models, Water Res. Bull., 21: 423–432, 1985.


34. R. N. Ninnis, W. J. Emery, and M. J. Collins, Automated extraction of sea ice motion from AVHRR imagery, J. Geophys. Res., 91: 10725–10734, 1986.
35. W. J. Emery, C. W. Fowler, and J. A. Maslanik, Satellite derived Arctic and Antarctic sea ice motions: 1988–1994, Geophys. Res. Lett., 24: 897–900, 1997.

WILLIAM J. EMERY
University of Colorado, Boulder, CO
