Radio Interferometry and Satellite Tracking
For a complete listing of titles in the Artech House Space Technology and Applications Series, turn to the back of this book.
Radio Interferometry and Satellite Tracking

Seiichiro Kawase
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

Cover design by Vicki Kane
ISBN 13: 978-1-60807-096-1
© 2012 ARTECH HOUSE 685 Canton Street Norwood, MA 02062
All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.
Contents

Preface

Part I: Radio Interferometer

1 Overview of Part I: Radio Interferometer

2 Receiving Antenna
  2.1 Receiving Points and the Baseline
  2.2 Reference Point
  2.3 Polarization
  2.4 Sidelobe
  2.5 Mechanical Stability

3 Receiving Equipment
  3.1 Frequency Conversion
  3.2 Receiving Routes
  3.3 Phase Stability
  3.4 Reference Correction
  3.5 Cable Stability Condition
  3.6 Reference Coupler
  Reference

4 Phase Detection
  4.1 Direct Phase Measurement
  4.2 Separate Measurement
  4.3 Fourier Transform
  4.4 Problem of Image Spectrum
  4.5 Signal Processing for Phase Measurement
  4.6 Noise Reduction
  4.7 Tracking Nonbeacon Signals
  Reference
  Appendix 4A: Window and Phase Measurement
  4A.1 Beacon Measurement
  4A.2 Nonbeacon Measurement

5 Signal, Noise, and Precision
  5.1 Required SNR
  5.2 Signal Power and Noise Power
  5.3 Beacon Downlink Budget
  5.4 Tracking a Weak Signal
  5.5 Estimates in PFD
  Reference

6 Error Factors
  6.1 Baseline Error
  6.2 Phase Ambiguity
  6.3 Atmospheric Refraction
  6.4 Effect of Rainwater
  Reference

7 Design and Installation
  7.1 System Layout
  7.2 Reflecting Interferometer

Part II: Geostationary Satellite Orbit

8 Overview of Part II: Geostationary Satellite Orbit
  Reference

9 Kepler's Laws
  9.1 Kepler's First Law
  9.2 Kepler's Second Law
  9.3 Kepler's Third Law
  9.4 Physical Meanings
  9.5 Significance of Kepler's Laws

10 Near-Stationary Orbit
  10.1 Geostationary and Near-Stationary Orbits
  10.2 Orbit with Small Eccentricity
  10.3 Motion Due to Small Eccentricity
  10.4 Motion Due to Nonstationary Radius
  10.5 Motions in an Orbital Plane
  10.6 Motion Perpendicular to an Orbital Plane
  10.7 Relative Position Coordinates
  Reference
  Appendix 10A: Width of Figure 8-Like Locus

11 Changing the Orbit
  11.1 Orbital Energy
  11.2 In-Plane Orbital Changes
  11.3 In-Plane Orbital Maneuver
  11.4 Inclination Maneuver

12 Orbital Perturbations
  12.1 Perturbing Forces
  12.2 Nonspherical Shape of the Earth
  12.3 Patterns of Longitudinal Drift
  12.4 Solar Radiation Pressure
  12.5 Position of the Sun
  12.6 Long-Term Effect
  12.7 Gravity of the Sun
  12.8 Tilting of the Orbital Plane
  12.9 Gravity of the Moon
  12.10 Sun-Moon Combined Effect
  Reference

13 Station Keeping
  13.1 EW Keeping for Drift-Rate Control
  13.2 EW Keeping for Eccentricity Control
  13.3 Combined EW Keeping
  13.4 NS Keeping
  13.5 Factors Depending on Satellites
  Reference

14 Overcrowding and Regulations
  14.1 Orbital Regulations
  14.2 Problem of Overcrowding
  Reference

Part III: Interferometric Tracking

15 Overview of Part III: Interferometric Tracking

16 Tracking and Orbit Estimation
  16.1 General Concept
  16.2 Styles of Orbit Estimation
  16.3 Choice of Estimation Style
  16.4 Software Units
  16.5 Meaning of Orbit Estimation
  16.6 Tracking Using an Interferometer
  Reference

17 Azimuth-Elevation Tracking
  17.1 Azimuth-Elevation Angles
  17.2 Azimuth-Elevation Interferometer
  17.3 Detection Unit Vector of a Baseline
  17.4 Orbit Estimation
  17.5 Accuracy Considerations
  17.6 Nonhorizontal Baseline

18 Longitude Tracking
  18.1 Satellite Longitudes
  18.2 Longitude-Monitoring Interferometer
  18.3 Orbit Estimation
  18.4 Interferometer Setup
  18.5 Monitoring Examples
    18.5.1 Single Satellite
    18.5.2 Two Satellites
    18.5.3 Different-Band Satellites
  Reference

19 Range-Azimuth Tracking
  19.1 Combined Tracking for Orbit Estimation
  19.2 Merit of Combined Tracking
  19.3 Interferometer Hardware and Performance
  19.4 Station Keeping with Safety Monitoring
  Reference

20 Differential Tracking
  20.1 Differential Tracking Concept
  20.2 Interferometer Hardware
  20.3 Orbit Estimation
  20.4 Possible Applications
  Reference

21 Rotary-Baseline Interferometer
  21.1 Rotary Baseline
  21.2 Rotary Baseline with Mirrors
  21.3 Rotary-Baseline Interferometer
  21.4 Operation and Data Processing
  21.5 Orbit Estimation
  21.6 Long-Term Monitoring
  21.7 Error Considerations
  21.8 Error Calibration
  21.9 Nongeometrical Error
  Reference

22 Geolocation Interferometer
  22.1 Geolocation: Principle and Problem
  22.2 Weak-Signal Detection
  22.3 Delay Limit and Delay Line
  22.4 Correlation Processing
  22.5 Time-Integration Effect
  22.6 Problem of Satellite-Transponder Phase
  22.7 Phase Measurement Accuracy
  22.8 Locating the Earth Station
  22.9 Transponder Frequency Errors
  22.10 Orbital Information
  22.11 Quick Orbit Estimation
  Reference

About the Author

Index
Preface

The worldwide growth of space telecommunications has caused a rapid increase in the number of satellites operating in geostationary orbits. Satellites are being placed in orbit with less and less distance separating them; sometimes the amount of separation is so small that satellite control needs to operate with extreme caution to ensure orbital safety. Satellites currently being planned for launch are competing for vacant orbital positions, with more and more effort being required for coordination with other satellites. Satellites are thus faced with the problem of an overcrowded orbit. The purposes of this book are to address this problem and to show how radio interferometers can be used for tracking and monitoring the orbits of geostationary satellites in the overcrowded environment.

Radio interferometry is a passive means of satellite tracking, with high accuracies theoretically possible for observing direction angles. Its potential use was noticed during the early years of artificial satellites. In actuality, however, there have been few or no cases of interferometric tracking, in particular for geostationary satellites. This is because interferometers had some inherent difficulties in establishing operational accuracy. This book will demonstrate that we can overcome the difficulties to make the interferometer truly capable of precise satellite tracking.

Satellites are faced with an additional problem. RF interference tends to occur when an Earth station emits unwanted signals to satellites. Locating such an Earth station on the map requires a special tracking method, which is based
on the same principle as the satellite tracking interferometer. So this topic is also covered in this book.

Chapters of this book are grouped into three parts. Part I addresses the fundamentals of the interferometer. It starts by defining the concepts and terminology, such as baseline vector, reference points, and interferometric phase. Next it covers interferometer hardware, including antennas, receiving equipment, and signal processing for phase detection. The accuracy of the tracking measurements is discussed in terms of signal and noise and other systematic errors. The contents of Part I are the essential items that must be considered for every interferometer.

Part II discusses the orbital dynamics of geostationary satellites. Because our tracking targets are geostationary satellites, we need to know in what manner they move, if they move, in orbit. Discussions start with the fundamental laws of orbits, then go through maneuvers and perturbations, until finally reaching the station-keeping methods. Discussions are straightforward, without relying on complex mathematical equations, because we prefer an approach that makes comprehension easy while not losing exactness. One can regard Part II as a concise, understandable discourse on the theory of geostationary orbits.

Part III illustrates how interferometers are used for satellite tracking. Different types of interferometers are shown, because they have different purposes of tracking and orbit estimation. Parts I and II are frequently referred to, as they are put together to derive interferometer applications. Use of an interferometer for locating unwanted Earth stations is also discussed in Part III.

In regard to the content, Part I is categorized as electronic engineering, whereas Part II covers mechanical engineering.
The author has made every effort to write Part I such that it can be followed without trouble by those who are in mechanical engineering, and Part II by those in electronic engineering, because an understanding of Part III requires both. Chapters are thus straightforward and meant to be self-contained; external material is referred to only if it is truly worth referral. For this reason, showing long lists of references is not the style of this book.

The author wishes to thank the National Institute of Information and Communications Technology (NICT), where he worked in satellite communications, tracking, and orbital dynamics. Kashima Space Technology Center, a local branch of NICT, was the operational site for the interferometers seen in Chapters 18, 20, and 21; all members of the engineering and administration departments who gave support to those interferometer projects are cordially thanked.

The author's deepest thanks go to the late Dr. Erik Mattias Soop. At his suggestion the author became interested in interferometric tracking and
tried orbit estimation analysis during a visit to the European Space Operations Center in the 1980s, and that was the starting point for the author's involvement in interferometric tracking of geostationary satellites.

Interferometric satellite tracking is a relatively young technology in the history of geostationary satellites that started in the 1960s. The author hopes the present book will attract interest in this young technology, thus promoting its further development, for surely it will give us momentum to confront the problem of overcrowded geostationary orbits.
Part I Radio Interferometer
1 Overview of Part I: Radio Interferometer

The radio interferometer, or simply interferometer as we will refer to it, provides a means of measuring the directional angle of downlink microwaves from a target satellite. Its basic idea is illustrated in Figure 1.1. Antennas receive the satellite microwaves, and the relative phases measured between the antennas are used to point to the satellite direction. The pointing direction has two degrees of freedom, which are often expressed in azimuth and elevation angles. Correspondingly, the phases are measured between antennas (1) and (2) and between (2) and (3).

Originally, there existed a method for measuring satellite direction by using a large-diameter parabolic antenna with autotracking. One can actually regard the interferometer as deriving from the autotracking antenna. The principle of the autotracking antenna may be understood as illustrated in Figure 1.2. If the satellite is right in front of the antenna, then by symmetry the satellite microwave arrives at feed elements (a) and (b) at the same time. If the satellite is at some slanted angle as indicated by the broken lines in Figure 1.2, the microwave will arrive earlier at B than at A of the antenna dish and, hence, will arrive earlier at element (b) than at (a).

Let us assume here that we are receiving a beacon signal from the satellite. The relative time difference of signal arrivals at (a) and (b) is then detected as a relative phase difference. A drive motor then slews the antenna, until the phase difference becomes zero. This makes the antenna point right to the satellite, and we determine the satellite direction by reading the slewing angle of the driving shaft.

Sometimes there is a single feed horn, instead of two elements, placed at the focal point. In this case the horn may be regarded as (a) and (b) being combined, and the relative phase difference is detected by picking up a higher order uneven mode excited in the horn.
Figure 1.1 Basic interferometer.

Figure 1.2 Principle of autotracking.

If the phases at A and B are different,
the phase at the focal point shows uneven distribution, thus exciting the higher order mode. So, the tracking principle is similar to that of the two-element case.

Tracking the satellite direction thus relies on the existence of a relative phase difference between A and B of the antenna dish. If this is so, we can place small antennas at A and B, instead of using a large dish, to detect the relative phase of A and B. This is the basic principle of the interferometer, and because we can take the A and B pair horizontally and vertically across the large antenna dish, two pairs of small antennas should appear. In this way the interferometer
takes the shape illustrated in Figure 1.1, with antennas (1) and (2) working as one pair and antennas (2) and (3) as another pair.

In the early period of satellite communications, Earth stations employed large-diameter antennas, because during that period, satellites were small in size and mass, and so had less transmission power than nowadays. Over the decades satellites have evolved to have more and more transmission power; correspondingly, the use of large antennas in Earth stations has become less and less frequent. This means that the Earth stations are now losing the ability to measure the direction of satellites, and this is where the interferometer can show its significance.

The interferometer has advantages over the autotracking antenna. First of all, the interferometer does not need a large-diameter antenna. Its small antennas are placed at fixed points and do not need drive mechanisms if the tracking target is a geostationary satellite. The accuracy of direction measurement improves with the distance between the antennas. Low-cost, accurate satellite tracking thus becomes possible.

In contrast, however, the interferometer has its own problems. The interferometer is based on precise phase measurement, whereas in reality it is no easy task to measure precise phases in a practical environment with various error sources existing. The interferometer has only small parts of A and B taken out from the large dish, as was illustrated in Figure 1.2, with the major part between A and B being discarded. So, the interferometer lacks a sharp beam that should point toward the satellite, and this causes some indefiniteness in determining the direction. Getting over these problems is essential for realizing an interferometer. Part I addresses these problems and considers how to solve them while discussing the design of interferometer hardware.
Chapters 2 through 4 discuss the most basic elements of the interferometer, including antennas, receiving equipment, and phase detection. Chapter 5 discusses the quality of satellite downlinks, which is also a basic element. Chapters 6 and 7 discuss system design and installation, while considering how to eliminate error sources in order to get the best performance out of the interferometer hardware.

The antennas we saw in Figure 1.1 make up pair (1)–(2) and pair (2)–(3). Both pairs have identical functions. For that reason, we focus our interest on a single pair of antennas throughout the discussions in Part I. For the time being we will consider receiving satellite beacons; later, we will also consider nonbeacon signals.

Upon discussing the interferometer hardware, we will assume the frequency band of 3 to 4 GHz (C band), 11 to 12 GHz (Ku band), or both. This is because increasing numbers of satellites are now using these frequency bands, which adds to the overcrowding problem of orbital positions and frequency channels.

Interferometric tracking was once used when the earliest artificial satellites were put into low Earth orbits [1]. Its use, however, did not last long
because it was soon replaced by Doppler, range rate, or ranging. Afterwards, interferometric satellite tracking was not often discussed. Using the interferometer for geostationary satellites is thus a new concept for us, and this is why the following discussions start with those basic elements. Discussions will proceed mostly in a self-contained manner, with concise background information about satellite communication links, for instance from [2], given in the chapters, in particular in Chapter 5.
References

[1] Bate, R. R., D. D. Mueller, and J. E. White, Fundamentals of Astrodynamics, New York: Dover, 1971, pp. 135–136, 138.

[2] Agrawal, B. N., Design of Geosynchronous Spacecraft, Englewood Cliffs, NJ: Prentice-Hall, 1986, Chap. 7.
2 Receiving Antenna

Antennas used for interferometers are basically the same as those used in satellite communication Earth stations. However, our particular purpose of interferometric phase measurement requires us to consider the antennas from a different point of view. We must define a reference point for each antenna for the measurement, and we must also consider antenna polarization and mechanical rigidity in a different way. These points are discussed in the following sections.
2.1 Receiving Points and the Baseline

The interferometer we are going to discuss is principally made up of two receiving points #1 and #2, as illustrated in Figure 2.1, so that the phase difference can be measured between #1 and #2. The line segment connecting points #1 and #2 is called a baseline. The baseline is a vector quantity defined by its length and orientation. The interferometer of interest will have a baseline length of several meters or longer, but not much longer than a couple of tens of meters, because the interferometer will be placed on the premises of a satellite tracking station.

If the target satellite to be tracked is in the direction perpendicular to the baseline in Figure 2.1, then its downlink signals arrive at the two receiving points at the same time. This is because the satellite is distant enough with regard to the baseline, and so the lines of sight to the satellite at point #1 and point #2 are parallel to each other. If the satellite direction changes by an angle θ, then point #1 becomes more distant from the satellite than point #2, by B sin θ, where B is the baseline length. If λ is the wavelength of the satellite downlink signal, then the signal phase at point #1 will show a delay of 2πB sin θ/λ with regard to point #2. This kind of phase delay is called the interferometric phase, and measurement of the
Figure 2.1 Principle of interferometer, with receiving points #1 and #2.
interferometric phases will provide information about the satellite's direction angles and, hence, the satellite's orbital motion.

For example, suppose we have a 10-m-baseline interferometer for receiving a beacon signal that has a 25-mm wavelength (in a 12-GHz band). If angle θ changes from 0 to 0.001 deg, the interferometric phase will show a change of 2.5 deg. So, detecting the phase to a few degrees of resolution provides good information about the satellite's orbital position, because the satellite position is supposed to be controlled within a limited band of 0.1 deg. This example may be thought of as a basic model for our interferometer.
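The numbers in this example are easy to check. The following minimal sketch (our illustration, not from the book) evaluates the interferometric phase 2πB sin θ/λ for the 10-m baseline and 25-mm wavelength above:

```python
import math

def interferometric_phase(baseline_m, wavelength_m, theta_rad):
    # Phase delay at receiving point #1 relative to #2, in radians:
    # 2 * pi * B * sin(theta) / lambda
    return 2.0 * math.pi * baseline_m * math.sin(theta_rad) / wavelength_m

B, lam = 10.0, 0.025            # 10-m baseline, 25-mm wavelength (12-GHz band)
phase0 = interferometric_phase(B, lam, math.radians(0.0))
phase1 = interferometric_phase(B, lam, math.radians(0.001))
change_deg = math.degrees(phase1 - phase0)
print(round(change_deg, 1))     # a 0.001-deg direction change gives about 2.5 deg of phase
```

The sensitivity scales with B/λ, which is why a longer baseline (or a shorter wavelength) improves the angular resolution.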
2.2 Reference Point

The receiving points in Figure 2.1 are assumed to be dimensionless, whereas in reality, receiving antennas have dimensions. So, we need to define a reference point for each antenna before defining the baseline of the interferometer. The reference point is considered as follows.

Suppose we have an ideal antenna, that is, an antenna with a structure that is ideally symmetric, as shown in Figure 2.2. The antenna has an axially symmetric main dish, and its primary center feed puts an axially symmetric radiation pattern onto the main dish. The antenna is receiving a signal from a satellite right in front of it. Now, suppose we rotate this antenna around a pivot line P1 by a small angle; this makes the main dish move to the position shown by the broken line in Figure 2.2. This rotation will cause no change in the phase of the signal received by this antenna. Similarly, we rotate the antenna around a pivot line P2, orthogonal to P1, by a small angle. Again this rotation causes no change in the phase of the received signal. The cross point of P1 and P2 then has a useful property and is valid as a reference point when tracking the changing directions of the satellite.
Figure 2.2 Reference point of an ideal antenna.
The reference point is defined in this way for an ideally symmetric antenna or a center-fed parabolic antenna. If the antenna has a nonsymmetric structure, as is the case for commonly used offset-fed parabolic antennas, its reference point can be found by testing. Suppose we have two antennas acting as an interferometer, and they are receiving a target satellite's signal. The test for one of the antennas is illustrated in Figure 2.3. We rotate the antenna around its elevation pivot P by a small angle ∆θ. This rotation will in general cause a small change in the interferometric phase, by ∆φ. Let R1 be a line parallel to the line of sight to the satellite, and assume that
Figure 2.3 Finding the reference point by conducting an elevation rotation test.
10
Radio Interferometry and Satellite Tracking
this line is away from the pivot by x. The reference point is then somewhere on the line R1 if x satisfies
∆φ = 2πx∆θ/λ (2.1)
This test must proceed in a short time period, say, a few minutes, so that the change in the satellite’s direction will be negligibly small during the test and so the phase change ∆φ will come from the antenna rotation only. Because Figure 2.3 is a side view, R1 is actually for a plane parallel to the line of sight to the satellite and parallel to the elevation rotation axis. The reference point thus exists somewhere in this plane R1. After setting the antenna back to its original pointing position, we do one more test as illustrated in Figure 2.4. We rotate the antenna by a small angle ∆θ around its azimuth axis P. This rotation causes a small phase change of ∆φ. The reference point then exists somewhere on a line R2 parallel to the line of sight to the satellite, and R2 is away from the azimuth axis by y, with y satisfying
∆φ = 2πy∆θ/λ (2.2)
Figure 2.4 Finding the reference point by conducting an azimuth rotation test.

Here again R2 is for a plane that is parallel to the line of sight to the satellite and parallel to the azimuth rotation axis; within this plane R2, the reference point exists. The reference point exists therefore on the line at which planes R1 and R2 cross each other. This line will cross the main dish surface, and this crossing point can be set as the reference point of the antenna. We do the same
test for the other antenna to find its reference point, and finally, the baseline is defined as connecting the two reference points.

Note that the reference point we have defined is different from the phase center of an antenna. The phase center is a hypothetical point at which a spherical wave originates. So, it applies to horn antennas or omnidirectional antennas, but not to an antenna that radiates a parallel beam. The reference point is an abstract entity, but we need to define it by those tests.

If the satellite is not stationary, the antennas must rotate to keep pointing to the satellite. In such a case, the reference points may become moving points, which complicates the process. Because our target is a geostationary satellite, the antennas are fixed without driving. So, the reference points are regarded as fixed points.

If the two antennas have identical shapes and their primary feeds have identical patterns of radiation, then we do not need to know their exact reference points. Simply mark the geometric center of the main dish surface of each antenna. Then, connecting the two center points determines the baseline. This is valid because the target satellite is distant enough, and so a slight parallel shift in the baseline's placement makes no difference in directional tracking. Using identically designed antennas is thus a good choice in designing an interferometer.
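Equations (2.1) and (2.2) can be inverted to turn the two rotation tests into offsets of the planes R1 and R2 from the rotation axes. The sketch below is our illustration of that step; the test values are hypothetical, not from the book:

```python
import math

def offset_from_rotation_test(dphi_deg, dtheta_deg, wavelength_m):
    # Invert delta_phi = 2*pi*x*delta_theta/lambda to find x, the distance of
    # plane R1 (elevation test) or R2 (azimuth test) from the rotation axis.
    dphi = math.radians(dphi_deg)
    dtheta = math.radians(dtheta_deg)
    return dphi * wavelength_m / (2.0 * math.pi * dtheta)

# Hypothetical test: a 0.5-deg elevation rotation changes the interferometric
# phase by 40 deg at a 25-mm wavelength.
x = offset_from_rotation_test(40.0, 0.5, 0.025)
print(round(x, 3))  # offset of plane R1 from the pivot, in meters (about 0.318)
```

Running the same computation on the azimuth test gives y, and the two planes together fix the line on which the reference point lies.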
2.3 Polarization

A microwave is characterized by the shape its electric-field vector, or E vector, traces out as the wave propagates. If the E vector rotates in such a way that it traces out a screw-like shape, the wave is said to be circularly polarized, and according to the direction of the rotation, it is called either right-hand circular polarization (RHCP) or left-hand circular polarization (LHCP). If the microwave propagates with its E vector confined in a fixed plane, that is, the vector does not rotate, it is linearly polarized (LP).

Old terrestrial microwave links used horizontal and vertical polarizations, and these names were later adopted in satellite links. Actually, the E vectors of LP downlinks from satellites are not precisely horizontal or vertical, but at some skew angles from horizontal or vertical, according to the geometry of the satellite and the Earth station; however, it is practical to regard them as horizontal or vertical in an approximate, generic sense. Thus a satellite communication downlink is polarized as RHCP, LHCP, horizontal LP, or vertical LP, and any Earth station antenna must have the same polarization as the downlink to be received.

This condition, however, does not necessarily apply to the interferometer. Suppose our interferometer antennas are set to linear polarization. The interferometer can then operate for RHCP and LHCP downlinks at a loss of 3 dB, and for downlinks of vertical and horizontal LP at a loss of 3 dB or near it if
the antenna polarizer is set at some intermediate angle. Such a setting is favorable if there are two or more satellites in the receiving antenna beam and we want to track every satellite in order to determine whether any close approach among satellites is going to happen; that is, if we want to try orbital safety monitoring. If two or more satellites are operating within a small orbital region, their beacons, or telemetry carriers, must have been assigned to different frequencies by coordination. It is then possible for the interferometer to track each beacon separately as a target.

Note here that the two antennas of the interferometer must be set at equal polarization angles. If they are not equal, a constant error will arise in the interferometric phase for a CP downlink, and this error will be positive or negative according to which polarization (RHCP or LHCP) the downlink has. This kind of error must never happen if we try the abovementioned orbital safety monitoring.
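The 3-dB figures quoted above follow from standard polarization-mismatch relations; the sketch below is our own ideal-antenna illustration, not a calculation from the book. A linear antenna couples half the power of a circularly polarized wave, and two linear polarizations misaligned by an angle ψ couple with a factor cos²ψ, which is also one half at ψ = 45 deg:

```python
import math

def db(power_ratio):
    # Express a power ratio in decibels.
    return 10.0 * math.log10(power_ratio)

# LP antenna receiving a CP downlink: half the power couples.
cp_loss_db = db(0.5)

# LP antenna misaligned by psi degrees from an LP downlink: cos^2(psi) couples.
def lp_mismatch_db(psi_deg):
    return db(math.cos(math.radians(psi_deg)) ** 2)

print(round(cp_loss_db, 1))            # about -3.0 dB for either RHCP or LHCP
print(round(lp_mismatch_db(45.0), 1))  # about -3.0 dB at the intermediate 45-deg setting
```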
2.4 Sidelobe

Because our tracking targets are geostationary satellites, we can assume that the target satellites should stay within the beams of fixed receiving antennas if the antennas are not too large in diameter and if the antennas are set at right pointing positions. If one of the antennas has a pointing error such that the target downlink is received by the antenna's first sidelobe, then an error of 180 deg would arise in the interferometric phase. Sidelobe reception can occur because interferometric phase detections can have high sensitivities, as we will see later. If we are trying orbital safety monitoring, a fatal error occurs if one satellite is received by two antennas' mainlobes while another satellite is received by one antenna's mainlobe and the other antenna's sidelobe.
2.5 Mechanical Stability

The antennas must have rigid structures so as to withstand wind pressures. Because the wind pressure changes with time, the antenna structure will suffer deformations in a vibrating manner. It is important for the vibrating deformation to settle down to zero after the relief of the wind pressure; that is, the deformation must be elastic. Nonelastic deformations may occur if the vibration causes any slip between bolt-fixed parts of the antenna. The antenna must be rigid enough in this sense, and this requirement for rigidity becomes stricter for interferometers than for satellite communication antennas.
3 Receiving Equipment

The next consideration after the antennas is the receiving equipment. The receiving equipment transfers the satellite signals collected by the antennas to a phase-measuring unit. Microwaves from the satellite are at frequencies of gigahertz or higher, but the phase-measuring unit can accept signals only at frequencies of tens of megahertz, because the unit works at a digital sampling frequency. So, the receiving equipment must convert the signal frequency downward without losing any of the phase information contained in the original signal. This is again a particularity of the interferometer as compared with the case of satellite communications. In the following sections, we discuss how to maintain phase accuracy when receiving the satellite signals.
3.1 Frequency Conversion

The mechanism of converting a signal frequency is illustrated in Figure 3.1. Here, an incoming high-frequency signal, or radio-frequency (RF) signal as it is often called, is converted down to a lower, intermediate-frequency (IF) signal. Suppose the RF signal is written as sin(ωR t + φR), with frequency ωR and phase φR. An oscillator, usually called a local oscillator, generates a sinusoidal signal sin(ωL t + φL), with frequency ωL and phase φL. The RF and the local signals are multiplied together, or mixed, as it is often said:
sin(ωR t + φR) × sin(ωL t + φL)
= (1/2) cos[(ωR − ωL)t + (φR − φL)] − (1/2) cos[(ωR + ωL)t + (φR + φL)]  (3.1)
The mixing yields two signals with different frequencies. One frequency is ωR − ωL, which is lower than the incoming RF, and the other is ωR + ωL, which is higher. This relationship is illustrated in Figure 3.2. We want the lower frequency for the IF, so we pick it up by using a filter, while cutting away the higher one. Sometimes the local frequency is set higher than the RF, and the IF frequency is then obtained as |ωR − ωL|. Choosing the local frequency ωL thus allows us to obtain the IF at a desired frequency. This kind of unit is called a downconverter.

Now, turn to (3.1) and look at the relationship between phases. The original RF signal has a phase φR, while the downconverted IF has the phase φR − φL. That is, a shift of phase occurs when the RF is converted down to an IF, and the amount of shift equals the local signal's phase. This is an important property of the downconverter. The frequencies and phases of the signals are summarized in Figure 3.1.
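The product-to-sum behavior of the mixer can be checked numerically. This small sketch (ours; the frequencies and phases are arbitrary) confirms that the product of RF and local sinusoids equals a difference-frequency term carrying phase φR − φL plus a sum-frequency term, which the IF filter later removes:

```python
import math

def mixer_output(t, wR, phiR, wL, phiL):
    # The mixer simply multiplies the RF and local signals.
    return math.sin(wR * t + phiR) * math.sin(wL * t + phiL)

def two_term_form(t, wR, phiR, wL, phiL):
    # Product-to-sum identity: the difference-frequency (IF) term keeps
    # phase phiR - phiL; the sum-frequency term is removed by filtering.
    diff_term = 0.5 * math.cos((wR - wL) * t + (phiR - phiL))
    sum_term = 0.5 * math.cos((wR + wL) * t + (phiR + phiL))
    return diff_term - sum_term

wR, wL, phiR, phiL = 5.0, 3.0, 0.7, 0.2   # arbitrary values for the check
max_err = max(abs(mixer_output(t, wR, phiR, wL, phiL) - two_term_form(t, wR, phiR, wL, phiL))
              for t in (0.0, 0.123, 0.5, 1.7, 2.9))
print(max_err < 1e-12)  # True: the identity holds at every sample time
```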
3.2 Receiving Routes

The interferometer has two receiving antennas, and correspondingly two receiving routes make up the receiving equipment, as illustrated in Figure 3.3. The two routes are identical. A receiving route begins with a low-noise amplifier (LNA), an amplifier with a special property (the reason for its use will be clarified in Chapter 5). Next come the downconverters. Theoretically, a single converter could convert the RF down to the desired frequency by the choice of the local frequency. Practically, a local frequency too near the RF may cause difficulties in the electronics, so it is common to use two or even more downconverters. In the present case, the RF is converted to an IF, and the IF is converted down once more to the frequency to be input to phase detection. We know that a downconverter causes a shift of signal phase. This shift is additive if there are two converters. Our purpose is to measure the phase
Figure 3.1 Converting the frequency.
Figure 3.2 Relationships among frequencies.
Figure 3.3 Receiving equipment diagram. LNA: low-noise amplifier; D/C: downconverter; LO: local oscillator; RO: reference oscillator; H: hybrid.
difference between the satellite signals at the end of route #1 and at the end of route #2. For this measurement to be correct, the phase shifts due to frequency conversion in route #1 and route #2 should be equal. This is why the local signals are distributed from a common local oscillator (LO) to every downconverter, as illustrated in Figure 3.3. Actually, the first and second converters need different frequencies for their locals. So, the LO distributes a signal, for example, at 10 MHz, and each downconverter synthesizes from it the local signal it needs, keeping its local phase locked to the distributed signal. Besides the LO, there is a reference oscillator (RO) that generates a sinusoidal signal and distributes it as a common signal into the receiving routes. This is primarily for testing the receiving routes, but it also has a more important function, one that becomes critical for the interferometer, as we will see later.
3.3 Phase Stability

The above requirement for equal phase should be stated more precisely: the total phase delay over route #1 should be constant, that over route #2 should be constant, and the two constants should be equal. Could this requirement for stable phase be fulfilled if we use standard components made for satellite communications? Normally, receiving routes in satellite communications are required to keep their phase delays stable enough that the demodulation of phase shift keying (PSK) works without error. This requirement is directed, however, at rapid phase fluctuations near the rate of the PSK. Phase fluctuations at slowly changing rates would not affect PSK demodulation. So, we must watch out for possible slow fluctuations in the total phase delay over the receiving route if we use standard components for satellite communications. The possibility of slow phase fluctuation is tested as follows. Receiving equipment was assembled, following Figure 3.3, for the Ku band. The RO generates a simulated satellite beacon, which is divided into two and input to the LNA in each route. By watching the phase detection results, we can determine whether the phase fluctuation in question exists. Figure 3.4 shows the result of this test. In this test, the RO generated a simulated Ku-band beacon at 12,500 MHz. The phase was detected every second, phase data were collected for 20 sec in one session, and sessions were repeated every 5 min. The data collected in this way over 2 days are shown in Figure 3.4.
Figure 3.4 Slow phase fluctuations in receiving routes for the Ku-band test case.
Here, we observe the phase fluctuating by more than 180 deg. The observed fluctuation may be attributed to the downconverters. A downconverter is actually not as simple as that illustrated in Figure 3.1; it can be more complex, with two or more conversion stages within it and two or more frequency synthesizers making local signals internally. Any phase fluctuations in these synthesizers add up to yield the total fluctuation. Also, any filter used for selecting the wanted signal has its own phase delay, which may change gradually with temperature over the long term. The total fluctuations thus originating in the receiving routes are observed differentially between route #1 and route #2 in Figure 3.3. The observed phase fluctuation is slow enough that it would not affect PSK demodulation, but it is by no means negligible in our interferometric phase measurement.
3.4 Reference Correction

We need to compensate for the phase fluctuations occurring in the receiving routes. To do this, we use the reference oscillator shown in Figure 3.3. Suppose that we are receiving a target satellite beacon and measuring its interferometric phase as φS. At the same time, we receive the RO signal and measure its interferometric phase as φR. If the phase fluctuation affects the satellite beacon and the RO signal in the same manner, then φS − φR will be the correct measurement of the interferometric phase that we want to know. This will be true if the satellite beacon and the RO signal are not widely separated in frequency, and if φS and φR are measured at the same time. This process is called the reference correction, and the RO signal is called the reference signal. The LO supplies its signal also to the RO, so that the RO has its phase locked; this is better for the phase coherence of the system. If we received a beacon coming from a real satellite, it would be difficult to see whether the reference correction was working well, because the satellite is in motion. So, we consider a test, slightly modifying the system of Figure 3.3 as illustrated in Figure 3.5. We place a simulating oscillator (SO) that simulates a satellite beacon and combine its signal with the RO signal. The LO signal is not supplied to this SO, because in reality the satellite beacon's phase has no relationship with the LO's phase. The interferometer receives the simulated satellite beacon instead of a true satellite, and to this received signal we apply the reference correction using the RO. The simulated satellite is not in motion, so we can clearly observe the performance of the reference correction. The test result is shown in Figure 3.6. The simulated beacon and the reference signal were 2 MHz apart in frequency in this case. Here, the phase is stable to within a fraction of a degree.
If we look closely at the data, they do not seem to be purely random but include some small systematic undulations.
Figure 3.5 Adding an oscillator for the reference correction test. SO: simulating oscillator for satellite beacon.
Figure 3.6 Result of reference correction for the Ku-band test case.
So it would be reasonable to say that the reference correction reduces the phase fluctuation to the order of 1 deg. Ideally, the reference correction would reduce the phase fluctuation to zero. In practice, the fluctuation is reduced to a nearly constant value, but not to zero. This result suggests that the phase delay of a receiving route depends on frequency, and that the difference between the beacon and reference frequencies is the reason for the nonzero constant, or bias. Because it is impossible to set the satellite beacon and the reference signal at exactly the same frequency, this kind of constant bias is unavoidable in interferometric phase measurements. We must keep this bias in mind when applying our interferometer to satellite tracking.
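The correction φS − φR can be illustrated with a toy model in which a common random-walk drift contaminates both the beacon and the reference measurements; all numeric values here are made up for illustration:

```python
import numpy as np

# Toy model of reference correction: the receiving routes add a common,
# slowly drifting phase drift(t) to every signal passing through them.
# phi_sat_true is the interferometric phase we want to recover.
rng = np.random.default_rng(0)
t = np.arange(200)
drift = np.cumsum(rng.normal(0, 2.0, t.size))   # random-walk drift, deg

phi_sat_true = 37.0                             # deg, what we want
phi_s = phi_sat_true + drift                    # measured satellite phase
phi_r = 0.0 + drift                             # measured reference phase

corrected = phi_s - phi_r                       # reference correction
print(round(corrected.min(), 6), round(corrected.max(), 6))  # 37.0 37.0
```

In this idealized model the drift cancels exactly; the real system retains the small frequency-dependent bias discussed above, because the beacon and reference cannot share the same frequency.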
To summarize the discussions so far, we can assemble the receiving equipment by using standard units and components for satellite communications, and the use of reference correction stabilizes the system phase to the order of 1 deg.
3.5 Cable Stability Condition

If the reference correction is to work properly, we must not forget one condition. In the diagram of Figure 3.3, the reference RO signal is divided and distributed to route #1 and route #2 by cables (1) and (2). Let us refer to the phase of the reference signal at the end of cable (1) as reference phase (1), and to that at the end of cable (2) as reference phase (2). Because we know that a constant bias can exist in the interferometric phase even after reference correction, it is not a prerequisite that reference phases (1) and (2) be precisely equal. What is required is that the difference between reference phases (1) and (2) be constant; in other words, reference phases (1) and (2) should vary equally if they vary at all, and this becomes a problem of cable temperatures. Suppose we have a sample of coaxial cable 1m in length. When its temperature goes from 0° to 30°C, which corresponds to a winter-summer temperature variation, the copper line inside the cable becomes longer, through linear thermal expansion, by 0.5 mm. Meanwhile, the permittivity of the polyethylene dielectric filling the cable changes with temperature [1]. The permittivity ε then determines the velocity v of signal propagation along the cable as follows:
v = c/√ε  (3.2)
Here, c is the velocity of light. The cable's electrical length is inversely proportional to v, hence proportional to √ε. At 0°C the permittivity is 2.39, which makes the electrical length 1.55 times the physical length; at 30°C the permittivity is 2.37, which makes it 1.54 times. Hence the electrical length becomes 10 mm shorter over that temperature range; combined with the thermal expansion, the electrical length becomes 9.5 mm shorter. This estimate, though a rough one, can be used for examining the cable temperature condition, as follows. Suppose, as an example, that cables (1) and (2) in Figure 3.3 are both 5m long and are used in a 10m-baseline interferometer. If the reference correction is to work properly, the difference between reference phases (1) and (2) must not change by more than 1 deg. Correspondingly, if the signal wavelength is 25 mm (in the 12-GHz band), the difference between the electrical lengths of cables (1) and (2) must not change by more than 0.07 mm. So, we must keep
the temperatures of cables (1) and (2) uniform to within 0.2°C. This would not be too difficult if we place covers over the cables to keep direct sunlight off them, while allowing air to flow around the cables so as to equalize their temperatures. Other cables, used for transmitting IF signals and distributing local oscillator signals, also have electrical lengths that may change with temperature. The effects of their changing lengths are removed by the reference correction. So, the possible imbalance between cables (1) and (2) discussed above is the single source of cable phase errors in our interferometric phase measurement.
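The 1-deg phase budget translates into the quoted cable-length tolerance by simple arithmetic, ΔL = λ·Δφ/360:

```python
# Cable-length tolerance for a 1-deg phase budget at a 25-mm wavelength,
# following the 12-GHz example in the text.
wavelength_mm = 25.0          # ~12-GHz band
max_phase_deg = 1.0           # allowed change in reference-phase difference
max_delta_len = wavelength_mm * max_phase_deg / 360.0
print(round(max_delta_len, 3))   # ~0.069 mm, the 0.07 mm quoted in the text
```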
3.6 Reference Coupler

The reference signal is normally coupled to the LNA's input port by using a directional coupler. If our target satellite has sufficient transmission power, we will choose small-diameter receiving antennas. For a small antenna, the LNA and downconverter are often combined into a single low-noise block converter that fits a small feed unit. In such a case, it is not practical to use a directional coupler for coupling the reference signal. A possible substitute is a test horn, as illustrated in Figure 3.7. The antenna dish has a 1.2m effective diameter, and it operates in the C band. The horn is attached to the edge of the dish and radiates the reference signal toward the feed unit. The radiation pattern of the feed unit is adjusted slightly wider than normal, so that the feed unit may pick up the reference radiation. This adjustment causes a slight loss in antenna gain along with an increase in the antenna's effective noise temperature. Practically, this is not a problem, because the phase detection will have sufficient sensitivity. In the test case shown in Figure 3.7, the coupling loss from the horn to the feed in the C band was 36 dB; in this case, the field strength required of the reference radiation was low enough that we did not need a radio license. The horn support must be rigid enough to prevent any change in the horn-feed distance, because otherwise the reference correction would not work correctly.

Figure 3.7 Reference-coupling horn attached to an antenna dish. (Courtesy of NICT.)
Reference

[1] Riddle, B., and J. Baker-Jarvis, "Complex Permittivity Measurements of Common Plastics over Variable Temperatures," IEEE Trans. on Microwave Theory and Techniques, Vol. 51, No. 3, 2003, pp. 727–733.
4 Phase Detection

In this chapter, we discuss how to measure the interferometric phase for a satellite beacon and for a reference correction signal. Before measuring the phase, we need to identify where the target beacon signal is by observing the signal spectrum. So, the input signal is processed by Fourier analysis, and the resulting spectrum is also used for determining the phase. From this principle, we can draw a diagram of a phase-measuring unit. The accuracy of phase measurement depends on the signal-to-noise ratio; how to improve the accuracy by reducing the effect of noise is thus an important topic of this chapter. Our discussion starts with the measurement of beacon signals, and then nonbeacon signals are considered, to widen the tracking capability.
4.1 Direct Phase Measurement

A simple idea for measuring the phase is illustrated in Figure 4.1. Consider here a satellite beacon for measurement. A bandpass filter selects the beacon signal x(t) from receiving route #1 and, similarly, y(t) from receiving route #2. Signals x(t) and y(t) have identical waveforms, although they are at different positions along the time axis. A time counter starts at the moment signal x crosses zero from negative to positive, and stops at the moment signal y crosses zero from negative to positive. The time interval thus measured can be converted into a phase angle if the signal frequency is known. In this way, the phase difference between signals x and y can be measured directly.

Figure 4.1 Direct phase measurement by time-interval counting.

This kind of direct measurement might appear clear and practical, but in reality it has problems. First, any direct current (dc) component existing in the signal, or any distortion of the signal waveform, will cause an error in the zero-cross timing. Second, the bandpass filter for signal selection has a delay time, and this delay time becomes longer if the filter's passband is set narrower for better signal selection. One cannot assume that this delay never changes; rather, we must regard the filter delay as an error source in the time-interval measurement. These are the reasons why we do not adopt direct phase measurement for our interferometer.
4.2 Separate Measurement

A different concept can be used for phase measurement, in which the phases of x(t) and y(t) are measured separately and their difference is calculated by subtraction. Consider a beacon signal x with frequency ω and phase φ, as follows:

x(t) = cos(ωt + φ)  (4.1)
To measure the phase of this signal, we use a circuit like that illustrated in Figure 4.2. We prepare local oscillator signals, cos ωt and −sin ωt, of which −sin ωt is made from cos ωt by using a 90-deg phase shifter. We then tune the local frequency ω to the incoming signal frequency. Signal x is multiplied by the locals, and the results are made into time averages by integrating them over a period of time, to obtain Ix and Qx. The integration time periods are equal for Ix and Qx, and the period is set long compared with the signal period, 2π/ω. The signal x(t) in (4.1) can be written as

x(t) = cos ωt cos φ − sin ωt sin φ  (4.2)
Figure 4.2 Phase measurement for signal x. INT: time integration.
So, Ix and Qx obtained after integration are proportional to cos φ and sin φ, respectively. This is because the squared terms, cos² ωt and sin² ωt, have the same dc components, while the cross-term cos ωt sin ωt vanishes. That is, Ix and Qx indicate the cosine and sine components of signal x(t); they represent the in-phase and quadrature-phase components of x(t) with respect to the zero-phase reference of cos ωt. We can now calculate the phase of signal x, in reference to the local signal, as follows:

φx = tan⁻¹(Qx/Ix)  (4.3)
The phase of signal y(t) is measured in the same way by using an identical circuit prepared for y:

φy = tan⁻¹(Qy/Iy)  (4.4)
Here, phases φx and φy themselves do not have any physical meaning, since the local signal may be at an arbitrary phase; the local signal has no relationship with the incoming satellite beacon. After differencing, the effect of the arbitrary local phase vanishes, and we obtain the interferometric phase:

φ = φx − φy  (4.5)
For this measurement process to work, the local frequency ω must be tuned exactly to the frequency of the incoming signal, because otherwise I and Q will both vanish after time integration, making it impossible to determine the phase. This concept of phase measurement eliminates the problems that accompany the direct measurement discussed earlier. The dc components in the incoming signals have no effect on I and Q. Signal distortions have minimal effect, because harmonic components resulting from the
distortion will be nullified after time integration. Narrow bandpass filters are not needed, since the signal to be measured is selected by the tuning of the local frequency.
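The separate-measurement scheme of Figure 4.2 can be sketched numerically; the 100-Hz tone and phase values below are arbitrary stand-ins for a real beacon:

```python
import numpy as np

# Multiply the signal by cos(wt) and -sin(wt), time-average to get I and
# Q, then take atan2(Q, I). The local phase reference is arbitrary, but
# it cancels in the difference phi_x - phi_y.
fs = 10000.0
t = np.arange(0, 1.0, 1/fs)
w = 2*np.pi*100.0                     # local tuned to the signal frequency

def measure_phase(sig):
    i = np.mean(sig * np.cos(w*t))    # in-phase component, ~cos(phi)/2
    q = np.mean(sig * -np.sin(w*t))   # quadrature component, ~sin(phi)/2
    return np.arctan2(q, i)

phi_x, phi_y = 1.1, 0.4               # phases relative to the local, rad
x = np.cos(w*t + phi_x)
y = np.cos(w*t + phi_y)
print(round(measure_phase(x) - measure_phase(y), 3))   # 0.7
```

The averaging spans exactly 100 signal periods here, so the cross-terms integrate to zero exactly; in practice the integration is simply made long compared with 2π/ω.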
4.3 Fourier Transform

The meaning of I and Q becomes clear if the signal processing in Figure 4.2 is written in terms of complex numbers. If we set

X(ω) = ∫ x(t) e^(−jωt) dt  (4.6)
then Ix is the real part of X(ω), Qx is the imaginary part of X(ω), and φx is the argument of X(ω). Similarly, if we set

Y(ω) = ∫ y(t) e^(−jωt) dt  (4.7)
then Iy, Qy, and φy are, respectively, the real part, imaginary part, and argument of Y(ω). Equations (4.6) and (4.7) were written for one particular frequency ω and for particular signals x and y. If ω is regarded as a variable, then X is the Fourier transform of signal x, where x is now regarded as the signal received from route #1. As ω varies, the local frequency in Figure 4.2 sweeps over the bandwidth of the received signal, to find a signal to be measured. In the same context, Y is the Fourier transform of y, the signal received from route #2. We can now define our concept of interferometric phase measurement, as illustrated in Figure 4.3. The received signals, x and y, are made into Fourier transforms, X and Y. We observe the power spectrum |X(ω)| to find a peak at some frequency ω where a beacon signal exists, and at this ω determine the interferometric phase φ = φx − φy. Finding the peak corresponds to the right tuning of the local oscillator mentioned before. Note that arg[X*] = −arg[X]; that is, the argument of a
Figure 4.3 Interferometric phase measurement. FT: Fourier transform; CP: cross-conjugate product.
complex number changes its sign under complex conjugation. The interferometric phase is therefore calculated as

φ = φx − φy = arg[X(ω)Y*(ω)]  (4.8)
The cross-term X(ω)Y*(ω) is referred to as a cross-spectrum, and this is the key to the phase measurement. Note that |XY*| provides the power spectrum of the received signal, because x(t) and y(t) have identical waveforms, with only their phases being different. We will use |XY*| for observing the power spectrum, because it is much better than |X| with regard to the signal-to-noise ratio; the reason will become clear later. To summarize, the phase is measured in the following steps:

1. Make signals x and y into Fourier transforms, X and Y.
2. Form the cross-conjugate spectrum XY*.
3. Observe the power spectrum |XY*| and find a peak at frequency ω.
4. Determine the interferometric phase as the argument of X(ω)Y*(ω).
Steps 3 and 4 must be done both for the satellite beacon signal and for the reference correction signal. The frequency of the reference correction signal must be set so that it does not fall on other existing signals, and this is checked in step 3.
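The four steps can be sketched as follows, with a simulated on-bin beacon and an assumed phase difference between the two routes:

```python
import numpy as np

# y is a phase-shifted copy of x, so XY* peaks at the beacon bin and
# its argument gives the interferometric phase directly.
n, fs = 1024, 20.48e6
t = np.arange(n) / fs
f_beacon = 2.0e6                         # chosen to fall exactly on a bin
true_phase = 0.9                         # assumed phase difference, rad

x = np.cos(2*np.pi*f_beacon*t)
y = np.cos(2*np.pi*f_beacon*t - true_phase)

X, Y = np.fft.fft(x), np.fft.fft(y)      # step 1
Z = X * np.conj(Y)                       # step 2: cross-conjugate spectrum

k = np.argmax(np.abs(Z[:n//2]))          # step 3: peak in the power spectrum
print(k, round(np.angle(Z[k]), 3))       # step 4: bin 100, phase 0.9
```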
4.4 Problem of Image Spectrum

Fourier analysis thus plays the key role in phase measurement. Handling the Fourier spectrum, however, needs care. Consider, for example, a signal:

x(t) = cos(αt + β)  (4.9)
If we put this signal into the Fourier transform of (4.6), the resulting spectrum X(ω) will look like that shown in Figure 4.4, with two spectral lines at frequencies α and −α. This is because of the relationship

cos(αt + β) = ½ e^(j(αt+β)) + ½ e^(−j(αt+β))  (4.10)
Accordingly, the phase, or the argument of X(ω), shows different values: β at ω = α, and −β at ω = −α. Suppose that the signal has changed its phase slightly, from β to β + ∆β, in (4.9). This change causes the argument of X(ω) to
Figure 4.4 Spectrum of a sinusoidal signal.
change by +∆β at ω = α, and by −∆β at ω = −α. That is, the Fourier spectrum shows a nonhomogeneous response in phase when a phase shift occurs in the signal. Now, consider a signal with a bandwidth, spanning in frequency from zero through B. This is actually the case for the signals output from the receiving routes after downconversion. Its Fourier spectrum will look like that shown in Figure 4.5. At any frequency component α, there is a corresponding negative frequency component −α, owing to the relationship (4.10). So, the spectrum has two parts, (1) and (2) in Figure 4.5, symmetric with respect to ω = 0. Part (2), on the negative frequency side, may be called an image spectrum of part (1) on the positive frequency side. It is the existence of the image spectrum that causes the above-mentioned nonhomogeneous response. The information contained in the signal is represented by part (1) alone, because part (2) simply mirrors part (1); the image spectrum is a redundant part. The redundant part consumes storage memory and processing time without merit, so we should eliminate it in signal processing.
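The conjugate-symmetric image and its sign-flipped phase response can be confirmed numerically (tone frequency and phases here are arbitrary):

```python
import numpy as np

# A real signal's spectrum is conjugate-symmetric: the line at -alpha
# mirrors the line at +alpha, and a phase shift +dbeta on the signal
# appears as +dbeta at +alpha but -dbeta at -alpha.
n, fs = 1000, 1000.0
t = np.arange(n) / fs
alpha, beta = 50.0, 0.4                 # Hz, rad

for dbeta in (0.0, 0.1):
    spec = np.fft.fft(np.cos(2*np.pi*alpha*t + beta + dbeta))
    pos = np.angle(spec[50])            # component at +alpha
    neg = np.angle(spec[-50])           # image component at -alpha
    print(round(pos, 3), round(neg, 3)) # pos = beta+dbeta, neg = -(beta+dbeta)
```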
4.5 Signal Processing for Phase Measurement

On the basis of the discussions above, our phase-measuring unit has a diagram like that illustrated in Figure 4.6. This design follows the standard process of digital sampling with a fast Fourier transform (FFT). Signal processing is based
Figure 4.5 Presence of image spectrum.
Figure 4.6 Diagram of phase-measuring unit. AD: analog-digital sampling; LPF: lowpass filter; DS: down sampling; INT: time integration. Double lines indicate the flow of complex-number data.
on the operation of complex numbers, so as to match the Fourier transform, which operates on complex-number data. In the following, we trace how the processing proceeds, assuming specific design parameters for the unit so as to demonstrate a practical measurement case. The FFT is treated as a given, established technique; its details can be found in a reference, typically [1]. The signal from each receiving route is assumed to have a bandwidth of 20 MHz, downconverted into the span from 0 to 20 MHz. Signal x is from receiving route #1, and y from receiving route #2. We will trace the processing of signal x, which applies equally to signal y. Because the highest frequency component of x is 20 MHz, the sampling rate for x must be at least 40 MHz; in the present case the sampling rate is set at 40.96 MHz to create a better relationship with the FFT cycle rate. The signal at this stage (a) has a spectrum like that of Figure 4.7(a). Because of sampling, the spectrum pattern repeats along the frequency axis, as indicated by the broken lines. Here, the spectrum part spanning from −20 to 0 MHz is the image spectrum. Next, in Figure 4.6, a local oscillator generates a signal e^(jωt) with ω set at −10 MHz. Signal x and the local signal are multiplied together for mixing. At this stage (b), the signal spectrum becomes like Figure 4.7(b), shifted by −10 MHz along the frequency axis. When the signal comes out of a lowpass filter with a passband of 10 MHz, its spectrum becomes like that shown in Figure 4.7(c′). Here, it is essential for the filter not to pass any frequencies higher than 10 MHz, so a slight roll-off just inside 10 MHz is tolerated. The filter thus cuts away the image spectrum.
Figure 4.7 Signal spectrums at stages of data processing.
The highest frequency component is now 10 MHz, so a sampling rate of 20.48 MHz suffices, and the spectrum will look like that shown in Figure 4.7(c). Although the sampling rate is halved, the signal now has real and imaginary parts, so the amount of information is the same as in the original signal. In Figure 4.6, disregard the spreader for the time being, and assume that the signal goes through from (c) to (d). We collect sample data {xi} over 50 µsec, so as to prepare a set of 1,024 data points. This data set is made into spectrum data {Xi} of 1,024 points by using the FFT. The spectrum {Xi} corresponds to Figure 4.7(c′), which has a span of 20 MHz. Similarly, signal y is converted into spectrum data {Yi}, and finally, the cross-spectrum {Zi} = {XiYi*} is obtained. A data set {Zi} of 1,024 points is thus obtained every 50 µsec, and the data sets are integrated over a period of time, for example, over 1 sec, for smoothing. In other words, we obtain a vector Z every 50 µsec and accumulate the vectors over 1 sec. We can now observe the power spectrum |Zi|, determine where the object signals are, and determine the arguments of Zi for the satellite beacon and for the reference signal, at a repetition cycle of 1 sec. The phases of beacon and reference are thus measured at exactly the same timing, which is ideal for the reference correction. The measurement process works in real time if the FFT runs in a repetition period of 50 µsec or shorter. If the FFT is not that fast, the measurement must run offline. For example, we track a satellite for 1 min and save the data {xi} and {yi}; in the following 9 min, we process the data to obtain {Zi} before again tracking the satellite for 1 min. That is, we track the satellite intermittently every 10 min. This is workable if we know the correct target signal frequency. If we do not, we must search for it in the spectrum data, which would be impractical if the wait time is too long.
Two possible means of relief are to reduce the number of data points or to slow the signal-processing clock rate, but each would result in a narrower span of spectrum observation. After all, the FFT should run in real time; this is a matter of trade-off between processing speed and spectrum observation span. Now, the spreader in Figure 4.6 operates as follows. The vernier local generates a local signal of positive or negative frequency, so as to shift the spectrum slightly along the frequency axis. Downsampling is done to thin out the data points. For example, one data point is picked out of every four points while three points are discarded, making 1/4 downsampling. As a result, data {Xi} will show a spectrum with a span reduced to 1/4, that is, 5 MHz. This effect of the spreader corresponds to setting a center frequency and a frequency span when operating a spectrum analyzer. One can omit the spreader when designing a phase-measuring unit; however, the spreader is useful for searching for a target signal by fine tuning if we have to track an unknown satellite with an unknown downlink format.
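The sampling, mixing, filtering, and downsampling stages (a)-(c) can be sketched at a reduced scale, with kilohertz standing in for megahertz; the filter here is an idealized FFT-domain cutoff, not a practical design:

```python
import numpy as np

# A real tone at 12 kHz sampled at 40.96 kHz is mixed with
# exp(-j*2*pi*10kHz*t), lowpass filtered, and decimated by 2. The tone
# reappears at 12 - 10 = 2 kHz in the 20.48-kHz complex stream,
# with its phase intact.
fs, n = 40.96e3, 4096
t = np.arange(n) / fs
f_sig, phase = 12.0e3, 0.6

x = np.cos(2*np.pi*f_sig*t + phase)              # stage (a): real samples
mixed = x * np.exp(-2j*np.pi*10.0e3*t)           # stage (b): shift by -10 kHz

# Idealized lowpass: zero every FFT bin beyond +/- fs/4 (half-band).
spec = np.fft.fft(mixed)
freqs = np.fft.fftfreq(n, 1/fs)
spec[np.abs(freqs) > 10.24e3] = 0.0
filtered = np.fft.ifft(spec)                     # image component removed

z = filtered[::2]                                # stage (c): decimate by 2
zspec = np.fft.fft(z)
k = np.argmax(np.abs(zspec))
f_out = k * (fs/2) / len(z)
print(f_out, round(np.angle(zspec[k]), 3))       # 2000.0 Hz, phase 0.6
```

Because the post-filter signal is complex, halving the sampling rate loses nothing: the complex stream carries the same information as the original real samples, exactly as stated for stage (c).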
The window in Figure 4.6 sets proper weights for data {xi} and {yi}. If the trend of the data near x1024 differs too much from that near x1, the power spectrum will suffer from distortion. To ease this problem, less weight is given to the data near x1024 and near x1. Window patterns can be chosen from established ones [1]. Setting a window thus slightly modifies the apparent shape of the power spectrum, although it does not affect the phase measurement, as seen in the Beacon Measurement section of Appendix 4A at the end of this chapter.
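That the window reshapes the power spectrum without disturbing the cross-spectrum phase can be checked numerically; a Hann window and an on-bin tone are used here as an illustrative case:

```python
import numpy as np

# The same real window multiplies both x and y, so the phase of XY* at
# the peak bin is unchanged, even though the spectral shape changes.
n = 1024
t = np.arange(n)
k0, phase = 100, 0.8                     # on-bin tone, assumed phase
x = np.cos(2*np.pi*k0*t/n)
y = np.cos(2*np.pi*k0*t/n - phase)

for win in (np.ones(n), np.hanning(n)):  # no window, then Hann window
    Z = np.fft.fft(win*x) * np.conj(np.fft.fft(win*y))
    print(round(np.angle(Z[k0]), 3))     # 0.8 with and without the window
```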
4.6 Noise Reduction

We have so far focused our attention on signals, which can never be free from noise. So we need to examine how noise affects our phase measurement. The representative noise in receiving satellite signals is the thermal noise originating from the first-stage amplifier, as we will see in Chapter 5. It is an additive noise superposed on the satellite signal. Because the Fourier transform is a linear process, signal and noise remain in an additive relationship after the transform. Suppose we have found, after observing the spectrum {|Zi|}, that a satellite beacon exists in Zi. We can then write
Xi = b1 + n1  (4.11)

Yi = b2 + n2  (4.12)
where b1 and b2 are the components attributed to the beacon, and n1 and n2 are noise components. Considering that receiving routes #1 and #2 are of identical design, we assume that

|b1|² = |b2|² = S  (4.13)

|n1|² = |n2|² = N  (4.14)
where S and N denote the beacon power and noise power. If we were to determine the argument of Xi, the situation would look like that shown in Figure 4.8(a). Noise n1 adds to signal b1 to cause an error δφ in the argument. If N is smaller than S, we can say that the component of n1 orthogonal to b1 causes the error δφ. If we collect many samples of n1 and evaluate their orthogonal components as a root mean square (RMS), the result will equal √(N/2).
Figure 4.8 Phase error caused by noise.
So, referring to Figure 4.8(b), we evaluate the phase measurement error level as follows:
RMS{δφ} = √(N/(2S))  (4.15)
That is, the error level is inversely proportional to the square root of the signal-to-noise ratio S/N. Let us now consider the determination of the argument of Zi. From (4.11) and (4.12), we can write
Zi = XiYi* = (b1 + n1)(b2 + n2)* = b1b2* + b1n2* + b2*n1 + n1n2*  (4.16)
The first term on the right-hand side is our desired term; the other three are undesired terms. Set the desired D and undesired U as follows:
D = b1b2*  (4.17)

U = b1n2* + b2*n1 + n1n2*  (4.18)
The undesired U adds to desired D to cause an error δφ in the argument, as illustrated in Figure 4.9, which is quite similar to Figure 4.8(a).
Figure 4.9 Phase error caused by undesired U.
With regard to our purpose of determining the argument, the desired D may be called a kind of signal, whereas the undesired U is a kind of noise. One can then consider their powers. The power of D is simply 2
PD = |D|² = |b1|²|b2|² = S²  (4.19)
The power of U is evaluated, as its expected value, as follows:

PU = |U|² = |b1|²|n2|² + |b2|²|n1|² + |n1|²|n2|² = SN + SN + N² = 2SN + N²  (4.20)
On the right-hand side of this equation, cross-terms also appear, for example, b1n2*b2n1*. Noises n1 and n2 originate from different receiving routes, so they are statistically independent as random variables. This term therefore vanishes, because its expected value is zero. Similarly, all other cross-terms vanish, allowing this equation to hold. Next, we examine the effect of time integration. While the {Zi} are being integrated, the desired and undesired terms accumulate in different ways, as illustrated in Figure 4.10. The D terms are constant, so they accumulate linearly, like [D] in Figure 4.10. After k-sample integration, the sum vector has a length k√PD, and its power equals k²PD. The U terms accumulate in a random manner, like [U] in Figure 4.10. As the samples of U are added one after another, the end point of the sum vector moves away from the start point, gradually and with fluctuations; this is called a random walk. In such a case, it is known that after adding k samples the end point will most likely be away from the start point by √(kPU). Hence, the power of the summed U terms becomes kPU. In this way, k-sample integration makes the desired power increase to k²PD and the undesired power increase to kPU, while the original values of PD and PU were as given in (4.19) and (4.20). So, the undesired-to-desired power ratio PU/PD after k-sample time integration becomes
Figure 4.10 Effects of time integration on desired D and undesired U.
Phase Detection
PU/PD = k(2SN + N²)/(k²S²) = (1/k)(2N/S + N²/S²)  (4.21)
By substituting this PU/PD into the N/S in (4.15), we can evaluate the phase measurement error level, as

RMS{δφ} = √[(1/k)(N/S + N²/(2S²))]  (4.22)
If S/N is better than a few decibels, this evaluation becomes simply

RMS{δφ} = √(N/(kS))  (4.23)
By comparing this evaluation with (4.15), we find that the measurement error is reduced by a factor of √(k/2) after integrating k samples. More precisely speaking, a factor of √k owes to the integrating, while a penalty factor of √2 owes to making a cross-product of Xi and Yi. In our present case, k = 20,000 for a 1-sec integration, so the error is reduced to 1/100 of its original level. This is equivalent to saying that the effective signal-to-noise ratio is improved by 40 dB, which is a substantial improvement. For this reason, we use {Zi}, rather than {Xi} or {Yi}, to observe the signal spectrum.
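The integration gain can be checked numerically. The following short sketch (not from the book; the variable names are ours) evaluates the net reduction factor √(2/k) and the corresponding effective SNR improvement for the k = 20,000 samples quoted in the text:

```python
import math

# Numeric check of the k-sample integration gain. The value of k (a
# 1-sec integration at 20,000 samples) follows the text; the factored
# form sqrt(2/k) reflects the sqrt(k) integration gain less the sqrt(2)
# cross-product penalty.
k = 20_000

reduction = math.sqrt(2.0 / k)          # net error reduction multiplier
snr_gain_db = 10 * math.log10(k / 2.0)  # effective S/N improvement

print(reduction)    # 0.01 -> the 1/100 quoted in the text
print(snr_gain_db)  # 40.0 dB
```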
4.7 Tracking Nonbeacon Signals

We have so far assumed that we are tracking a beacon of a satellite. If we try to track an unknown satellite during orbital safety monitoring, we must find out where its beacon is. This would be no easy task if the downlink spectrum spans hundreds of megahertz. For such a situation we might prefer to track signals from a communication transponder, rather than a beacon.

If a communication transponder signal comes into our phase-measuring unit, its response will look like that shown in Figure 4.11, where the power spectrum {|Zi|} and phase spectrum {arg Zi} are sketched. The span of spectrum observation is 20 MHz, and this span will perhaps cover a part of one communication channel. We choose a subspan of the spectrum where the signal is at a good level and set it as our target. Over this target span, the phase will show a linear slope against the frequency axis, and this slope depends on the relative distance from the two antennas to the satellite.
Figure 4.11 Measuring a signal that has a bandwidth.
The slope appears from the following mechanism. The phase φ is related to the above-mentioned relative distance, l:

φ = 2πl/λ  (4.24)

where λ denotes the wavelength. We know that λ = c/f, with c being the speed of light and f the frequency, so that (4.24) becomes

φ = 2πf l/c = ωT  (4.25)
The phase φ thus shows a slope against ω, with the coefficient T = l/c denoting the relative delay time of the signals. Now, referring to Figure 4.11, we average the phases measured over the target span by fitting a line to the slope, to determine the value of the phase at the center frequency ωC. This is equivalent to tracking a fictitious beacon existing at the frequency ωC. In the meantime, we place the reference correction signal at a frequency where the satellite signal level is low enough, as suggested in the figure; so the reference will typically be placed near the edge of a communication channel. The target and the reference are set in this way with some degree of freedom, while they must be set within the same span of spectrum observation. Here, we recollect that the observed spectrum is slightly modified by the window applied to data {xi} and {yi}; see Figure 4.6. For the case of beacon
tracking, the window had no effects on phase measurements. If we are measuring a nonbeacon signal, the effect of the window can likewise be neglected, as discussed in the Nonbeacon Measurement section of Appendix 4A. The averaging of the phase as stated above has an effect on error reduction. If we use a single component Zi and its signal-to-noise ratio is S/N, the level of the phase measurement error is evaluated by (4.23). If we use m components of the Zi's to average the phases, and if these components are of uniform S/N, the error level will be reduced by 1/√m, because the error components of different Zi's are mutually independent. So, in this case the phase measurement error level becomes
RMS{δφ} = √(N/(mkS))  (4.26)
For example, if we use a target signal with a 10-MHz bandwidth, then m = 512, and the factor of error reduction owing to averaging is 0.044; in other words, the effective S/N improvement is 27 dB. Measuring a signal with bandwidth thus enjoys two kinds of error reduction: by time integration and by frequency averaging.

If the signal-to-noise ratio can be improved that much, we will be able to track weak satellite signals. Suppose there is an unknown satellite in the proximity of our own satellite. The unknown satellite points its beam not to us but toward some different service area, while its sidelobe sends us a weak signal. Our interferometer will be able to detect and track this kind of weak signal.

Tracking a nonbeacon signal, however, requires caution. If the spectrum in Figure 4.11 seems to come from one unknown satellite, it is possible that actually some other satellite's signal comes in and adds to the spectrum in superposition. In such a case, we must search, by careful signal monitoring, for a frequency at which one satellite emits a signal while the other does not. The spreader in Figure 4.6 will surely assist in this. Tracking an unknown satellite may thus require some skill and patience in signal monitoring, while doubtlessly the capability for measuring nonbeacon signals will widen the range of our orbital safety monitoring.
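The frequency-averaging numbers quoted above can be reproduced with a short sketch (ours, not the book's; the 1024-point FFT length is inferred from 512 cells covering 10 MHz of the 20-MHz span):

```python
import math

# Frequency-averaging gain for a nonbeacon target, per (4.26).
# Assumed parameters: 20-MHz observation span, 1024-point FFT
# (inferred), 10-MHz target bandwidth.
span_hz = 20e6
n_fft = 1024
m = int(10e6 / (span_hz / n_fft))   # FFT cells covering the target

avg_reduction = 1.0 / math.sqrt(m)  # error reduction by averaging
avg_gain_db = 10 * math.log10(m)    # effective S/N improvement

print(m)                         # 512
print(round(avg_reduction, 3))   # 0.044
print(round(avg_gain_db, 1))     # 27.1 dB
```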
Reference

[1] Bracewell, R. N., The Fourier Transform and Its Applications, Boston: McGraw-Hill, 1999.
Appendix 4A: Window and Phase Measurement

A window is a function of time, w(t), typically with a shape illustrated in Figure 4A.1(a). It is a real function, defined over the time span of the sampled data. If the time origin is set at the center of the data span, the function is symmetric with respect to t = 0. Its spectrum, W(ω), is then a real function, symmetric about ω = 0. Its shape will look like that of Figure 4A.1(b), which has a small width along the ω axis. The effect of the window on phase measurements is examined as follows.
4A.1 Beacon Measurement

When a signal x(t) is multiplied by the window w(t), the signal spectrum changes into a convolution of W(ω) and X(ω); let the resulting spectrum be denoted by X′(ω). If the input signal is a beacon with frequency ωB, then X′(ω) will show the same pattern as W(ω), with its center placed at ω = ωB. Similarly, signal y(t) after being windowed has a spectrum Y′(ω), with X′ and Y′ showing identical patterns for the power spectrum. The cross-spectrum is Z′(ω) = X′(ω)Y′*(ω), with its peak power existing at ω = ωB. So, the phase is measured as φ = arg Z′(ωB). At ω = ωB we have

X′(ωB) = W(0)X(ωB)
Y′(ωB) = W(0)Y(ωB)

So, the measured phase is

φ = arg X′(ωB)Y′*(ωB) = arg W²(0)X(ωB)Y*(ωB)

Since W²(0) is a real number, we have
Figure 4A.1 Window function (a) and its spectrum (b).
φ = arg X(ωB)Y*(ωB) = arg Z(ωB)

That is, the window has no effects on the phase measurement.
4A.2 Nonbeacon Measurement

If signal x(t) has a bandwidth, the convolution of X(ω) and W(ω) is obtained by the process illustrated in Figure 4A.2. Here, X(ω) and W(ω) have discrete sample values, and the coefficients a, b, and c correspond to those in Figure 4A.1(b). A sample value of X′ is calculated, for any i, as

X′i = … + cXi−2 + bXi−1 + aXi + bXi+1 + cXi+2 + …
The convolution is thus a linear combination of {Xi} operating in a sliding manner. Here, for simplicity, we set

X′i = bXi−1 + aXi + bXi+1
This is to take only three terms; however, the following argument does not lose its validity if we take more terms for the convolution. Similarly, for signal y(t), we set

Y′i = bYi−1 + aYi + bYi+1

The cross-spectrum is then

Z′i = X′iY′i* = [bXi−1 + aXi + bXi+1][bY*i−1 + aY*i + bY*i+1]
Now, assume that we are receiving a white signal. When the data of Z′i are collected in numbers and made into a time average, terms such as bXi−1aY*i or bXi−1bY*i+1 or any other similar ones will vanish, because, for example, Xi−1 and Yi are statistically independent of each other. As a result, we have

Z′i = b²Xi−1Y*i−1 + a²XiY*i + b²Xi+1Y*i+1

Figure 4A.2 Xi′ is obtained from {Xi} through convolution.

Figure 4A.3 Complex vectors in symmetry.
The terms in the right-hand side, being complex vectors, will look like those shown in Figure 4A.3. Because the signal is white, the magnitudes of Xi−1Y*i−1, XiY*i, and Xi+1Y*i+1 are identical. The vectors are at equal angular separations, because Z = XY* has a linear phase slope against frequency, as given by (4.25). So, the vectors are in symmetry with respect to the vector a²XiY*i. Hence, we have, for any i,

arg Z′i = arg a²XiY*i = arg XiY*i = arg Zi

This is why the window has no effects on the phase measurement.
5 Signal, Noise, and Precision

In the previous chapter we discussed the precision of phase measurement as depending on the signal-to-noise ratio (SNR). To complete the discussion, we need to know the powers of the signal and noise. So in this chapter we examine the parameters determining the signal and noise for the practical cases of satellite downlinks. This discussion allows us to estimate how sensitive our interferometer would be for detecting and tracking weak satellite signals.
5.1 Required SNR

As mentioned in Chapter 2, the basic model of our interferometer takes the form of Figure 5.1. The interferometer should detect a change in the direction of the target satellite to a resolution of δθ = 0.001 deg. This δθ corresponds to a change of δl = 0.18 mm in the relative path length if the baseline length is 10m. This variation δl then causes the interferometric phase to vary by

δφ = 2π δl/λ  (5.1)
where λ is the wavelength of the satellite signal. If we use phase detection as considered in Chapter 4, then from (5.1) and (4.23), we can estimate the minimum SNR required for detecting that change of δl:

kS/N = 1/(δφ)² = (λ/(2π δl))²  (5.2)
We are considering here the effective SNR, because the original S/N is improved to kS/N by integrating k samples, with k acting as an integration gain.
Figure 5.1 Model interferometer.
We are interested in the C band and the Ku band as being the most congested cases of frequency and orbital uses. For these frequency bands, the evaluation by (5.2) becomes
36.4 dB … for the C band (4 GHz; λ = 75 mm)  (5.3)

26.9 dB … for the Ku band (12 GHz; λ = 25 mm)  (5.4)
These are the minimum required effective SNRs at which our interferometer can operate.
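The two values of (5.3) and (5.4) follow directly from (5.2). As a quick numeric sketch (ours, not the book's; Python is our choice, and the parameter values follow the text):

```python
import math

# Evaluate (5.2): required effective SNR = (lambda / (2*pi*dl))^2,
# with dl = 0.18 mm for a 0.001-deg resolution on a 10-m baseline.
def required_snr_db(wavelength_mm, dl_mm=0.18):
    """Minimum effective kS/N (in dB) to resolve a path change dl."""
    ratio = (wavelength_mm / (2 * math.pi * dl_mm)) ** 2
    return 10 * math.log10(ratio)

print(round(required_snr_db(75.0), 1))  # 36.4 dB (C band, 4 GHz)
print(round(required_snr_db(25.0), 1))  # 26.9 dB (Ku band, 12 GHz)
```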
5.2 Signal Power and Noise Power

Various parameters determine the SNR in a satellite downlink, as shown in Figure 5.2. Consider first the signal power. Suppose that the satellite is transmitting a beacon with a power PT, and that its radiation is isotropic. The radiated power will then be distributed uniformly over the whole surface of a sphere of radius d, when the radiation has propagated over the distance d. At this distance, place a receiving antenna with aperture area AR. That antenna will then receive a power PT AR/(4πd²). The satellite actually has a transmitting antenna with an aperture area AT, so as to radiate the beacon in a beam pointed toward the receiving antenna. Accordingly, the power received by the antenna increases by a gain factor GT as given by

GT = 4πAT/λ²  (5.5)

The receiving power, S, then becomes
Figure 5.2 Parameters determining the downlink quality.
S = PT GT AR/(4πd²)  (5.6)
The receiving antenna has its own gain factor GR as given by

GR = 4πAR/λ²  (5.7)
We can rewrite (5.6), by using (5.7), into the form of

S = PT GT L GR  (5.8)

with

L = (λ/(4πd))²  (5.9)
This L is referred to as the free-space propagation loss, because it depends on the propagation distance and the signal wavelength only. The relationship
of (5.8) then tells us how much power is delivered from the satellite to an Earth station in terms of antenna gains and the propagation loss. Note that the areas AT and AR are the effective aperture areas of the antennas, and they are smaller than the areas measured geometrically. If the antenna radiated a microwave beam with its amplitude distributed uniformly over the aperture, its effective area would equal the geometrical area, but in reality this is not possible. The amplitude must become smaller near the rim of the aperture, and this makes the effective area smaller by a factor of efficiency. The efficiency is usually from 60% to 70%.

Consider next the noise power. The primary noise source in a satellite beacon downlink exists in the first-stage amplifier in the receiving equipment, that is, the amplifier right next to the receiving antenna's feed unit. Any amplifier contains some resistive circuit elements, and any resistive circuit element at a finite temperature generates thermal noise. The noise thus generated internally is output from the amplifier, superposed on the output signal. If the first-stage amplifier has a sufficient gain, we can disregard the noise generated in later-stage amplifiers. So, it is essential to use a first-stage amplifier that generates as little thermal noise internally as possible; in this context the first-stage amplifier is called a low-noise amplifier.

There is one more noise source, which exists in the receiving antenna. This noise comes from the antenna's sidelobes, which pick up some of the thermal noise radiated from the ground even when the antenna's main beam is pointing to the satellite. These thermal noises are summed up, and the total is modeled by a hypothetical resistor placed at a temperature T in Figure 5.2.
That is, the resistor generates the equivalent thermal noise, and this noise is added to the received signal before being input to the amplifier. Because the hypothetical resistor represents all of the thermal noise sources, the receiving hardware is assumed to be noise free. The temperature T, which is referred to as the noise temperature, is thus a theoretical temperature, not necessarily equal to the surrounding temperature. Given a noise temperature T [K], the noise power N [W] is calculated as
N = kB T B  (5.10)
where kB is Boltzmann’s constant: 1.38 × 10−23 [J/K], and B [Hz] is the bandwidth that the signal to be received occupies. These are the minimum essentials for considering our interferometer; more information about how signal and noise behave in a satellite link can be obtained from [1], for example.
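As a numeric illustration of (5.10) (a sketch of ours, using the noise temperatures and the 20-kHz beacon-tracking bandwidth that appear in the link budgets of the next section):

```python
import math

# Thermal noise power N = kB * T * B, per (5.10), expressed in dBW.
kB = 1.38e-23  # Boltzmann's constant [J/K]

def noise_dbw(T_kelvin, bandwidth_hz):
    """Noise power in dBW for temperature T and bandwidth B."""
    return 10 * math.log10(kB * T_kelvin * bandwidth_hz)

print(round(noise_dbw(100, 20e3), 1))  # -165.6 dBW (C-band budget)
print(round(noise_dbw(200, 20e3), 1))  # -162.6 dBW (Ku-band budget)
```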
5.3 Beacon Downlink Budget

If we know the powers of the signal and noise, we can estimate their ratio. Estimating the quality of a satellite link in terms of its signal and noise is often referred to as link budgeting. A case of link budgeting for a C-band beacon is shown in Table 5.1. Communication satellites have transponders with powers as high as hundreds of watts, whereas the beacon power is usually lower by orders of magnitude. Suppose the beacon power is 0.1W. The beacon is normally phase modulated so as to carry telemetry information; a part of its power thus goes to the modulation sidebands, and the residual carrier power works as the beacon. The beacon power is thus reduced slightly, for example, to that in Table 5.1. The transmitting antenna assumes a moderate size. The receiving antenna assumes the size of a VSAT (very small aperture terminal) antenna. The aperture-area efficiency and the noise temperature are those commonly assumed in the C band. The bandwidth is 20 kHz, as the beacon will be found in one frequency cell of the FFT and this cell is used for phase detection. The budget shown in Table 5.1 is thus practical, if not rigorous, for a beacon downlink.

Table 5.1  C-Band Beacon Downlink Budget (Frequency: 4 GHz; Wavelength: 75 mm)
  Beacon transmitting power PT   −11.0 dBW   0.08W
  Transmitting antenna gain GT    34.4 dB    Diameter 1.5m, efficiency 0.7
  Propagation loss L             196.3 dB    Distance 39,000 km
  Receiving antenna gain GR       36.0 dB    Diameter 1.8m, efficiency 0.7
  Receiving power S             −136.9 dBW
  Noise N                       −165.6 dBW   Temperature 100 K, bandwidth 20 kHz
  S/N                             28.7 dB
  Integration gain                43.0 dB    20,000 samples
  Effective S/N                   71.7 dB
  Required S/N                    36.4 dB
  S/N margin                      35.3 dB

Usually in satellite communications we use the carrier-to-noise ratio, C/N, rather than S/N, in link budgets, because the received carrier is still to be input to the demodulation process before obtaining any communication signals. Here we use the S/N because our desired signal is the beacon itself for our interferometer. The link budget in Table 5.1 takes into account the effective S/N improvement that occurs in the phase-measuring unit.

The link budget as estimated in Table 5.1 shows an ample S/N margin. This margin owes to the integration gain in phase detection. Into this margin we can place possible losses, such as rain attenuation, atmospheric attenuation, and receiving antenna polarization losses. One important possibility is the satellite antenna pointing loss. If a target satellite exists in our receiving antenna's beam while the satellite is not pointing its antenna beam toward us, we must receive the signal from its antenna sidelobe, thus at a low level. The ample S/N margin would allow us to track such a satellite if we knew its beacon frequency, in the scenario of orbital safety monitoring.

We are presently assuming a specific length for the interferometer baseline, as assumed in Figure 5.1. What would happen if we changed the baseline length, for instance, to a half? The δl in (5.2) would then become half, so the required effective S/N would increase by 6 dB. Accordingly, we would rewrite the required S/N in Table 5.1. The baseline length, which is a basic design parameter, connects with the link budget in this manner.

Another case of link budgeting, for a Ku-band beacon, is shown in Table 5.2. This is for a higher frequency, where the aperture-area efficiency tends to become lower, and the noise temperature tends to increase. With these factors taken into account, Table 5.2 is again a practical estimate of the link budget. The S/N margin is even higher compared with the C-band case, because antenna gains are higher for a shorter wavelength.
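The arithmetic of the C-band budget can be reproduced from (5.5) through (5.10) with a few lines of code (a sketch of ours; the diameters, efficiencies, distance, and powers are those of Table 5.1):

```python
import math

# Reproduce the C-band budget of Table 5.1 from (5.5)-(5.10).
c = 3e8                    # speed of light [m/s]
f = 4e9                    # C-band beacon frequency [Hz]
lam = c / f                # wavelength, 75 mm
PT_dbw = -11.0             # 0.08-W beacon

def gain_db(dia_m, eff):
    """Antenna gain 4*pi*A/lambda^2 with effective aperture A."""
    area = eff * math.pi * (dia_m / 2) ** 2
    return 10 * math.log10(4 * math.pi * area / lam ** 2)

GT = gain_db(1.5, 0.7)                               # ~34.4 dB
GR = gain_db(1.8, 0.7)                               # ~36.0 dB
L_db = 20 * math.log10(lam / (4 * math.pi * 39_000e3))  # ~-196.3 dB
S_dbw = PT_dbw + GT + L_db + GR                      # ~-136.9 dBW
N_dbw = 10 * math.log10(1.38e-23 * 100 * 20e3)       # ~-165.6 dBW
print(round(S_dbw - N_dbw, 1))                       # S/N ~ 28.7 dB
```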
5.4 Tracking a Weak Signal

Suppose we have to track an unknown satellite that does not point its beam toward us. We will then need to search out its spilt-over communication signal by spectrum monitoring, as mentioned in Chapter 4. This kind of signal can be extremely weak if the satellite antenna is duly cutting off its off-axis radiation. In such a case, we cannot establish a link budget in the style of Table 5.1 or 5.2. What degree of weak signal, then, can we find and track with our interferometer?
Figure 5.3 Finding an unknown, weak signal.
Table 5.2  Ku-Band Beacon Downlink Budget (Frequency: 12 GHz; Wavelength: 25 mm)
  Beacon transmitting power PT   −11.0 dBW   0.08W
  Transmitting antenna gain GT    43.3 dB    Diameter 1.5m, efficiency 0.6
  Propagation loss L             205.8 dB    Distance 39,000 km
  Receiving antenna gain GR       44.9 dB    Diameter 1.8m, efficiency 0.6
  Receiving power S             −128.7 dBW
  Noise N                       −162.6 dBW   Temperature 200 K, bandwidth 20 kHz
  S/N                             33.9 dB
  Integration gain                43.0 dB    20,000 samples
  Effective S/N                   76.9 dB
  Required S/N                    26.9 dB
  S/N margin                      50.0 dB
If we observe the spectrum of a weak signal in the phase-measuring process as given in Chapter 4, it will look like that shown in Figure 5.3. If the signal level is seen above the noise level, for instance, by 10 dB, we can recognize that the signal exists. If the signal has a bandwidth of 10 MHz, we set it as our measurement target and integrate the data samples along the frequency axis. There are 512 data samples, which yields an integration gain of 27 dB. The effective SNR then improves to 10 + 27 = 37 dB, and this satisfies the required SNRs given as (5.3) and (5.4) for the interferometer to operate.

Now, the 10 dB mentioned above is the SNR for each frequency cell of the FFT, as obtained after the improvement by time integration. In that case, what was the original SNR before improvement? To estimate it, we refer to (4.21). Note here that the original S/N must have been small; so, we consider the term N²/S² while disregarding the term N/S. That is, the signal-to-noise ratio has changed from an original value S/N to an effective value 2kS²/N² after the processing of the cross-product and k-sample integration. If the effective ratio is 10 dB, the original S/N was −18 dB, and this is the minimum required S/N for a signal to be found. Figure 5.4 illustrates such a signal existing at the minimum required level below the noise.
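The −18 dB figure follows from solving the effective-SNR relation for the original S/N. A short sketch (ours; the 10-dB effective target and k = 20,000 follow the text):

```python
import math

# Minimum original S/N for a weak signal to be found. For small input
# S/N, the effective ratio after cross-product and k-sample integration
# is ~ 2k * S^2/N^2 (from (4.21)-(4.22)). In dB:
#   10*log10(2k) + 2*(S/N)_dB = effective_dB
k = 20_000
eff_db = 10.0

snr_db = (eff_db - 10 * math.log10(2 * k)) / 2
print(round(snr_db, 1))  # -18.0 dB
```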
5.5 Estimates in PFD

The minimum required level as discussed above can be expressed in a different form. Consider any bandwidth of 4 kHz in the target bandwidth, as illustrated in Figure 5.4. In a C-band case with noise temperature T = 100 K, the noise power existing in this 4-kHz bandwidth is, by (5.10), −172.6 dBW. The minimum required signal power in the same bandwidth is then −172.6 − 18 = −190.6 dBW. This is a power coming from the receiving antenna's effective aperture area, which was 1.78m². Convert this to the power per unit area of aperture, which makes −193.1 dBW; to be exact, the unit should be written as dBW/m² per 4 kHz. This is a form called power flux density (PFD), which is used to express the strength of satellite downlink signals as measured on the Earth's surface. That is, we have figured out the minimum required PFD for a signal to be found by our interferometer. If we find a signal existing as such, and if the signal shows a bandwidth spreading over 10 MHz or more, then we can improve its effective SNR by applying the frequency-axis integration as mentioned earlier, to make the signal usable as a tracking target. Similarly, for the Ku band with T = 200 K, the minimum required PFD is calculated as −189.4 dBW/m² per 4 kHz.

Figure 5.4 Signal existing at minimum required level.

Satellite communications and terrestrial communications sometimes share the same frequency band, and this is the case for the C band and Ku band. To prevent satellite downlinks from interfering with terrestrial links, regulations place a maximum allowable PFD for each frequency band [2], and any satellite downlink must not exceed the maximum PFD. Table 5.3 shows the PFD maximums, along with those minimums we have calculated.

Table 5.3  PFD Maximums and Minimums (in dBW/m² per 4 kHz)
                     C Band   Ku Band
  Maximum allowable  −142     −138
  Minimum required   −193     −189

From the maximum down to the minimum there is a wide dynamic range, and any signal coming within the range can be used as a tracking target. The PFD of a communication satellite's downlink will be lower, but not much lower, than the maximum if it is received in its service area. On the other hand, the PFD of an unknown, spilt-over signal can be much lower, by tens of decibels, due to the satellite antenna's off-axis radiation cutoff, while it is reasonable to expect that the wide dynamic range may well cover such a low-level signal.
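The minimum-PFD arithmetic above can be checked numerically (a sketch of ours, using the C-band values from the text: T = 100 K, a 4-kHz reference bandwidth, the −18 dB minimum S/N, and the 1.78-m² effective aperture of the 1.8-m, 0.7-efficiency antenna):

```python
import math

# Minimum required PFD for the C-band case, following the text.
kB = 1.38e-23
noise_dbw = 10 * math.log10(kB * 100 * 4e3)   # -172.6 dBW in 4 kHz
min_signal_dbw = noise_dbw + (-18.0)          # -190.6 dBW
area_m2 = 0.7 * math.pi * (1.8 / 2) ** 2      # ~1.78 m^2 effective
pfd = min_signal_dbw - 10 * math.log10(area_m2)
print(round(pfd, 1))  # -193.1 dBW/m^2 per 4 kHz
```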
References

[1] Agrawal, B. N., Design of Geosynchronous Spacecraft, Englewood Cliffs, NJ: Prentice-Hall, 1986, Chapter 7.

[2] ITU, Handbook on Satellite Communications, New York: Wiley Interscience, 2002, Chapter 9.
6 Error Factors

In Chapters 4 and 5 we discussed the effect of thermally generated noises on phase measurements. These noises give rise to random errors in phase measurements. Meanwhile, there are measurement errors of a different nature that are constant rather than random. They originate from baseline errors, phase ambiguity, and atmospheric refraction. These kinds of error factors are addressed in this chapter.
6.1 Baseline Error

The baseline of an interferometer is a vector quantity as defined by its length and orientation. The baseline vector is determined in two steps: first, find the reference point for each antenna, and then survey the relative position of the reference points. The baseline vector as determined through this process will have an error on the order of a millimeter.

An error in the baseline vector gives rise to a measurement error, as illustrated in Figure 6.1. Consider first the ideal case with no baseline errors, shown in Figure 6.1(a). The angle θ is the satellite's direction with respect to the baseline. The interferometer detects the relative path length l = B sin θ. Actually the interferometer measures the phase delay caused by the relative path length, but here we consider the l itself to be the variable being measured. If u is a unit vector pointing to the target satellite and B the baseline vector, the relative path is written as l = B ⋅ u, with the dot denoting an inner product. Suppose the baseline vector has an error δB, as in Figure 6.1(b). Depending on the geometry of δB relative to B, the error δB can cause an error in the baseline's length or its orientation or, in general, both. As the baseline changes to B + δB, an error in the measurement arises:
Figure 6.1 Error in baseline vector.
δl = u ⋅ δB
(6.1)
Note that the error δl vanishes if u and δB are orthogonal to each other. Would this error δl vary if the target satellite moves? If the satellite is geostationary, its motion observed at an Earth station appears typically as a variation of ±0.1 deg in the direction angle. If this motion is denoted by ∆u, the variation in the error δl is estimated as follows:
∆(δl) = ∆u ⋅ δB ≤ |∆u| |δB| = 0.0017 mm
where |∆u| = sin(0.1 deg) and |δB| = 1 mm are assumed. This variation corresponds to a phase-error variation of 0.02 deg for a Ku-band case with λ = 25 mm, or 0.008 deg for a C-band case with λ = 75 mm, both of which are negligibly small. So, we can assume that a baseline error induces a constant error in phase measurements.

We know that a constant bias error is likely to be present in phase measurements even after the process of internal reference correction, as noted in Chapter 3. The error caused by the baseline error is similarly a bias error. Precisely speaking, there is one more possible factor of bias error. As illustrated in Figure 3.3, reference correction signals must be distributed to the receiving routes. Here, distribution cables (1) and (2) should have identical electrical lengths, while in practice they may have a small difference, and this difference gives rise to a constant bias in phase measurements.

The overall phase bias, that is, the sum of those three biases, is an unknown constant. Such a constant of unknown bias exists in every interferometer, and it must be calibrated by using some external reference. This process is often called zero calibration, and its necessity is common to every measurement for satellite tracking. The process of zero calibration will be considered for different cases of orbit estimation in Part III.
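The bound above, and the resulting phase-error variations, can be evaluated with a short sketch (ours; the 1-mm baseline error and ±0.1-deg satellite motion are the text's assumptions):

```python
import math

# Worst-case variation of the baseline-error effect (6.1) as a
# geostationary satellite moves +/-0.1 deg. Assumed: |dB| = 1 mm.
dB_mm = 1.0
du = math.sin(math.radians(0.1))   # |delta-u| for 0.1-deg motion
dl_mm = du * dB_mm                 # worst case of delta(dl)

for lam_mm, band in ((25.0, "Ku"), (75.0, "C")):
    dphi_deg = 360.0 * dl_mm / lam_mm  # phase variation [deg]
    print(band, round(dphi_deg, 3))    # Ku ~0.025 deg, C ~0.008 deg
```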
6.2 Phase Ambiguity

If we have measured a phase and the result was, for example, 1 deg, then in reality the true phase may have been 361 deg or 721 deg or any other like value. This fact brings about a problem, as illustrated in Figure 6.2. The target satellite is assumed to be at a direction angle θ, as marked by (a) in Figure 6.2, relative to the perpendicular of the baseline. For simplicity here, θ is assumed small, so that the satellite direction is nearly perpendicular to the baseline. If B is the baseline length and λ the wavelength, the interferometric phase will take a value of

φ = (2π/λ) B sin θ ≈ (2π/λ) Bθ  (6.2)
Suppose the satellite direction changes hypothetically from θ to θ + λ/B, as marked by (b) in Figure 6.2. The interferometric phase given by (6.2) should then change from φ to φ + 2π, but the 2π is disregarded in the phase measurement, and so the output phase φ remains unchanged. That is to say, the interferometric phase φ cannot tell which of (a) and (b) is true. Similarly, any direction θ + nλ/B, with n an arbitrary integer, makes a false direction. There appears thus a cycle of false directions with a period of λ/B. We have no means of excluding the false readings, so there is always a risk of considering a false one to be true, which causes an error. This problem is referred to as phase ambiguity, and every interferometer must face it.

Figure 6.2 Phase ambiguity problem.

We can avoid the ambiguity problem if the target satellite exists in a known, finite zone, as illustrated schematically in Figure 6.3. If the period of the ambiguity cycle, λ/B, is greater than the width of the known zone, then we can identify the true direction. This can be the case for a geostationary satellite, because the satellite is normally kept within a longitude zone with a width of 0.2 deg. Let us arrange it so that the ambiguity cycle period will be, with margin, 0.4 deg or greater. If λ = 75 mm (C band; 4 GHz), the baseline length must then be 10.7m or shorter.

Figure 6.3 Eliminating the false directions.
margin, 0.4 deg or greater. If λ = 75 mm (C band; 4 GHz), the baseline length must then be 10.7m or shorter. There is thus an upper limit to the baseline length. The upper limit makes a recommended baseline length, as shorter baselines lead to lower tracking accuracies. Adopting the recommended baseline allows us to forget about the phase ambiguity as long as the satellite stays within the keeping zone. If the possibility exists for the target satellite to go out of its keeping zone, or if we need to use a longer baseline for enhanced accuracy of tracking, then we should eliminate the ambiguity problem by using two baselines as illustrated in Figure 6.4. The longer baseline with antennas A1 and A3 is for precise tracking measurements, although it gives false directions with a small period for the ambiguity cycle. The shorter baseline with antennas A1 and A2 is then used to identify the true direction by virtue of its enlarged period for the ambiguity cycle. The baseline of A1–A2 still has some degree of ambiguity, but this will be resolved by the finite width of the radiation pattern of the antenna. Another idea for eliminating the ambiguity problem is to combine the interferometric measurement with a different kind of tracking measurement. For example, ranging and interferometry could be one possible combination. One
Figure 6.4 Interferometer with two baselines.
Error Factors
55
more idea is to use an interferometer with a mechanically movable baseline. These ideas will be shown in detail in Part III.
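The 10.7-m upper limit quoted above follows directly from the ambiguity-cycle condition. A quick numeric sketch (ours; the 0.4-deg requirement is the text's 0.2-deg keeping zone plus margin):

```python
import math

# Upper baseline limit from the ambiguity condition: the false-direction
# period lambda/B must be at least 0.4 deg.
lam_m = 0.075                    # C band, 4 GHz
period_rad = math.radians(0.4)   # required ambiguity period

B_max = lam_m / period_rad
print(round(B_max, 1))  # 10.7 m
```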
6.3 Atmospheric Refraction

One more potential error factor exists that is not inherent to the interferometer but lies in its surrounding environment. Microwaves from the satellite propagate through the atmosphere before arriving at the receiving antennas. The density of the atmosphere changes with altitude, decreasing at higher altitudes. So, the ray of the microwave is refracted as it propagates through the atmosphere, as illustrated in Figure 6.5. Because of the refraction, the elevation angle of the satellite appears slightly higher than the geometrical elevation. This creates a problem because what we need for orbit estimation is the geometrical elevation. The excess elevation due to the refraction can be modeled as a function of elevation angle, and we can refer to a graph shown, for example, in [1], or alternatively use a fitting function:

δEL = 17.6 / (16 + 930 tan(EL))  [deg]  (6.3)
Here, EL is the elevation angle as observed. Subtracting the δEL from the observed elevation yields the geometrical elevation angle. The excess elevation due to the atmospheric refraction affects the interferometric measurement in the following manner. Consider an interferometer with a baseline vector B (see Figure 6.6). Vector B is decomposed into BA and BT, with BA being aligned with, and BT transverse to, the incoming microwave path. The relative path length measured by the interferometer is then given by
Figure 6.5 Atmospheric refraction.
l = BA cos(EL)   (6.4)
since the component BT has no sensitivity to elevation changes. Hence, we have
δl = -BA sin(EL) δEL   (6.5)
This is how an excess elevation δEL causes an error in the interferometric measurement. One can correct for the error δl by using (6.3) and (6.5). Note, however, that the atmospheric correction may contain some uncertainty. Equation (6.3), or its original graph, is based on a representative model of the atmosphere. Strictly speaking, the atmospheric model should differ between locations and seasons, but it is no easy task to create a precise model that takes those variable conditions into account. In reality, no precise model is available in a usable form for elevation refraction correction. What we can find instead is a precise model of the excess range produced by the atmospheric refraction; see, for example, [2]. The interferometer measures the relative range from two antennas to the satellite, so, theoretically speaking, we could apply the excess-range model to the paths from the antennas to the satellite in a relative manner to correct for the atmospheric refraction. Applying the model, however, requires collecting atmospheric data such as humidity, pressure, and temperature, while some errors may remain uncorrected depending on conditions. So, it is practical to use the simple model of (6.3) while allowing for some uncertainty, presumably of the order of 10%. The δl from (6.5) will then contain an uncertain part, and this makes an error factor due to atmospheric refraction.
The effect of the atmospheric refraction on the interferometer depends on the geometry of the baseline relative to the satellite, as suggested in Figure 6.6. No effect appears if the baseline is placed transversely to the incoming microwave path; the maximal effect appears if the baseline is placed along the microwave path. For example, at an elevation of 30 deg, δEL becomes 0.032 deg, and its potentially uncertain part of 0.003 deg becomes the error factor in the maximal case. This is a small error, but not small enough to be neglected. So, the atmospheric error factor will be considered separately for the different interferometer applications in Part III.
Figure 6.6 Decomposing the baseline vector.
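The correction of (6.3) and (6.5) is straightforward to script. The following sketch (function names are ours) evaluates the excess elevation and the resulting path-length error for a baseline component BA aligned as in Figure 6.6:

```python
import math

def excess_elevation_deg(el_deg):
    """Fitting function (6.3): apparent minus geometrical elevation."""
    return 17.6 / (16.0 + 930.0 * math.tan(math.radians(el_deg)))

def path_length_error_m(b_a_m, el_deg):
    """Error (6.5) in the measured relative path length, with the
    excess elevation converted to radians."""
    d_el_rad = math.radians(excess_elevation_deg(el_deg))
    return -b_a_m * math.sin(math.radians(el_deg)) * d_el_rad

# Example from the text: at a 30-deg elevation the excess is about 0.032 deg
d_el = excess_elevation_deg(30.0)
print(round(d_el, 3))  # -> 0.032
```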
6.4 Effect of Rainwater

If the atmosphere interests us, what about the effect of rainfall? When it rains, the water falling onto an antenna dish makes a thin layer of running water. The layer is a dielectric medium, so it causes some phase delay. This effect, however, will not disturb the interferometric phase if the delays are equal for the two antennas. Comparing an offset-fed parabolic antenna with a center-fed one, both pointing to a geostationary satellite, the offset-fed antenna has its dish nearer to the vertical than the center-fed one. So, using offset-fed antennas is the better choice, because the water runs down more quickly and thus creates less unwanted phase delay. Strictly speaking, the thickness of the water layer may not be constant; more likely, it will fluctuate from moment to moment. The interferometric phase will accordingly show an error that is sometimes positive and sometimes negative, thus averaging to zero. For this to occur, the antennas should have identical shapes. This accords with the suggestion made in Chapter 2 of using identically designed antennas for the accurate determination of the baseline.
References

[1] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994, p. 229, Figure 8.2A.
[2] Katsougiannopoulos, S., et al., “Tropospheric Refraction Estimation Using Various Models, Radiosonde Measurements and Permanent GPS Data,” XXIII FIG Congress, Munich, Germany, October 8–13, 2006.
7 Design and Installation

The most fundamental parameter to consider when designing an interferometer is no doubt the length of the baseline. Theoretically speaking, longer baselines enable better tracking accuracies; practically speaking, however, they also carry the risk of causing an unpredictable phase error. Baseline design is thus a sensitive problem. This chapter addresses this problem and suggests a possible solution.
7.1 System Layout

If we assume that our target satellite stays within its nominal longitude zone, we can adopt the recommended baseline length, which is typically 10m for the C band, or shorter for the Ku band. The antennas should be identical in size and shape, with VSAT-class antennas being suitable choices. Such an interferometer will be set out in the layout suggested earlier in Figure 3.3. It occupies three sites: two antenna sites and one center site for phase measurement. Installing the interferometer system requires special care with the phase balance of the reference distribution cables (1) and (2). Their nominal lengths must be identical and their temperatures must be uniform; that is, thermal phase balancing is required. So, we must avoid a situation in which one cable is shaded from sunlight by a big tree or a building while the other is not, because the cables may then become thermally unbalanced even if the air flows freely around them. The center site must supply local reference signals and dc power to the antenna sites, and these can be sent through the IF cables by frequency multiplexing.
If we need a longer baseline in order to enhance tracking precision, the system layout changes from that shown in Figure 3.3 to the one shown in Figure 7.1. There is an additional antenna, forming a short baseline for resolving the phase ambiguity. The antennas of the short baseline are placed as a close pair, so the system in practice occupies three sites, as in the two-antenna case. Three cables are used for distributing the references, and they must satisfy the same requirement of thermal phase balancing. Phases must be measured for both the long baseline and the short baseline; a single measuring unit can do this by alternate switching. The motion of a geostationary satellite as viewed from a ground station is not fast, so the unit first measures the long-baseline phase over a couple of minutes, then switches to the short baseline to measure its phase over a couple of minutes, and continues this switching cycle.
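The alternate-switching scheme leaves a simple computation: the coarse short-baseline measurement predicts the long-baseline path length well enough to pick the correct integer cycle. A minimal sketch, assuming the two baselines are collinear as in Figure 7.1 (the names and the collinearity assumption are ours, not the book's):

```python
def resolve_ambiguity(phi_long_cyc, l_short_m, b_long_m, b_short_m,
                      wavelength_m):
    """Combine the precise (but ambiguous) long-baseline fractional
    phase with the coarse short-baseline path length, scaled up by the
    baseline ratio, to recover the unambiguous long-baseline path."""
    l_predicted = l_short_m * (b_long_m / b_short_m)
    n = round(l_predicted / wavelength_m - phi_long_cyc)  # integer cycles
    return (phi_long_cyc + n) * wavelength_m

# Hypothetical numbers: 10m coarse and 100m precise baselines at 75 mm.
# Coarse path length 0.125 m; precise fractional phase 0.4 cycles.
l_long = resolve_ambiguity(0.4, 0.125, 100.0, 10.0, 0.075)
print(round(l_long, 3))  # -> 1.23
```

Note that the coarse measurement only has to be accurate to within half a wavelength after scaling, which is why a modest short baseline suffices.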
7.2 Reflecting Interferometer

Regardless of whether there are two antennas or three, the thermal phase balancing of the reference distribution cables is an essential requirement. This is not difficult to satisfy if the baseline is short enough. For longer baselines, however, the requirement becomes challenging. One cannot predict precisely what happens to cable temperatures when long cables are routed in troughs or trenches, suspended in the air, or handled in any other manner. So, there is a risk that the cables will show some thermal imbalance once installed, and we have no means of measuring the degree of imbalance after installation. Solving this problem is crucial to the design and installation of an interferometer.
Figure 7.1 Layout of three-antenna interferometer. LNC: low-noise amplifier and downconverter as a combined unit; PD: power divider.
One idea for overcoming the problem is illustrated in Figure 7.2. There is a plane mirror, which reflects the downlink microwave and guides it into one receiving antenna, while the other antenna receives the microwave directly. The receiving antennas can then be placed side by side, so the reference distribution cables become short enough, and the problem of possible thermal imbalance vanishes. The system layout becomes compact if the antennas and measuring unit are placed at one site. The reflecting system assumes that the mirror and antenna are placed within a near-field distance, so that the diffraction of the microwave beam over that distance may be neglected; that is, the microwave propagation can be regarded as virtually geometrical. This condition is well satisfied if the mirror-antenna distance is not more than tens of meters. The size of the mirror is determined geometrically to cover the antenna's aperture. The mirror must cover the aperture fully, because otherwise the receiving gain would suffer a loss and the scattering at the mirror edges would produce unwanted sidelobes. The mirror surface is conductive, so the signal phase changes by π after reflection. Strictly speaking, there may be some wave-theoretical effect so that the phase does not propagate exactly as described by geometrical optics. This phase discrepancy will be constant for a mirror and antenna placed at fixed positions. So, the phase discrepancy, along with the π-change, makes a constant phase bias, and this bias can be treated as one of the bias-error factors existing in the interferometer. The plane mirror and the antennas must of course have the same mechanical quality, that is, surface precision and mechanical rigidity. Note that the mirror affects the antenna polarization. If the downlink microwave is circularly polarized, its polarity changes from RHCP (right-hand circular polarization) to LHCP (left-hand circular polarization), or from LHCP to RHCP, after reflection.
If the microwave is linearly polarized, its polarization angle changes after reflection. This must be remembered when installing the antenna.
Figure 7.2 Interferometer with a plane mirror.
Now, let us examine where the baseline vector lies. In Figure 7.3, the mirror produces an image of antenna #2 at #2′; the baseline vector therefore connects #1 and #2′. (These points are the antennas' reference points.) The position of #2′ can be determined by surveying the geometry of antenna #2 and the mirror. Here, we should assume a possible error δθ in surveying the mirror's pointing orientation. This error then causes an error δB in the baseline vector. The error δB is orthogonal to the line of sight to the satellite, so it causes no error in interferometric phase measurements, as mentioned in Chapter 6.
Figure 7.3 Baseline vector of a reflecting interferometer.
Different versions of the reflecting interferometer are possible, as illustrated in Figure 7.4. A symmetric version is shown in Figure 7.4(a), where the baseline vector connects the center points of the mirrors and the antennas have the same polarity. The version shown in Figure 7.4(b) would be suitable if the mirrors are placed on a building's rooftop, while the center site is placed on the ground for better accessibility. Figure 7.4(c) shows a version that includes a short baseline for ambiguity resolution. Symmetric versions are the better choices, for the reasons we have already discussed.
Figure 7.4 Versions of reflecting interferometers.
Figure 7.5 shows an example of a plane mirror with 1.8m sides. A Ku-band test baseline was formed in the style of Figure 7.2 by using this mirror and 1.2m-diameter antennas, making a 40m baseline. Comparing the reflecting route with the direct-receiving route showed that the loss in receiving gain due to the insertion of the plane mirror was not more than 1 dB. We will see the real use of plane mirrors for satellite tracking in Part III, where they are discussed in more detail.
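The image point #2′ of Figure 7.3 is an ordinary mirror reflection, so the baseline vector can be computed from surveyed geometry. A sketch (the coordinates and names below are hypothetical, for illustration only):

```python
import numpy as np

def mirror_image(point, mirror_point, mirror_normal):
    """Reflect a point across the mirror plane defined by one surveyed
    point on the plane and the plane's normal vector."""
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)
    p = np.asarray(point, dtype=float)
    return p - 2.0 * np.dot(p - np.asarray(mirror_point, dtype=float), n) * n

# Antenna #2 at x = 1 m, mirror plane at x = 20 m facing the antennas:
ant1 = np.array([0.0, 0.0, 0.0])
image2 = mirror_image([1.0, 0.0, 0.0], [20.0, 0.0, 0.0], [1.0, 0.0, 0.0])
baseline = image2 - ant1   # connects #1 to the image #2'
print(np.linalg.norm(baseline))  # -> 39.0
```

A small error δθ in the surveyed mirror normal shifts the image point transversely to the line of sight, which is exactly the δB discussed above.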
Figure 7.5 Example of plane mirror (Courtesy of NICT).
Part II Geostationary Satellite Orbit
8 Overview of Part II: Geostationary Satellite Orbit

Part II of this book discusses the orbits of geostationary satellites. The concept of a geostationary orbit may appear simple at first sight: the satellite and the Earth go around at the same pace, as illustrated in Figure 8.1, so that the satellite looks motionless when observed from the Earth. This is a simple, static image, but in actuality the orbit of a geostationary satellite is not that simple. The satellite is subjected to perturbing forces, such as the gravity of the Sun and the Moon. Accordingly, the orbit changes gradually with time until it is no longer stationary. The satellite must then generate a restoring force in order to counteract the perturbation and return to its original stationary orbit. As a result of these perturbing and restoring forces acting on the satellite, the satellite moves relative to the Earth, and only if this motion is made small can the satellite be practically stationary. It is such dynamics that shape the stationary orbit.
Our discussion of the dynamics of orbital motion starts with Kepler's laws in Chapter 9. Usually in textbooks, Kepler's laws are derived from the fundamental law of universal gravitation, as illustrated in Figure 8.2(a). This derivation solves a differential equation called the equation of motion. Solving this equation, however, often relies on mathematical devices whose physical meaning is not very clear. So, we will choose another way, as shown in Figure 8.2(b): we regard Kepler's laws as given observational facts and input them into the equation of motion. This allows us to study what kind of force is acting on the satellite. We will, of course, arrive at the law of universal gravitation by ourselves, and this allows us to place Kepler's laws at the basis of our subsequent discussions.
Figure 8.1 Concept of geostationary orbit.
Figure 8.2 Understanding Kepler’s laws.
Our discussions will focus on geostationary orbits from Chapter 10 onward. The orbit of a geostationary satellite is in practice a near-stationary orbit, which is a near-circular orbit. If an orbit is not circular, we normally assume it is elliptical. There is, however, an idea for treating a near-circular orbit without using an ellipse: the shape of a near-circular orbit is virtually a circle whose center is displaced slightly from the Earth's center. This is an approximation, which works well for practical cases of near-stationary orbits, and it will help us simplify our subsequent discussions.
How the orbit changes when the perturbing and restoring forces mentioned earlier act on the satellite is discussed in Chapters 11 and 12. Discussed first is the change of the orbit when the satellite fires a gas jet to generate a restoring force in an impulsive manner. The impulsive orbital change is then extended to the continuous, gradual orbital changes due to perturbing forces. In this way, we derive a theory of orbital perturbations that is straightforward to follow because we confine our object to geostationary orbits; otherwise, the theory would become much more complex. The resulting laws of orbital changes then allow us to consider how to keep the satellite stationary, as discussed in Chapter 13.
The topics and discussions in Part II thus make up a concise theory of geostationary orbits, as illustrated in Figure 8.3.
Figure 8.3 Topics and discussions in Part II.
The discussions are self-contained, without the need for external references in principle. If the reader prefers to derive Kepler's laws in the way used by standard textbooks, [1] or [2] should be consulted for examples. If the reader is interested in a perturbation theory that does not confine itself to geostationary orbits but covers any orbit in general, refer for example to [3], or particularly to [4], where an abyss of mathematical analysis awaits. If the reader wants to understand geostationary orbits over a wide range, from fundamental dynamics through operational practices of satellite control, refer to [5], a classic volume.
The last chapter of Part II addresses the problem of the overcrowding of geostationary satellites. Though the problem is closely related to the regulation of the use of the orbit, it is better understood on the basis of orbital dynamics and station keeping. The background structure of the problem will be described and a possible solution suggested, because no one who plans or operates geostationary satellites can disregard this problem.
References

[1] Bate, R. R., D. D. Mueller, and J. E. White, Fundamentals of Astrodynamics, New York: Dover, 1971, pp. 11–33.
[2] Prussing, J. E., and B. A. Conway, Orbital Mechanics, New York: Oxford University Press, 1993, pp. 3–19.
[3] Moulton, F. R., An Introduction to Celestial Mechanics, New York: Dover, 1970.
[4] Brouwer, D., and G. M. Clemence, Methods of Celestial Mechanics, New York: Academic Press, 1961.
[5] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994.
9 Kepler's Laws

The motion of the planets around the Sun was formulated by Kepler early in the 17th century. He discovered a set of three laws, known as Kepler's laws. The laws apply equally well to the motion of artificial satellites around the Earth, thus providing a sound basis for satellite orbits. The laws are presented one by one in the following sections, with their physical meanings clarified; the original reference of the laws to "planets" and the "Sun" has been changed to "satellites" and the "Earth."
9.1 Kepler's First Law

A satellite around the Earth follows an elliptical orbit, with one focus at the center of the Earth.
Figure 9.1 illustrates how to draw an ellipse. A moving point P is at a distance r from O and a distance r′ from O′, where O′ is a fixed point placed at some distance away from the origin O. If P moves while maintaining
r + r′ = L
with L being a constant, the locus of P then makes an ellipse, and its focal points are O and O′. This is often pictured by showing a pencil and a piece of thread of length L. The thread is pinned down at its ends to points O and O′, and the pencil is placed at P. By moving the pencil while keeping the thread stretched, we can draw an ellipse. The thread length L is equal to the major axis of the ellipse.
Figure 9.1 Drawing an ellipse.
Let us write the satellite position P in polar coordinates of radius r and angle θ, with respect to the Earth placed at the origin O. One can write the relationship for triangle OPO′ as follows:

r′² = r² + D² - 2rD cos(π - θ)

where D is the distance OO′. Since r′ = L - r, this equation becomes

2Lr + 2rD cos θ = L² - D²

hence,

r = (L/2)(1 - D²/L²) / (1 + (D/L) cos θ)

If we set

a = L/2;  e = D/L

then we have the equation of an ellipse:

r = a(1 - e²) / (1 + e cos θ)   (9.1)
where e is called the eccentricity of the ellipse. If e = 0 (namely, D = 0), the orbit becomes a circle. As D increases toward L, e approaches 1, and the orbit becomes elongated along its major axis. The eccentricity thus determines the shape of the orbit. The a in (9.1) is called the semimajor axis of the ellipse, and it reduces to the radius of a circle if e = 0. Hence a may be regarded as a generalized radius, and it determines the size of the orbit. The set of (a, e) thus specifies the size and shape of an elliptical orbit.
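Equation (9.1) is straightforward to evaluate. As a sketch (variable names are ours), the radii at perigee (θ = 0) and apogee (θ = π) for a small-eccentricity orbit:

```python
import math

def orbit_radius(a, e, theta):
    """Ellipse equation (9.1): r = a(1 - e^2) / (1 + e cos(theta))."""
    return a * (1.0 - e * e) / (1.0 + e * math.cos(theta))

a, e = 42164.2, 0.001                  # km; a small-eccentricity example
perigee = orbit_radius(a, e, 0.0)      # reduces exactly to a(1 - e)
apogee = orbit_radius(a, e, math.pi)   # reduces exactly to a(1 + e)
print(round(apogee - perigee, 1))  # -> 84.3  (= 2ae, in km)
```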
9.2 Kepler's Second Law

The radius of a satellite sweeps out equal areas in equal times.
Suppose that, in Figure 9.2, we have observed the satellite moving from A to B in a given length of time and, later, moving from C to D in the same length of time. In such a case we have two areas swept out by the moving radius of the satellite. The law states that the two areas will be equal, regardless of where the arcs AB and CD are. This is equivalent to stating that the rate of area sweeping per unit time, or the area-sweeping rate, is constant wherever the satellite is in its orbit. So, the satellite moves faster when the radius becomes smaller, and vice versa. The speed of the satellite therefore becomes maximal at the perigee and minimal at the apogee. If the orbit is circular, then equal areas mean equal speeds, so the satellite moves at a constant rate of revolution.
In Figure 9.3, line AB containing O is perpendicular to the major axis of the ellipse, and it divides the ellipse into areas S1 and S2. The time needed for the satellite to move from A to B via apogee C is proportional to area S1. Similarly, the time from B to A via perigee D is proportional to area S2. The satellite thus spends more time on the S1 side than on the S2 side, and the former-to-latter proportion increases rapidly with the eccentricity of the orbit. Motions in elliptical orbits have this kind of dynamism.
Figure 9.2 Areas swept out by the satellite’s moving radius.
Figure 9.3 Apogee-side area S1 and perigee-side area S2.
9.3 Kepler's Third Law

The square of the orbital period is proportional to the cube of the semimajor axis.
The period here refers to the time required for the satellite to complete one revolution of its orbit. While the first and second laws refer to the motion of one satellite in its orbit, the third law refers to the different orbits of different satellites and states the relationship between them. The third law states implicitly that the orbital period does not depend on the eccentricity; the orbits drawn in Figure 9.4 must therefore have equal periods. This law is sometimes referred to as the law of power 3/2, because the period depends on the semimajor axis to that power.
Figure 9.4 Orbits with equal periods.
9.4 Physical Meanings

Kepler's laws describe observations about orbital motions. Here, describe signifies the following. Suppose we do some experiments in our laboratory and collect a set of measurement data. We would then try fitting a curve to the data set, for example, a curve y = ax² + bx + c, to determine the parameters a, b, and c. Kepler did the same kind of curve fitting. He found that an ellipse fits the observed facts exactly if the parameters, including the semimajor axis and eccentricity, are chosen correctly. But why it should be an ellipse, and why a constant area-sweeping rate, were not part of the question; the laws just describe facts. This is because the law of motion, namely, the equation of motion, was not known at the time.
We now know the equation of motion. So we can input Kepler's laws into the equation of motion and see what those three laws mean in terms of dynamics. In other words, we will examine what kind of force is acting on the satellite if its motion obeys Kepler's laws. To do that, we must prepare an equation of motion written in polar coordinates. In Figure 9.5, there is a satellite at r and θ, and some force F, yet unknown, is acting on it in two components: Fr along the radius and Fθ orthogonal to the radius. Figure 9.6 is for the same satellite, with x being its position vector. A unit vector I is placed at the origin O, and it always points to the satellite. When the satellite moves, the vector rotates around O to track the changing direction of the satellite. Another unit vector J, orthogonal to I, is placed at O. When I rotates, J also rotates so that the two vectors always remain orthogonal to each other. Suppose the satellite has changed its position by ∆x in a unit time, and correspondingly its polar angle has varied by ∆θ. This motion causes the vector I to change by ∆I, which is parallel to J. At the same time, J changes by ∆J, which is antiparallel to I. From these considerations we can set

İ = θ̇ J,  J̇ = -θ̇ I   (9.2)

and, hence,

Ï = θ̈ J + θ̇ J̇ = θ̈ J - θ̇² I   (9.3)

Figure 9.5 Force F in two components.
Figure 9.6 I and J making a tracking frame.

Since x = r I, we can write

ẍ = r̈ I + 2ṙ İ + r Ï   (9.4)

Substituting (9.2) and (9.3) into (9.4) yields

ẍ = I (r̈ - r θ̇²) + J (2ṙ θ̇ + r θ̈)

The equation of motion then takes the form

F = m ẍ = m I (r̈ - r θ̇²) + m J (2ṙ θ̇ + r θ̈)

where m is the mass of the satellite. Separating the components for I and J yields

Fr = m (r̈ - r θ̇²)   (9.5)

Fθ = m (2ṙ θ̇ + r θ̈) = m (1/r) d(r² θ̇)/dt   (9.6)

These are the equations of motion written in polar coordinates.
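Equations (9.5) and (9.6) can be checked symbolically by differentiating x = (r cos θ, r sin θ) twice and projecting the result onto the rotating unit vectors I and J. A sketch using sympy (this verification is ours, not part of the book):

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
th = sp.Function('theta')(t)

# Cartesian position in the orbital plane and its second derivative
x, y = r * sp.cos(th), r * sp.sin(th)
ax, ay = sp.diff(x, t, 2), sp.diff(y, t, 2)

# Project the acceleration onto I = (cos, sin) and J = (-sin, cos)
a_r = sp.simplify(ax * sp.cos(th) + ay * sp.sin(th))
a_th = sp.simplify(-ax * sp.sin(th) + ay * sp.cos(th))

rd, thd = sp.diff(r, t), sp.diff(th, t)
print(sp.simplify(a_r - (sp.diff(r, t, 2) - r * thd**2)))        # -> 0
print(sp.simplify(a_th - (2 * rd * thd + r * sp.diff(th, t, 2))))  # -> 0
```

Both differences simplify to zero, confirming the radial and transverse acceleration components used in (9.5) and (9.6).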
We are now ready to input Kepler's laws to the equations of motion, to find Fr and Fθ. First, refer to the second law. Suppose that, in Figure 9.7, the satellite has moved from A to B in a unit time, having caused a change ∆r in radius and a change ∆θ in polar angle. The area swept out by the satellite radius during that time is nearly equal to the area of triangle OAB. The height of B from OA equals (r + ∆r)∆θ ≅ r∆θ, so the triangle area is (r²∆θ)/2. If we set

C = r² θ̇   (9.7)

then C equals twice the area-sweeping rate. The second law states that C is a constant; hence, it follows from (9.6) that Fθ = 0. That is, the force is acting on the satellite along the radius, either toward the Earth or away from it.

Figure 9.7 Area swept out in a unit time.

We next refer to the first law, and rewrite (9.1) as follows:

r = p / (1 + e cos θ)   (9.8)

with

p = a(1 - e²)   (9.9)

From (9.8) we have

1 + e cos θ = p/r

Differentiating both sides with respect to time yields

-e sin θ · θ̇ = -p ṙ / r²

hence,

ṙ = (e r²/p) sin θ · θ̇

We can eliminate θ̇ by using (9.7):

ṙ = (eC/p) sin θ

Differentiating both sides one more time yields

r̈ = (eC/p) cos θ · θ̇

Substitute this into the equation of motion (9.5) as follows:

Fr = m [(eC/p) cos θ · θ̇ - r θ̇²]

Here, use cos θ = (p/r - 1)/e from (9.8), and eliminate θ̇ again by (9.7). After arranging the terms, we find that

Fr = -m C² / (p r²)   (9.10)
So, the force pulls the satellite toward the Earth, and it obeys an inverse-square law. If the force always points to the Earth as the satellite moves, then we can only conclude that the pulling force comes from the Earth.
The above derivation is for one satellite revolving around the Earth. We now refer to the third law, which states that

P² = K a³   (9.11)

holds for every satellite orbit around the Earth, with P being the orbital period and K a constant of proportionality. Referring to Figure 9.8, the area of an ellipse is πab, where b is the semiminor axis given by b = a√(1 - e²), or b = √(ap) by (9.9). So the constant C, which was twice the area-sweeping rate, is calculated as

C = 2πab/P = 2πa√(ap) / √(K a³) = 2π √(p/K)

Figure 9.8 Semimajor axis a and semiminor axis b.

With this substitution, (9.10) becomes

Fr = -m (4π²/K)(1/r²)

By setting the constant part as

µ = 4π²/K   (9.12)

we have

Fr = -m µ/r²   (9.13)

This is the equation for every satellite around the Earth. Now, if the Earth pulls a satellite with a force Fr, then the satellite must pull the Earth with the same force. This must be expressed in the form

Fr = -M µ′/r²

where M is the mass of the Earth and µ′ is some other constant. This relationship is possible only if the force is expressed in the form

Fr = -G Mm/r²

with G being a constant that depends on neither M nor m. Note that this equation is exactly the law of universal gravitation. The constant µ in (9.13) equals GM, and to determine its value, we should observe a satellite orbit and measure its period P and semimajor axis a. Then (9.11) and (9.12) will show that µ = 398,600 km³/s². Correspondingly, the third law is stated more specifically as

P = 2π √(a³/µ)   (9.14)
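With µ known, (9.14) can be inverted to recover the geostationary semimajor axis from the sidereal day; a quick numerical check (function names are ours):

```python
import math

MU = 398600.0  # km^3/s^2, the measured value of mu = GM quoted above

def period_s(a_km):
    """Kepler's third law in the form (9.14): P = 2*pi*sqrt(a^3/mu)."""
    return 2.0 * math.pi * math.sqrt(a_km**3 / MU)

def semimajor_axis_km(p_sec):
    """Inverse of (9.14): the semimajor axis giving a desired period."""
    return (MU * (p_sec / (2.0 * math.pi))**2) ** (1.0 / 3.0)

# One sidereal day (86,164.1 s) yields the synchronous radius
a_geo = semimajor_axis_km(86164.1)
print(round(a_geo))  # -> 42164 (km), the geostationary semimajor axis
```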
9.5 Significance of Kepler's Laws

We have thus learned that the force acting on the satellite is the force of universal gravitation. At this stage we can consider a more fundamental problem: What should the orbit of a satellite be if it is pulled by the gravitation of the Earth? This is referred to as a two-body problem. It is now clear that Kepler's three laws give us an exact solution to the two-body problem. This is why we can rely on the three laws as the basis for studying satellite orbits. Actually, the satellite motion may differ slightly from the two-body problem if perturbation is present, for example, by the Moon's gravity. Even in such a situation, Kepler's solution remains effective if small corrections are applied for the perturbation, as we will see later. Kepler's laws visualize satellite motions by describing the geometry of the orbit and the variation pattern of orbital velocities, as we have seen. The set of the semimajor axis, the eccentricity, and the other angular parameters that determine the orientation of the orbital ellipse in space is referred to as the Keplerian orbital elements, and this set is used in the daily operation of artificial satellites, despite Kepler's laws having been discovered so long ago. Sometimes the semimajor axis is replaced by the orbital period through the relationship (9.14), or the set (a, e) is replaced by the perigee and apogee heights, to express the same contents as the Keplerian elements.
Figure 9.9 Standing position of Kepler’s laws.
Figure 9.9 illustrates the standing position of Kepler's laws. There is a fundamental layer that contains the equation of motion and the law of universal gravitation, and from this layer Kepler's laws are derived. Kepler's laws then describe what should be observed. In studying the orbits of satellites, we normally refer to Kepler's laws rather than to the fundamental layer. This is analogous to studying electric circuits: normally we refer to Ohm's law or Kirchhoff's law rather than to the fundamental equations, even though we know that everything in electricity and magnetism obeys the fundamental equations.
10 Near-Stationary Orbit

Geostationary satellites are placed in particular orbits: circular orbits right above the equator at a specific altitude. An ideally stationary satellite would not move at all when observed from the ground; in practice, however, satellites undergo some motion around their supposed stationary position, hence the term near-stationary satellite. Understanding the motion of near-stationary satellites is essential for an understanding of geostationary orbits. In this chapter we will study the motion of near-stationary satellites on the basis of Kepler's laws.
10.1 Geostationary and Near-Stationary Orbits

A satellite is said to be geostationary if its position relative to the solid Earth is fixed, without motion. More precisely, a geostationary satellite has an invariant position in the Earth-fixed, rotating coordinate frame. Three conditions must be satisfied for a satellite to be geostationary. First, its orbital revolution must be synchronized with the Earth's rotation. The Earth rotates relative to inertial space once in 86,164.1 sec, and this must be the period of the orbit. Kepler's third law [see (9.14) in Chapter 9] then determines the semimajor axis of the orbit to be a = 42,164.2 km. Second, because the Earth rotates at a constant rate, the satellite also must revolve around the Earth at a constant angular rate. Kepler's second law then requires the orbit to be circular, with e = 0. The above-determined a then becomes the radius of the circular orbit, which is referred to as the synchronous radius. These two conditions make the satellite stationary in longitude. The third condition makes the satellite stationary in latitude, by requiring the orbit to lie in the equatorial plane. That is, the inclination i, the angle of the orbital plane against the equatorial plane, must be zero.
A satellite orbit satisfying these three conditions has only one parameter left for free choice: the satellite's longitude measured along the equator, called the stationary longitude. If a stationary longitude is given to a satellite, then its position is specified in three dimensions, its altitude being 35,786.0 km (i.e., a minus the Earth's radius) right above the equator at the given longitude. This position is referred to as the satellite's geostationary position, or simply its stationary position.
The above discussion is for an ideally stationary satellite. Practically speaking, it is not easy to keep the three conditions perfectly all the time. So, tolerating small deviations from the ideal conditions is common practice if it does not degrade the satellite's mission. The satellite then follows a near-stationary orbit, moving slightly away from and about its nominal stationary position. To understand this kind of orbital motion is to understand the nature of the geostationary orbit. The motion breaks down into components, some constrained to the orbital plane and the others perpendicular to it. We will analyze these components separately in the following sections in order to clarify the motion of near-stationary satellites.
10.2 Orbit with Small Eccentricity
We will start with the motion constrained in the orbital plane, so assume for the time being that the orbital inclination is zero. Consider what would happen if the eccentricity e differs from zero, while assuming the semimajor axis a to be geostationary. Even if e differs from zero, it is in practice not unlimited. One can study the distribution of e for operational geostationary satellites by referring to satellite orbital data that have been made public [1], and the result shown in Figure 10.1 tells us that e is as small as 0.001 or less. If e is this small, we can neglect e² in the formula for an ellipse given by (9.1), and rewrite the formula as follows:
r = a(1 − e cos θ)    (10.1)
Neglecting the term ae² causes an error of order 40m in satellite position. The presence of nonzero e in (10.1) makes the radius r decrease by ae at θ = 0, and increase by ae at θ = π; these correspond to (1) and (3) in Figure 10.2. The radius neither increases nor decreases at θ = π/2 or θ = 3π/2; these are represented by (2) and (4) in Figure 10.2. One can then presume that the orbit has a circular shape while its center is displaced by ae. Let us show this by using Figure 10.3. A circle with radius a has its center at O′, which is away from the origin O by a small distance d. The radius r at an angle θ is then given by
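The size of the neglected term can be checked numerically. The short Python sketch below scans one revolution, comparing the exact ellipse radius with approximation (10.1); the form r = a(1 − e²)/(1 + e cos θ) is assumed here to be the ellipse formula referred to as (9.1).

```python
import math

a = 42_164.2e3   # synchronous radius [m], from Section 10.1
e = 0.001        # upper bound on eccentricity read from Figure 10.1

# Exact ellipse (assumed form of (9.1)) versus the small-e
# approximation (10.1), scanned over one revolution of theta.
worst = 0.0
for k in range(3600):
    theta = 2 * math.pi * k / 3600
    r_exact = a * (1 - e**2) / (1 + e * math.cos(theta))
    r_approx = a * (1 - e * math.cos(theta))
    worst = max(worst, abs(r_exact - r_approx))

print(f"worst-case error of (10.1): {worst:.1f} m")  # of order a*e^2, about 42 m
```

The worst case comes out at about a·e², consistent with the order-of-40-m figure quoted above.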
Figure 10.1 Distribution of eccentricity, in the year 2010.
Figure 10.2 Orbit with a small eccentricity. Broken line: ideal orbit with e = 0.
r ≈ a[1 − (d/a) cos θ′]    (10.2)
where θ′ = θ − ε, with ε being a small angle subtended by d. By approximating that
cos θ′ = cos(θ − ε) ≈ cos θ + ε sin θ
one can rewrite (10.2) as
Figure 10.3 Displaced circle representing the orbit.
r = a[1 − (d/a) cos θ − ε (d/a) sin θ]
Since |ε| < d/a, the third term on the right-hand side is small—of the order of (d/a)²—so it can be neglected. Hence, the radius is given by
r = a − d cos θ    (10.3)
Equations (10.1) and (10.3) become identical if d = ae. That is, the circle centered at O′ with radius a represents the shape of the orbit. In subsequent discussions and chapters, we will use this kind of displaced circle to represent the orbit, with a denoting the circle's radius and d = ae the displacement. Note that this approximation is valid for small-eccentricity orbits, and that a corresponds to the semimajor axis in the usual terminology.
10.3 Motion Due to Small Eccentricity
We need to find the revolution angle θ as a function of time in order to determine the orbital motion. Refer to Figure 10.2 and draw the same orbit once more to make Figure 10.4, where the shaded areas (1) through (4) are the areas swept out by the radius in unit time. Kepler's second law then states that these areas are all equal. So, owing to the variation in radius by (10.3), the rate of orbital revolution is faster at (1) and slower at (3) compared with that of the ideally stationary orbit, whereas at (2) and (4) the rate is neither faster nor slower.
Figure 10.4 Applying Kepler’s second law.
The revolution rate θ̇ will then vary, over one revolution of the satellite, as illustrated in Figure 10.5. We presume that this variation is sinusoidal, because there was a sinusoidal term in (10.3). The revolution rate varies around Ω, the Earth's rotation rate (7.292115 × 10⁻⁵ rad/s), at which an ideally stationary satellite revolves. Thus we set, as a trial,
θ̇ = Ω + A cos Ωt    (10.4)
with A being a constant. Turn to Figure 10.4 again, and let l₁ ... l₄ denote the arc lengths of the shaded areas (1) through (4). If we compare (1) with (2) or (4), we find that the radius is shorter by the factor (a−d)/a, so l₁ must be longer than l₂ or l₄ by the factor a/(a−d). Hence, the revolution rate at (1) is faster than that of Ω by the factor of
Figure 10.5 Variation in revolution rate.
[a/(a−d)] · [a/(a−d)] ≈ 1 + 2d/a
Similarly, the revolution rate at (3) is slower than that of Ω by the factor of
[(a−d)/a] · [(a−d)/a] ≈ 1 − 2d/a
Hence, A = 2Ωd/a is suggested in (10.4), so we write
θ̇ = Ω + (2d/a) Ω cos Ωt    (10.5)
The solution for θ should then be
θ = Ωt + (2d/a) sin Ωt    (10.6)
This is, however, a result based on a presumption. To prove it, we will show that the motion of (r, θ) satisfies Kepler's second law. Let us write the radius r, by using (10.6) and (10.3), as follows:
r = a − d cos[Ωt + (2d/a) sin Ωt]
Expanding the cosine term on the right-hand side yields
r = a[1 − (d/a) cos Ωt + 2(d/a)² sin² Ωt]
The small term of (d/a)² can be neglected, so we have
r = a − d cos Ωt    (10.7)
We can now write the area-sweeping rate times two, by using (10.7) and (10.5), as follows:
r²θ̇ = (a − d cos Ωt)² Ω[1 + (2d/a) cos Ωt]
Arranging the terms yields
r²θ̇ = Ωa²[1 − 3(d/a)² cos² Ωt + 2(d/a)³ cos³ Ωt] ≈ Ωa²
Here, we neglect the terms of (d/a)² and (d/a)³ as being small enough. The area-sweeping rate then turns out to be constant, satisfying Kepler's second law. As a result, we have (10.6) and (10.7) to describe the satellite motion with a small eccentricity.
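The constancy of the area-sweeping rate claimed above is easy to verify numerically. A minimal Python sketch, using the constants quoted in this chapter:

```python
import math

OMEGA = 7.292115e-5     # Earth rotation rate [rad/s], from Section 10.3
a = 42_164.2e3          # synchronous radius [m]
d = a * 0.001           # displacement d = a*e for e = 0.001

def r(t):               # radius, (10.7)
    return a - d * math.cos(OMEGA * t)

def theta_dot(t):       # revolution rate, (10.5)
    return OMEGA * (1 + (2 * d / a) * math.cos(OMEGA * t))

# The area-sweeping rate (times two), r^2 * theta_dot, should stay
# constant to order (d/a)^2, per Kepler's second law.
rates = [r(t)**2 * theta_dot(t) for t in range(0, 86165, 1000)]
spread = (max(rates) - min(rates)) / rates[0]
print(f"relative variation of r^2*theta_dot: {spread:.2e}")  # of order (d/a)^2
```

The residual variation is of order (d/a)², i.e., a few parts in 10⁶ for e = 0.001, matching the neglected terms in the derivation.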
10.4 Motion Due to Nonstationary Radius
Next, we set the eccentricity to zero, while allowing the orbital radius a to differ slightly from the stationary radius by ∆a. Then, according to Kepler's third law [see (9.14) in Chapter 9], the orbital period P changes by ∆P while satisfying the differential relationship
∆P/P = (3/2)(∆a/a)    (10.8)
This change of ∆P causes the orbital revolution rate to change from its original Ω = 2π/P to a different Ω′, as
Ω′ = 2π/(P + ∆P) ≈ (2π/P)(1 − ∆P/P) = Ω(1 − ∆P/P)
So, from (10.8) we have
Ω′ = Ω − (3/2)(∆a/a) Ω    (10.9)
Now, the orbit of an ideally stationary satellite is written simply by setting d = 0 in (10.7) and (10.6):
r = a    (10.10)
θ = Ωt    (10.11)
If a changes to a + ∆a, and Ω changes to Ω′ of (10.9), the orbit then changes to
r = a + ∆a    (10.12)
θ = Ωt − (3/2)(∆a/a) Ωt    (10.13)
These equations describe the satellite motion when the orbit has a slightly off-stationary radius.
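Equation (10.13) yields a handy rule of thumb for the longitude drift caused by an off-stationary radius. A small illustrative calculation in Python (the per-kilometer figure follows from the constants above; it is not quoted in the text):

```python
import math

OMEGA = 7.292115e-5          # Earth rotation rate [rad/s]
a = 42_164.2e3               # synchronous radius [m]

def drift_rate_deg_per_day(delta_a_m):
    """Longitude drift rate from (10.13): -(3/2)(da/a)*Omega, in deg/day."""
    rate_rad_s = -1.5 * (delta_a_m / a) * OMEGA
    return math.degrees(rate_rad_s) * 86400

# A satellite 1 km above the synchronous radius drifts westward
# (negative sign) by roughly 0.013 deg per day.
print(f"{drift_rate_deg_per_day(1000.0):+.4f} deg/day")
```

A radius only 1 km off the stationary value thus produces a drift of about 0.013 deg/day, which accumulates quickly over weeks.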
10.5 Motions in an Orbital Plane
Two kinds of motions are thus possible in the orbital plane: the motion obeying (10.6) and (10.7), and the motion obeying (10.12) and (10.13). Any superposition of the two kinds can be the orbital motion of the satellite:
r = a + ∆a − d cos(Ωt − α)    (10.14)
θ = θ₀ + Ωt − (3/2)(∆a/a) Ωt + (2d/a) sin(Ωt − α)    (10.15)
Here, an arbitrary constant α is introduced; it is related to the displacement of the orbital circle. In Figures 10.2 through 10.4 we have drawn the circle as displaced to the left, while actually the direction may be any in the orbital plane. That is, the displacement d should be a vector, and its orientation determines the constant α. Another constant θ₀ is introduced because the choice of the origin of time t should be arbitrary. Note here that we need a reference direction from which the revolution angle θ is measured. We assume, for the time being, simply that the reference direction exists somewhere until we give its proper definition later.
A question may arise in the above discussion of superposition: in (10.6) and (10.7), what if Ω changes to Ω′ = Ω + ∆Ω with d not being zero? Look at, for example, the term (2d/a) sin Ωt in (10.6). This term should change to
(2d/a) sin(Ωt + t∆Ω) ≈ (2d/a)[sin Ωt + t∆Ω cos Ωt]
Here, t ∆Ω must be small enough, because otherwise the satellite will drift away from its nominal stationary position. So, (d/a) × t ∆Ω is small to a higher order and so can be neglected. This allows us to study the two kinds of motions separately and then combine them by superposition. Neglecting higher order small terms this way is thus a key to the orbital analysis in the present chapter.
10.6 Motion Perpendicular to an Orbital Plane
Now that we have clarified the motion in the orbital plane, we turn to the motion perpendicular to the orbital plane. Assume that the orbit has a small inclination i, while the satellite is stationary in longitude; that is, a equals the stationary radius and e = 0. In Figure 10.6, the surface of the paper is the equatorial plane, and north is toward the reader. The orbital plane intersects the equatorial plane at the line that passes through the Earth's center O, and the orbit inclines in such a way that the part marked "+" comes to the north, toward the reader. As the satellite revolves around O, it goes periodically to the north and to the south of the equatorial plane. If z denotes the satellite's displacement from the equatorial plane toward the north, it varies periodically between positive and negative. This is a sinusoidal motion if the inclination is small, and is written as
z = c sin(Ωt − β)    (10.16)
Here, c equals ai, and the constant β depends on the orientation of the line of intersection in Figure 10.6.
Figure 10.6 Orbital plane intersecting the equatorial plane.
Giving an inclination to a geostationary orbit sometimes causes the satellite to trace a ground locus of a particular shape, like a figure 8, as illustrated in Figure 10.7. How such a motion occurs is explained in Figure 10.8, where the orbit is projected onto the equatorial plane. When the satellite is near (1) in Figure 10.8, its radius is contracted by the projection factor of cos i, while its velocity vector suffers no contraction, since it is parallel to the equatorial plane. So, its revolution rate appears faster than the Earth's rotation; this corresponds to (1) in Figure 10.7. When the satellite comes near (2) in Figure 10.8, the velocity suffers the projection contraction while the radius does not; so, the revolution rate appears slower than the Earth's rotation, and this corresponds to (2) in Figure 10.7. The same argument applies to (3) and (4), with north–south symmetry. In this way, the satellite's longitude shows two cycles of oscillation while the satellite completes one revolution, and this is why the figure 8–like locus appears.
Figure 10.7 Ground locus of satellite when inclination is large.
Figure 10.8 Explaining the figure 8–like motion.
This reasoning suggests that the figure 8–like locus becomes visible if the inclination i is large enough to make cos i differ distinctly from 1. Actually, the longitudinal width of the figure 8–like locus depends on the inclination, as shown in Table 10.1. The width appears proportional to i² for the inclinations shown in the table (see Appendix 10A at the end of the chapter for the reason). The figure 8–like motion is thus visible only for large-inclination satellites, presumably retired or defunct satellites. The motion of a satellite under normal control is therefore described by (10.14) and (10.15), with (10.16) in addition.
10.7 Relative Position Coordinates
The position of an ideally stationary satellite is at
r = a;  θ = Ωt;  z = 0
as given by (10.10), (10.11), and (10.16) with i = 0. This position can be conveniently used as a reference point for describing the motion of the satellite. As illustrated in Figure 10.9, the satellite position is measured relative to the reference point O, with coordinate axes R and L pointing to the radial and longitudinal directions, respectively. The R-L axes make an Earth-fixed frame that rotates with the Earth. One more coordinate axis Z points to the north, or to this side of the surface of the paper. If the satellite is not far away from the reference point, its motion is written in the R-L-Z frame, from (10.14), (10.15), and (10.16), as
R = ∆a − ae cos(Ωt − α)    (10.17)
L = L₀ − (3/2)∆a Ωt + 2ae sin(Ωt − α)    (10.18)
Z = ai sin(Ωt − β)    (10.19)

Table 10.1 Visibility of Figure 8–Like Motion
Inclination (deg)   Width of Figure 8 Locus (deg)
0.5                 0.002
1                   0.009
2                   0.035
4                   0.14
8                   0.56
Figure 10.9 Relative coordinates R-L for describing the satellite motion. The Z-axis, although not shown here, points toward the reader. Small eccentricity produces an elliptical motion in the R-L plane.
where L₀ = aθ₀, and d = ae and c = ai were used. We have already seen the two kinds of motions in the orbital plane, and they become more clearly visible in the R-L coordinates. One kind is an elliptical motion produced by the cosine and sine terms in (10.17) and (10.18), as illustrated in Figure 10.9. The ellipse is elongated to double along the L-axis, and the satellite moves toward +L when R is negative. The other is a linear, drifting motion along the L-axis, as illustrated in Figure 10.10. The drift occurs toward −L if R is positive, or toward +L if R is negative. Combining these two kinds of motions by superposition gives the general in-plane satellite motion. As seen from the forms of (10.17), (10.18), and (10.19), the motion in Z is independent of the motions in R and in L.
Figure 10.10 Off-stationary radius causes a linear drift motion in the R-L plane.
To summarize, we have established two sets of formulations for describing the orbital motion of near-stationary satellites. One is given by (10.14), (10.15), and (10.16) in normal coordinates, and the other by (10.17), (10.18), and (10.19) in relative coordinates. Each has its own use; for example, the former is suitable for considering the strategy of orbital station keeping, whereas the latter is suitable for analyzing the variations in range, azimuth, or elevation of the satellite observed at an Earth station.
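The 2:1 elongation of the eccentricity-driven ellipse can be confirmed by sampling the cosine and sine terms of (10.17) and (10.18) over one day. A minimal sketch in Python (the values of e and α here are arbitrary illustrative choices):

```python
import math

OMEGA = 7.292115e-5      # Earth rotation rate [rad/s]
a = 42_164.2e3           # synchronous radius [m]
e = 0.0005               # illustrative small eccentricity (assumed)
alpha = 0.3              # arbitrary phase of the eccentricity displacement

# Eccentricity-driven parts of (10.17) and (10.18), sampled over one day:
# R oscillates with amplitude a*e, L with amplitude 2*a*e.
R_amp = max(abs(-a * e * math.cos(OMEGA * t - alpha)) for t in range(0, 86165, 60))
L_amp = max(abs(2 * a * e * math.sin(OMEGA * t - alpha)) for t in range(0, 86165, 60))
print(f"L amplitude / R amplitude = {L_amp / R_amp:.3f}")  # close to 2
```

Whatever the phase α, the longitudinal excursion is twice the radial one, which is the ellipse drawn in Figure 10.9.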
Reference
[1] "Space Track," http://www.space-track.org/perl/login.pl.
Appendix 10A: Width of Figure 8–Like Locus
Suppose that, in Figure 10A.1, satellite A is in a circular orbit. The orbit has a unit radius and lies in the x-y plane. Satellite B is also in a circular orbit with unit radius, while it has an inclination i; the inclined orbit is projected onto the x-y plane in Figure 10A.1. If satellites A and B depart at the same time from the x-axis, they will show a difference ε in their revolution angles as observed on the x-y plane. When A has traveled to the angle θ, B has not yet gone that far. Here, BC appears shorter than AC, by the projection factor of cos i. So,
AB = sin θ (1 - cos i )
This AB, or equally AB′, subtends an angle ε at O. If i is small, the angle is approximately
Figure 10A.1 Finding the width of the figure 8-like locus.
ε = AB′ = AB cos θ = sin θ cos θ (1 − cos i)
If i is small, the cosine is approximately
cos i ≈ 1 − i²/2
So we have
ε = sin θ cos θ · i²/2 = (i²/4) sin 2θ
The angle ε thus shows a peak-to-peak variation width of i²/2, and this becomes the width of the figure 8–like locus. The exact solution for ε, not relying on approximations, is given by
ε = θ − tan⁻¹[(sin θ − (1 − cos i) sin θ)/cos θ]
The approximate and exact solutions provide the same result within the significant digits shown in Table 10.1. So, the approximation is accurate enough for the inclinations shown in the table.
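Both solutions are easy to tabulate and compare against Table 10.1. In the sketch below, the exact ε is computed from the projected angle θ′ = tan⁻¹(cos i sin θ / cos θ), an equivalent rewriting of the formula above:

```python
import math

def width_approx_deg(i_deg):
    """Peak-to-peak width i^2/2 (radians), converted to degrees."""
    i = math.radians(i_deg)
    return math.degrees(i * i / 2)

def width_exact_deg(i_deg):
    """Peak-to-peak width of the exact eps = theta - atan(cos(i)*tan(theta))."""
    i = math.radians(i_deg)
    eps = [th - math.atan2(math.cos(i) * math.sin(th), math.cos(th))
           for th in (k * math.pi / 1800 for k in range(1800))]
    return math.degrees(max(eps) - min(eps))

# Inclinations from Table 10.1; both columns reproduce the tabulated widths.
for i_deg in (0.5, 1, 2, 4, 8):
    print(f"i = {i_deg:>3} deg: approx {width_approx_deg(i_deg):.3f} deg, "
          f"exact {width_exact_deg(i_deg):.3f} deg")
```

Both formulas reproduce the widths of Table 10.1 to the digits shown there.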
11 Changing the Orbit
We have so far assumed that the Earth pulls our satellite with the force of the inverse-square law and that this is the only force acting on the satellite. That is, we have assumed a two-body problem, and under this assumption the satellite orbit does not change with time. The size and shape of the orbit, as well as the orientation of the orbital plane in inertial space, are all invariant. If, on the other hand, an orbit shows a change, then some extra force other than the two-body force must be acting on the satellite. We now study how this kind of extra force gives rise to orbital changes. In this chapter, the extra force is assumed to act on the satellite for a short duration of time, causing an instantaneous change in the satellite's velocity. The model we use is that of an orbital maneuver, in which a satellite uses a gas-jet thruster to generate a velocity change. In the following we will see, in the context of near-stationary orbits, how an orbit can be changed to a desired orbit through such a maneuver.
11.1 Orbital Energy
Let us consider first the energy of an orbit, because it has a close relationship to the size of the orbit. The energy of an orbiting satellite is written, in terms of kinetic energy and potential energy, as
E = v²/2 − μ/r    (11.1)
where v is the velocity and r is the radius. Note that the mass of the satellite, usually denoted by m, is omitted. The E from this equation is the energy per unit mass of the satellite; if the satellite mass is m kilograms, then its energy is E times m. This idea comes from the fact that a 1-kg satellite and a 1,000-kg satellite will show the same orbital motion given the same initial condition. Omitting m this way is a common practice when discussing satellite orbits, and not only for energy but also for force, momentum, and angular momentum. The energy in the form of (11.1) thus denotes the energy associated with the orbit, rather than that associated with a particular satellite.
Now, the motion of a near-stationary satellite was given in Chapter 10 by (10.7) and (10.5) as follows:
r = a − d cos Ωt
θ̇ = Ω + (2d/a) Ω cos Ωt
One can evaluate the orbital energy at any moment of time, since it is conserved. We evaluate it at the moment when Ωt = π/2; at this moment the radius becomes a, while the velocity v is found to be
v² = (ṙ)² + (rθ̇)² = (Ωd)² + (Ωa)² = Ω²a²(1 + d²/a²) ≈ Ω²a²
The angular rate Ω is related to the orbital period P, and the period to the semimajor axis through Kepler's third law, (9.14), as follows:
Ω = 2π/P = √(μ/a³)    (11.2)
Hence, the velocity satisfies
v² = μ/a    (11.3)
The orbital energy from (11.1) therefore becomes
E = μ/2a − μ/a = −μ/2a    (11.4)
That is, the orbital energy is determined by the radius a alone, without depending on the eccentricity. This is an important property of the orbital energy.¹
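For reference, the numbers at the synchronous radius are easy to evaluate. A short Python sketch (the value μ = 398,600.4418 km³/s² is the standard Earth gravitational parameter, not a value quoted in the text):

```python
import math

MU = 398_600.4418e9      # Earth's gravitational parameter [m^3/s^2] (standard value)
a = 42_164.2e3           # synchronous radius [m]

v = math.sqrt(MU / a)            # circular-orbit velocity, from (11.3)
E = v**2 / 2 - MU / a            # orbital energy per unit mass, (11.1)
print(f"v = {v:.1f} m/s")        # about 3075 m/s
print(f"E = {E:.3e} J/kg, -mu/2a = {-MU / (2 * a):.3e} J/kg")  # the two agree, per (11.4)
```

The orbital velocity at the synchronous radius is about 3.07 km/s, and the energy per unit mass evaluates to −μ/2a as (11.4) states.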
11.2 In-Plane Orbital Changes
Suppose, in Figure 11.1, that a satellite is in a circular orbit with radius a and orbital velocity v. If the velocity increases instantaneously by ∆v at the moment the satellite passes point P, what happens to the orbit? The ∆v is aligned with the tangential direction, and its magnitude is small compared to that of v. The ∆v makes the kinetic energy increase by
∆E = ∆(v²/2) = v ∆v    (11.5)
This increase becomes the increase in the orbital energy, because no change occurs in the potential energy at the moment the ∆v occurs. If the orbital energy E increases, then according to (11.4) the orbital radius a must also increase, satisfying the relationship
∆E = ∆(−μ/2a) = (μ/2a²) ∆a    (11.6)
By equating (11.5) and (11.6), and using (11.3), we have
Figure 11.1 Effect of tangential ∆v. Broken line: new orbit after change.
1. We found this property for small-eccentricity orbits, although it is actually known that the property holds for any eccentricity of an elliptical orbit.
∆a = 2a (∆v/v)    (11.7)
So, the new orbit after the velocity increase has an orbital radius of a + ∆a. Here, the orbital radius refers to the radius of the displaced orbital circle that expresses a near-circular orbit, as discussed in the previous chapter. The new orbit, drawn as a broken line in Figure 11.1, must pass through the point P. So, the center of the orbital circle must move from the origin O to O′, with O′O being equal to
d = 2a (∆v/v)    (11.8)
The new velocity at P, namely v + ∆v, must be orthogonal to the new radius O′P. So, O′ lies on the line PO.
Let us move on to Figure 11.2, where the satellite is in a circular orbit with radius a. A velocity change ∆v is applied to the satellite at the moment it passes P, while in this case the ∆v points in the radial direction. The velocity v₁ after the change has virtually the same magnitude as the original v if ∆v is small. So, no change occurs in the orbital energy and, hence, none in the orbital radius a. Meanwhile, the direction of v₁ is deflected from that of the original v by a small angle ∆v/v. To match this deflection, the orbital circle must rotate around P by the angle ∆v/v. Consequently, the center of the circle moves from O to O′, until PO′ becomes orthogonal to the velocity v₁. Hence OO′ equals
d = a (∆v/v)    (11.9)
with the direction of OO′ being orthogonal to PO.
Figure 11.2 Effect of radial ∆v.
11.3 In-Plane Orbital Maneuver
We have observed that the tangential ∆v is able to change both the radius and the eccentricity. Hence, it is from the tangential ∆v that a practical strategy of orbital maneuvering develops, as discussed next.
Figure 11.3 shows a maneuver that uses two ∆v's for changing the orbital radius. The first ∆v takes place at time t, as marked with (1), and the second ∆v takes place at t + P/2, as marked with (2), where P is the orbital period (23 hours, 56 minutes, 4 seconds). The first ∆v makes the orbital radius increase by 2a∆v/v, and makes the center move from O to O′. The second ∆v makes the orbital radius increase once more by 2a∆v/v, while making the center move from O′ back to O. As a result, the maneuver makes the radius increase by 4a∆v/v, while leaving the eccentricity unchanged. The choice of the maneuver start time t does not affect the result. The radius can be decreased if the two ∆v's are negative.
Figure 11.3 Radius-changing maneuver.
Figure 11.4 Eccentricity-changing maneuver.
Figure 11.4 shows a maneuver for changing the eccentricity. First, at time t, as marked with (1), a positive ∆v makes the orbital radius increase by 2a∆v/v, and makes the center move from O toward O′ by a∆v/v. The second ∆v, which is negative and takes place at t + P/2, as marked with (2), makes the orbital radius decrease by 2a∆v/v, while making the center move toward O′ once more by a∆v/v. As a result, the maneuver makes the orbital-circle center move by 2a∆v/v from O to O′, while leaving the radius unchanged. The line OO′ can be set to any desired direction by choosing the maneuver start time t. Figure 11.5 shows a modified version, where the negative ∆v comes first and the positive ∆v comes next, to yield the same result. Thus, there are two kinds of maneuvers; one is for changing the radius alone, with the parameters being
+∆va, +∆va, unspecified t
+∆vb , - ∆vb , specified t
Then by the principle of superposition, a two-impulse maneuver with parameters
∆va + ∆vb, ∆va − ∆vb, specified t
changes an orbit to any targeted orbit with the desired radius and eccentricity. This maneuver is doable if the satellite has thrusters facing the east and the west (i.e., no radial thrusters are required), and this is actually the way in which satellites control their orbits.
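The bookkeeping of the two-impulse strategy can be sketched in a few lines of Python. This is a sketch of the magnitudes only, under the small-∆v relations (11.7) and the eccentricity rule of Figure 11.4; the burn phase (the "specified t"), which sets the direction of the eccentricity change, is deliberately left out, and the function name is our own, not one from the text:

```python
import math

MU = 398_600.4418e9          # Earth's gravitational parameter [m^3/s^2] (standard value)
a = 42_164.2e3               # synchronous radius [m]
v = math.sqrt(MU / a)        # circular-orbit velocity [m/s]

def two_impulse_maneuver(delta_a, delta_e):
    """Tangential burn pair, half an orbit apart, per Section 11.3.

    Each burn of dv changes the radius by 2*a*dv/v (11.7); an opposing
    pair (+dv, -dv) instead moves the orbit-circle center by 2*a*dv/v
    (Figure 11.4).  Returns (first burn, second burn) in m/s.
    """
    dv_a = v * delta_a / (4 * a)     # radius part: 4*a*dv_a/v = delta_a
    dv_b = v * delta_e / 2           # eccentricity part: 2*a*dv_b/v = a*delta_e
    return dv_a + dv_b, dv_a - dv_b

# Example: raise the radius by 2 km while changing eccentricity by 0.0002.
burn1, burn2 = two_impulse_maneuver(2000.0, 0.0002)
print(f"burn 1: {burn1:+.4f} m/s, burn 2: {burn2:+.4f} m/s")
```

The superposition is visible in the returned pair: the sum of the two burns carries the radius change, while their difference carries the eccentricity change.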
Figure 11.5 Eccentricity-changing maneuver—modified version.
11.4 Inclination Maneuver
One more type of orbital maneuver is designed to change the inclination of the orbital plane, as illustrated in Figure 11.6. A satellite is in a circular orbit with a velocity v, and the orbit lies in the equatorial plane. At the moment the satellite passes point P, a ∆v pointing to the north is applied. This makes the orbital plane incline from the equatorial plane by an angle
i = ∆v/v    (11.10)
Since the new velocity v1 has virtually the same magnitude as the original v, the orbital energy remains unchanged, and so does the orbital radius. Figure 11.7 illustrates a different view of the inclination maneuver. A satellite revolving around O at a velocity v has an angular momentum of
H = av
Here, a is the orbital radius, and H as a vector is perpendicular to the orbital plane. Applying the ∆v is equivalent to applying an impulsive torque T to the orbital plane, as illustrated in the figure, and it causes a change ∆H in the angular momentum. This makes the vector H become oblique, as H₁. To this H₁ the orbital plane must be perpendicular. So, the orbital plane becomes inclined by
Figure 11.6 Inclination maneuver.
Figure 11.7 Inclination maneuver from a different view.
i = ∆H/H
Since ∆H = a∆v, we have the same result, i = ∆v/v. The orientation of H as a vector is important because it determines the orientation of the orbital plane. Let us then consider a unit vector made from H, and project the unit vector onto the equatorial plane. This is illustrated in Figure 11.8, where the projected vector lies as u in the equatorial x-y plane, with the magnitude of u measuring the angle of inclination. Any orbital plane is represented in this way by using a u vector.
Figure 11.8 Projected unit vector u to represent orbital plane.
Figure 11.9 Maneuver takes place when satellite is at P or Q.
Suppose, in Figure 11.8, that u is for the orbital plane of our satellite, and uT is for a target orbital plane that should be reached next. Then we need a change ∆u to occur, and this occurs if an impulsive torque T acts on the orbital plane, as illustrated in Figure 11.9. For this torque to act, the satellite must generate a ∆v at the moment it passes either P or Q; P is 90 deg ahead of the direction of the desired ∆u, and Q is opposite P. If at P, the ∆v points toward the reader, that is, to the north, while at Q it points to the south. The target is then reached if the magnitude of ∆v is set to ∆v = v|∆u|. In this way, a satellite changes its orbital plane to a targeted orbital plane through a single-impulse maneuver by using a thruster facing the north or south.
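Equation (11.10) makes the cost of a plane change easy to estimate. A brief Python sketch (the 0.1-deg example is our own illustration, not a case from the text):

```python
import math

MU = 398_600.4418e9      # Earth's gravitational parameter [m^3/s^2] (standard value)
a = 42_164.2e3           # synchronous radius [m]
v = math.sqrt(MU / a)    # about 3075 m/s at the synchronous radius

def inclination_dv(delta_i_deg):
    """Single-impulse north-south dv for a plane change of delta_i, per (11.10)."""
    return v * math.radians(delta_i_deg)

# Removing 0.1 deg of inclination costs about 5.4 m/s.
print(f"{inclination_dv(0.1):.2f} m/s")
```

Because v is large at the orbital radius, even small inclination corrections are expensive compared with the in-plane burns of Section 11.3.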
12 Orbital Perturbations
We saw in the previous chapter that an extra force acting impulsively on the satellite changes the satellite's orbit. In this chapter we discuss various types of extra forces that are not generated by the satellite itself, but originate from various sources in the space environment. These forces are small in magnitude, and they act on the satellite continuously over long periods. The resulting orbital changes occur gradually, at a slow pace as time passes; such changes are called perturbations. Perturbations are small at first, but some types grow larger with time, so that the orbit finally becomes nonstationary. This is a serious problem in the orbital operation of satellites. We analyze the problem in this chapter and determine the effects of such perturbations.
12.1 Perturbing Forces
The forces that cause perturbations to the satellite orbit, or the perturbing forces as we will refer to them, depend on the type of orbit. For a geostationary orbit, four major forces act on the satellite:
• A force resulting from the nonspherical shape of the Earth;
• A force caused by solar radiation pressure;
• The gravity force of the sun;
• The gravity force of the moon.
These forces are extremely small compared with the two-body gravitational force of the Earth, so their effect on a satellite's orbit is small at first. In some cases, however, the small changes accumulate as time passes, to grow larger and become visible. In other cases, the small changes partially add to each other and partially cancel each other and, hence, do not grow larger as time passes.
We focus our interest here on those perturbations that grow larger with time. We refer to these as long-term perturbations. In the following sections, we analyze the long-term perturbations caused by the four major forces listed above. The forces produce perturbations through different mechanisms, so we will analyze them separately, one by one. This discussion may appear lengthy at first sight, but using the simple diagrams from Chapter 11 will help us develop a concise, straightforward theory of perturbations.
12.2 Nonspherical Shape of the Earth
If we cut the Earth into halves at its equator, the cross section looks almost circular. Precisely speaking, it is not circular but slightly elliptical, with its maximum radius and minimum radius differing by no more than 140m. Suppose our stationary satellite is placed in the geometry illustrated in Figure 12.1, where the elliptical shape of the equatorial cross section is exaggerated. If we assume a two-body problem, the satellite would be pulled toward the Earth's center O, but actually the pulling force is slightly deflected owing to the extra mass near the bulging part A. So, the force acting on the satellite has a small, accelerating component F in the tangential direction. We will show that this force F causes long-term perturbations in the satellite's longitude.
The force F vanishes, owing to symmetry, if the satellite is directly above the bulging part (A or A′) or if it is 90 deg away from the bulging part (B or B′). On the world map, B and B′ are located in the Indian Ocean and the Eastern Pacific Ocean. The tangential force F thus depends on the stationary longitude λ of the satellite; it is known as a function F(λ) and is plotted in Figure 12.2.
Figure 12.1 Equatorial section of the Earth makes an ellipse.
Figure 12.2 Tangential force as a function of longitude.
The force F acting on the satellite is small wherever the longitude is, so the force would not cause any significant change to the orbit during one orbital revolution or so of the satellite. One can then assume that a constant F acts on the satellite during one revolution. This approximation allows us to analyze the orbital change as follows. The force F acting on the satellite over a short period ∆t causes its velocity to increase by ∆v = F∆t. Then by (11.7), the orbital radius a will increase slightly by
∆a = (2a/v) ∆v = (2a/v) F ∆t    (12.1)
According to Kepler’s third law, or by (10.9), the orbital revolution rate Ω changes by
∆Ω = −(3Ω/2)(∆a/a)    (12.2)
This ∆Ω becomes the change in the satellite’s drift rate, that is, the rate at which the satellite’s longitude λ drifts with time. This is simply written as
∆λ̇ = ∆Ω
So, from (12.1) and (12.2) we have
∆λ̇ = −(3Ω/v) F ∆t
Since v = aΩ, we have finally
λ̈ = −(3/a) F(λ)    (12.3)
Here, the force F is no longer constant, because λ will drift slowly. This is the equation that determines the drift motion of λ in a long-term perturbation in longitude. Meanwhile, the tangential force F has the effect of moving the center of the orbital circle, as illustrated in Figure 12.3. The ∆v at 1 makes the center move as indicated by (1) in Figure 12.3, and similarly that at 2 as (2), that at 3 as (3), and so on. These movements of the center cancel each other when the satellite completes one revolution. So, there is no long-term increase in the orbital eccentricity; only some oscillatory variation is present. Let us turn to (12.3) for the perturbation in λ. Setting
G(λ) = −3F(λ)/a
simplifies the equation, as
λ̈ = G(λ)    (12.4)
Figure 12.3 Eccentricity perturbation does not grow larger.
Here, it looks as if λ is accelerated by some hypothetical force G. This G as a function of λ is shown in Figure 12.4. Though Figures 12.2 and 12.4 look similar, they have different physical meanings.
12.3 Patterns of Longitudinal Drift
If the force G is a function of position λ, it has a potential U such that
G = −∂U/∂λ
From G(λ) in Figure 12.4 we can derive the potential U(λ), as plotted in Figure 12.5. The potential curve has two peaks and two bottoms, and they correspond to the places in Figure 12.1 with the same markings. If a satellite is placed at B in Figure 12.5, it will not move because it is at the bottom of the potential curve. If the satellite is placed somewhere away from B, for example, at (1), then it will start moving toward B, pass B, and reach (1′). Then it turns back and henceforth will come and go between (1) and (1′) in an oscillating manner. Similar motion will occur, for example, between (2) and (2′). Consider an oscillatory motion with a small amplitude about point B. If the amplitude is small enough, then in Figure 12.4, G(λ) varies linearly with λ about B. Accordingly, the motion becomes a harmonic oscillation. One can then examine the period of the oscillation by measuring the slope of G(λ) against λ at B; the result becomes 740 days. In relation to this type of long-term motion, one opinion is that the oscillatory motion would attenuate gradually such that the longitude would finally
Figure 12.4 Hypothetical force G as a function of longitude.
Radio Interferometry and Satellite Tracking
Figure 12.5 Potential U of force G.
converge to B or B′. Drifting satellites would thus finally accumulate near B or B′. This opinion may come from the analogy that oscillating motions usually have damping factors: for example, in an electric resonance circuit there is some resistance that attenuates the oscillation, and in a mechanical vibration with a spring and a mass there is some friction to damp the oscillation. This is, however, not the case for the orbital longitudinal oscillation; there is no damping in the oscillatory motion. The coming and going motion in longitude will never stop.

Suppose the satellite is placed at (3) in Figure 12.5. It will pass B, get over the peak A′, and pass B′ to reach (3′). Then it turns back and will henceforth come and go between (3) and (3′) in near-round trips. This kind of near-round motion occurs if the starting point is in the region between 143 and 180 deg east. This region exists because peak A is higher than A′; in other words, A–A′ and B–B′ in Figure 12.1 are not in perfect symmetry. The Earth's gravitational field is not simple enough to show neat symmetry.

A model of the Earth's gravitational field, called a gravity model, requires a number of coefficients for a spherical harmonic expansion. Gravity models have been developed one after another in pursuit of better precision, starting from before the 1970s and continuing today. For our case of geostationary satellites, models from the 1980s are precise enough [1]. From such a model the functions F(λ) and G(λ) were set. Using the model together with numerical integration makes it possible to calculate the exact perturbed orbital motion. Figure 12.6 shows the applicable numerical calculations plotted against time in days. The numerical results agree with the qualitative discussions given earlier for the cases of (1)–(1′), (2)–(2′), and (3)–(3′).
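The libration just described can also be sketched numerically. The snippet below is a minimal illustration only, not the gravity model of [1]: it assumes an idealized, symmetric force G(λ) = −G0 sin 2(λ − λB), with the amplitude G0 and the stable longitude λB chosen as round illustrative values (the real field is asymmetric, which is why peaks A and A′ differ). With these assumed numbers the small-amplitude period comes out near 750 days, close to the 740 days quoted above for the real field.

```python
import math

# Illustrative, symmetric stand-in for the hypothetical force of (12.4);
# G0 and LAM_B are assumed values, not taken from the gravity model of [1].
G0 = 0.0020            # amplitude of G, deg/day^2
LAM_B = 75.0           # assumed stable longitude B, deg

def G(lam):
    return -G0 * math.sin(2.0 * math.radians(lam - LAM_B))

# small-amplitude libration about B: start 1 deg east of B, at rest
lam, rate, dt = LAM_B + 1.0, 0.0, 0.25      # dt in days
traj = []
for _ in range(int(1500 / dt)):
    a1 = G(lam)
    lam += rate*dt + 0.5*a1*dt*dt           # velocity-Verlet step
    rate += 0.5*(a1 + G(lam))*dt
    traj.append(lam)

i_min = min(range(len(traj)), key=traj.__getitem__)
T_sim = 2.0 * (i_min + 1) * dt              # max-to-min time is half a period
T_lin = 2.0*math.pi / math.sqrt(G0 * 2.0*math.pi/180.0)   # linearized period
print(T_sim, T_lin)                         # both come out near 750 days
```

Velocity-Verlet is used here because it keeps the oscillation energy stable over an integration spanning several years.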
Figure 12.6 Perturbations in longitude, by numerical integration.
12.4 Solar Radiation Pressure

Suppose, in Figure 12.7, that a satellite is in a circular orbit with radius a and velocity v. The sun is at the right-hand side, and its light is incident in parallel rays on the satellite to generate a pressure force F. For the time being, we assume that the force F is constant in magnitude and direction. The effect of this force is to cause slow variations in the orbital eccentricity, that is, the perturbation of the orbital eccentricity. Our task is to examine how the center of the orbital circle moves.
Figure 12.7 Solar radiation pressure and radial velocity.
The force F acting over a period ∆t causes a velocity increase of F∆t. If the satellite is at (1) in Figure 12.7, this increase has a radial component:
∆v = F ∆t cos θ
Note that the revolution angle θ is measured in reference to the direction toward which F is acting. This ∆v causes no changes in the orbital radius. The ∆v causes, according to (11.9), the orbital circle center to move, from O to O1, by
d = a·∆v/v = (aF/v) cos θ ∆t

So, the center will move at the rate of

ḋ = K cos θ

with

K = aF/v
If we consider this rate of motion as a vector, it corresponds to (1) in Figure 12.8. The vector has its end point on a circle, whose diameter OP has a length K. As the satellite passes to (2), (3), and (4) in Figure 12.7, the vector end point moves to (2), (3), and (4) in Figure 12.8. Note that the sign of ∆v changes from positive to negative during the motion from (2) to (3) in Figure 12.7. When the satellite travels half an orbit, the vector end point traces a circle. The vector in Figure 12.8 can be decomposed into a constant vector and a rotary vector, as illustrated in Figure 12.9. The constant vector has a length K/2,
Figure 12.8 Vector representing the rate of motion.
Figure 12.9 Decomposing into constant and rotary vectors.
and its end point becomes the origin of the rotary vector that traces circularly through (1) to (4). The constant vector yields a steady motion of the center at the rate of K/2, thus yielding a long-term perturbation. The rotary vector will not yield such a long-term effect, because the vectors cancel each other during one revolution; it merely yields some periodic motion.

Let us turn to Figure 12.10, to consider the tangential velocity. If the satellite is at (1) and the force F acts on it during ∆t, the increase in its tangential velocity will be

∆v = F ∆t cos θ
Note that angle θ is measured in reference to the direction orthogonal to F. This ∆v causes, by (11.7), a change in the orbital radius:
∆a = (2a/v)·∆v = (2a/v)·F ∆t cos θ
Figure 12.10 Solar radiation pressure and tangential velocity.
When this change is integrated over one revolution, it becomes zero because cos θ is periodic. That is, the orbital radius does not change in the long term. Meanwhile, by (11.8), the ∆v causes the center to move, from O to O1 in Figure 12.10, by
d = 2a·∆v/v = (2aF/v) cos θ ∆t

So, the center will move at the rate of

ḋ = 2K cos θ
The rate of motion of the center as mentioned above is represented by a vector corresponding to (1) in Figure 12.11. The vector end point is on a circle, whose diameter OP has a length 2K. As the satellite moves through (2), (3), and (4) in Figure 12.10, the vector end point moves through (2), (3), and (4) in Figure 12.11. When the satellite travels half of an orbit, the vector end point traces a circle. Here again the vector is decomposed into a constant vector and a rotary vector. The constant vector has a length K, and this yields the long-term perturbation. Thus, there are two long-term motions, with K/2 and K being their moving rates, and they add to each other. So, the combined motion has the rate of (3/2)K or
ḋ = 3aF/(2v)  (12.5)
This motion is directed orthogonally to the direction toward which the force F is acting.
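The combined rate (12.5) can be checked by direct numerical integration of an orbit. The sketch below uses normalized units (μ = a = v = 1) and an assumed small force F along +x, both illustrative choices; it propagates the two-body motion with the constant force added, then compares the eccentricity growth (the center offset is a·e) with the predicted rate (3/2)aF/v.

```python
import math

MU = 1.0          # gravitational parameter (normalized units)
F = 1.0e-5        # assumed small constant force along +x (illustrative)

def deriv(s):
    x, y, vx, vy = s
    r3 = (x*x + y*y) ** 1.5
    return (vx, vy, -MU*x/r3 + F, -MU*y/r3)   # gravity plus constant force

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv([a + 0.5*dt*b for a, b in zip(s, k1)])
    k3 = deriv([a + 0.5*dt*b for a, b in zip(s, k2)])
    k4 = deriv([a + dt*b for a, b in zip(s, k3)])
    return [a + dt*(p + 2*q + 2*r_ + w)/6
            for a, p, q, r_, w in zip(s, k1, k2, k3, k4)]

# circular orbit with a = v = 1, so one revolution takes 2*pi time units
state = [1.0, 0.0, 0.0, 1.0]
t, dt, revs = 0.0, 2*math.pi/4000, 10
while t < revs * 2*math.pi:
    state = rk4_step(state, dt)
    t += dt

# eccentricity vector from the final state; the center offset is a*e
x, y, vx, vy = state
r = math.hypot(x, y)
rv = x*vx + y*vy
v2 = vx*vx + vy*vy
e = math.hypot(((v2 - MU/r)*x - rv*vx)/MU, ((v2 - MU/r)*y - rv*vy)/MU)

predicted = 1.5 * F * t     # (12.5): growth of e at rate (3aF/2v)/a with a = v = 1
print(e, predicted)         # the two agree within a few percent
```

The small residual difference is the rotary-vector part: a periodic wobble of order K that averages out over each revolution.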
Figure 12.11 Vector representing the rate of motion.
12.5 Position of the Sun

We now need to describe the exact position of the sun. Its position is measured with x-y-z coordinates that make an inertial frame, as illustrated in Figure 12.12. Here, S is the sun, and O is the center of the Earth. The z-axis contains the north pole of the Earth, so the x-y plane is the equatorial plane. Defining the orientation of the x-axis requires a reference direction, which is set as follows.

If we stand at O and observe the sun, it moves around in 1 year to trace a circle with radius R. (In reality it is the Earth that goes around the sun, but here we describe what is observed.) In Figure 12.12, arc AB is a quarter of the orbital circle of the sun. The orbital plane that contains the orbital circle has a fixed orientation in the x-y-z frame, and this plane is inclined from the equatorial plane by an angle of δ0 = 23.4 deg. The orbital plane is drawn as if it were a solid plane so that its inclined geometry may be seen clearly. Now, the orbital plane of the sun and the equatorial plane cross each other at line OA, and on this line we set the x-axis. The moment when the sun crosses the equatorial plane going from the south to the north is referred to as the vernal equinox.¹ The x-axis therefore points to the sun at the moment of the vernal equinox, and this is the standard definition of the x-axis. In Chapter 10 we mentioned the need to establish a reference direction, but left it for later; now this is done. This
Figure 12.12 Position and motion of the sun.
1. The vernal equinox is an instantaneous event. The day that contains this event is called the day of the vernal equinox. What we are referring to is the instantaneous event.
definition of the x-y axes is applied to all figures that appear subsequently in this chapter, unless otherwise specified. The position of the sun is alternatively measured by two angles that appear in Figure 12.12. One is α, the azimuth angle measured along the equatorial plane, which is called right ascension. The other is δ, the angle of separation from the equatorial plane, called declination. Here, the solid orbital plane is partially removed in order to show the geometry of α and δ. The sun leaves point A of the x-axis at the moment of the vernal equinox, and travels in time t to the revolution angle of Ψt. Three lengths are marked with “x” in the figure, and they satisfy the following relationships to Ψt :
cos δ cos α = cos Ψt  (12.6)

cos δ sin α = sin Ψt cos δ0  (12.7)

sin δ = sin Ψt sin δ0  (12.8)
These equations tell us how α and δ vary as the sun moves.
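The geometry of (12.6) through (12.8) is easy to check numerically; for instance, a quarter revolution after the vernal equinox (Ψt = 90 deg) the sun should stand at α = 90 deg and δ = δ0, while at the equinox itself both angles are zero. A small sketch:

```python
import math

D0 = math.radians(23.4)        # inclination delta_0 of the sun's orbital plane

def sun_angles(psi_t):
    """Right ascension alpha and declination delta from (12.6)-(12.8)."""
    delta = math.asin(math.sin(psi_t) * math.sin(D0))        # (12.8)
    alpha = math.atan2(math.sin(psi_t) * math.cos(D0),       # (12.7)
                       math.cos(psi_t)) % (2*math.pi)        # (12.6)
    return alpha, delta

# spot checks: the vernal equinox, and a quarter revolution later
a0, d0_ = sun_angles(0.0)
a1, d1 = sun_angles(math.pi/2)
print(math.degrees(a0), math.degrees(d0_))   # 0 and 0
print(math.degrees(a1), math.degrees(d1))    # 90 and 23.4
```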
12.6 Long-Term Effect

We can now analyze more precisely the effect of the radiation force F. If the sun is away from the equatorial plane, namely, if δ > 0, we must consider the component of the pressure force along the equatorial plane. Hence, the magnitude of F should be in the form of

F = F0 cos δ;  F0 = C·(A/M)  (12.9)
Here, C = 4.56 × 10⁻⁶ [N/m²] is the constant for the flux density of sunlight, and A and M are, respectively, the cross-sectional area and the mass of the satellite. If the satellite is a black body, then A equals its geometrical cross section. If the satellite reflects some of the incident light, then A becomes effectively larger, though never by more than a factor of two. The effective A thus depends on the shape and surface material of the satellite. This dependence may be complex for a large satellite with many antennas and appendages, and in such a case the evaluation of the effective A may have some error. Precisely speaking, the effective A may vary when the direction of the incident light changes, but here we approximate A as being constant. If the reflection is specular, the
effective direction of force may change, but we assume for simplicity that the force is aligned with the direction of the sunlight. From (12.5) and (12.9), the rate of motion of the center is written as

ḋ = K cos δ  (12.10)

Here, the constant K has been reset to

K = 3aF0/(2v) = 3F0/(2Ω)
If the rate of motion from (12.10) is regarded as a vector, its direction depends on α, the right ascension of the sun, as illustrated in Figure 12.13. By writing the vector in x and y components, and using (12.6) and (12.7), we have
ḋx = K cos δ sin α = K cos δ0 sin Ψt

ḋy = −K cos δ cos α = −K cos Ψt
These equations tell us how the center would move in the long term. If the center was at the origin at the vernal equinox, namely, t = 0, it then moves as follows:
dx = d0 cos δ0 (1 − cos Ψt)  (12.11)

dy = −d0 sin Ψt  (12.12)
where
Figure 12.13 Rate of motion in reference to the sun.
d0 = K/Ψ = (3/(2ΨΩ))·C·(A/M)  (12.13)
The motion of the center thus traces out an ellipse in 1 year, as illustrated in Figure 12.14. Set the constants as Ψ = 1.99 × 10⁻⁷ rad/s, Ω = 7.29 × 10⁻⁵ rad/s, and the satellite parameter as, for example, A/M = 0.01 m²/kg. The ellipse size then becomes 2d0 = 9.4 km. Its minor axis is shorter than its major axis by the factor cos δ0 = 0.92. If we assume δ0 = 0 for simplicity, the motion of the center becomes circular, and this would do as well for a preliminary study with moderate accuracy. The long-term behavior of the orbital eccentricity is thus simple if the satellite parameter A/M is given properly.
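The size of the yearly ellipse follows directly from (12.13). A quick check with the constants just given:

```python
import math

PSI = 1.99e-7      # orbital rate of the sun, rad/s
OMEGA = 7.29e-5    # orbital rate of the satellite, rad/s
C = 4.56e-6        # solar flux constant, N/m^2
AM = 0.01          # area-to-mass ratio A/M, m^2/kg (example in the text)

d0 = 1.5 * C * AM / (PSI * OMEGA)    # (12.13) with F0 = C*(A/M)
major = 2 * d0 / 1000.0                          # major axis 2*d0, km
minor = major * math.cos(math.radians(23.4))     # shorter by cos(delta_0), km
print(major, minor)                              # about 9.4 and 8.7
```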
12.7 Gravity of the Sun

When gravity forces from the sun and the moon act on a satellite, they cause gradual changes in the inclination of the orbital plane. This is called the perturbation of the orbital plane. The mechanism of this perturbation is, in principle, the same for the sun and for the moon; they only cause different magnitudes of perturbation because they differ in mass and in distance from the Earth.

Let us consider first the gravity of the sun. Set its position as illustrated in Figure 12.15, where S is the sun and O is the Earth's center. In this figure the x-axis is temporarily set so that the sun lies in the x-z plane. This is different from Figure 12.12, and is done this way to make our discussion easier. Our satellite in Figure 12.15 is at some position (x, y) in the x-y plane. The satellite is pulled by gravity toward the sun, and this pulling force has a z-component F. This F is the perturbing force that acts on the orbital plane.
Figure 12.14 Yearly perturbation of orbital eccentricity.
Figure 12.15 Position of the Sun in a temporary coordinate frame.
Let us denote the position of the sun by x = X and z = Z. The perturbing force F is then given by
F = μ/((X − x)² + y² + Z²) · Z/√((X − x)² + y² + Z²)  (12.14)

Here, the constant μ is the universal gravity constant times the mass of the sun. The distance from O to S is R = √(X² + Z²), so (12.14) is rewritten as follows:

F = μZ/(X² + Z² − 2xX + x² + y²)^(3/2)
  = μZ/(R² − 2xX + x² + y²)^(3/2)
  = (μZ/R³)·1/(1 − 2xX/R² + (x² + y²)/R²)^(3/2)

Since R is much larger than x or y, we neglect the second-order terms of x/R and y/R, hence obtaining

F = (μZ/R³)·(1 − 2xX/R²)^(−3/2) ≈ (μZ/R³)·(1 + 3xX/R²)

As a result we have
F = μZ/R³ + (μZ/R³)·(3Xx/R²)  (12.15)
Let us write the first term on the right-hand side as

F0 = μZ/R³  (12.16)
This F0 is the value that F takes when acting at O. If F and F0 are not equal, a torque arises that operates to rotate the orbital plane. Consider one component of this torque:

T = (F − F0)·x
This is a torque trying to rotate the orbital plane about the y-axis, as shown in Figure 12.15. If our satellite is in a circular orbit with radius a, and its angular rate of motion is Ω, one can write x = a cos Ωt. Then from (12.15) and (12.16), the torque becomes
T = (μZ/R³)·(3X/R²)·x² = (μZ/R³)·(3X/R²)·a² cos² Ωt
We average this torque over one orbital revolution. The factor cos² Ωt then becomes 1/2, and by using the relationships Z = R sin δ, X = R cos δ, we have
T = (3μ/(2R³))·a² sin δ cos δ  (12.17)
This is the torque that causes long-term perturbations. We should also consider the torque about the x-axis, but the torque vanishes, owing to symmetry, after taking an average over one revolution.
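The averaging step can be verified numerically: evaluate the exact torque (F − F0)·x around one revolution using the unapproximated force of (12.14), and compare its mean with (12.17). The numbers below are normalized, illustrative values, chosen only so that R ≫ a as the derivation assumes.

```python
import math

MU, R, A_SAT = 1.0, 100.0, 1.0          # normalized units with R >> a
DELTA = math.radians(20.0)              # sun's declination (illustrative)
X, Z = R*math.cos(DELTA), R*math.sin(DELTA)

def torque(theta):
    # satellite at (x, y) = (a cos(Omega t), a sin(Omega t)) in the x-y plane
    x, y = A_SAT*math.cos(theta), A_SAT*math.sin(theta)
    F = MU*Z / ((X - x)**2 + y**2 + Z**2)**1.5    # exact z-force, cf. (12.14)
    F0 = MU*Z / R**3                              # (12.16)
    return (F - F0) * x

N = 100000
avg = sum(torque(2*math.pi*k/N) for k in range(N)) / N
exact = 1.5 * MU * A_SAT**2 * math.sin(DELTA) * math.cos(DELTA) / R**3  # (12.17)
print(avg / exact)      # very close to 1
```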
12.8 Tilting of the Orbital Plane

If a satellite goes round the Earth, it has an angular momentum H, which is orthogonal to the orbital plane, as noted in the previous chapter. If a torque T acts on this orbital plane over ∆t, it gives an increment of ∆H = T∆t to H, as illustrated in Figure 12.16. Accordingly, the orbital plane tilts by an angle ∆H/H in ∆t. If the torque continues to act, the orbital plane continues its tilting motion at an angular rate T/H.

Figure 12.16 Tilting motion of an orbital plane.

To this orbital plane we attach a unit vector orthogonally, as illustrated in the figure. By observing how the unit vector changes, we can describe the tilting motion. This is illustrated in Figure 12.17, where the unit vector is projected onto the equatorial x-y plane. The projected unit vector should appear somewhere as u, but here its rate of change u̇ is shown instead. This u̇ represents the angular rate of the orbital plane's tilting motion. From (12.17) and H = a²Ω, we can write

u̇ = T/H = L sin δ cos δ  (12.18)

with

L = 3μ/(2R³Ω)  (12.19)
Here, we reset our temporary coordinate frame to the original definition so that the x-axis will point to the sun at the vernal equinox. Regard u̇ in (12.18)
Figure 12.17 The orbital plane has a rate of tilting motion.
as a vector, and see its direction in Figure 12.17. Then its x and y components are
u̇x = L sin δ cos δ sin α

u̇y = −L sin δ cos δ cos α

Hence, by using (12.6) through (12.8), we have

u̇x = L sin δ0 cos δ0 (1/2 − (cos 2Ψt)/2)  (12.20)

u̇y = −L sin δ0 (sin 2Ψt)/2  (12.21)
These equations tell us how u would change in the long term. Assume u = 0 at the vernal equinox, namely, t = 0. Then we find two kinds of changes, or motions. One is from (12.20); its first term yields a motion
ux = (L/2) sin δ0 cos δ0 × t  (12.22)
This is a linear motion, as marked with a (1) in Figure 12.18, which means a steady increase in orbital inclination. Set the constants as R = 1.50 × 108 km and µ = 1.33 × 1011 km3/s2. The increase in the inclination then becomes 0.27 deg per year, and this is the long-term perturbation by the sun. Meanwhile the second term in (12.20) together with (12.21) yields a periodic motion:
Figure 12.18 Orbital plane is tilting in linear and elliptical motions.
ux = −u0 cos δ0 sin 2Ψt

uy = u0 (cos 2Ψt − 1)

where

u0 = L sin δ0/(4Ψ)
This motion traces out an ellipse in half a year, as marked with a (2) in Figure 12.18. If motions (1) and (2) in the figure are generated for 1 year and combined by superposition, the locus becomes as illustrated in Figure 12.19. Here, u1 denotes the long-term motion per year, while 2u0 is the lateral width of the periodic motion, and their ratio is calculated as follows (Y is for 1 year):

2u0/u1 = 2u0/((L/2) sin δ0 cos δ0 × Y) = 1/(ΨY cos δ0) = 1/(2π cos δ0) = 0.17  (12.23)
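The ratio (12.23) can be reproduced by integrating the rates (12.20) and (12.21) over 1 year and measuring the drift and the ripple width directly. The units below are normalized (L = 1, time in years), which cancels out of the ratio:

```python
import math

D0 = math.radians(23.4)
L = 1.0                  # the constant of (12.19), normalized (cancels in the ratio)
PSI = 2.0*math.pi        # sun's rate with time measured in years, so Y = 1

N = 100000
dt = 1.0 / N
ux = uy = 0.0
uy_track = []
for k in range(N):
    t = k * dt
    ux += L*math.sin(D0)*math.cos(D0)*(0.5 - 0.5*math.cos(2*PSI*t)) * dt  # (12.20)
    uy += -L*math.sin(D0)*0.5*math.sin(2*PSI*t) * dt                      # (12.21)
    uy_track.append(uy)

u1 = ux                                    # net yearly drift along x
width = max(uy_track) - min(uy_track)      # lateral ripple width, 2*u0
ratio = width / u1
print(ratio)                               # about 0.17, as in (12.23)
```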
12.9 Gravity of the Moon

The gravity of the moon can be analyzed by the same procedure used above. In Figure 12.12, S is now regarded as the moon, which moves around the orbital circle in 1 month. Set the constants for the moon as R = 3.84 × 10⁵ km, Ψ = 2.66 × 10⁻⁶ rad/s, and μ = 4.90 × 10³ km³/s². The resulting pattern of perturbation is the same as that shown in Figure 12.19, while the pattern is now for the period
Figure 12.19 Perturbation of the orbital plane due to the Sun, for 1 year.
of 1 month, which begins at the moment when the moon crosses the equatorial plane from the south to the north. The rate of increase of the inclination from (12.22) is 0.043 deg per month, or 0.58 deg per year. The per-year rate of increase is thus about two times larger for the moon than for the sun. Although the moon's mass is much smaller, it is much closer to the Earth, and owing to this closeness, the constant L from (12.19) becomes larger for the moon.

Precisely speaking, the case for the moon is a little more complicated. In Figure 12.12, the orbital plane crosses the equatorial plane at line OA; this line is called the line of node. For the case of the sun, the line of node was always on the x-axis. For the case of the moon, however, the line of node may move away from the x-axis, for example, like OA′ in Figure 12.12. The angle between line OA′ and the x-axis, that is, the angle of node, varies maximally to ±13 deg. This is a periodic variation, with its period being 18.6 years. Correspondingly, the pattern of perturbation changes from Figure 12.19 to Figure 12.20. The pattern is rotated around O, and the angle of rotation is equal to the angle of node. As the angle of node changes to positive and then to negative, the pattern is sometimes like that of (1) and sometimes like that of (2).

If we refer to Figure 12.12 once again, the angle of inclination δ0 is no longer constant for the moon, even though it was constant for the case of the sun. The angle δ0 varies between 18.3 and 28.5 deg, while averaging 23.4 deg in the long term. This is a periodic variation, with its period being the same 18.6 years. Consequently, the rate of increase of the inclination from (12.22) varies between 0.48 and 0.67 deg per year, while being centered at 0.58 deg per year. Hence, in Figure 12.20, the size marked with an asterisk (∗) becomes variable. Also, the ratio of the lateral width of periodic motion becomes variable, as suggested by (12.23).
In short, the moon’s orbital plane varies its orientation in the inertial space, and for this reason the perturbation pattern is modulated from Figure 12.19 to Figure 12.20.
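Both secular rates follow from (12.19) and (12.22). The sketch below evaluates them with the constants given in the text, using the mean inclination δ0 = 23.4 deg for the moon as well (its actual value cycles between 18.3 and 28.5 deg, which is what makes the rate vary between 0.48 and 0.67 deg per year):

```python
import math

OMEGA = 7.29e-5              # satellite orbital rate, rad/s
D0 = math.radians(23.4)      # mean inclination of the perturbing body's plane
YEAR = 3.156e7               # seconds per year

def incl_rate(mu, R):
    """Secular inclination rate, deg/year, from (12.19) and (12.22)."""
    L = 3.0 * mu / (2.0 * R**3 * OMEGA)          # mu in km^3/s^2, R in km
    return math.degrees(0.5 * L * math.sin(D0) * math.cos(D0) * YEAR)

sun = incl_rate(1.33e11, 1.50e8)
moon = incl_rate(4.90e3, 3.84e5)
print(sun, moon, sun + moon)   # about 0.27, 0.58, and 0.85 deg per year
```

The sum anticipates the combined sun-moon rate discussed in the next section.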
Figure 12.20 Perturbation of the orbital plane due to the Moon, for 1 month.
Figure 12.21 Combined sun-moon perturbation for different years.
12.10 Sun-Moon Combined Effect

The combined perturbation due to both the sun and the moon is obtained by superposition; that is, the effects of the separate perturbations are added together. The rate of increase of the inclination then becomes between 0.75 and 0.94 deg per year, or 0.85 deg per year on average. If the perturbation patterns for the sun (Figure 12.19) and for the moon (Figure 12.20) are prepared for 1 year and superposed, the result is the patterns shown in Figure 12.21. Each pattern shows large undulations and ripple-like smaller undulations. The large undulations are due to the sun, with a half-year period, and the ripple undulations are due to the moon, with a half-month period. The moon's perturbation in Figure 12.20 shows variable directions like (1) or (2), and this is the reason why the patterns in Figure 12.21 show different directions in different years. The patterns for 2000 and 2020 are nearly identical; this is because the angle of inclination and the angle of node for the moon vary with a period of 18.6 years.

The perturbations shown in Figure 12.21 were calculated by the numerical integration of orbital motion, starting at the day of the vernal equinox. The perturbation patterns we derived by theory are thus in agreement with the numerical results. If the combined sun-moon perturbation is observed over a very long term, meaning sufficiently longer than 18.6 years, it shows on average a steady drift toward the positive direction of the x-axis, at the rate of 0.85 deg per year.

The preceding discussion would suggest that the orbital inclination increases boundlessly as time passes, but this is not the case. There is one more force that begins to act on the satellite when its orbital inclination becomes larger. This force comes from the shape of the Earth. The Earth's shape is oblate, so that its polar radius is shorter than its equatorial radius.
If the satellite stays in or near the equatorial plane, then because of symmetry, the oblateness does not produce any extra force. If the satellite goes away from the equatorial plane, the extra force begins to modulate the inclination perturbation. As a result, the increasing trend of the inclination will stop at a maximum of 14.8 deg, then turn back to decrease, finally reaching 0 after 54 years, and this pattern recurs in cycles [2]. For this reason the present analysis limits its effectiveness to within a few to several degrees of orbital inclination, which is effective enough for the orbits of the near-stationary, operational satellites in which we are interested.
References

[1] Lerch, F. J., S. M. Klosko, G. B. Patel, and C. A. Wagner, "A Gravity Model for Crustal Dynamics (GEM-L2)," J. of Geophysical Research, Vol. 90, No. B11, 1985, pp. 9301–9311.

[2] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994, pp. 87–88.
13 Station Keeping

If a satellite is initially placed in a stationary orbit, it will sooner or later start moving about and away from its nominal stationary position, because perturbations make the orbit gradually change. So, we must make orbital corrections on a regular basis to keep the satellite stationary. This process is called station keeping.

A common practice of station keeping is to determine a fixed boundary for the satellite's position, with boundary lines set at 0.1 deg in longitude and latitude relative to the intended stationary position. A satellite staying inside the boundary is regarded practically as stationary.

In this chapter we discuss the station-keeping method, making use of the formulations of orbital maneuvers and perturbations established in earlier chapters. Perturbations are different for longitudinal and latitudinal motions, so station keeping is considered separately as east-west (EW) keeping and north-south (NS) keeping. Our discussions will clarify how the satellite moves under station keeping and also determine the cost of ∆v for performing the station-keeping function.
13.1 EW Keeping for Drift-Rate Control

If a satellite is regarded as near stationary, its longitude λ varies by not more than a fraction of a degree from its nominal stationary longitude. So, in Figure 12.2 in Chapter 12, we can set the force F(λ) to a constant F at the nominal longitude of the satellite. Assume for the time being that F > 0. Then according to (12.1), the orbital radius a will increase at the rate of ∆a/∆t = 2aF/v. That is,
the radius a increases linearly with time, as illustrated in Figure 13.1. If F(λ) is constant, G(λ) is also constant, and here G < 0. Then from (12.4), λ̈ = G yields a free-fall motion in λ, which plots a parabola as marked with (1) in Figure 13.1.

We must keep λ inside the boundaries λ1 and λ2, where λ1 and λ2 are usually separated by 0.2 deg minus some small allowance for guard bands. So, before λ comes near λ1, we need an orbital correction, #1. The correction is to decrease the radius a, because it has increased too much, and this can be done by means of the radius-changing maneuver described in Chapter 11. If the correction is done properly, the longitudinal drift rate λ̇, or simply the drift rate, changes its sign from negative to positive. The drifting λ will then plot the parabola marked by (2) during the time until the next correction, #2, becomes necessary. The change from (1) to (2) is analogous to the motion of a free-falling ball when it bounces on the floor. Correction #2 takes place in the same way as #1, and this process is iterated regularly. The radius a will thus increase and decrease periodically, while it averages the stationary orbital radius aS. If the value of F is negative, the curves in Figure 13.1 will be upside down.

The parameters of station keeping are determined as follows. In Figure 13.1, T is the period between maneuvers. The parabola has the shape of free-falling motion as given by λ = Gt²/2, so its segment (2) has a height |G|(T/2)²/2. This equals W, the longitudinal width that the satellite occupies during the drift. That is,
W = |G|T²/8
Figure 13.1 Controlling the longitudinal drift rate where a is the orbital radius; λ is the satellite longitude; and t represents time.
where |G| denotes the absolute value of G. Shorter maneuvering periods make the width W narrower. The velocity change ∆v required for a maneuver is equal to FT. That is, the orbital correction offsets the effect of force F as accumulated over the period T.

Choose an example of λ = 117 deg east, at which F takes its maximal value of 66 × 10⁻⁹ m/s², with |G| = 0.0020 deg/day². Then T = 20 days makes W = 0.1 deg, which fits inside the boundaries with a margin. The required ∆v for each maneuver is 0.11 m/s. We are interested in the ∆v required per year, because it is a basic parameter for determining the budget for satellite propellant consumption. The per-year ∆v is estimated simply as F times 1 year, or 2.1 m/s. This is the maximal estimate of ∆v required for drift-rate control.
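The numbers in this example can be reproduced directly from the relations above (the figures are those quoted in the text for 117 deg east):

```python
# EW drift-rate keeping at 117 deg east, the example in the text
F = 66e-9            # tangential acceleration, m/s^2
G_ABS = 0.0020       # |G|, deg/day^2
T = 20.0             # days between maneuvers
DAY, YEAR = 86400.0, 3.156e7

W = G_ABS * T**2 / 8.0       # longitudinal width occupied, deg
dv_each = F * T * DAY        # delta-v per maneuver, m/s
dv_year = F * YEAR           # delta-v per year, m/s
print(W, dv_each, dv_year)   # about 0.1 deg, 0.11 m/s, 2.1 m/s
```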
13.2 EW Keeping for Eccentricity Control

If a satellite is initially placed in a circular orbit, the orbit will soon become noncircular, because it is perturbed by the solar radiation pressure force, as described in Chapter 12. If the orbit has an eccentricity e, then according to (10.18), the satellite position oscillates about its nominal stationary position by ±2ae along the tangential direction. Correspondingly, the longitude of the satellite oscillates by ±2e radians. This oscillation may cause a problem in EW keeping, as discussed next.

Communication satellites tend to have large cross-sectional areas, because they need a lot of power for the mission equipment and, hence, a wide area for the solar array. For example, a cross-sectional area of 120 m² with a mass of 2,000 kg makes an area-to-mass ratio of A/M = 0.06 m²/kg. Refer to Figure 12.14, and set here δ0 = 0 for simplicity. The orbital circle center will move away from O maximally to 56 km, thus giving a maximal eccentricity of 0.0013. The longitude then oscillates with an amplitude of 0.15 deg, or a peak-to-peak width of 0.3 deg. This exceeds the standard longitude-keeping width of 0.2 deg. That is, station keeping is impossible here.

A simple idea for solving this problem is illustrated in Figure 13.2. In this figure, the x-y plane represents the equatorial plane, with the x-axis pointing to the vernal equinox. This setting of the x-y coordinate axes applies to all figures that appear subsequently in this chapter. Now, the drift motion of the orbital circle center starts not from O but from somewhere else, so that its yearly locus (here approximated as a circle) will have its center at O. The eccentricity then becomes constant, while being reduced to half. The longitude oscillation becomes 0.15 deg peak-to-peak, and this would be adequate for station keeping. The ratio A/M, however, may become even larger as the design of communication satellites develops.
So, we need more active control of the eccentricity.
Figure 13.2 Halving the eccentricity.
The control of eccentricity is schematically described as follows. In Figure 13.3(a), the circle with radius d0 represents the above-mentioned yearly drift motion of the orbital circle center. We modify this yearly motion as follows. Divide the yearly circle equally into n segments: (1), (2), (3), …, (n). If n is large enough, each segment approximates a line. Now, look at the motion that makes segment (1). Let this motion start from point 1s, which is given in Figure 13.3(b). The motion will then draw the line segment 1s–1e. If point 1s is chosen right, this line segment has its middle point at O. That is, we shift segment (1) to segment 1s–1e to be centered at O. Similarly, we shift segment (2) to segment 2s–2e, segment (3) to segment 3s–3e, and so forth. This is done by a series of orbital corrections, as follows. When the center has traveled along 1s–1e and has reached 1e, the center is then moved to 2s by an orbital correction. When
Figure 13.3 Controlling the eccentricity where (b) and (c) are magnified about point O.
the center has traveled along 2s–2e and has reached 2e, the center is then moved to 3s by an orbital correction, and so forth. Each correction is done using the eccentricity-changing maneuver described in Chapter 11.

Under this control, each line segment has the length 2πd0/n, so every line segment fits inside a boundary circle with radius πd0/n. The maximal eccentricity is therefore reduced by a factor of π/n compared with that of d0 in Figure 13.2. This is ideal for eccentricity control, if we are ready to do frequent maneuvers.

The ∆v required per year for the ideal control is estimated as follows. The circle in Figure 13.3(a) has a circumference of length 2πd0. Consider a hypothetical maneuver that is designed to move the orbital circle center over a linear path of the same length 2πd0. The ∆v required for this hypothetical maneuver then represents the ∆v required per year. From (11.8), set

∆v = (v/2a)·2πd0

Here, d0 is given by (12.13), to be

d0 = (3/(2ΨΩ))·C·(A/M)

with parameters defined as follows: Ψ: orbital revolution rate of the sun; Ω: orbital revolution rate of the satellite; C: constant for the flux density of sunlight, 4.56 × 10⁻⁶ [N/m²]. As a result, we have

∆v = (3v/(4a))·(2π/(ΨΩ))·C·(A/M) = (3/4)·Y·C·(A/M)

In this equation, v = aΩ was used, and Y = 2π/Ψ is 1 year. If the area-to-mass ratio A/M is given in m²/kg, this result is written as ∆v [m/s] = 108·A/M. For our example of A/M = 0.06, the ∆v estimate is 6.5 m/s. This example suggests that control of eccentricity tends to require more ∆v than control of the drift rate.
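As a quick check of the rule ∆v [m/s] = 108·A/M:

```python
# yearly delta-v for ideal eccentricity control: dv = (3/4)*Y*C*(A/M)
C = 4.56e-6      # solar flux constant, N/m^2
YEAR = 3.156e7   # seconds per year
AM = 0.06        # m^2/kg, the example communication satellite

coeff = 0.75 * YEAR * C      # about 108 (m/s per unit of A/M)
dv = coeff * AM
print(coeff, dv)             # about 108 and 6.5 m/s
```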
Radio Interferometry and Satellite Tracking
13.3 Combined EW Keeping

In the preceding discussions we assumed that the drift rate control and the eccentricity control were planned separately. Combining the two may lead to some economy in the ∆v cost, as follows. In Figure 13.1, we consider planning a drift-rate maneuver and an eccentricity maneuver as a combined set at #1, and similarly at #2, and so on. Let us look closely at #2, for example. Here, the eccentricity maneuver is planned, as illustrated in Figure 13.4, so as to move the orbital circle center from 2e to 3s. Now, suppose that the drift-rate maneuver is done with a single impulse. This makes the orbital circle center move, as illustrated in Figure 11.1 in Chapter 11. If the timing of the single impulse is chosen right, the orbital circle center moves, in Figure 13.4, from 2e to some point 2e′ along the line 2e–3s. The task of the eccentricity maneuver is then to move it from 2e′ to 3s, resulting in some economy of ∆v. The drift-rate maneuver and the eccentricity maneuver can thus be combined into a single maneuver with two impulses, their magnitudes not being equal in this case.

One can consider a more practical way to control eccentricity, as illustrated in Figure 13.3(c). Here, in contrast to part (b), the lines do not pass through O. Accordingly, the distance between 1e and 2s, that between 2e and 3s, and so on, becomes smaller, which means less ∆v is required for the maneuvers. Controlling eccentricity in this way would suit any satellite if its area-to-mass ratio A/M is not too large and if the eccentricity boundary is not too small.

Now, look at a line, for example, 2s–2e in Figure 13.3(c). In reference to an eccentricity boundary circle, this line may look like (a) in Figure 13.5(a) for one satellite, or like (b) or (c) for different satellites. If we observe various satellites at any one time, their orbital circle centers would then be distributed around area (d). This is consistent with Figure 13.5(b), the actual distribution for operational
Figure 13.4 Combined drift rate-eccentricity maneuver.
Figure 13.5 Distribution of eccentricity: theory (a) and actual (b).
satellites as observed from orbital data made public [1]. The orbital data are for a chosen season of the year 2011 when the sun is placed as marked on the figure, and the boundary circle is for e = 0.0005. Most satellites seem to use the practical control, although those distributed close to O may possibly be using the ideal control.
13.4 NS Keeping

The perturbation by the sun and the moon causes a tilting motion of the orbital plane, as illustrated in Figure 12.21. In that figure, the orientation of the orbital plane was represented by a projected unit vector, and its long-term average motion was a drift toward the positive direction of the x-axis. NS keeping is designed to confine the projected unit vector inside a boundary circle whose radius is usually set to 0.1 deg, as illustrated in Figure 13.6. The broken line represents the long-term average motion, so the orbital correction is to move it back toward −x, as marked by (a), before it goes out of the boundary circle. This is done using a maneuver that gives the satellite a northward ∆v when the satellite is at A in Figure 13.7, or a southward ∆v when the satellite is at B. If the correction is for a full 0.2 deg as in Figure 13.6(a), the ∆v is 10.7 m/s in magnitude, from (11.10). The rate of the long-term drift is between 0.75 and 0.94 deg per year, so the ∆v per year is between 40.1 and 50.3 m/s, averaging 45.2 m/s. That is, NS keeping requires more ∆v than EW keeping, by an order of magnitude. This is why the propellant consumption for NS keeping is the major factor determining the operational lifetime of a satellite.

The tilting motion of the orbital plane also has y components, as seen in Figure 12.21 in Chapter 12. Correspondingly, the orbital plane may sometimes
Figure 13.6 NS keeping and the boundary circle.
Figure 13.7 NS maneuver timing.
have to be moved back like (b) or (c) in Figure 13.6. If we must choose a smaller boundary circle, the y-component motion becomes significant relative to the circle, and this may cause trouble. For example, consider a situation like that illustrated in Figure 13.8. A correction now takes place like (a), and the subsequent drift motion becomes like (b), but this motion reaches the boundary too soon. This trouble can be avoided if the correction takes place like (c); the subsequent motion then becomes like (d) and this will keep the drift motion inside the boundary longer. For this to occur, the maneuver is done at some point A′ or B′ in Figure 13.7. This maneuver, however, requires more ∆v because (c) is longer than (a) in Figure 13.8.
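The NS-keeping figures quoted above follow from the simple relation ∆v = v∆i of (11.10), with v = aΩ the orbital speed. A quick check (the values of a and the sidereal day are assumptions here, not restated in the text):

```python
import math

# NS (inclination) correction cost: dv = v * di, with v = a * Omega.
a = 42164.0e3                    # geostationary orbit radius [m] (assumed)
omega = 2 * math.pi / 86164.1    # Earth rotation rate [rad/s] (sidereal day)
v = a * omega                    # orbital speed, roughly 3075 m/s

def ns_dv(delta_i_deg):
    """Delta-v [m/s] for an inclination change of delta_i_deg degrees."""
    return v * math.radians(delta_i_deg)

dv_correction = ns_dv(0.2)   # a full 0.2-deg correction
dv_low = ns_dv(0.75)         # yearly cost at a 0.75 deg/yr drift rate
dv_high = ns_dv(0.94)        # yearly cost at a 0.94 deg/yr drift rate
```

This gives about 10.7, 40.2, and 50.4 m/s; the small differences from the rounded figures in the text come from the constants assumed here.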
13.5 Factors Depending on Satellites

Another factor may increase the required ∆v in relation to the design of the satellite. Theoretically speaking, the thrust for NS keeping should point to the north or to the south, but in practice it may be off-pointed from the north or
Figure 13.8 NS keeping with a smaller boundary circle.
south, as illustrated in Figure 13.9. This avoids the problem of the thrust plume hitting the solar array panel, which would generate an attitude disturbance torque or dirty the surface of the panel. The required ∆v increases in inverse proportion to the cosine of the off-pointing angle. If the NS thrust is off-pointed, the ∆v has a projected component in the orbital plane. If this in-plane component points in the tangential direction, it causes an error in the longitudinal drift rate; this must definitely be avoided. So, the NS thrust is usually set so that its in-plane projection has a radial component only. Consequently, the NS maneuver yields a change in the eccentricity. This change, however, will be canceled if the NS maneuver is done with two equal impulses, the first at A in Figure 13.7 and the second at B.

If the thruster is nevertheless set so that it points directly to the north or south, the NS maneuver may be prohibited during some period of time so
Figure 13.9 Off-pointing NS thrust.
that the unwanted action of the thrust on the solar panel is avoided. The maneuver timing is then shifted, for example, from the original A to somewhere like A′ in Figure 13.7. This makes the orbital correction less efficient, which means more ∆v is required. In this way the ∆v cost of NS keeping may include some surplus that depends on the design of the satellite.

A special satellite design may also affect the NS-keeping maneuver. A very large antenna on board the satellite may have a flexible structure if it is made as light as possible. Such a structure may start a slow vibrating motion when excited by the strong impulse of an NS maneuver, and it takes time for the vibration to attenuate. If such vibration is undesired, the NS maneuvers must be planned with increased frequency so that the impulse of each maneuver becomes small enough to keep the excited vibration as small as desired. Ultimately, the maneuvers are planned to take place every time the satellite passes A or B, or both, in Figure 13.7. This style of NS maneuver would suit a satellite with electric or ion thrusters. As a result, the orbital inclination will be kept small. In this way, NS keeping depends on the individual satellite.

Figure 13.10 shows the distribution of the orbital inclinations of operational satellites, as observed in January 2011 from orbital data made public [1]. The distribution spreads within the boundary circle of 0.1 deg without showing any significant features. Hence, we can say simply that a satellite under NS keeping undergoes an oscillating NS motion with an amplitude mostly smaller than 0.1 deg, while how the oscillation amplitude varies with time differs from satellite to satellite.
Figure 13.10 Distribution of orbital inclination.
Reference

[1] “Space Track,” http://www.space-track.org/perl/login.pl.
14 Overcrowding and Regulations

If geostationary satellites were observed collectively as a whole, they would look like a large ring circling the Earth at an altitude of 35,786 km above the equator. This is referred to as the geostationary orbital ring. It is crucial for communications, broadcasting, and weather satellite services, but its spatial capacity for placing satellites is not infinite. There is growing concern about this capacity as more and more satellites are launched into the orbital ring.

In this chapter, we review the current regulations for the placement of satellites in orbit, mainly from the viewpoint of orbital safety. In the following discussion, the wording geostationary orbit, or simply orbit, does not signify any particular orbit of a satellite; it instead refers to the above-mentioned geostationary orbital ring as a collective entity.
14.1 Orbital Regulations

The International Telecommunication Union (ITU) has established rules to regulate the use of the geostationary orbit. The rules apply to satellites that emit radio waves in the orbit. Because we can hardly think of a satellite that does not emit any radio waves, the regulations apply to virtually every satellite. The regulation prescribes the following procedure [1]: Anyone who plans to place a new satellite in the orbit must publish in advance, via ITU, its information including orbital position in longitude, the frequencies of the satellite’s radio emissions, and other relevant items. This information is then input for coordination; that is, to examine whether the new satellite would cause any radio interference with the radio frequencies already in use by existing satellites. If any possibility of interference is identified, the new satellite must modify its orbital position or radio frequencies so as to prevent the interference. This coordination is based on the first-come, first-served principle.

When the coordination is finished, the new satellite has its own orbital position and radio frequencies assigned. The assignment information is written into a database document, the master register, as it is called. Frequencies recorded in the master register indicate that their use has been authorized by the ITU and that they are protected from interference as long as they are in use by the satellite operating at its assigned orbital position. This is a rough sketch of the coordination-assignment procedure; more details are described in [2].

The position in longitude assigned to the satellite becomes its nominal longitude when the satellite operates in the orbit. The satellite must be kept inside ±0.1-deg boundaries relative to the nominal longitude, as recommended by the ITU [3]. The longitude-keeping zone thus has a width of 0.2 deg, which is called a longitude slot or an orbital slot. A narrower slot width would allow more slots to exist and, hence, more efficient use of the orbit. On the other hand, it increases the workload of station keeping, with EW maneuvers taking place at shorter intervals, as discussed in Chapter 13. The slot width was determined by considering all of these factors.

The slot width is thus prescribed in longitude, but not in latitude, by the regulation. In terms of the quality of communication services, the satellite should be stationary equally in longitude and in latitude, so the common practice of station keeping is to apply the same standard of ±0.1 deg to both longitude and latitude.
This decision is made by the satellite operators, not by the ITU; in fact, some satellites move in latitude by more than 0.1 deg when this does not cause trouble in terms of users’ access to the satellite.

Another rule applies to a satellite when it nears its end of life. The satellite must be transferred, before its propellant is used up, to an orbit outside the geostationary orbit so that it will never cross the geostationary orbit. The satellite is switched off after it has reached the outer orbit, as recommended by the ITU [4]. The satellite then undergoes uncontrolled orbital motion, with its eccentricity changing with time owing to the solar radiation perturbation (see Chapter 12). Accordingly, its perigee altitude changes with time, but the perigee must not come close to the geostationary orbit. The recommendation takes this factor into account and specifies a target radius for the outer orbit that depends on the satellite’s area-to-mass ratio.

Orderly, efficient, and safe use of the geostationary orbit is thus maintained, in theory, if every satellite follows the regulations.
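The recommendation’s expression is not reproduced in the text; a commonly cited form of the re-orbit rule (from the IADC debris-mitigation guidelines, so treat the exact numbers as an external assumption) makes the A/M dependence concrete:

```python
# Minimum disposal-orbit altitude above GEO [km], in the widely cited
# IADC form: dH = 235 + 1000 * Cr * (A/M). Cr is the solar radiation
# pressure coefficient (typically between 1 and 2); both the formula
# and the default Cr here are assumptions, not quoted from the text.
def reorbit_altitude_km(area_to_mass, cr=1.2):
    return 235.0 + 1000.0 * cr * area_to_mass

dh = reorbit_altitude_km(0.06)   # example satellite, A/M = 0.06 m^2/kg
```

A larger A/M means a stronger solar-radiation perturbation, hence larger eccentricity excursions of the uncontrolled orbit, so the disposal orbit must be placed higher; this is exactly the dependence described above.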
14.2 Problem of Overcrowding

The theory faces some difficulty, however, when there are too many satellites in the geostationary orbit. Because it is beneficial to be a first comer in the coordination procedure, early entries tend to crowd into the procedure, complicating the coordination. Once coordination is finished and a new satellite has had its orbit and frequency assigned, sometimes the supposed satellite never comes into the orbit because its plan had a poor technical or financial basis, or no basis at all. A satellite that exists only in the master register document is called a paper satellite. Paper satellites make the coordination task even more complex, thus hindering the efficient use of the precious orbital positions and frequencies.

Another problem arises when an orbital slot is assigned twice. The motive of coordination is solely to prevent radio interference, so the process does not check whether two or more satellites that use different frequencies are being assigned to one and the same slot. The satellites going into the same orbital slot may experience close encounters when they start operating. This problem belongs to the satellite operators, because the ITU takes no responsibility, and it becomes more serious if the satellites belong to different operators or different nations. Usually the operators are able to keep track of their own satellites only, so determining whether the satellites are coming too close to each other is a matter of concern for orbital safety.

One could consider either a regulatory approach or a technical approach to these problems. The former would require a reorganization of the rules and an improvement in the procedure of coordination and assignment. If this attempt is too difficult, then we need to turn to a technical approach. One idea is satellite monitoring, in which satellite downlinks are received and the satellites’ orbital positions are identified in some organized way.
Exact monitoring provides the latest information about the use of orbits and frequencies, which is valuable for coordination. Unused assignments in the master register, if any, would also be revealed. Also, precise satellite positions known from monitoring will help maintain orbital safety. This idea will grow in significance as still more satellites go into the orbit from now on.
References

[1] ITU, Handbook on Satellite Communications, New York: Wiley Interscience, 2002, Chap. 9.

[2] Elbert, B. R., The Satellite Communication Applications Handbook, Norwood, MA: Artech House, 2003, Chap. 12.
[3] ITU, “Station-Keeping in Longitude of Geostationary Satellites in the Fixed-Satellite Service,” ITU-R Recommendation S.484-3, Geneva: ITU, 1992.

[4] ITU, “Environmental Protection of the Geostationary-Satellite Orbit,” ITU-R Recommendation S.1003-1, Geneva: ITU, 2004.
Part III Interferometric Tracking
15 Overview of Part III: Interferometric Tracking

Part III of this book discusses how interferometers can be used for satellite tracking and orbit estimation, and features actual application cases. Discussions are based on Parts I and II, which will be referred to in many places.

The first topic for us to discuss is this: What is orbit estimation? What information is gathered by tracking? How should the information be processed, using what kind of software? The concept and principle of orbit estimation are described in Chapter 16. The interferometer must gather good information in order to provide good orbit estimation, and this consideration leads to the idea of a prototype interferometer. The prototype interferometer is developed into a realistic form in Chapter 17, where we discuss interferometric tracking in terms of the usual azimuth and elevation angles and connect it to the orbit estimation principle. These chapters thus show the physical meaning of tracking and orbit estimation.

After those two chapters, our discussion turns to particular subjects. Satellite tracking is done for different purposes and, correspondingly, different kinds of interferometers can exist. If a number of satellites are crowded into an orbit, it is particularly important to control their orbital longitudes. Chapter 18 demonstrates the case of an interferometer that has been designed specifically for tracking satellite longitudes, and examines its use for precise longitude keeping. If station keeping must achieve higher accuracy from now on, yet avoid the use of sophisticated hardware, then one option is to combine the interferometer with another type of tracking. Thus, Chapter 19 illustrates the case of a combined system with interferometry and ranging capabilities. An interferometer that uses optical cables shows good tracking performance, which in turn can provide accurate orbit estimation.
If two or more satellites come too close to each other in the overcrowded orbit, their relative motion is a sensitive problem. Consequently, Chapter 20 covers the case of an interferometer that has been designed specifically for differential tracking. It tracks the relative motion of satellites with precision, while using simple hardware.

Chapter 21 discusses a specially designed interferometer that has a mechanically movable baseline. Its special design is aimed at removing the problems of phase error and ambiguity that are inherent in interferometry. As a result, the interferometer becomes able to watch and monitor the orbit of any satellite, known or unknown, with precision. A powerful tool for tracking and orbit estimation thus becomes reality. All of these interferometers are real cases, not just theories on paper. In Chapter 14 we addressed the problem of the overcrowded geostationary orbit and suggested that accurate orbital monitoring should be the key to the problem. The interferometers we are now going to see will surely give us that key.

Finally, Chapter 22 discusses a different kind of interferometer, one that has an inverted geometry: the antennas are high up in the orbit and they point toward the Earth. Its purpose is to track down an illegitimate Earth station that is emitting unwanted signals. This kind of interferometer is in growing demand as more and more cases of RF interference occur. The chapter discusses the fundamental principles of this important interferometer. Although this interferometer has a purpose other than satellite tracking, it is closely related to the interferometer for tracking and orbit estimation, as we will see.

Part III thus draws a complete picture of what interferometry can do for satellites in the geostationary orbit, dealing with orbital overcrowding and increasing RF interference.
16 Tracking and Orbit Estimation

If an interferometer or any other equipment is used in satellite tracking, its purpose is to acquire data needed for orbit estimation. So, before considering interferometers in detail, the reader should be informed about orbit estimation. This includes the principle of orbit estimation, an outline of estimation software, and the operational conditions of orbit estimation. This chapter covers these topics in simple terms, so that their physical meanings can be pictured. In the following sections, tracking and orbit estimation are described first as a general concept. Later, the use of an interferometer is discussed, and finally, the significance of the interferometer in satellite tracking is pointed out.
16.1 General Concept

Tracking and orbit estimation of a geostationary satellite are illustrated, as a general concept, in Figure 16.1. There is a target satellite that emits radio signals, and the signals arrive at an Earth station with tracking equipment, which is a radio interferometer in our particular case of interest. In more general cases the tracking equipment is a ranging system, which uses signals that make the round trip between the Earth station and the satellite, and there may be two or even more Earth stations involved. In some cases an angle-pointing antenna is also used. Whatever the equipment may be, tracking continues over hours and days to yield a set of observation data, and the data set contains information on the orbital motion of the satellite.

Orbit estimation is a process of extracting the orbital information from the tracking observation data, and it works in a software system as follows. First, an initial guess of the position and velocity of the satellite is made. This is done
Figure 16.1 Satellite tracking and orbit estimation.
by assuming a satellite that is ideally stationary at its nominal position. Using the initial guess, a software unit called an orbit generator will determine where the satellite was supposed to be when the tracking observations were made. This is then input to a tracking model, which calculates theoretically what the values of the observation data should have been. One can then compare the calculated data with the observed data to determine the difference between them. This is the “Observed minus Calculated,” or “O minus C,” entry that appears as O−C in Figure 16.1. The O−C would be small if the initial guess was correct, but this does not usually occur; some correction must be applied to the position and velocity. This correction is given as the O−C times a coefficient, with the coefficient determined depending on how the tracking was accomplished.

The orbit estimation software is thus a kind of simulator. It simulates the motion of the satellite and the function of the tracking equipment as existing in the real world. The smaller the O−C, the better the simulation is performing. If the O−C becomes small enough, the position and velocity represent the true orbital motion; this is the principle of orbit estimation.
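The correction loop just described can be shown with a deliberately tiny stand-in for the real software (a straight-line “orbit” and position “observations”; all names here are mine, not the book’s):

```python
import numpy as np

# Batch-style O-C correction on a toy problem: the "satellite" moves
# as x(t) = x0 + v0*t and the tracking data are its positions at t_i.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
true_state = np.array([10.0, 2.5])            # true (x0, v0)
observed = true_state[0] + true_state[1] * t  # tracking observations

guess = np.array([0.0, 0.0])                  # initial guess (nominal position)
H = np.column_stack([np.ones_like(t), t])     # d(calculated)/d(state)
for _ in range(4):                            # a few correction iterations
    calculated = guess[0] + guess[1] * t      # "tracking model"
    o_minus_c = observed - calculated         # O - C
    guess = guess + np.linalg.lstsq(H, o_minus_c, rcond=None)[0]
```

Because this toy model is linear, the O−C vanishes after the first correction; with a real orbit generator the model is nonlinear and the loop needs the few iterations discussed in the next section.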
16.2 Styles of Orbit Estimation

The process of orbit estimation may be viewed along the time axis, as illustrated in Figure 16.2. The initial guess of position and velocity is set at some reference time t0. The observation data are collected at times t1, t2, …, tn. The observed data may generally be a mix of various types of measurements made at different Earth stations. The type and station for each data point must
Figure 16.2 Orbit estimation timeline for batch processing.
have been specified by some tracking schedule. Correspondingly, the calculated data are prepared so that they match the same tracking schedule, and this allows the O−C to be obtained over the tracking period from t1 to tn. This O−C determines how much the position and velocity should be corrected. After the correction, the newly obtained O−C will be smaller than the previous O−C. This process of correction is iterated, perhaps three or four times but not much more, until the O−C value is reduced enough that the position and velocity need no more correction. The position and velocity thus obtained supply the orbital elements at reference time t0. Orbit estimation is thus a process of improving the initial guess of the orbital elements until the guess becomes accurate.

Another style of orbit estimation is possible, as illustrated in Figure 16.3. Observation data are collected in steps at times t1, t2, …, ti, ti+1, …, over a prolonged period of time; the position and velocity are also estimated at each time step. In this context the position and velocity are regarded as forming a six-dimensional vector, which is referred to as a state vector. That is, a state vector will
Figure 16.3 Orbit estimation timeline for sequential processing.
be estimated at each step ti. Suppose that, at the present time step ti, we have estimated the state vector. When we have a new observation data point at next time step ti+1, the action for estimation is as follows. From the last estimate of the state vector at ti, predict the state vector at ti+1 using an orbit generator. From this prediction, make the calculated data at ti+1 using a tracking model. One can then make the O−C at ti+1. Note that this O−C is a small data set, because it corresponds to the data collected at a single step only. Using this O−C, improve the predicted state vector at ti+1. That is, we advance the state vector from ti to ti+1 by simple prediction, and improve it by using the observation data obtained at ti+1. This step-by-step improvement from ti to ti+1 will be repeated, from ti+1 to ti+2, from ti+2 to ti+3, and so on. In this way we obtain at each time step the most likely estimate of the position and velocity of the satellite. At t1 we set an initial guess of the state vector in the same way as mentioned before.
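A minimal stand-in for this predict/improve cycle is a scalar Kalman-style filter (a sketch with made-up numbers, not the book’s software):

```python
import numpy as np

# Sequential estimation sketch: a single slowly drifting state is
# predicted forward at each step, then improved by a gain times the
# O-C formed against the new observation.
rng = np.random.default_rng(0)
true_x, drift = 5.0, 0.1
x_est, p = 0.0, 100.0          # state estimate and its variance
r, q = 0.5**2, 0.01            # measurement and process noise variances

for _ in range(50):
    true_x += drift
    z = true_x + rng.normal(0.0, 0.5)   # observation at t_{i+1}
    x_pred = x_est + drift              # predict (the "orbit generator")
    p_pred = p + q
    o_minus_c = z - x_pred              # O-C from the "tracking model"
    gain = p_pred / (p_pred + r)        # correction coefficient
    x_est = x_pred + gain * o_minus_c   # improve the predicted state
    p = (1.0 - gain) * p_pred
```

The gain plays the role of the correction coefficient: a confident prediction (small predicted variance) ignores most of the O−C, while a poor one follows the observation closely.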
16.3 Choice of Estimation Style

The first type of orbit estimation discussed above is called batch estimation; the second is called sequential estimation. Theoretically speaking, the two will yield the same result if the forces acting on the satellite are known exactly and are reflected in the orbit generator. Sequential estimation, however, may provide better performance, for example, in a case like the following.

Suppose that, in Figure 16.2, an orbital maneuver takes place at some time between t1 and tn. The force applied to the satellite during the maneuver is known theoretically, but in reality the thruster may generate a force slightly greater or smaller in magnitude than planned. To take this into consideration, the ∆v produced by the maneuver is set as an additional unknown parameter to be estimated together with the orbital elements. If more parameters are to be estimated, generally more observation data are needed, so we set a longer tracking period. If the tracking period becomes longer, then another maneuver ∆v may occur, so we need to set the tracking period even longer, thus falling into an endless spiral. This can happen if orbital maneuvers take place as frequently as in the case mentioned in Chapter 13 for NS keeping.

This problem can be dealt with by means of sequential estimation. Suppose that, in Figure 16.3, a maneuver occurs at some time between ti and ti+1. The state vector is predicted from ti to ti+1 with the maneuver ∆v taken into account, while some inaccurate part of the ∆v will cause an error in the predicted state vector at ti+1. Correspondingly, the O−C made at ti+1 will show an increased value. Hence a correction is applied to the orbital state at ti+1. That is, the observation made at ti+1 is used to correct for the error caused by the inaccurate part of the maneuver ∆v. A correction will be made at ti+1 if the maneuver
inaccuracy was small and the observation data at ti+1 were good; otherwise, the correcting action will continue for more time steps.

If we have to track an unknown satellite, such as a satellite of a different operator or a different nation, we do not know when and how a maneuver takes place, if it takes place at all. In such a case it is the ∆v itself, not its inaccurate part, that causes the error in the predicted state vector. A large O−C will then appear, telling us that a maneuver has occurred. The O−C will be reduced again as time steps pass, and the rate of reduction depends on the magnitude of the ∆v. Sequential estimation will thus be the only possible estimation method if we have to monitor the orbit of an unknown satellite in the overcrowded environment of the stationary orbit.
16.4 Software Units

The orbit estimation software includes various units, as illustrated earlier in Figure 16.1. Their functions are outlined concisely as follows.

The orbit generator advances the position and velocity of a satellite, or the state vector, from any time t0 to any t1 while considering the forces acting on the satellite, as illustrated in Figure 16.4. The forces include the two-body attraction of the Earth and the perturbing forces. The major perturbing forces are the four that were discussed in Chapter 12. The forces are added up and integrated with respect to time using a numerical integration method. Though we worked out the perturbations by theory in Chapter 12, here in orbit estimation we usually rely on numerical integration because the numerical method provides better accuracy than the theory. Calculation of the perturbing forces requires the use of various data sources: the ephemeris of the sun and moon, a table defining the nonspherical shape of Earth’s gravity potential, and the instantaneous orientation of Earth in inertial space at a given time. It refers also to a table of maneuvers that is used to determine whether any ∆v is planned. In
Figure 16.4 Orbit generator outline.
batch estimation (Figure 16.2) the orbit generator advances from t0 to any time between t1 and tn, whereas in sequential estimation (Figure 16.3) it advances stepwise from ti to ti+1; both functions of time-advancing can be provided by the same orbit generator. The orbit generator may be used with any satellite, with the exception of the maneuver ∆v, which depends on the satellite’s hardware.

The numerical integration tends to require complex software code, especially if its goal is high precision. To ease this problem, a simplifying method can be used. This method decomposes the orbital motion into the two-body motion and the perturbed motion. The motion resulting from the two-body attraction is solved exactly by Kepler’s laws, whereas the motion resulting from the perturbations is calculated by numerical integration. The two results are then summed. This reduces considerably the workload of numerical integration, because the perturbing forces are very small compared with the two-body attraction. The result is a high computation speed accompanied by a reduction in the size and complexity of the numerical integration code [1].

The tracking model of Figure 16.5 illustrates how the tracking equipment functions when linked with the satellite. If the equipment is an interferometer, in our particular case of interest, there are two antennas or more in the Earth station. Their positions are given in the Earth-fixed coordinate frame. So, the position of the satellite is converted into the same Earth-fixed frame by referring to the Earth orientation. Once the satellite and the antennas are positioned in a common coordinate frame, the signal propagation paths from the satellite to the antennas can be determined by geometry. This allows for theoretical calculation of the phase differences between the antennas, hence the “calculated” values. The tracking model thus reflects the design and installation of the tracking equipment.
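Returning to the orbit generator of Figure 16.4, its core is just “sum the accelerations and integrate numerically.” A minimal planar sketch with the two-body term only (a real generator adds the perturbing forces at the marked line; the constants and RK4 step size are my assumptions):

```python
import numpy as np

# Toy orbit generator: two-body acceleration integrated with RK4.
MU = 3.986004418e14            # Earth's GM [m^3/s^2]
A_GEO = 42164.0e3              # geostationary radius [m]

def accel(r):
    # Two-body attraction; perturbing accelerations would be added here.
    return -MU * r / np.linalg.norm(r) ** 3

def rk4_step(r, v, dt):
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
    k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
    k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
    r_new = r + dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
    v_new = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return r_new, v_new

r = np.array([A_GEO, 0.0])
v = np.array([0.0, np.sqrt(MU / A_GEO)])   # circular orbital speed
for _ in range(1440):                      # one day in 60-s steps
    r, v = rk4_step(r, v, 60.0)
```

Propagating one day in 60-s RK4 steps keeps the radius of this circular orbit constant to well under a meter, illustrating the kind of accuracy argument that favors numerical integration.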
As the signal from the satellite propagates through the
Figure 16.5 Tracking model outline.
atmosphere, its refraction effect is taken into account so that the calculation will correspond to the actual observation.

The orbit generator can be regarded as an input/output system; that is, a state vector at t0 is input and a state vector at t1 is output. One can then create a differential relationship in the form ∂(output)/∂(input), which forms a matrix. Similarly, a differential relationship is set up for the tracking model, and by using these relationships we can determine the correction coefficient, also a matrix, that should be multiplied by the O−C. The correction-coefficient matrix is determined on the basis of the least-squares principle for batch estimation, or on the basis of an algorithm typically known as Kalman filtering for sequential estimation [2].
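The differential relationship ∂(output)/∂(input) can be built numerically, one column per input parameter, by finite differences; a sketch on a trivially simple stand-in generator (all names are mine):

```python
import numpy as np

# Toy "orbit generator": advance a 1-D state (position, velocity) by dt.
def generate(state, dt=10.0):
    x, v = state
    return np.array([x + v * dt, v])

def jacobian(f, state, eps=1e-6):
    """Finite-difference d(output)/d(input) matrix of f at state."""
    out0 = f(state)
    cols = []
    for j in range(len(state)):
        pert = state.copy()
        pert[j] += eps          # perturb one input parameter
        cols.append((f(pert) - out0) / eps)
    return np.column_stack(cols)

J = jacobian(generate, np.array([0.0, 1.0]))
# For this linear generator, J is [[1, dt], [0, 1]].
```

Chaining this matrix with the tracking model’s own differential gives the sensitivity of the calculated data to the state, from which the least-squares or Kalman correction coefficient is formed.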
16.5 Meaning of Orbit Estimation

We saw that orbit estimation determines a set of six parameters: three for position and three for velocity. Determining these six parameters has a specific meaning when the orbit is near stationary, as discussed next. We can represent the motion of a near-stationary satellite in a relative coordinate frame centered at the satellite's nominal stationary position, as illustrated by Figures 10.9 and 10.10 in Chapter 10. The motion was given in (10.17), (10.18), and (10.19), which are reproduced here:
R = ∆a - ae cos (Wt - α)    (16.1)

L = L0 - (3/2) ∆a Wt + 2ae sin (Wt - α)    (16.2)

Z = ai sin (Wt - β)    (16.3)
Here, R, L, and Z are the radial, longitudinal, and north coordinates, respectively. The nominal orbital radius a and the Earth rotation rate W are given constants, so we find six parameters at our disposal: ∆a, e, α, L0, i, and β. Determining the six parameters mentioned earlier corresponds to determining the six parameters found here.

We know from Chapter 12 that the orbit changes slowly with time owing to perturbations. Accordingly, the parameters found here must be regarded as changing slowly with time. For example, the gravity of the sun and moon tilts the orbital plane, making i and β in (16.3) change slowly. This is reflected in orbit estimation: the parameters are determined as of some reference time, that is, at t0 in batch estimation (see Figure 16.2), or at any ti in sequential estimation (see Figure 16.3).

The presence of perturbation brings an additional unknown parameter that should be estimated together with the orbital parameters. The perturbation by solar radiation pressure includes a factor A/M, with A the cross section and M the mass of the satellite, as in (12.9). This A is an effective value that depends in a complex manner on the shape and surface material of the satellite, so its true value is often difficult to determine. A common practice is therefore to treat the factor A/M as an unknown parameter. Correspondingly, seven elements are estimated in batch estimation, and a seven-dimensional state vector is estimated in sequential estimation.
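Equations (16.1) through (16.3) are straightforward to evaluate. A sketch (parameter names follow the text, with a and the offsets in a common length unit, W in rad/s, and angles in radians; the numbers used to exercise it are illustrative only):

```python
import math

def relative_motion(t, a, W, da, e, alpha, L0, i, beta):
    """Near-stationary relative motion of (16.1)-(16.3).

    Returns (R, L, Z): radial, longitudinal, and north offsets,
    in the same length unit as a, da, and L0.
    """
    R = da - a * e * math.cos(W * t - alpha)
    L = L0 - 1.5 * da * W * t + 2.0 * a * e * math.sin(W * t - alpha)
    Z = a * i * math.sin(W * t - beta)
    return R, L, Z
```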
16.6 Tracking Using an Interferometer

From the relationship (16.2), observing the satellite motion in L allows us to determine the parameters L0, ∆a, e, and α. This can be seen from Figure 16.6. The motion in L comprises a linear part and a sinusoidal part: from the linear part we obtain the slope (∆a) and the constant (L0), and from the sinusoidal part we obtain the amplitude (e) and phase (α); this reasoning follows [3]. Similarly, from (16.3), observing the motion in Z allows us to determine i and β.

This leads to an idea for tracking with an interferometer, as illustrated in Figure 16.7. The target satellite S is in the neighborhood of its nominal stationary position O, and the position of S is measured in relative coordinates: R for radial, L for longitudinal, and Z for north. The interferometer is placed on the ground at A, directly below O, with its baselines AB and AC set horizontally. If baseline AB is parallel to the L-axis, it detects the satellite motion in L, allowing L0, ∆a, e, and α to be determined. If baseline AC is parallel to the Z-axis, it detects the satellite motion in Z, allowing i and β to be determined. Orbit estimation is thus made possible by the interferometer.
Figure 16.6 Determining the orbital parameters.
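The extraction of L0, ∆a, e, and α from the L motion can be sketched as a linear least-squares fit in the basis {1, t, sin Wt, cos Wt}, since expanding 2ae sin(Wt − α) makes (16.2) linear in four coefficients. A sketch (function name and synthetic data are illustrative):

```python
import numpy as np

def fit_L_motion(t, L, a, W):
    """Recover L0, da, e, alpha from samples of (16.2).

    Model: L = L0 - 1.5*da*W*t + 2*a*e*sin(W*t - alpha), rewritten as
           L = c0 + c1*t + cs*sin(W*t) + cc*cos(W*t).
    """
    A = np.column_stack([np.ones_like(t), t, np.sin(W * t), np.cos(W * t)])
    c0, c1, cs, cc = np.linalg.lstsq(A, L, rcond=None)[0]
    L0 = c0
    da = -c1 / (1.5 * W)
    # cs = 2ae*cos(alpha), cc = -2ae*sin(alpha)
    e = np.hypot(cs, cc) / (2.0 * a)
    alpha = np.arctan2(-cc, cs)
    return L0, da, e, alpha
```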
The motion in L has a sinusoidal term with a period of 1 day, so observing the L motion for 1 day enables us to determine its parameters; the same holds for the motion in Z. Hence the period of time needed for tracking is basically 1 day, an important fact that applies generally to stationary-satellite tracking.

For the same reason, the interferometer must have two baselines. Suppose there is only one baseline, pointing in an arbitrary orientation between AB and AC. What this baseline detects is then a linear combination of the L component and the Z component of the satellite motion, so it is impossible to determine the L motion parameters and the Z motion parameters separately. Orbit estimation is therefore impossible, if by orbit estimation we mean fully determining the six orbital parameters.

Returning to Figure 16.7, suppose that tracking continues for 2 days. From the first day of tracking, we obtain a first set of six orbital elements by batch estimation, and from the second day a second set. The two sets should be consistent with each other once variations due to the perturbation are taken into account. If they are not consistent, we know that the account of the perturbation was not correct, perhaps because the factor A/M for the solar radiation pressure was wrong. We can then correct it so that the two sets become consistent. In other words, orbit estimation basically needs 2 days of tracking if the factor A/M is included as an estimation parameter.

The interferometer illustrated in Figure 16.7 may be regarded as a prototype interferometer. It is not practical, because its location geometry is too special, but it is useful for theoretical reasoning about tracking and orbit estimation. Practical interferometers will be derived from this prototype, as we will see in the following chapters.
Figure 16.7 Prototype interferometer.
References

[1] Tanaka, T., and S. Kawase, "A High-Speed Integration Method for the Satellite's Ephemeris Generation," J. of Guidance and Control, Vol. 1, No. 3, 1978, pp. 222–223.

[2] Pocha, J. J., An Introduction to Mission Design for Geostationary Satellites, Norwell, MA: Kluwer, 1987, pp. 164–198.

[3] Soop, E. M., Handbook of Geostationary Orbits, Norwell, MA: Kluwer, 1994, pp. 252–256.
17 Azimuth-Elevation Tracking

In the previous chapter we pointed out that the prototype interferometer makes orbit estimation possible. In this chapter we go a step further and show that any interferometer with two independent baselines makes orbit estimation possible. First we discuss why satellite directions are commonly measured in azimuth and elevation angles, and connect this discussion to the two-baseline interferometer in general. We then analyze the geometry of the baselines relative to the satellite, and finally we find the conditions for accurate orbit estimation.
17.1 Azimuth-Elevation Angles

When a satellite is viewed from an Earth station, its direction must be measured by two angles, most commonly azimuth and elevation. As illustrated in Figure 17.1, the azimuth is the angle from north to the satellite, measured clockwise in the horizontal plane, and the elevation is the angle between the satellite and the horizontal plane.

Azimuth and elevation angles are closely related to the pointing mechanism of the antennas used in Earth stations. An antenna must rotate about two axes in order to point at the satellite accurately. Because it must rotate in the presence of gravity, it is reasonable to set the first axis vertical. This axis supports a rotary stage, on which the second, horizontal axis is set; the second axis remains horizontal as the first axis turns. The antenna then rotates smoothly if its weight is balanced about the horizontal axis. The vertical and horizontal axes thus become the axes about which azimuth and elevation angles are measured. In other words, "azimuth and elevation" names both the angles and the type of rotary mechanism of the antenna.
Figure 17.1 Azimuth and elevation angles. N: north; Z: zenith.
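Computing azimuth and elevation from a direction given in local east-north-up components is a small exercise. A sketch (conventions as in Figure 17.1: azimuth clockwise from north, elevation above the horizontal; the function name is illustrative):

```python
import math

def azimuth_elevation(east, north, up):
    # azimuth: angle from north to the direction, clockwise in the
    # horizontal plane, in [0, 360) degrees
    az = math.degrees(math.atan2(east, north)) % 360.0
    # elevation: angle between the direction and the horizontal plane
    el = math.degrees(math.atan2(up, math.hypot(east, north)))
    return az, el
```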
The azimuth-elevation antenna is able to point at a satellite in any direction, except near the zenith. If the satellite comes near the zenith, the azimuth axis is forced to turn very quickly, causing trouble with the rotary mechanism. This occurs for satellites in low Earth orbits, for example, communication or Earth-observation satellites in polar orbits. A geostationary satellite, however, does not come near the zenith unless the Earth station is deliberately located right under the satellite. An azimuth-elevation antenna is thus reasonable for tracking and pointing at a geostationary satellite.

If an antenna operates exclusively for one stationary satellite, it does not need to rotate over a wide range of angles. Suppose that, in Figure 17.2, the satellite has its nominal stationary position at S0 while it is actually at S. In moving from S0 to S, it moves by a horizontal angle H and a vertical angle V. If the satellite is kept on station within the standard width of 0.2 deg, the angles H and V are limited to approximately the same width. The range of the elevation is thus 0.2 deg, while that of the azimuth is larger by a factor of 1/cos(elevation). If the antenna can change its azimuth and elevation over these ranges with some margin, it keeps tracking and pointing at the satellite. It is thus common for an Earth-station antenna to have limited motion ranges in azimuth and elevation.
Figure 17.2 Horizontal H and vertical V angles.
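The widening of the azimuth range by 1/cos(elevation) is easy to quantify. A sketch (the function name is illustrative; the 0.2-deg width is the station-keeping standard cited in the text):

```python
import math

def azimuth_range(station_keeping_deg, elevation_deg):
    # the horizontal angle H maps into azimuth enlarged by 1/cos(elevation)
    return station_keeping_deg / math.cos(math.radians(elevation_deg))
```

At 60 deg elevation, for example, a 0.2-deg station-keeping width corresponds to a 0.4-deg azimuth range.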
17.2 Azimuth-Elevation Interferometer

If a limited-motion antenna tracking a near-stationary satellite measures azimuth and elevation angles, its measuring function can be taken over by an interferometer, as illustrated in Figure 17.3. Here, as in Figure 17.2, S0 and S are the nominal and actual satellite positions, and the angular distance between them is given by the horizontal and vertical angles H and V. The interferometer has two baselines, AB and AC, in the horizontal plane. Baseline AB is orthogonal to S0A, the path of the downlink, while AC aligns with the horizontally projected path of the downlink; the two baselines are therefore orthogonal to each other. Baseline AB then detects the angle H, and baseline AC detects the angle V. The interferometer thus detects direction angles equivalent to azimuth and elevation, referenced to the nominal satellite position.

Consider what happens if the baseline geometry is changed, as in the example of Figure 17.4, where baselines AB and AC have been rotated by 45 deg in the horizontal plane. Let A, B, and C denote the phases of the downlink signal received at antennas A, B, and C, respectively. What baseline AB detects is then x = B − A, and what AC detects is y = C − A. Now suppose we convert the detected x and y into x − y and x + y as follows:
x - y = B - C    (17.1)

x + y = ((B + C)/2 - A) × 2    (17.2)
Figure 17.3 Azimuth-elevation interferometer.
Figure 17.4 Changing the baseline geometry.
The quantity x − y is what a supposed baseline BC would detect, so it corresponds to the angle V, or the elevation. The quantity x + y is what a supposed baseline AD would detect, where D is the midpoint of B and C; so it corresponds to the angle H, or the azimuth. In other words, we have found one linear combination of x and y that provides the azimuth, and another that provides the elevation.

This reasoning applies even if baselines AB and AC have arbitrary lengths and orientations. Hence, if a two-baseline interferometer outputs a set of phase data x and y, we can find linear combinations of x and y that provide the H and V angles, or the azimuth and elevation, respectively. The coefficients of the linear combinations depend on the geometry of the baselines relative to the satellite. Any two-baseline interferometer may in this sense be called an azimuth-elevation interferometer.
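The combinations (17.1) and (17.2) can be formed directly from the measured phases. A sketch for the 45-deg geometry of Figure 17.4 (the scaling from combined phases to actual angles, which depends on baseline lengths and wavelength, is omitted):

```python
def combine_phases(phase_a, phase_b, phase_c):
    # x, y: what baselines AB and AC detect
    x = phase_b - phase_a
    y = phase_c - phase_a
    v_comb = x - y  # (17.1): equivalent baseline BC, for the angle V
    h_comb = x + y  # (17.2): equivalent baseline AD, for the angle H
    return h_comb, v_comb
```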
17.3 Detection Unit Vector of a Baseline

Let us clarify how a baseline detects the motion of a satellite. Suppose that a baseline is placed in a relative geometry with respect to the satellite, as illustrated in Figure 17.5. The satellite has its nominal stationary position at S0, and the baseline AB has a length b. The position S0 is at a distance r from the baseline and at an angle ψ relative to it. The plane that contains AB and S0 is called P. Consider a unit vector u that lies in the plane P; this u is orthogonal to the line connecting the baseline and S0. (Which point of the baseline we connect does not matter, since b