Contents

1 Image Formation
  1.1 Optical Components of the Eye
  1.2 Reflections From the Eye
  1.3 Linear Systems Methods
  1.4 Shift-Invariant Linear Transformations
  1.5 The Optical Quality of the Eye
  1.6 Lenses, Diffraction and Aberrations
List of Figures

1.1 EyeBall From Salzman
1.2 Monitor to Retina
1.3 Ophthalmoscope Principles
1.4 Double Pass Instrument
1.5 Retinal Cross Section
1.6 Campbell and Gubisch Data
1.7 Homogeneity
1.8 Superposition
1.9 Homogeneity and Superposition
1.10 Matrix Multiplication
1.11 Sinusoids and Double Passage
1.12 The Estimated Linespread Function
1.13 Westheimer's Linespread Function
1.14 Example Retinal Images
1.15 The MTF for Westheimer's Linespread
1.16 Snell's Law
1.17 Depth of Field in the Human Eye
1.18 Pinhole Optics
1.19 Diffraction limited pinhole image
1.20 Diffraction
1.21 Pointspread Function
1.22 Astigmatism
1.23 Chromatic Aberration
1.24 OTF of Chromatic Aberration
Chapter 1

Image Formation

The cornea and lens are at the interface between the physical world of light and the neural encoding of the visual pathways. The cornea and lens bring light into focus at the light-sensitive receptors in our retina and initiate a series of visual events that result in our visual experience. The initial encoding of light at the retina is but the first in a series of visual transformations: The stimulus incident at the cornea is transformed into an image at the retina. The retinal image is transformed into a neural response by the light-sensitive elements of the eye, the photoreceptors. The photoreceptor responses are transformed to a neural response on the optic nerve. The optic nerve representation is transformed into a cortical representation, and so forth. We can describe most of our understanding of these transformations, and thus most of our understanding of the early encoding of light by the visual pathways, by using linear systems theory. Because all of our visual experience is limited by the image formation within our eye, we begin by describing this transformation of the light signal, and we will use this analysis as an introduction to linear methods.
1.1 Optical Components of the Eye

Figure 1.1 contains an overview of the imaging components of the eye. Light from a source arrives at the cornea and is focused by the cornea and lens onto the photoreceptors, a collection of light-sensitive neurons. The photoreceptors are part of a thin layer of neural tissue called the retina. The photoreceptor signals are communicated through the several layers of retinal neurons to the neurons whose output fibers make up the optic nerve. The optic nerve fibers exit through a hole in the retina called the optic disk. The optical imaging of light incident at the cornea into an image at the retinal photoreceptors is the first visual transformation.

Figure 1.1: The imaging components of the eye. The cornea and lens focus the image onto the retina. Light enters through the pupil, which is bordered by the iris. The fovea is a region of the retina that is specialized for high visual acuity and color perception. The retinal output fibers leave at a point in the retina called the blindspot. The bundle of output fibers is called the optic nerve.

Since all of our visual experiences are influenced by this transformation, we begin the study of vision by analyzing the properties of image formation.

When we study transformations, we must specify their inputs and outputs. As an example, we will consider how simple one-dimensional intensity patterns displayed on a video monitor are imaged onto the retina (Figure 1.2a). In this case the input is the light signal incident at the cornea. A one-dimensional pattern has constant intensity along one dimension (say, horizontal) and varies along the perpendicular (vertical) dimension. We will call the pattern of light intensity we measure at the monitor screen the monitor image. We can measure the intensity of the one-dimensional image by placing a light-sensitive device called a photodetector at different positions on the screen. The graph in Figure 1.2b shows a measurement of the intensity of the monitor image at all screen locations.

The output of the optical transformation is the image formed at the retina. When the input image is one-dimensional, the retinal image will be one-dimensional, too. Hence, we can represent it using a curve, as in Figure 1.2c. We will discuss the optical components of the visual system in more detail later in this chapter, but from simply looking at a picture of the eye in Figure 1.1 we can see that the monitor image passes through a good deal of biological material before arriving at the retina.
Because the optics of the eye are not perfect, the retinal image is not an exact copy of the monitor image: The retinal image is a blurred copy of the input image.
Figure 1.2: Retinal image formation illustrated with a single-line input image. (a) A one-dimensional monitor image consists of a set of lines at different intensities. The image is brought to focus on the retina by the cornea and lens. (b) We can represent the intensity of a one-dimensional image using a simple graph that shows the light as a function of horizontal screen position. Only a single value is plotted since the one-dimensional image is constant along the vertical dimension. (c) The retinal image is a blurred version of the one-dimensional input image. The retinal image is also one-dimensional and is represented by a single curve.
Figure 1.3: Principles of the ophthalmoscope. An ophthalmoscope is used to see an image reflected from the interior of the eye. (a) When we look directly into the eye, we cast a shadow, making it impossible to see light reflected from the interior of the eye. (b) The ophthalmoscope permits us to see light reflected from the interior of the eye. Helmholtz invented the first ophthalmoscope. (After Cornsweet, 1970).

The image in Figure 1.2b shows one example of an infinite array of possible input images. Since there is no hope of measuring the response to every possible input, to characterize optical blurring completely we must build a model that specifies how any input image is transformed into a retinal image. We will use linear systems methods to develop a way of predicting the retinal image from any input image.
1.2 Reflections From the Eye

To study the optics of a human eye you will need an experimental eye, so you might invite a friend to dinner. In addition, you will need a light source, such as a candle, as a stimulus to present to your friend's eye. If you look directly into your friend's eye, you will see a mysterious darkness that has beguiled poets and befuddled visual scientists. The reason for the darkness can be understood by considering the problem of ophthalmoscope design illustrated in Figure 1.3a. If the light source is behind you, so that your head is between the light source and the eye you are studying, then your head will cast a shadow that interferes with the light from the source arriving at your friend's eye. As a result, when you look in to measure the retinal image you see nothing beyond what is in your heart. If you move to the side of the light path, the image at the back of your friend's eye will be reflected towards the light source, following a reversible path. Since you are now off to the side, out of the path of the light source, no light will be sent towards your eye.[1]

[1] The great nineteenth-century scientist H. von Helmholtz built the first ophthalmoscope. The ophthalmoscope design shown in Figure 1.3 is unconventional, though it does include the basic principle: we need to arrange a light path so that the examiner's eye does not cast a shadow. A bright light source is required since the back of the human eye is not very reflective. A more conventional design is in Appendix III of Visual Perception by T. Cornsweet.

Figure 1.4: A modified ophthalmoscope measures the human retinal image. Light from a bright source passes through a slit and into the eye. A fraction of the light is reflected from the retina and is imaged. The intensity of the reflected light is measured at different spatial positions by varying the location of the analyzing slit. (After Campbell and Gubisch, 1967).

Flamant (1955) first measured the retinal image using a modified ophthalmoscope. She modified the instrument by placing a light-sensitive recording device, a photodetector, at the position normally reserved for the ophthalmologist's eye. In this way, she measured the intensity pattern of the light reflected from the back of the observer's eye. Campbell and Gubisch (1967) used Flamant's method to build their apparatus, which is sketched in Figure 1.4. Campbell and Gubisch measured the reflection of a single bright line that served as the input stimulus in their experiment. As shown in the figure, a beam-splitter placed between the input light and the observer's eye divides the input stimulus into two parts. The beam-splitter causes some of the light to be turned away from the observer and lost; this stray light is absorbed by a light baffle. The rest of the light continues toward the observer. When the light travels in this direction, the beam-splitter is an annoyance, serving only to lose some of the light; it will accomplish its function on the return trip. The light that enters the observer's eye is brought to a good focus on the retina by a lens. A small fraction of the light incident on the retina is reflected and passes a second time through the optics of the eye. On the return path of the light, the beam-splitter now plays its functional role. The reflected image would normally return to a focus at the light source, but the beam-splitter divides the returning beam so that a portion of it is brought to focus in a measurement plane to one side of the apparatus. Using a very fine slit in the measurement plane, with a photodetector
Figure 1.5: The retina contains the light-sensitive photoreceptors where light is focused. This cross-section of a monkey retina outside the fovea shows that there are several layers of neurons in the optical path between the lens and the photoreceptors. As we will see later, in the central fovea these neurons are displaced, leaving a clear optical path from the lens to the photoreceptors. (Source: Boycott and Dowling, 1969).

behind it, Campbell and Gubisch measured the reflected light and used these measurements to infer the shape of the image on the retinal surface.

What part of the eye reflects the image? In Figure 1.5 we see a cross-section of the peripheral retina. In normal vision, the image is focused on the retina at the level of the photoreceptors. The light measured by Campbell and Gubisch probably contains components from several different planes at the back of the eye. Thus, their measurements probably underestimate the quality of the image at the level of the photoreceptors.

Figure 1.6 shows several examples of Campbell and Gubisch's measurements of the light reflected from the eye when the observer is looking at a very fine line. The different curves show measurements for different pupil sizes. When the pupil was wide open (top, 6.6 mm diameter) the reflected light is blurred more strongly than when the pupil was smaller (middle, 2.0 mm). Notice that the measurements made with a large pupil opening are less noisy; when the pupil is wide open more light passes into the eye and more light is reflected, improving the quality of the measurements.

The light measured in Figure 1.6 passed through the optical elements of the eye twice, while the retinal image passes through the optics only once. It follows that the spread in these curves is wider than the spread we would observe had we measured
Figure 1.6: Experimental measurements of light that has been reflected from a human eye looking at a fine line. Each curve shows the intensity of the reflected light, plotted against visual angle (minutes of arc), for a different pupil diameter, from 6.6 mm (top) down to 1.0 mm (bottom). The reflected light has been blurred by double passage through the optics of the eye. (Source: Campbell and Gubisch, 1967).

at the retina. How can we use these double-pass measurements to estimate the blur at the retina? To solve this problem, we must understand the general features of their experiment. It is time for some theory.
1.3 Linear Systems Methods

A good theoretical account of a transformation, such as the mapping from monitor image to retinal image, should have two important features. First, the theoretical account should suggest which measurements we must make to characterize the transformation fully. Second, it should tell us how to use these measurements to predict the retinal image for any other monitor image. In this section we will develop a set of general tools, referred to as linear systems methods. These tools will permit us to solve the problem of estimating the optical transformation from the monitor image to the retinal image. The tools are sufficiently general, however, that we will be able to use them repeatedly throughout this book.

There is no single theory that applies to all measurement situations, but linear systems theory does apply to many important experiments. Best of all, we have a simple experimental test that permits us to decide whether linear systems theory is appropriate to our measurements. To see whether linear systems theory is
Figure 1.7: The principle of homogeneity illustrated. An input stimulus and corresponding retinal image are shown in each part of the figure. The three input stimuli are the same except for a scale factor. Homogeneity is satisfied when the corresponding retinal images are scaled by the same factor. Part (a) shows an input image at unit intensity, while (b) and (c) show the image scaled by 0.5 and 2.0, respectively.

appropriate, we must check that our data satisfy the two properties of homogeneity and superposition.
Homogeneity

A test of homogeneity is illustrated in Figure 1.7. The left-hand panels show a series of monitor images, and the right-hand panels show the corresponding measurements of reflected light.[2] Suppose we represent the intensities of the lines in the one-dimensional monitor image using the vector $\mathbf{p}$ (upper left), and we represent the retinal image measurements by the vector $\mathbf{r}$. Now, suppose we scale the input signal by a factor $a$, so that the new input is $a\mathbf{p}$. We say that the system satisfies homogeneity if the output signal is also scaled by the same factor of $a$, and thus the new output is $a\mathbf{r}$. For example, if we halve the input intensity, then the reflected

[2] We will use vectors and matrices in our calculations to eliminate burdensome notation. Matrices will be denoted by boldface, upper-case Roman letters, $\mathbf{A}$. Column vectors will be denoted using lower-case boldface Roman letters, $\mathbf{a}$. The transpose operation will be denoted by a superscript T, $\mathbf{A}^T$. Scalar values will be in normal typeface, and they will usually be denoted using Roman characters ($a$) except when tradition demands the use of Greek symbols. The $i$th entry of a vector $\mathbf{a}$ is a scalar and will be denoted $a_i$. The $j$th column of a matrix $\mathbf{A}$ is a vector that we denote as $\mathbf{a}_j$. The scalar entry in the $i$th row and $j$th column of the matrix will be denoted $a_{ij}$.
Figure 1.8: The principle of superposition illustrated. Each of the three parts of the picture shows an input stimulus and the corresponding retinal image. The stimulus in part (a) is a single-line image, and in part (b) the stimulus is a second line displaced from the first. The stimulus in part (c) is the sum of the first two lines. Superposition holds if the retinal image in part (c) is the sum of the retinal images in parts (a) and (b).

light measured at their photodetector should be one-half the intensity (middle panel). If we double the light intensity, the response should double (bottom panel). Campbell and Gubisch's measurements of light reflected from the human eye satisfy homogeneity.
Superposition

Superposition, used as both an experimental procedure and a theoretical tool, is probably the single most important idea in this book. You will see it again and again in many forms; we describe it here for the first time. Suppose we measure the response to two different input stimuli. For example, suppose we find that input pattern $\mathbf{p}_1$ (Figure 1.8a) generates the response $\mathbf{r}_1$, and input pattern $\mathbf{p}_2$ (Figure 1.8b) generates response $\mathbf{r}_2$. Now we measure the response to a new input stimulus equal to the sum $\mathbf{p}_1 + \mathbf{p}_2$. If the response to the new stimulus is the sum of the responses measured singly, $\mathbf{r}_1 + \mathbf{r}_2$, then the system is a linear system. By measuring the responses to the stimuli individually and then the response to the sum of the stimuli, we test superposition. When the response to the sum of the stimuli equals the sum of the individual responses, then we
say the system satisfies superposition. Campbell and Gubisch's measurements of light reflected from the eye satisfy this principle.

We can summarize homogeneity and superposition succinctly using two equations. Write the linear optical transformation that maps the input image $\mathbf{p}$ to the light intensity at each of the receptors as

$$\mathbf{r} = \mathcal{T}(\mathbf{p}) \qquad (1.1)$$

Homogeneity and superposition are defined by the pair of equations[3]

$$\mathcal{T}(a\mathbf{p}) = a\,\mathcal{T}(\mathbf{p}) \qquad (1.2)$$

$$\mathcal{T}(\mathbf{p}_1 + \mathbf{p}_2) = \mathcal{T}(\mathbf{p}_1) + \mathcal{T}(\mathbf{p}_2) \qquad (1.3)$$

[3] Notice that superposition leads us to expect homogeneity for integer scalars, since $\mathcal{T}(\mathbf{p} + \mathbf{p}) = \mathcal{T}(\mathbf{p}) + \mathcal{T}(\mathbf{p}) = 2\,\mathcal{T}(\mathbf{p})$, and in general if we sum $n$ copies of $\mathbf{p}$ then $\mathcal{T}(n\mathbf{p}) = n\,\mathcal{T}(\mathbf{p})$. We write homogeneity separately from superposition to avoid the tedium of treating the case of irrational numbers in certain proofs.

Implications of Homogeneity and Superposition

Figure 1.9 illustrates how we will use linear systems methods to characterize the relationship between the input signal from a monitor and the light reflected from the eye.[4] First, we make an initial set of measurements of the light reflected from the eye for each single-line monitor image, with the line set to unit intensity. If we know the reflections from single-line images, and we know the system is linear, then we can calculate the light reflected from the eye for any monitor image: any one-dimensional image is the sum of a collection of lines.

[4] We analyze one-dimensional monitor images to simplify the notation. The principles remain the same, but the notation becomes cumbersome, when we consider two-dimensional images.

Consider an arbitrary one-dimensional image, as illustrated at the top of Figure 1.9. We can conceive of this image as the sum of a set of single-line monitor images, each at its own intensity, $p_i$. We have measured the reflected light from each single-line image alone; call this $\mathbf{r}_i$ for the $i$th line. By homogeneity it follows that the reflected light from line $i$ will be a scaled version of this response, namely $p_i \mathbf{r}_i$. Next, we combine the light reflected from the single-line images. By superposition, we know that the light reflected from the original monitor image, $\mathbf{r}$, is the sum of the light reflected from the single-line images,

$$\mathbf{r} = \sum_i p_i \mathbf{r}_i \qquad (1.4)$$
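Equations 1.2 and 1.3 are easy to check numerically. The sketch below stands in for the optical transformation $\mathcal{T}$ with a hypothetical convolutional blur (the kernel values are invented for illustration, not taken from any measurement) and asserts both properties:

```python
import numpy as np

# Hypothetical optical transformation: blur by convolution with a
# made-up kernel. Any such convolution is a linear transformation T.
kernel = np.array([0.25, 0.5, 0.25])

def T(p):
    """Map a monitor image p to a simulated retinal image."""
    return np.convolve(p, kernel, mode="same")

p1 = np.array([0.0, 1.0, 0.0, 0.0, 0.0])   # single line
p2 = np.array([0.0, 0.0, 0.0, 1.0, 0.0])   # a second, displaced line
a = 0.5

# Homogeneity (Eq. 1.2): scaling the input scales the output.
assert np.allclose(T(a * p1), a * T(p1))

# Superposition (Eq. 1.3): the response to a sum is the sum of responses.
assert np.allclose(T(p1 + p2), T(p1) + T(p2))
```

In a real experiment the assertions become approximate comparisons of measured curves, but the logic of the test is the same.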
Figure 1.9: Application of homogeneity and superposition. (a) A one-dimensional monitor image is the weighted sum of a set of lines. An example of a one-dimensional image is shown on the left, and the individual monitor lines comprising the monitor image are shown separately on the right. (b) Each line in the component monitor image contributes to the retinal image. The retinal images created by the individual lines are shown below the individual monitor lines. The sum of the retinal images is shown on the left. (c) The retinal image generated by the $i$th monitor line at unit intensity is represented by the vector $\mathbf{r}_i$. The intensity of the $i$th monitor line is $p_i$. By homogeneity, the retinal image of the $i$th monitor line is $p_i \mathbf{r}_i$. By superposition, the retinal image of the collection of monitor lines is the sum of the individual retinal images, $\mathbf{r} = p_1 \mathbf{r}_1 + p_2 \mathbf{r}_2 + p_3 \mathbf{r}_3$.
Figure 1.10: Matrix multiplication is a convenient notation for linear systems methods. The weighted sum of a set of vectors, as in part (c) of Figure 1.9, can be represented using matrix multiplication: the matrix product $\mathbf{R}\mathbf{p}$ equals the sum of the columns $\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N$ of $\mathbf{R}$ weighted by the entries $p_1, p_2, \ldots, p_N$ of $\mathbf{p}$, so that $\mathbf{r} = \mathbf{R}\mathbf{p}$. When the matrix describes the responses of a linear system, we call it a system matrix.

Equation 1.4 defines a transformation that maps the input stimulus, $\mathbf{p}$, into the measurement, $\mathbf{r}$. Because of the properties of homogeneity and superposition, the transformation is the weighted sum of a fixed collection of vectors: when the monitor image varies, only the weights $p_i$ in the formula vary, but the vectors $\mathbf{r}_i$, the reflections from single-line stimuli, remain the same. Hence, the reflected light will always be the weighted sum of these reflections.

To represent the weighted sum of a set of vectors, we use the mathematical notation of matrix multiplication. As shown in Figure 1.10, multiplying a matrix times a vector computes the weighted sum of the matrix columns; the entries of the vector define the weights. Matrix multiplication and linear systems methods are closely linked. In fact, the set of all possible matrices defines the set of all possible linear transformations of the input vectors. Matrix multiplication has a shorthand notation to replace the explicit sum of vectors in Equation 1.4. In the example here, we define a matrix, $\mathbf{R}$, whose columns are the responses to the individual monitor lines at unit intensity, $\mathbf{r}_i$. The matrix $\mathbf{R}$ is called the system matrix. Matrix multiplication of the system matrix, $\mathbf{R}$, times the input vector, $\mathbf{p}$, transforms the input vector into the output vector. Matrix multiplication is written using the notation

$$\mathbf{r} = \mathbf{R}\mathbf{p} \qquad (1.5)$$
Matrix multiplication follows naturally from the properties of homogeneity and superposition. Hence, if a system satisfies homogeneity and superposition, we can describe the system response by creating a system matrix that transforms the input to the output.

A numerical example of a system matrix. Let's use a specific numerical example to illustrate the principle of matrix multiplication. Suppose we measure a monitor that displays only three lines. We can describe the monitor image using a column vector with three entries, $\mathbf{p} = (p_1, p_2, p_3)^T$. The lines of unit intensity are $(1,0,0)^T$, $(0,1,0)^T$, and $(0,0,1)^T$. We measure the response to these input vectors to build the system matrix. Suppose the measurements for these three lines are $\mathbf{r}_1$, $\mathbf{r}_2$, and $\mathbf{r}_3$, respectively. We place these responses into the columns of the system matrix:

$$\mathbf{R} = \left( \begin{array}{ccc} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{r}_3 \end{array} \right) \qquad (1.6)$$

We can predict the response to any monitor image using the system matrix. For any monitor image $\mathbf{p}$, we multiply the input vector by the system matrix to obtain the response:

$$\mathbf{r} = \mathbf{R}\mathbf{p} = p_1 \mathbf{r}_1 + p_2 \mathbf{r}_2 + p_3 \mathbf{r}_3 \qquad (1.7)$$
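Since the measured values for the three lines are not reproduced here, the sketch below uses made-up responses to illustrate Equations 1.6 and 1.7: the system matrix is assembled column by column from the single-line responses, and matrix multiplication then predicts the response to any monitor image.

```python
import numpy as np

# Hypothetical responses to the three unit-intensity lines (invented
# values for illustration; not the measurements from the text).
r1 = np.array([0.6, 0.2, 0.0])
r2 = np.array([0.2, 0.6, 0.2])
r3 = np.array([0.0, 0.2, 0.6])

# Eq. 1.6: the single-line responses form the columns of the system matrix.
R = np.column_stack([r1, r2, r3])

# Eq. 1.7: the response to a monitor image p is the matrix product Rp,
# which equals the weighted sum of the columns.
p = np.array([1.0, 2.0, 0.5])
r = R @ p
assert np.allclose(r, p[0] * r1 + p[1] * r2 + p[2] * r3)
```

Note that each column of the hypothetical matrix spreads a line's intensity into its neighbors, which is the matrix expression of optical blur.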
Why linear methods are useful

Linear systems methods are a good starting point for answering an essential scientific question: how can we generalize from the results of measurements using a few stimuli to predict the results we will obtain when we measure using novel stimuli? Linear systems methods tell us to examine homogeneity and superposition. If these empirical properties hold in our experiment, then we will be able to measure responses to a few stimuli and predict responses to many other stimuli. This is very important advice.

Quantitative scientific theories are attempts to characterize and then explain systems with many possible input stimuli. Linear systems methods tell us how to organize experiments to characterize such a system: measure the responses to a few individual stimuli, and then measure the responses to mixtures of these stimuli. If superposition holds, then we can obtain a good characterization of the system we are studying. If superposition fails, your work will not be wasted, since you will still need to explain the results of the superposition experiments to obtain a complete characterization of the measurements.

To explain a system, we need to understand the general organizational principles concerning the system's parts and how the system works in relationship to other systems. Achieving such an explanation is a creative act that goes beyond simple characterization of the input and output relationships. But any explanation must begin with a good characterization of the processing the system performs.
1.4 Shift-Invariant Linear Transformations

Shift-Invariant Systems: Definition

Since homogeneity and superposition are well satisfied by Campbell and Gubisch's experimental data, we can predict the result of any input stimulus by measuring the system matrix that describes the mapping from the input signal to the measurements at the photodetector. But the experimental data are measurements of light that has passed through the optical elements of the eye twice, and we want to know the transformation when the light passes through the optics once. To correct for the effects of double passage, we will take advantage of a special property of the optics of the eye: shift-invariance.

Shift-invariant linear systems are an important class of linear systems, and they have several properties that make them simpler than general linear systems. The following section briefly describes these properties and how we take advantage of them. The mathematics underlying these properties is not hard; I sketch proofs of these properties in the Appendix.

Suppose we start to measure the system matrix for the Campbell and Gubisch experiment by measuring responses to different lines near the center of the monitor. Because the quality of the optics of the eye is fairly uniform near the fovea, we will find that our measurements, and by implication the retinal images, are nearly the same for all single-line monitor images. The only way they will differ is that as the position of the input translates, the position of the output will translate by a corresponding amount. The shape of the output, however, will not change. An example of two measurements we might find using two lines on the monitor is illustrated in the top two rows of Figure 1.8: as we shift the input line, the measured output shifts. This is a good feature for a lens to have, because as an object's position changes, the recorded image should remain the same (except for a shift). When we shift the input and the form of the output is invariant, we call the system shift-invariant.
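The definition can be made concrete with a toy model. The sketch below uses a hypothetical circular-convolution blur (invented kernel; wrap-around edges keep translation exact) and checks that shifting the input line shifts the output without changing its shape. It also previews the key consequence for the system matrix: one response, translated, generates every column.

```python
import numpy as np

# Hypothetical shift-invariant optics: circular convolution with a
# made-up blur kernel (same length as the signal; edges wrap around).
kernel = np.array([0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25])

def T(p):
    """Simulated retinal image for monitor image p (circular convolution)."""
    return np.real(np.fft.ifft(np.fft.fft(p) * np.fft.fft(kernel)))

line = np.zeros(8)
line[0] = 1.0

# Shift-invariance: translating the input by k translates the output
# by k, with unchanged shape.
for k in range(8):
    assert np.allclose(T(np.roll(line, k)), np.roll(T(line), k))

# Consequence: the single response T(line) determines the entire system
# matrix, column i being that response shifted to position i.
R = np.column_stack([np.roll(T(line), i) for i in range(8)])
p = np.array([0.0, 1.0, 2.0, 0.5, 0.0, 0.0, 0.0, 0.0])
assert np.allclose(R @ p, T(p))
```

The wrap-around boundary is a modeling convenience; for stimuli confined to the center of the field, it makes no practical difference.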
Shift-Invariant Systems: Properties

We can define the system matrix of a shift-invariant system from the response to a single stimulus. Ordinarily, we need to build the system matrix by combining the responses to many individual lines. The system matrix of a linear shift-invariant system is simple to estimate since these responses are all the same except for a shift. Hence, if we measure a single column of the matrix, we can fill in the rest of the matrix. For a shift-invariant system, there is only one response to a line. This response is called the linespread of the system. We can use the linespread function to fill in the entire system matrix.

The response to a harmonic function at frequency $f$ is a harmonic function at the same frequency. Sinusoids and cosinusoids are called harmonics or harmonic functions. When the input to a shift-invariant system is a harmonic at frequency $f$, the output will be a harmonic at the same frequency. The output may be scaled in amplitude and shifted in position, but it still will be a harmonic at the input frequency. For example, suppose the input stimulus is defined at the points $x = 0, 1, \ldots, N-1$, and at these points its values are sinusoidal, $p_x = \sin(2\pi f x / N)$. Then the response of a shift-invariant system will be a scaled and shifted sinusoid, $r_x = s_f \sin(2\pi f x / N + \phi_f)$. There is some uncertainty concerning the output because there are two unknown values, the scale factor, $s_f$, and the phase shift, $\phi_f$. But for each sinusoidal input we know a lot about the output: the output will be a sinusoid of the same frequency as the input.

We can express this same result another useful way. Expanding the sinusoidal output using the summation rule, we have

$$s_f \sin(2\pi f x / N + \phi_f) = a_f \sin(2\pi f x / N) + b_f \cos(2\pi f x / N) \qquad (1.8)$$

where $a_f = s_f \cos \phi_f$ and $b_f = s_f \sin \phi_f$. In other words, when the input is a sinusoid at frequency $f$, the output is the weighted sum of a sinusoid and a cosinusoid, both at the same frequency as the input. In this representation, the two unknown values are the weights of the sinusoid and the cosinusoid. For many optical systems, such as the human eye, the relationship between harmonic inputs and the output is even simpler. When the input is a harmonic
function at frequency $f$, the output is a scaled copy of the function and there is no shift in spatial phase. For example, when the input is $\cos(2\pi f x / N)$, the output will be

$$s_f \cos(2\pi f x / N) \qquad (1.9)$$

and only the scale factor, $s_f$, which depends on frequency, is unknown.

Figure 1.11: Double passage. (a) The amplitude, $A$, of an input cosinusoid stimulus is scaled by a factor, $s$, after passing through even-symmetric shift-invariant optics, as shown in part (b). (c) Passage through the optics a second time scales the amplitude again, resulting in a signal with amplitude $s^2 A$.
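The harmonic property is easy to verify numerically: pass a sampled sinusoid through a shift-invariant (circular-convolution) system and check that the response contains no frequency other than the input's. The kernel is made up for illustration and is even-symmetric, so, as in the text, the output is a scaled sinusoid with no phase shift.

```python
import numpy as np

N = 64
f = 5                                  # input frequency, cycles per N samples
x = np.arange(N)
p = np.sin(2 * np.pi * f * x / N)      # harmonic input

# Shift-invariant system: circular convolution with a made-up,
# even-symmetric blur kernel (kernel[1] == kernel[N-1]).
kernel = np.zeros(N)
kernel[[0, 1, N - 1]] = [0.5, 0.25, 0.25]
r = np.real(np.fft.ifft(np.fft.fft(p) * np.fft.fft(kernel)))

# The response energy lies entirely at the input frequency: every other
# Fourier coefficient of the response is (numerically) zero.
spectrum = np.fft.fft(r)
mask = np.ones(N, dtype=bool)
mask[[f, N - f]] = False
assert np.allclose(spectrum[mask], 0.0, atol=1e-9)

# Even symmetry means no phase shift: the output is exactly s_f * p,
# where s_f is the (real) transfer value at frequency f.
s = np.fft.fft(kernel)[f].real
assert np.allclose(r, s * p)
```

The scale factors $s_f$ computed this way are exactly the frequency-by-frequency description of the optics used in the double-passage argument below.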
1.5 The Optical Quality of the Eye

We are now ready to correct the measurements for the effects of double passage through the optics of the eye. To make the method easy to understand, we will first assume that the optics introduce no phase shift into the retinal image; this means, for example, that a cosinusoidal stimulus creates a cosinusoidal retinal image, scaled in amplitude. The assumption is not necessary, but it is reasonable, and the main principles of the analysis are easier to see with it in place.

To understand how to correct for double passage, consider a hypothetical alternative experiment Campbell and Gubisch might have done (Figure 1.11). Suppose Campbell and Gubisch had used input stimuli equal to cosinusoids at various spatial frequencies, $f$. Because the optics are shift-invariant and there is no frequency-dependent phase shift, the retinal image of a cosinusoid at frequency $f$ is
a cosinusoid scaled by a factor $s_f$. The retinal image passes back through the optics and is scaled again, so that the measurement would be a cosinusoid scaled by the factor $s_f^2$. Hence, had Campbell and Gubisch used a cosinusoidal input stimulus, we could deduce the retinal image from the measured image easily: the retinal image would be a cosinusoid with an amplitude equal to the square root of the amplitude of the measurement.

Campbell and Gubisch used a single line, not a set of cosinusoidal stimuli. But we can still apply the basic idea of the hypothetical experiment to their measurements. Their input stimulus, defined over $N$ sample locations, is
$\ell(x) = \begin{cases} 1 & \text{if } x = 0 \\ 0 & \text{if } x \neq 0 \end{cases} \qquad (1.10)$
As I describe in the appendix, we can express the stimulus as a weighted sum of harmonic functions by using the discrete Fourier series. The representation of a single line is equal to the sum of cosinusoidal functions
$\ell(x) = \frac{1}{N} \sum_{k=0}^{N-1} \cos(2\pi k x / N) \qquad (1.11)$
Because the system is shift-invariant, the retinal image of each cosinusoid was a scaled cosinusoid, say with scale factor $s_k$. The retinal image was scaled again during the second pass through the optics, to form the cosinusoidal term $s_k^2 \cos(2\pi k x / N)$ they measured.^5 Using the discrete Fourier series, we can also express the measurement as the sum of cosinusoidal functions,
$m(x) = \frac{1}{N} \sum_{k=0}^{N-1} s_k^2 \cos(2\pi k x / N) \qquad (1.12)$
We know the values of $s_k^2$, since these are what Campbell and Gubisch measured. The image of the line at the retina, then, must have been
$r(x) = \frac{1}{N} \sum_{k=0}^{N-1} s_k \cos(2\pi k x / N) \qquad (1.13)$
The values $r(x)$ define the linespread function of the eye's optics. We can correct for the double passage and estimate the linespread because the system is linear and shift-invariant.^5
^5 You may be bothered by the fact that the discrete Fourier series approximation is an infinite set of pulses, rather than a single line. To understand why, consult the Appendix.
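The double-pass correction can be sketched numerically: under the no-phase-shift assumption the measured Fourier amplitudes are the squares of the single-pass scale factors, so taking their square roots recovers the retinal image. The Gaussian falloff used for the scale factors below is a made-up example, not Campbell and Gubisch's data.

```python
import numpy as np

n = 64
# Hypothetical single-pass scale factors s_f: a made-up Gaussian falloff
# in spatial frequency, standing in for the eye's true optics.
f = np.fft.rfftfreq(n, d=1.0 / n)           # frequencies 0, 1, ..., 32
s = np.exp(-(f / 8.0) ** 2)

line = np.zeros(n)
line[0] = 1.0                                # the line stimulus
retina = np.fft.irfft(s * np.fft.rfft(line), n)            # one pass
measurement = np.fft.irfft(s ** 2 * np.fft.rfft(line), n)  # double pass

# Correction: each measured amplitude is s_f**2, so the single-pass
# (retinal) amplitude is its square root.
amps = np.clip(np.fft.rfft(measurement).real, 0.0, None)
estimate = np.fft.irfft(np.sqrt(amps), n)
```

With these made-up optics the estimate matches the single-pass retinal image, which is the whole point of the square-root correction.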
Figure 1.12: The linespread function of the human eye. The solid line in each panel is a measurement of the linespread. The dotted lines are the diffraction-limited linespread for a pupil of that diameter. (Diffraction is explained later in the text.) The different panels show measurements for a variety of pupil diameters (From Campbell and Gubisch, 1967).

As you read further about experimental and computational methods in vision science, remember that there is nothing inherently important about sinusoids as visual stimuli; we must not confuse the stimulus with the system or with the theory we use to analyze the system. When the system is a shift-invariant linear system, sinusoids can be helpful in simplifying our calculations and reasoning, as we have just seen. The sinusoidal stimuli are important only insofar as they help us to measure or clarify the properties of the system. And if the system is not shift-invariant, the sinusoids may not be important at all.
The Linespread Function

Figure 1.12 contains Campbell and Gubisch's estimates of the linespread functions of the eye. Notice that as the pupil size increases, the width of the linespread function increases, which indicates that the focus is worse for larger pupil sizes. As the pupil size increases, light reaches the retina through larger and larger sections of the lens. As the area of the lens affecting the passage of light increases, the amount of blurring increases. The measured linespread functions, together with our belief that we are studying a shift-invariant linear system, permit us to predict the retinal image for any
Figure 1.13: An analytic approximation of the human linespread function for an eye with a 3.0mm diameter pupil (Westheimer, 1986).

one-dimensional input image. To calculate these predictions, it is convenient to have a function that describes the linespread of the human eye. G. Westheimer (1986) suggested the following formula to describe the measured linespread function of the human eye, when in good focus and when the pupil diameter is near 3mm:
$\ell s(x) = 0.47\, e^{-3.3 x^2} + 0.53\, e^{-0.93 |x|} \qquad (1.14)$
where the variable $x$ refers to position on the retina specified in terms of minutes of visual angle. A graph of this linespread function is shown in Figure 1.13.

We can use Westheimer's linespread function to predict the retinal image of any one-dimensional input stimulus.^6 Some examples of the predicted retinal image are shown in Figure 1.14. Because the optics blur the image, even the light from a very fine line is spread across several photoreceptors. We will discuss the relationship between optical defocus and the positions of the photoreceptors in Chapter ??.
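A minimal sketch of such a prediction: convolve a stimulus with Westheimer's linespread. The constants follow Westheimer's (1986) published formula; the edge stimulus and the sampling grid are arbitrary illustrative choices.

```python
import numpy as np

def westheimer_linespread(x):
    """Westheimer's (1986) approximation; x in minutes of arc."""
    return 0.47 * np.exp(-3.3 * x ** 2) + 0.53 * np.exp(-0.93 * np.abs(x))

# A hypothetical sharp edge, sampled every 0.05 arcmin over +/- 8 arcmin.
x = np.linspace(-8.0, 8.0, 321)
ls = westheimer_linespread(x)
ls /= ls.sum()                      # unit area: blurring preserves mean level

edge = (x > 0).astype(float)
retinal = np.convolve(edge, ls, mode="same")   # predicted retinal image
```

The predicted retinal image is a smoothed edge: the sharp step is spread over a couple of minutes of arc, the scale of a few photoreceptors.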
The Modulation Transfer Function

In correcting for double passage, we thought about the measurements in two separate ways. Since our main objective was to derive the linespread function, a function of spatial position, we spent most of our time thinking of the measurements

^6 Westheimer's linespread function is for an average observer under one set of viewing conditions. As the pupil changes size and as observers age, the linespread function can vary. Consult IJspeert et al. (1993) and Williams et al. (1995) for alternatives to Westheimer's formula.
Figure 1.14: Examples of the effect of optical blurring. (a) Images of a line, an edge, and a bar pattern. (b) The estimated retinal images after blurring by Westheimer's linespread function. The spacing of the photoreceptors in the retina is shown by the stylized arrows.
Figure 1.15: Modulation transfer function measurements of the optical quality of the lens made using visual interferometry (Williams et al., 1995; described in Chapter ??). The data are compared with the predictions from the linespread suggested by Westheimer (1984) and a curve fit through the data by Williams et al. (1995).
in terms of light intensity as a function of spatial position. When we corrected for double passage through the optics, however, we also considered a hypothetical experiment in which the stimuli were harmonic functions (cosinusoids). To perform this calculation, we found it easier to correct for double passage by thinking of the stimuli as sums of harmonic functions, rather than as functions of spatial position.

These two ways of looking at the system, in terms of spatial functions or sums of harmonic functions, are equivalent to one another. To see this, notice that we can use the linespread function to derive the retinal image for any input image. Hence, we can use the linespread to compute the scale factors of the harmonic functions. Conversely, we already saw that by measuring how the system responds to the harmonic functions, we can derive the linespread function. It is convenient to be able to reason about system performance in both ways.

The optical transfer function defines the system's complete response to harmonic functions. The optical transfer function is a complex-valued function of spatial frequency. The complex values code both the scale factor and the phase shift the system induces in each harmonic function. When the linespread function of the eye is an even-symmetric function, there is no phase shift of the harmonic functions. In this case, we can describe the system completely using a real-valued function, the modulation transfer function. This function defines the scale factor applied to each spatial frequency.

The data points in Figure 1.15 show measurements of the modulation transfer function of the human eye. These data points were measured using a method called visual interferometry that is described in Chapter ??. Along with the data points in Figure 1.15, I have plotted the predicted modulation transfer function using Westheimer's linespread function and a curve fit to the data by Williams et al. (1995).
The curve derived by Westheimer (1986) using completely different data sets differs from the measurements by Williams et al. (1995) by no more than about twenty percent. This should tell you something about the relative precision of these descriptions of the optical quality of the lens.

The linespread function and the modulation transfer function offer two different ways to think about the optical quality of the lens. The linespread function in Figure 1.13 describes defocus as the spread of light from a fine slit across the photoreceptors: the light is spread across three to five photoreceptors. The modulation transfer function in Figure 1.15 describes defocus as an amplitude reduction of harmonic stimuli: beyond 12 cycles per degree the amplitude is reduced by more than a factor of two.
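The link between the two descriptions can be sketched by Fourier transforming Westheimer's linespread. With a unit-area linespread the transform at zero frequency is one, and near 12 cycles per degree the value falls to roughly one half, consistent with the description above. The grid and units below are illustrative choices.

```python
import numpy as np

# Westheimer's linespread sampled every 0.05 arcmin over +/- 15 arcmin.
x = np.linspace(-15.0, 15.0, 601)       # position in minutes of arc
ls = 0.47 * np.exp(-3.3 * x ** 2) + 0.53 * np.exp(-0.93 * np.abs(x))
ls /= ls.sum()                           # unit area, so MTF(0) = 1

# Sample spacing expressed in degrees gives frequencies in cycles/degree.
d_deg = (x[1] - x[0]) / 60.0
freqs = np.fft.rfftfreq(len(x), d=d_deg)

# ifftshift moves the linespread peak to index 0; for an even-symmetric
# linespread the transform is real, so its magnitude is the MTF.
mtf = np.abs(np.fft.rfft(np.fft.ifftshift(ls)))
```

Because the linespread is nonnegative with unit area, every MTF value is at most one: blurring can only reduce the amplitude of a harmonic, never amplify it.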
Figure 1.16: Snell's law. The solid lines indicate surface normals and the dashed lines indicate the light ray. (a) When a light ray passes from one medium to another, the ray can be refracted so that the angle of incidence ($\phi$) does not equal the angle of refraction ($\phi'$). Instead, the angle of refraction depends on the refractive indices of the two media ($n$ and $n'$), a relationship called Snell's law that is defined in Equation 1.15. (b) A prism causes two refractions of the light ray and can reverse the ray's direction from upward to downward. (c) A lens combines the effect of many prisms in order to converge the rays diverging from a point source.
1.6 Lenses, Diffraction and Aberrations

Lenses and Accommodation

What prevents the optics of our eye from focusing the image perfectly? To answer this question we should consider why a lens is useful in bringing objects to focus at all. When a ray of light is reflected from an object, it travels along a straight line until it reaches a new material boundary. At that point, the ray may be absorbed by the new medium, reflected, or refracted. The latter two possibilities are illustrated in part (a) of Figure 1.16. We call the angle between the incident ray of light and the perpendicular to the surface the angle of incidence. The angle between the reflected ray and the perpendicular to the surface is called the angle of reflection, and it equals the angle of incidence. Of course, reflected light is not useful for image formation at all. The useful rays for imaging must pass from the first medium into the second. As
they pass between the two media, the ray's direction is refracted. The angle between the refracted ray and the perpendicular to the surface is called the angle of refraction. The relationship between the angle of incidence and the angle of refraction was first discovered by a Dutch astronomer and mathematician, Willebrord Snell, in 1621. He observed that when $\phi$ is the angle of incidence and $\phi'$ is the angle of refraction, then
$n \sin\phi = n' \sin\phi' \qquad (1.15)$
The terms $n$ and $n'$ in Equation 1.15 are the refractive indices of the two media. The refractive index of an optical medium is the ratio of the speed of light in a vacuum to the speed of light in the optical medium. The refractive index of glass is about 1.5, for water the refractive index is 1.33, and for air it is nearly 1.0. The refractive index of the human cornea, 1.376, is quite similar to that of water, which is the main content of our eyes.

Now, consider the consequence of applying Snell's law twice in a row as light passes into and then out of a prism, as illustrated in part (b) of Figure 1.16. We can draw the path of the ray as it enters the prism using Snell's law. The symmetry of the prism and the reversibility of the light path make it easy to draw the exit path. Passage through the prism bends the ray's path downward. The prism causes the light to deviate significantly from a straight path; the amount of the deviation depends upon the angle of incidence and the angle between the two sides of the prism.

We can build a lens by smoothly combining many infinitesimally small prisms to form a convex lens, as illustrated in part (c) of Figure 1.16. In constructing such a lens, any deviations from the smooth shape, or imperfections in the material used to build the lens, will cause the individual rays to be brought to focus at slightly different points in the image plane. These small deviations of shape or materials are a source of the imperfections in the image.

Objects at different depths are focused at different distances behind the lens. The lensmaker's equation relates the distance between the source and the lens to the distance between the image and the lens; the relationship depends on the focal length of the lens. Call the distance from the center of the lens to the source $d_s$, the distance to the image $d_i$, and the focal length of the lens $f$. Then the lensmaker's equation is
$\frac{1}{d_s} + \frac{1}{d_i} = \frac{1}{f} \qquad (1.16)$
From this equation, notice that we can measure the focal length of a convex thin lens by using it to image a very distant object. In that case, the term $1/d_s$ is essentially zero, so the image distance is equal to the focal length. When I first moved to California, I spent a lot of time measuring the focal length of the lenses in my laboratory by going
Figure 1.17: Depth of field of the human eye. Image distance is shown as a function of source distance. The bar on the vertical axis shows the distance of the retina from the lens center. A lens power of 60 diopters brings distant objects into focus, but not nearby objects; to bring nearby objects into focus the power of the lens must increase. The depth of field, namely the distance over which objects will continue to be in reasonable focus, can be estimated from the slope of the curve.
outside and imaging the sun on a piece of paper behind the lens; the sun was a convenient source at optical infinity. It had been a less reliable source for me in my previous home.

The optical power of a lens is a measure of how strongly the lens bends the incoming rays. Since a short focal length lens bends the incident ray more than a long focal length lens, the optical power is inversely related to focal length. The optical power is defined as the reciprocal of the focal length measured in meters and is specified in units of diopters. When we view far away objects, the distance from the middle of the cornea and the flexible lens to the retina is 0.017m. Hence, the optical power of the human eye is $1/0.017$, or roughly 60 diopters.
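Snell's law and the lensmaker's equation can be sketched in a few lines of code. The function names and the numerical examples (air-to-water refraction, a 60-diopter lens) are illustrative choices, not values from the text's figures.

```python
import math

def snell_refraction(theta1, n1, n2):
    """Angle of refraction from Snell's law: n1*sin(theta1) = n2*sin(theta2)."""
    return math.asin(n1 * math.sin(theta1) / n2)

def image_distance(source_dist, focal_length):
    """Thin-lens relation 1/d_s + 1/d_i = 1/f, solved for the image distance d_i."""
    return 1.0 / (1.0 / focal_length - 1.0 / source_dist)

# A ray entering water from air bends toward the surface normal.
t2 = snell_refraction(math.radians(30.0), 1.0, 1.33)

# A 60-diopter lens (f = 1/60 m): a very distant source focuses near f;
# a source at 0.5 m focuses slightly farther behind the lens.
d_far = image_distance(1e9, 1.0 / 60.0)
d_near = image_distance(0.5, 1.0 / 60.0)
```

The comparison of `d_far` and `d_near` is the depth-of-field behavior plotted in Figure 1.17: nearby sources come to focus behind the plane where distant sources are sharp.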
From the optical power of the eye (roughly 60 diopters) and the lensmaker's equation, we can calculate the image distance of a source at any distance. For example, the top curve in Figure 1.17 shows the relationship between image distance and source distance for a 60 diopter lens. Sources beyond 1.0m are imaged at essentially the same distance behind the optics. Sources closer than 1.0m are imaged at a longer distance, so that the retinal image is blurred. To bring nearby sources into focus on the retina, muscles attached to the lens change its shape and thus change the power of the lens. The bottom two curves in Figure
Figure 1.18: Pinhole optics. Using ray-tracing, we see that only a small pencil of rays passes through a pinhole. (a) If we widen the pinhole, light from the source spreads across the image, making it blurry. (b) If we narrow the pinhole, only a small amount of light is let in. The image is sharp; the sharpness is limited by diffraction.

1.17 illustrate that sources closer than 1.0m can be focused onto the retina by increasing the power of the lens. The process of adjusting the focal length of the lens is called accommodation. You can see the effect of accommodation by first focusing on your finger placed near your nose and noticing that objects in the distance appear blurred. Then, while leaving your finger in place, focus on the distant objects. You will notice that your finger now appears blurred.
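The accommodation arithmetic can be sketched directly from the lensmaker's equation, assuming the 0.017 m lens-to-retina distance quoted above; the helper name is hypothetical.

```python
# Accommodation arithmetic, assuming the 0.017 m lens-to-retina distance
# quoted in the text; required_power is a hypothetical helper name.
RETINA_DIST = 0.017  # meters

def required_power(source_dist):
    """Total optical power (diopters) to focus a source onto the retina,
    from 1/d_s + 1/d_i = 1/f and power = 1/f."""
    return 1.0 / source_dist + 1.0 / RETINA_DIST

far = required_power(1e9)    # a distant source: about 59 diopters
near = required_power(0.25)  # reading distance: about 4 diopters more
```

The extra power needed for a source at distance $d_s$ is simply $1/d_s$ diopters, which is why accommodative demand is conventionally stated in diopters of the viewing distance.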
Pinhole Optics and Diffraction

The only way to remove lens imperfections completely is to remove the lens. It is possible to focus images without any lens at all by using pinhole optics, as illustrated in Figure 1.18. A pinhole serves as a useful focusing element because only the rays passing within a narrow angle are used to form the image. As the pinhole is made smaller, the angular deviation is reduced. Reducing the size of the pinhole serves to reduce the amount of blur due to the deviation amongst the rays. Another advantage of using pinhole optics is that no matter how distant the source point is from the pinhole, the source is rendered in sharp focus. Since the focusing is due to selecting out a thin pencil of rays, the distance of the point from the pinhole is irrelevant and accommodation is unnecessary.

But the pinhole design has two disadvantages. First, as the pinhole aperture is
Figure 1.19: Diffraction limits the quality of pinhole optics. The three images of a bulb filament were formed using pinholes of decreasing size. (a) When the pinhole is relatively large, the image rays are not properly converged and the image is blurred. (b) Reducing the pinhole improves the focus. (c) Reducing the pinhole further worsens the focus due to diffraction.
reduced, less and less of the light emitted from the source is used to form the image. The reduction of signal has many disadvantages for sensitivity and acuity. A second fundamental limit of the pinhole design is a physical phenomenon. When light passes through a small aperture, or near the edge of an aperture, the rays do not travel in a single straight line. Instead, the light from a single ray is scattered into many directions and produces a blurry image. The dispersion of light rays that pass by an edge or narrow aperture is called diffraction. Diffraction scatters the rays coming from a small source across the retinal image and therefore serves to defocus the image. The effect of diffraction when we form an image using pinhole optics is shown in Figure 1.19.

Diffraction can be explained in two different ways. First, diffraction can be explained by thinking of light as a wave phenomenon. A wave exiting from a small aperture expands in all directions; a pair of coherent waves from adjacent apertures creates an interference pattern. Second, diffraction can be understood in terms of quantum mechanics; indeed, the explanation of diffraction is one of the important achievements of quantum mechanics. Quantum mechanics supposes that there are limits to how well we may know both the position and direction of travel of a photon of light. The more we know about a photon's position, the less we can know about its direction. If we know that a photon has passed through a small aperture, then we know something about the photon's position, and we must pay a price in terms of our uncertainty concerning its direction of travel. As the aperture becomes smaller, our certainty concerning the position of the photon becomes greater; this uncertainty
Figure 1.20: Diffraction pattern caused by a circular aperture. (a) The image of a diffraction pattern measured through a circular aperture. (b) A graph of the cross-sectional intensity of the diffraction pattern, $[2 J_1(\pi x)/(\pi x)]^2$. (After Goodman, 1968).

takes the form of the scattering of the direction of travel of the photons as they pass through the aperture. For very small apertures, for which our position certainty is high, the photon's direction of travel is very broad, producing a very blurry image. There is a close relationship between the uncertainty in the direction of travel and the shape of the aperture (see Figure 1.20). In all cases, however, when the aperture is relatively large, our knowledge of the spatial position of the photons is insignificant and diffraction does not contribute to defocus. As the pupil size decreases, and we know more about the position of the photons, the diffraction pattern becomes broader and spoils the focus.

In the human eye diffraction occurs because light must pass through the circular aperture defined by the pupil. When the ambient light intensity is high, the pupil may become as small as 2 mm in diameter. For a pupil opening this small, light reaches the retina only through the small central region of the cornea and lens. With this small an opening of the pupil, the quality of the cornea and lens is rather good and the main source of image blur is diffraction. At low light intensities, the pupil diameter is as large as 8 mm. When the pupil is open quite wide, the distortion due to cornea and lens imperfections is large compared to the defocus due to diffraction.

One way to evaluate the quality of the optics is to compare the blurring of the eye to the blurring from diffraction alone. The dashed lines in Figure 1.12 plot the blurring expected from diffraction for different pupil widths.
Notice that when the pupil is 2.4 mm, the observed linespread is about equal to the amount expected from diffraction alone; the lens causes no further distortion. As the pupil opens, the observed linespread is worse than the blurring expected from diffraction alone. For these pupil sizes the defocus is due mainly to imperfections in the optics.^7

^7 Helmholtz calculated that this was so long before any precise measurements of the optical quality of the eye were possible. He wrote: "The limit of the visual capacity of the eye as imposed by diffraction, as far as it can be calculated, is attained by the visual acuity of the normal eye with a pupil of the size corresponding to a good illumination." (Helmholtz, 1909, p. 442)
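The circular-aperture diffraction pattern $[2 J_1(\pi x)/(\pi x)]^2$ shown in Figure 1.20 can be computed directly. The power-series implementation of the Bessel function $J_1$ below is a simple sketch, adequate only for moderate arguments; a library routine would normally be used instead.

```python
import math

def bessel_j1(x, terms=30):
    """Bessel function J1 via its power series; fine for moderate |x|."""
    total = 0.0
    for k in range(terms):
        total += ((-1) ** k / (math.factorial(k) * math.factorial(k + 1))
                  * (x / 2.0) ** (2 * k + 1))
    return total

def airy_intensity(x):
    """Normalized circular-aperture diffraction pattern [2*J1(pi*x)/(pi*x)]**2."""
    if x == 0.0:
        return 1.0          # the limit at the center of the pattern
    u = math.pi * x
    return (2.0 * bessel_j1(u) / u) ** 2

peak = airy_intensity(0.0)       # brightest point, center of the pattern
dark = airy_intensity(1.2197)    # near the first dark ring
```

The first dark ring falls where $J_1$ has its first zero (near $\pi x \approx 3.83$), which sets the width of the diffraction-limited blur.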
Figure 1.21: A pointspread function (a) and the sum of two pointspreads (b). The pointspread function is the image created by a source consisting of a small point of light. When the optics are shift-invariant, the image of any stimulus can be predicted from the pointspread function.
The Pointspread Function and Astigmatism

Most images, of course, are not composed of weighted sums of lines. The set of images that can be formed from sums of lines oriented in the same direction are all one-dimensional patterns. To create more complex images, we must either use lines with different orientations or use a different fundamental stimulus: the point. Any two-dimensional image can be described as the sum of a set of points. If the system we are studying is linear and shift-invariant, we can use the response to a point and the principle of superposition to predict the response of the system to any two-dimensional image. The measured response to a point input is called the pointspread function. A pointspread function and the superposition of two nearby pointspreads are illustrated in Figure 1.21.

Since lines can be formed by adding together many different points, we can compute the system's linespread function from the pointspread. In general, we cannot deduce the pointspread function from the linespread because there is no way to add a set of lines, all oriented in the same direction, to form a point. If it is known that the pointspread function is circularly symmetric, however, a unique pointspread function can be deduced from the linespread function. The calculation is described
in the beginning of Goodman (1968) and in Yellott, Wandell and Cornsweet (1981).

Figure 1.22: Astigmatism implies an asymmetric pointspread function. The pointspread shown here is narrow in one direction and wide in another. The spatial resolution of an astigmatic system is better in the narrow direction than in the wide direction.

When the pointspread function is not circularly symmetric, measurements of the linespread function will vary with the orientation of the test line. It may be possible to adjust the accommodation of this type of system so that any single orientation is in good focus, but it will be impossible to bring all orientations into good focus at the same time. For the human eye, astigmatism can usually be modeled by describing the defocus as arising from the contributions of two one-dimensional systems at right angles to one another. The defocus at intermediate angles can be predicted from the defocus of these two systems.
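The pointspread-to-linespread direction is easy to sketch numerically: integrating a circularly symmetric pointspread along the direction of the line yields the linespread. The Gaussian pointspread below is a hypothetical stand-in for measured optics.

```python
import numpy as np

# A hypothetical circularly symmetric (Gaussian) pointspread on a
# 101 x 101 grid; sigma is an arbitrary choice.
y, x = np.mgrid[-10:10:101j, -10:10:101j]
sigma = 2.0
pointspread = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
pointspread /= pointspread.sum()

# Integrating (summing) along the direction of the line collapses the
# two-dimensional pointspread into the one-dimensional linespread.
linespread = pointspread.sum(axis=0)
```

For a Gaussian pointspread the resulting linespread is itself Gaussian with the same width, because the Gaussian is separable; for a general circularly symmetric pointspread the inverse calculation requires the methods cited above.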
Chromatic Aberration

The light incident at the eye is usually a mixture of different wavelengths. When we measure the system response, there is no guarantee that the linespread or pointspread function we measure with different wavelengths will be the same. Indeed, for most biological eyes the pointspread function differs considerably when measured using different wavelengths of light. When the pointspread functions for different wavelengths of light are quite different, the lens is said to exhibit chromatic aberration.

When the incident light is a mixture of many different wavelengths, say white
Figure 1.23: Chromatic aberration of the human eye. (a) The data points are from Wald and Griffin (1947) and Bedford and Wyszecki (1957). The smooth curve plots the formula used by Thibos et al. (1992), $D(\lambda) = p - q/(\lambda - c)$, where $\lambda$ is the wavelength in micrometers, $D$ is the defocus in diopters, and $p$, $q$, and $c$ are fitted constants. This formula implies an in-focus wavelength of 578 nm. (b) The power of a thin lens is the reciprocal of its focal length, which is the image distance for a source at infinity. (After Marimont and Wandell, 1993).
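A sketch of evaluating a formula of this form; the constant values below are taken from Thibos et al.'s (1992) published "chromatic eye" model and should be treated as assumptions here rather than the exact values used for the figure.

```python
# Chromatic-defocus formula of the form used by Thibos et al. (1992):
# D(w) = p - q / (w - c), with w the wavelength in micrometers. The
# constants are from their published model and are assumptions here.
P, Q, C = 1.7312, 0.63346, 0.21410

def chromatic_defocus(wavelength_um):
    """Defocus in diopters relative to the in-focus wavelength."""
    return P - Q / (wavelength_um - C)

in_focus = C + Q / P                 # zero-defocus wavelength, ~0.58 um
blue = chromatic_defocus(0.450)      # short wavelengths: about -1 diopter
```

Short wavelengths come out with roughly a diopter of defocus relative to the in-focus wavelength, which is why the short-wavelength curves in Figure 1.24 carry so little spatial detail.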
Figure 1.24: Two views of the modulation transfer function of a model eye at various wavelengths. The model eye has the same chromatic aberration as the human eye (see Figure 1.23) and a 3.0mm pupil diameter. The eye is in focus at 580nm; the curve at 580nm is diffraction limited. The retinal image has no contrast beyond four cycles per degree at short wavelengths. (From Marimont and Wandell, 1993).
light, then we can see a chromatic fringe at edges. The fringe occurs because the different wavelength components of the white light are focused more or less sharply. Figure 1.23a plots one measure of the chromatic aberration: the lens power, in diopters, needed to bring each wavelength into focus along with a 578nm light. When the various wavelengths pass through such a correcting lens, the optics have the same power as the eye's optics at 578nm. The two sets of measurements agree well with one another and are similar to what would be expected if the eye were simply a bowl of water. The smooth curve through the data is the formula used by Thibos et al. (1992) to predict the data.

An alternative method of representing the axial chromatic aberration of the eye is to plot the modulation transfer function at different wavelengths. The two surface plots in Figure 1.24 show the modulation transfer function at a series of wavelengths. The plots show the same data, but seen from different points of view so that you can see around the hill. The calculation in the figure is based on an eye with a pupil diameter of 3.0mm, the same chromatic aberration as the human eye, and in perfect focus except for diffraction at 580nm. The retinal image contains very poor spatial information at wavelengths that are far from the best plane of focus. By accommodation, the human eye can place any wavelength into good focus, but it is impossible to focus all wavelengths
simultaneously.^8

^8 A possible method of improving the spatial resolution of the eye to different wavelengths of light is to place the different classes of photoreceptors in slightly different image planes. Ahnelt et al. (1987) and Curcio et al. (1991) have observed that the short-wavelength photoreceptors have a slightly different shape and length from the middle- and long-wavelength photoreceptors. In principle, this difference could compensate for the chromatic aberration of the eye. But the difference is very small, and it is unlikely that it plays any significant role in correcting for axial chromatic aberration.
Exercises

1. Matrix calculations.

(a) Consider the matrix $A$. Write the transpose of $A$, the matrix $A^T$.
(b) Compute the result of multiplying $A$ with the first column vector.
(c) Compute the result of multiplying $A$ with the second column vector.
(d) Compute the result of multiplying $A$ with the third column vector.
(e) Compute the result of multiplying $A$ by the matrix whose columns are the vectors from (b)-(d).
(f) How do the results of multiplying by the matrix compare with the results of multiplying by the individual vectors?

2. Answer the following questions about lens specifications.

(a) When your eye doctor prescribes lenses for you, she tells you the visual correction you require in terms of diopters. What does it mean to require a 6-diopter optical correction?
(b) What does visual astigmatism mean?
(c) Do you think it is likely that different people vary greatly in the extent of chromatic aberration in their eyes? Why or why not?
(d) Suppose one individual needs a 6-diopter correction and another individual needs a 3-diopter correction. Give your best guess about the relative linespread functions of the two individuals. Give your best guess about the relative modulation transfer functions of the two individuals.
3. Answer these questions with respect to experimental estimates of the linespread function of the optics.

(a) What instrumentation might Campbell and Gubisch have used to measure the pointspread function of the eye? What additional problems would they have had if they had measured the pointspread? (See papers by Artal et al. (1989) for such measurements.)
(b) Can the linespread function be determined from the pointspread function? If so, how?
(c) Can the pointspread function be determined from a single linespread function? If so, how? (See the appendix in Yellott, Wandell, and Cornsweet, 1980.)
(d) Do you think it is possible that Campbell and Gubisch measured the light reflected precisely from the photoreceptor plane? Why or why not? If not, how should we evaluate the linespread function that we estimate compared to the linespread function in the plane of the photoreceptors?
(e) IJspeert et al. (1993) described a new set of equations to characterize the optical linespread of the eye. These equations are intended to generalize Westheimer's function described in the text. Read their paper and compare their new curves with Westheimer's formula.
Contents

1 The Photoreceptor Mosaic
  1.1 The S Cone Mosaic
  1.2 Visual Interferometry
  1.3 Sampling and Aliasing
  1.4 The L and M Cone Mosaic
  1.5 Summary and Discussion

List of Figures

1.1 Rods and Cones
1.2 Schematic of Rods and Cones
1.3 Cone Spectral Sensitivities
1.4 Photoreceptor Sampling
1.5 Calculating Viewing Angle
1.6 Short-Wavelength Cone Mosaic: Psychophysics
1.7 Short-Wavelength Cone Mosaic: Procion Yellow Stains
1.8 Interference and Double Slits
1.9 Visual Interferometer
1.10 Sinusoidal Interference Pattern
1.11 Aliasing Examples
1.12 Squarewave Aliasing
1.13 Drawings of Aliases
1.14 Choosing Monitor Phosphors
1.15 Homework Problem: Sensor Sample Positions
Chapter 1

The Photoreceptor Mosaic

In Chapter ?? we reviewed Campbell and Gubisch's (1967) measurements of the optical linespread function. Their data are presented in Figure ?? as smooth curves, but the actual measurements must have taken place at a series of finely spaced intervals called sample points. In designing their experiment, Campbell and Gubisch must have considered carefully how to space their sample points, because they wanted to space their measurement samples only finely enough to capture the intensity variations in the measurement plane. Had they positioned their samples too widely, they would have missed significant variations in the data. On the other hand, spacing the sample positions too closely would have made the measurement process wasteful of time and resources.

Just as Campbell and Gubisch sampled their linespread measurements, so too the retinal image is sampled by the nervous system. Since only those portions of the retinal image that stimulate the visual photoreceptors can influence vision, the sample positions are determined by the positions of the photoreceptors. If the photoreceptors are spaced too widely, the image encoding will miss significant variation present in the retinal image. On the other hand, if the photoreceptors are spaced very close to one another compared with the spatial variation that is possible given the inevitable optical blurring, then the image encoding will be redundant, using more neurons than necessary to do the job.

In this chapter we will consider how the spatial arrangement of the photoreceptors, called the photoreceptor mosaic, limits our ability to infer the spatial pattern of light intensity present in the retinal image. We will consider separately the mosaics of the different types of photoreceptors. There are two fundamentally different types of photoreceptors in our eye, the rods and the cones; there are approximately 5 million cones and 100 million rods in each eye.
The distributions of these two types of photoreceptors differ in many ways across the retina. Figure 1.1 shows how the relative densities of cone
Figure 1.1: The distribution of rod and cone photoreceptors across the human retina. (a) The density of the receptors is shown in degrees of visual angle relative to the position of the fovea for the left eye. (b) The cone receptors are concentrated in the fovea. The rod photoreceptors are absent from the fovea and reach their highest density 10 to 20 degrees peripheral to the fovea. No photoreceptors are present in the blindspot.
photoreceptors and rod photoreceptors vary across the retina. The rods initiate vision under low illumination levels, called scotopic light levels, while the cones initiate vision under higher, photopic light levels. The range of intensities in which both rods and cones can initiate vision is called the mesopic range.

At most wavelengths of light, the cones are less sensitive to light than the rods. This sensitivity difference, coupled with the fact that there are no rods in the fovea, explains why we cannot see very dim sources, such as weak starlight, when we fixate our fovea directly on them. These sources are too dim to be visible to the all-cone fovea; a dim source becomes visible only when it is placed in the periphery, where it can be detected by the rods. Rods are very sensitive light detectors: they generate a detectable photocurrent response when they absorb a single photon of light (Hecht et al., 1942; Schwartz, 1978; Baylor et al., 1987).

The region of highest visual acuity in the human retina is the fovea. As Figure 1.1 shows, the fovea contains no rods, but it does contain the highest concentration of cones. There are approximately 50,000 cones in the human fovea. Since there are no photoreceptors at the optic disk, where the ganglion cell axons exit the retina, there is a blindspot in that region of the retina (see Chapter ??).

Figure 1.2 shows schematics of a mammalian rod and a cone photoreceptor. Light imaged by the cornea and lens is shown entering the receptors through the inner segments. The light passes into the outer segment, which contains the light-absorbing
Figure 1.2: Mammalian rod and cone photoreceptors contain the light absorbing pigment that initiates vision. Light enters the photoreceptors through the inner segment and is funneled to the outer segment that contains the photopigment. (After Baylor, 1987)
photopigments. As light passes from the inner to the outer segment of the photoreceptor, it will either be absorbed by one of the photopigment molecules in the outer segment or it will simply continue through the photoreceptor and exit out the other side. Some light imaged by the optics will pass between the photoreceptors. Overall, less than ten percent of the light entering the eye is absorbed by the photoreceptor photopigments (Baylor, 1987).

The rod photoreceptors contain a photopigment called rhodopsin. The rods are small, there are many of them, and they sample the retinal image very finely. Yet, visual acuity under scotopic viewing conditions is very poor compared to visual acuity under photopic conditions. The reason is that the signals from many rods converge onto a single neuron within the retina, so that there is a many-to-one relationship between rod receptors and neurons in the optic tract. The density of the rods and the convergence of their signals onto single neurons improve the sensitivity of rod-initiated vision, but at the cost of spatial resolution: rod-initiated vision does not resolve fine spatial detail.

The foveal cone signals do not converge onto single neurons. Instead, several neurons encode the signal from each cone, so that there is a one-to-many relationship between the foveal cones and optic tract neurons. The dense representation of the foveal cones suggests that the spatial sampling of the cones
must be an important aspect of the visual encoding.

Figure 1.3: Spectral sensitivities of the L, M and S cones in the human eye. The measurements are based on a light source at the cornea, so the wavelength losses due to the cornea, lens and other inert pigments of the eye play a role in determining the sensitivity. (Source: Stockman and MacLeod, 1993).

There are three types of cone photoreceptors within the human retina. Each cone can be classified based on the wavelength sensitivity of the photopigment in its outer segment. Estimates of the spectral sensitivities of the three types of cone photoreceptors are shown in Figure 1.3. These curves are measured from the cornea, so they include light loss due to the cornea, lens and inert materials of the eye. In the next chapter we will study how color vision depends upon the differences in wavelength selectivity of the three types of cones. Throughout this book I will refer to the three types of photoreceptors as the L, M and S cones.

Because light is absorbed after passing through the inner segment, the position of the inner segment determines the spatial sampling position of the photoreceptor. Figure 1.4 shows cross-sections of the human cone photoreceptors at the level of the inner segment in the human fovea (part a) and just outside the fovea (part b). In the fovea, the cross-section shows that the inner segments are very tightly packed and form a regular sampling array. A cross-section just outside the fovea shows that the rod photoreceptors fill the spaces between the cones and disrupt the regular packing arrangement. The scale bar represents 10 µm; the cone photoreceptor inner segments
(The letters L, M and S refer to long-wavelength, middle-wavelength and short-wavelength peak sensitivity.)
in the fovea are only a few micrometers wide, with a minimum center-to-center spacing of about 2.5 µm. Figure 1.4c shows plots of the cone densities from several different human retinae as a function of distance from the foveal center. The cone density varies across individuals.

Figure 1.4: The spatial mosaic of the human cones. A cross-section of the human retina at the level of the inner segments. Cones in the fovea (a) are smaller than cones in the periphery (b). As the separation between cones grows, the rod receptors fill in the spaces. (c) Cone density is plotted as a function of eccentricity for seven human retinae (After Curcio et al., 1990).
Units of Visual Angle

We can convert these cone sizes and separations into degrees of visual angle as follows. The distance from the effective center of the eye's optics to the retina is about 17 mm. We compute the visual angle spanned by one cone, φ, from the trigonometric relationship in Figure 1.5: the tangent of an angle in a right triangle is equal to the ratio of the lengths of the sides opposite and adjacent to the angle. This leads to the following equation:

    tan(φ) = (cone width) / (17 mm),   so   φ = tan⁻¹( (cone width) / (17 mm) ).    (1.1)

The width of a cone in degrees of visual angle, φ, is approximately 0.008 degrees, or roughly one-half minute of visual angle. In the central retina, then, where the photoreceptors are packed most densely, the cone centers are separated by roughly one-half minute of visual angle.
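The conversion in Equation 1.1 is easy to check numerically. The sketch below assumes a cone inner segment width of 2.5 µm, chosen for illustration, together with the 17 mm optical distance from the text:

```python
import math

distance_mm = 17.0       # effective optical center to retina (from the text)
cone_width_mm = 0.0025   # assumed ~2.5 µm foveal cone width (illustrative)

# Equation 1.1: phi = arctan(width / distance)
phi_deg = math.degrees(math.atan(cone_width_mm / distance_mm))
phi_arcmin = 60 * phi_deg

print(f"{phi_deg:.4f} deg = {phi_arcmin:.2f} arcmin")  # roughly one-half minute of arc
```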
Figure 1.5: Calculating viewing angle. By trigonometry, the tangent of the viewing angle, φ, is equal to the ratio of height to distance in the right triangle shown. Therefore, φ is the inverse tangent of that ratio (Equation 1.1).
1.1 The S Cone Mosaic

Behavioral Measurements

Just as the rods and cones have different spatial sampling distributions, so too the three types of cone photoreceptors have different spatial sampling distributions. The sampling distribution of the short-wavelength (S) cones was the first to be measured empirically, and it has been measured with both behavioral and physiological methods. The behavioral experiments were carried out as part of D. Williams' dissertation at the University of California, San Diego. Williams, Hayhoe and MacLeod (1981) took advantage of several features of the short-wavelength photoreceptors. As background to their work, we first describe these features.

The photopigment in the short-wavelength photoreceptors is significantly different from the photopigment in the other two types of photoreceptors. Notice that the wavelength sensitivities of the L and M photopigments are very nearly the same (Figure 1.3). The sensitivity of the S photopigment is significantly higher in the short-wavelength part of the spectrum than the sensitivity of the other two photopigments. As a result, if we present the visual system with a very weak light containing energy only in the short-wavelength portion of the spectrum, the S cones will absorb relatively more quanta than the other two classes. Indeed, the discrepancy in the absorptions is so large that it is reasonable to suppose that when a short-wavelength light is barely visible, at detection threshold, perception is initiated uniquely by a signal that originates in the short-wavelength receptors.
We can give the short-wavelength receptors an even greater sensitivity advantage by presenting a blue test target on a steady yellow background. As we will discuss in later chapters, steady backgrounds suppress visual sensitivity. By using a yellow background, we can suppress the sensitivity of the L and M cones and the rods while sparing the sensitivity of the S cones. This improves the relative advantage of the short-wavelength receptors in detecting the short-wavelength test light.

A second special feature of the S cones is that they are rare in the retina. From other experiments described in Chapter ??, it has been suspected for many years that no cones containing short-wavelength photopigment are present in the central fovea, and that the number of cones containing the short-wavelength photopigment is quite small compared to the other two classes. If the S cones are widely spaced, and if we can isolate them with these choices of test stimulus and background, then we can measure the mosaic of short-wavelength photoreceptors.

During the experiment, the subjects visually fixated on a small mark. When the eye was steadily fixated, the subject pressed a button to initiate a stimulus presentation. The test stimulus was a tiny point of short-wavelength light, presented very briefly (10 ms) at different points in the visual field, under conditions chosen so that detection was likely to be mediated by a signal initiated by the S cones. If light from the short-wavelength test fell upon a region that contained S cones, sensitivity should be relatively high. On the other hand, if that region of the retina contained no S cones, sensitivity should be rather low. Hence, from the spatial pattern of visual sensitivity, Williams, Hayhoe and MacLeod inferred the spacing of the S cones. The sensitivity measurements are shown in Figure 1.6.
First, notice that in the very center of the visual field, in the central fovea, there is a large valley of low sensitivity. In this region there appear to be no short-wavelength cones at all. Second, beginning about half a degree from the center of the visual field, there are small, punctate spatial regions of high sensitivity. We interpret these results by assuming that the peaks correspond to the positions of this observer's S cones. The gaps in between, where the observer has rather low sensitivity, are likely to be patches of L and M cones. Around the central fovea, the typical separation between the inferred S cones is about 8 to 12 minutes of visual angle. Thus, there are roughly five to seven S cones per degree of visual angle.
Biological Measurements

There have been several biological measurements of the short-wavelength cone mosaic, and we can compare these with the behavioral measurements. Marc and
Figure 1.6: Psychophysical estimate of the spatial mosaic of the S cones. The height of the surface represents the observer's threshold sensitivity to a short-wavelength test light presented on a yellow background. The test was presented at a series of locations spanning a grid around the fovea (black dot). The peaks in sensitivity probably correspond to the positions of the S cones. (From Williams, Hayhoe, and MacLeod, 1981).
Sperling (1977) used a stain that is taken up by cones when they are active. They applied this stain to a baboon retina and then stimulated the retina with short-wavelength light, in the hope of staining only the short-wavelength receptors. They found that only a few cones were stained when the stimulus was a short-wavelength light. The typical separation between the stained cones was about 6 minutes of arc. This value is smaller than the separation that Williams et al. observed; the difference may be species-related.

Figure 1.7: Biological estimate of the spatial mosaic of the S cones in the macaque retina. A small fraction of the cones absorb the procion yellow stain; these are shown as the dark spots in this image. These cones, thought to be the S cones, are shown in a cross-section through the inner segment layer of the retina. (From DeMonasterio, Schein and McCrane, 1985)

F. DeMonasterio, S. Schein, and E. McCrane (1981) discovered that when the dye procion yellow is applied to the retina, the dye is absorbed in the outer segments of all the photoreceptors, but it stains only a small subset of the photoreceptors completely. Figure 1.7 shows a group of the stained photoreceptors in cross-section. The indirect arguments identifying these special cones as S cones are rather compelling. But a more certain procedure was developed by C. Curcio and her colleagues. They used a biological marker, developed from knowledge of the genetic code for the S cone photopigment, to label selectively the S cones in the human retina (Curcio et al., 1991). Their measurements agree quantitatively with Williams' psychophysical measurements: the average spacing between the S cones is about 10 minutes of visual angle. Curcio and her colleagues could also confirm some early anatomical observations that the size and shape of the S cones differ slightly from the L and M cones. The S cones have a wider inner
segment, and they appear to be inserted within an orderly sampling arrangement of their own between the sampling mosaics of the other two cone types (Ahnelt, Kolb and Pflug, 1987).
Why are the S cones widely spaced?

The spacing between the S cones is much larger than the spacing between the L and M cones. Why should this be? The large spacing between the S cones is consistent with the strong blurring of the short-wavelength component of the image caused by the axial chromatic aberration of the lens. Recall that axial chromatic aberration blurs the short-wavelength portion of the retinal image, the part the S cones are particularly sensitive to, more than the middle- and long-wavelength portions (Figure ??). In fact, under normal viewing conditions the retinal image of a fine line at 450 nm falls to one half its peak intensity nearly 10 minutes of visual angle away from the location of its peak. At that wavelength, the retinal image contains significant contrast only at spatial frequencies below about 3 cycles per degree of visual angle. This optical defocus forces the wavelength components of the retinal image that the S cones encode to vary smoothly across space. Consequently, the S cones need sample the image only about six times per degree to recover all the spatial variation passed by the cornea and lens.

Interestingly, the spatial defocus of the short-wavelength component of the image also implies that signals initiated by the S cones will vary slowly over time. In natural scenes, temporal variation occurs mainly because of movement of the observer or an object. When a sharp boundary moves across a cone position, the light intensity changes rapidly at that point. But if the boundary is blurred, changing gradually over space, then the light intensity changes more slowly. Since the short-wavelength signal is blurred by the optics, and temporal variation is mainly due to motion, the S cones will generally encode slower temporal variations than the L and M cones. At the very earliest stages of vision, we see that the properties of different components of the visual pathway fit smoothly together.
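The sampling argument above reduces to a two-line calculation. The numbers are the ones in the text (a roughly 3 cycles per degree cutoff for the 450 nm image, and about 10 arcmin S cone spacing):

```python
# Nyquist criterion: to capture variation up to f_max cycles/deg,
# a uniform mosaic needs at least 2 * f_max samples per degree.
blur_cutoff_cpd = 3.0                            # short-wavelength contrast is negligible above this
required_samples_per_deg = 2 * blur_cutoff_cpd   # six samples per degree suffice

s_cone_spacing_arcmin = 10.0                     # typical S cone spacing (text)
actual_samples_per_deg = 60.0 / s_cone_spacing_arcmin

# The measured spacing matches the required sampling rate
assert actual_samples_per_deg >= required_samples_per_deg
```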
The optics set an important limit on visual acuity, and the S cone sampling mosaic can be understood as a consequence of the optical limitations. As we shall see, the L and M cone mosaic densities also make sense in terms of the optical quality of the eye. This explanation of the S cone mosaic flows from our assumption that visual acuity is the main factor governing the photoreceptor mosaic. For the visual streams initiated by the cones, this is a reasonable assumption. There are other important factors, however, that can play a role in the design of a visual pathway. For example, acuity is not the dominant factor in the visual stream initiated by rod vision. In principle the resolution available in the rod encoding is comparable to the acuity
available in the cone responses; but visual acuity using rod-initiated signals is very poor compared to acuity using cone-initiated signals. Hence, we shouldn't think of the rod sampling mosaic in terms of visual acuity. Instead, the high density of the rods and their convergence onto individual neurons suggest that we think of the imperative of rod-initiated vision as improving the signal-to-noise ratio under low light levels. In the rod-initiated signals, the visual system trades visual acuity for an increase in the signal-to-noise ratio.

In the earliest stages of the visual pathways, then, we can see structure, function and design criteria coming together. When we ask why the visual system has a particular property, we need to relate observations from the different disciplines that make up vision science. Questions about anatomy require us to think about the behavior the anatomical structure serves. Similarly, behavior must be explained in terms of algorithms and of the anatomical and physiological responses of the visual pathway. By considering the visual pathways from multiple points of view, we piece together a complete picture of how the system functions.

Figure 1.8: T. Young's double-slit experiment uses a pair of coherent light sources to create an interference pattern of light. The intensity of the resulting image is nearly sinusoidal, and its spatial frequency depends upon the spacing between the two slits.
1.2 Visual Interferometry

In behavioral experiments, we might hope to measure threshold repeatedly at individual L and M cones using small points of light, as we did for the S cones. The pointspread function
Figure 1.9: A visual interferometer creates an interference pattern as in Young's double-slit experiment. In the device shown here the original beam is split into two paths shown as the solid and dashed lines. (a) When the glass cube is at right angles to the light path, the two beams traverse an equal path and are imaged at the same point after exiting the interferometer. (b) When the glass is rotated, the two beams traverse slightly different paths, causing the images of the two coherent beams to be displaced and thus create an interference pattern. (After MacLeod, Williams and Makous, 1992).
distributes light over a region containing about twenty cones, so that even a small point of light may be detected via any cone from a large pool (see Figures ?? and ??). We can, however, use a method introduced by Y. LeGrand in 1935 to defeat the optical blurring. The technique is called visual interferometry, and it is based upon the principle of diffraction.

Thomas Young (1802), the brilliant scientist, physician, and classicist, demonstrated to the Royal Society that when two beams of coherent light generate an image on a surface, such as the retinal surface, the resulting image is an interference pattern. His experiment is often called the double-slit or double-pinhole experiment. Using an ordinary light source, Young passed the light first through a small pinhole and then through a pair of slits, as illustrated in Figure 1.8. In the experiment, the first pinhole serves as the source of light; the double pinholes then pass the light from the common original source. Because they share this common source, the light emitted from the double pinholes is in a coherent phase relationship, and the wavefronts interfere with one another. This interference results in an image that varies nearly sinusoidally in intensity. We can also achieve this narrow pinhole effect by using a laser as the original source.

The key elements of the visual interferometer used by MacLeod et al. (1992) are shown in Figure 1.9. Light from a laser enters the beamsplitter and is divided into one part that continues along a straight path (solid line) and a second part that is reflected
Figure 1.10: An interference pattern. The image was created using a double-slit apparatus. The intensity of the pattern is nearly sinusoidal. (From Jenkins and White, 1976.)
along a path to the right (dashed line). These two beams, originating from a common source, will be the pair of sources that creates the interference pattern on the retina. Light from each beam is reflected from a mirror towards a glass cube. By varying the orientation of the glass cube, the experimenter can vary the paths of the two beams. When the glass cube is at right angles to the light path, as shown in part (a), the beams continue in a straight path along opposite directions and emerge from the beamsplitter at the same position. When the glass cube is rotated, as shown in part (b), the refraction due to the glass cube symmetrically changes the beam paths; the beams emerge from the beamsplitter at slightly different locations and act as a pair of point sources. This configuration creates two coherent beams that act like the two slits in Thomas Young's experiment, creating an interference pattern. The amount of rotation of the glass cube controls the separation between the two beams.

Each beam passes through only a very small section of the cornea and lens. The usual optical blurring mechanisms do not interfere with the image formation, since the lens does not serve to converge the light (see the section on lenses in Chapter ??). Instead, the pattern that is formed depends upon the diffraction due to the restricted spatial region of the light source. We can use diffraction to create retinal images with much higher spatial frequencies than are possible through ordinary optical imaging by the cornea and lens. Figure 1.10 is an image of a diffraction pattern created by a pair of slits. The intensity of the pattern is nearly a sinusoidal function of retinal position. The spatial frequency of the retinal image can be controlled by varying the separation between the two sources: the smaller the separation between the slits, the lower the spatial frequency of the interference pattern.
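The dependence of fringe frequency on source separation follows a standard two-source interference result: beams separated by a distance d produce fringes with angular frequency d/λ cycles per radian. The wavelength and separation below are illustrative values, not from the text:

```python
import math

wavelength_m = 550e-9   # assumed mid-spectrum laser wavelength (illustrative)
separation_m = 1.0e-3   # assumed 1 mm beam separation at the pupil (illustrative)

# Two coherent sources separated by d: fringe frequency = d / lambda cycles/radian
cycles_per_radian = separation_m / wavelength_m
cycles_per_degree = cycles_per_radian * math.pi / 180.0

# Roughly 32 cycles/deg here; doubling the separation doubles the frequency
print(f"{cycles_per_degree:.1f} cycles/deg")
```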
Thus, by rotating the glass cube in the interferometer and changing the separation of the two beams we can control the spatial frequency of the retinal image. Visual interferometry permits us to image fine spatial patterns at much higher
contrast than when we image these patterns using ordinary optical methods. For example, Figure ?? shows that a cycles per degree sinusoid cannot exceed 10 percent contrast when imaged through the optics. Using a visual interferometer, we can present patterns at frequencies considerably higher than cycles per degree at 100 percent contrast. But a challenge remains: the interferometric patterns are not fine lines or points, but rather extended patterns (cosinusoids). Therefore, we cannot use the same logic as Williams et al. and map the receptors by carefully positioning the stimulus. We need to think a little bit more about how to use the cosinusoidal interferometric patterns to infer the structure of the cone mosaic.
1.3 Sampling and Aliasing

In this section we consider how the cone mosaic encodes the high spatial frequency patterns created by visual interferometers. The appearance of these high frequency patterns will permit us to deduce the spatial arrangement of the combined L and M cone mosaics. The key concepts that we must understand to make this deduction are sampling and aliasing. These ideas are illustrated in Figure 1.11.

The most basic observation concerning sampling and aliasing is this: we can measure only that portion of the input signal that falls over the sample positions. Figure 1.11 shows one-dimensional examples of sampling and aliasing. Parts (a) and (b) contain two different cosinusoidal signals (left) and the locations of the sample points. The values of these two cosinusoids at the sample points are shown by the heights of the arrows on the right. Although the two continuous cosinusoids are quite different, they have the same values at the sample positions. Hence, if cones are present only at the sample positions, the cone responses will not distinguish between these two inputs. We say that these two continuous signals are an aliased pair. Aliased pairs of signals are indistinguishable after sampling; hence, sampling degrades our ability to discriminate between sinusoidal signals.

Figure 1.11c shows that sampling degrades our ability to discriminate between signals in general, not just between sinusoids. Whenever two signals agree at the sample points, their sampled representations agree. The basic phenomenon of aliasing is this: signals that differ only between the sample points are indistinguishable after sampling.

The exercises at the end of this chapter include some computer programs that can help you make sampling demonstrations like the one in Figure 1.12.
If you print squarewave patterns and various sampling arrays onto overhead transparencies, using the programs provided, you can overlay the patterns and explore the effects
Figure 1.11: Aliasing results when two signals have the same sampled values but differ in between the sample points. (a,b) The continuous sinusoids on the left have the same values at the sample positions indicated by the black squares. The values of the two functions at the sample positions are shown by the heights of the stylized arrows on the right. (c) Undersampling may cause us to confuse various functions, not just sinusoids. The two curves at the bottom have the same values at the sample points, differing only in between the sample positions.
Figure 1.12: Squarewave aliasing. The squarewave on top is seen accurately through the grid. The squarewave on the bottom is at a higher spatial frequency than the grid sampling. When seen through the grid, the pattern appears at a lower spatial frequency and rotated.
of sampling. Figure 1.12 shows an example of two squarewave patterns seen through a sampling grid. After sampling, the high frequency pattern appears to be a rotated, low frequency signal.

Sampling is a Linear Operation. The sampling transformation takes the retinal image as input and generates a portion of the retinal image as output. Sampling is a linear operation, as the following thought experiment reveals. Suppose we measure the sample values at the cone positions when we present an image, i1; call the intensities at the sample positions s1. Now measure the intensities at the sample positions for a second image, i2; call these sample intensities s2. If we add together the two images, the new image, i1 + i2, contains the sum of the intensities in the original images. The values picked out by sampling will be the sum of the two sample vectors, s1 + s2.

Since sampling is a linear transformation, we can express it as a matrix multiplication. In our simple description, each position in the retinal image either falls within a cone inner segment or not. The sampling matrix contains one row for each sampled value. Each row is all zeros except at the entry corresponding to that row's sampling position, where the value is one.

Aliasing of harmonic functions. For uniform sampling arrays we have already observed that some pairs of sinusoidal stimuli are aliases of one another (part (a) of Figure 1.11). We can analyze precisely which pairs of sinusoids form alias pairs using a little bit of algebra. Suppose that the continuous input signal is cos(2πfx). When we sample the stimulus at regular intervals, the output values are the values of the cosinusoid at those regularly spaced sample points. Suppose that within a single unit of distance there are N sample points, so that we measure the stimulus every 1/N units. Then the sampled values will be cos(2πf(n/N)), for n = 0, 1, 2, and so on. A second cosinusoid at frequency f′ will be an alias of the first if its sample values are equal, that is, if cos(2πf′(n/N)) = cos(2πf(n/N)) for every n.
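Both claims in the preceding paragraphs, that sampling is multiplication by a zero-one matrix and that distinct cosinusoids can agree at every sample point, are easy to check numerically. The sample positions and frequencies below are arbitrary choices for illustration:

```python
import numpy as np

# Sampling as matrix multiplication
image = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])  # a toy retinal image
cone_positions = [1, 4, 6]                                   # hypothetical sample positions
S = np.zeros((len(cone_positions), image.size))
for row, pos in enumerate(cone_positions):
    S[row, pos] = 1.0            # each row: zeros except a one at its sample position
samples = S @ image              # picks out image[1], image[4], image[6]

# Linearity: sampling a sum gives the sum of the samples
image2 = np.linspace(0.0, 1.0, image.size)
assert np.allclose(S @ (image + image2), S @ image + S @ image2)

# Aliasing: cos(2*pi*f*x) and cos(2*pi*(N-f)*x) agree at all N uniform sample points
N, f = 16, 5
n = np.arange(N)
assert np.allclose(np.cos(2 * np.pi * f * n / N),
                   np.cos(2 * np.pi * (N - f) * n / N))
```

Note that the frequencies f = 5 and N − f = 11 lie equally far below and above N/2 = 8, which is exactly the alias relation derived next.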
With a little trigonometry, we can prove that the sample values for any pair of cosinusoids with frequencies $(N/2) - a$ and $(N/2) + a$ will be equal. That is,

$$\cos\left(2\pi \left(\frac{N}{2} - a\right) \frac{n}{N}\right) = \cos\left(2\pi \left(\frac{N}{2} + a\right) \frac{n}{N}\right).$$

(To prove this, use the cosine addition law to expand both sides of the equation. The steps in the verification are left as exercise 5 at the end of the chapter.) The frequency $N/2$ is called the Nyquist frequency of the uniform sampling array; sometimes it is referred to as the folding frequency. Cosinusoidal stimuli whose
22
CHAPTER 1. THE PHOTORECEPTOR MOSAIC
frequencies differ by equal amounts above and below the Nyquist frequency of a uniform sampling array will have identical sample responses.

Experimental Implications. The aliasing calculations suggest an experimental method to measure the spacing of the cones in the eye. If the cone spacing is uniform, then pairs of stimuli separated by equal amounts above and below the Nyquist frequency should appear indistinguishable. Specifically, a signal $\cos(2\pi((N/2)+a)x)$ that is above the Nyquist frequency will appear the same as the signal $\cos(2\pi((N/2)-a)x)$ that is an equal amount below the Nyquist frequency. Thus, as subjects view interferometric patterns of increasing frequency, as we cross the Nyquist frequency the perceived spatial frequency should begin to decrease even though the physical spatial frequency of the diffraction pattern increases.

Yellott (1982) examined the aliasing prediction in a nice graphical way. He made a sampling grid from Polyak's (1957) anatomical estimate of the cone positions: he simply poked small holes in the paper at the cone positions in one of Polyak's anatomical drawings. We can place any image we like, for example patterns of light and dark bars, behind the grid. The bits of the image that we see are only those that would be seen by the visual system. Any pair of images that differ only in the regions between the holes will be an aliased pair. Yellott introduced the method and the proper analysis, but he used Polyak's (1957) data on the outer segment positions rather than on the positions of the inner segments (Miller and Bernard, 1983).

This experiment is relatively straightforward for the S cones. Since these cones are separated by about 10 minutes of visual angle, there are about six S cones per degree of visual angle. Hence, their Nyquist frequency is 3 cycles per degree of visual angle (cpd). It is possible to correct for chromatic aberration and to present spatial patterns at these low frequencies through the lens. Such experiments confirm the basic prediction that we will see aliased patterns (Williams and Collier, 1983).
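The claim that cosinusoids equally far above and below the Nyquist frequency have identical sample values is easy to check numerically. A minimal sketch, with arbitrary illustrative values for the sampling rate and the frequency offset:

```python
import numpy as np

N = 12       # samples per unit distance (illustrative choice)
a = 2.5      # offset from the Nyquist frequency N/2 (illustrative choice)
n = np.arange(N)   # sample indices; samples fall at positions x = n/N

low  = np.cos(2 * np.pi * (N / 2 - a) * n / N)
high = np.cos(2 * np.pi * (N / 2 + a) * n / N)

# The two cosinusoids agree at every sample point: they form an alias pair.
assert np.allclose(low, high)
```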
1.4 The L and M Cone Mosaic

Experiments using a visual interferometer to image a high frequency pattern at high contrast on the retina are a powerful way to analyze the sampling mosaic of the L and M cones. But even before this technical feat was possible, Helmholtz (1896) noticed that extremely fine patterns, viewed without any special apparatus, can appear wavy. He attributed this observation to sampling by the cone mosaic. His perception of a fine pattern and his graphical explanation of the waviness in terms of sampling by the cone mosaic are shown in part (a) of Figure 1.13 (boxed drawings). G. Byram was the first to describe the appearance of high frequency interference gratings (Byram, 1944). His drawings of the appearance of these patterns are shown
Figure 1.13: Drawings of perceived aliasing patterns by several different observers. Helmholtz observed aliasing of fine patterns, which he drew in part H1; he offered an explanation of his observations, in terms of cone sampling, in H2. Byram's (1944) drawings of three interference patterns at 40, 85 and 150 cpd are labeled B1, B2, and B3. Drawings W1, W2 and W3 are by subjects in Williams' laboratory, who drew their impressions of aliasing of an 80 cpd pattern and two patterns at 110 cpd.
in part (b) of the figure. The image on the left shows the appearance of a low frequency diffraction pattern. The apparent spatial frequency of this stimulus is faithful to the stimulus. Byram noted that as the spatial frequency increases towards 60 cpd, the pattern still appears to be a set of fine lines, but they are difficult to see (middle drawing). When the pattern significantly exceeds the Nyquist frequency, it becomes visible again but looks like the low frequency pattern drawn on the right. Further, he reported that the pattern shimmers and is unstable, probably due to the motion of the pattern with respect to the cone mosaic.

Over the last 10 years D. Williams' group has replicated and extended these measurements using an improved visual interferometer. Their fundamental observations are consistent with both Helmholtz's and Byram's reports, but greatly extend and quantify the earlier measurements. The two illustrations on the left of part (c) of Figure 1.13 show drawings of 80 cpd and 110 cpd sinusoidal gratings created on the retina using a visual interferometer. The third figure shows an artist's drawing of a 110 cpd grating. The drawing on the left covers a large portion of the visual field, and the appearance of the pattern varies across the visual field. For example, at 80 cpd the observer sees high contrast stripes at some positions, while the field appears uniform in other parts of the field. The appearance varies, but the stimulus itself is quite uniform. The variation in appearance is due to changes in the sampling density of the cone mosaic. Cone sampling density is lower in the periphery than in the central visual field, so aliasing begins at lower spatial frequencies in the periphery than in the central visual field. If we present a stimulus at a high enough spatial frequency, we observe aliasing in both the central and peripheral visual field, as the drawings of the 110 cpd patterns in Figure 1.13 show.
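Byram's and Williams' observations follow the standard folding rule of sampling theory: a stimulus frequency above the Nyquist limit reappears, after sampling, at the mirrored frequency below it. A sketch of that rule (the 120 samples per degree used in the example reflects the foveal density cited later in this chapter; the function itself is generic):

```python
def apparent_frequency(f, samples_per_degree):
    """Fold a stimulus frequency (cpd) into the range [0, Nyquist]."""
    nyquist = samples_per_degree / 2.0
    f = f % samples_per_degree   # sampling at rate N cannot distinguish f from f + N
    return f if f <= nyquist else samples_per_degree - f

# With roughly 120 foveal samples per degree (Nyquist = 60 cpd):
print(apparent_frequency(40.0, 120.0))    # below Nyquist: seen veridically at 40 cpd
print(apparent_frequency(80.0, 120.0))    # aliases down to 40 cpd
print(apparent_frequency(110.0, 120.0))   # aliases down to a coarse 10 cpd pattern
```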
There are two extensions of these ideas on aliasing that you should consider. First, the cone packing in the fovea occurs in two dimensions, of course, so we must ask what the appearance of the aliasing will be at different orientations of the sinusoidal stimuli. As the images in Figure 1.12 show, the orientation of the low frequency alias does not correspond with the orientation of the input. By trying the demonstration yourself and rotating the sampling grid, you will see that the direction of motion of the alias does not correspond with the motion of the input stimulus². These kinds of aliasing confusions have also been reported using visual interferometry (Coletta and Williams, 1987).

Second, our analysis of foveal sampling has been based on some rather strict assumptions concerning the cone mosaic. We have assumed that the cones are all of the same type, that their spacing is perfectly uniform, and that they have very narrow sampling apertures. The general model presented in this chapter can be adapted if any one of these assumptions fails to hold. As an exercise, consider how a new analysis with altered assumptions would change the properties of the sampling matrix.

²Use the PostScript program in the appendix section to print out a grid and a fine pattern and try this experiment.
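In two dimensions the folding applies independently to each component of the grating's frequency vector, which is why the low frequency alias can appear at a different orientation (and move in a different direction) than the input. A sketch, assuming a square sampling lattice with an illustrative rate of 120 samples per degree in each direction:

```python
import math

def fold(f, rate):
    """Fold one frequency component into the range [-rate/2, rate/2]."""
    return (f + rate / 2) % rate - rate / 2

def alias_2d(fx, fy, rate):
    """Aliased 2-D frequency vector and its orientation in degrees."""
    ax, ay = fold(fx, rate), fold(fy, rate)
    return (ax, ay), math.degrees(math.atan2(ay, ax))

# A fine grating tilted slightly away from vertical...
(ax, ay), theta = alias_2d(110.0, 10.0, 120.0)
# ...aliases to a coarse grating at a very different orientation.
print((ax, ay), theta)
```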
Visual Interferometry: Measurements of Human Optics

There is one last idea you should take away from this chapter: using interferometry, we can estimate the quality of the optics of the eye. Suppose we ask an observer to set the contrast of a sinusoidal grating, imaged using normal incoherent light. The observer's sensitivity to the target will depend on the contrast reduction at the optics and on the observer's neural sensitivity to the target. Now, suppose that we create the same sinusoidal pattern using an interferometer. The interferometric stimulus bypasses the contrast reduction due to the optics. In this second experiment, then, the observer's sensitivity is limited only by the observer's neural sensitivity. Hence, the sensitivity difference between the two experiments is an estimate of the loss due to the optics.

The visual interferometric method of measuring the quality of the optics has been used on several occasions. While the interferometric estimates are similar to estimates based on reflections from the eye, they do differ somewhat. The difference is shown in Figure ??, which includes Westheimer's estimate of the modulation transfer function, created by fitting data from reflections, along with data and a modulation transfer function obtained from interferometric measurements. The current consensus is that the optical modulation transfer function is somewhat closer to the visual interferometric measurements than to the reflection measurements. The reasons for the differences are discussed in several papers (e.g., Campbell and Green, 1965; Williams, 1985; Williams et al., 1995).
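The logic of the comparison reduces to a ratio at each spatial frequency: sensitivity with ordinary incoherent viewing reflects optical plus neural factors, while interferometric sensitivity reflects neural factors alone, so dividing the two isolates the optical loss. The numbers below are invented for illustration; only the ratio operation reflects the method.

```python
# Hypothetical contrast sensitivities at a few spatial frequencies (cpd).
frequencies     = [5, 10, 20, 30]
incoherent      = [120.0, 80.0, 20.0, 4.0]     # limited by optics and neural factors
interferometric = [130.0, 100.0, 40.0, 10.0]   # optics bypassed; neural factors only

# Estimated optical modulation transfer at each frequency.
mtf = [inc / ifm for inc, ifm in zip(incoherent, interferometric)]
print(dict(zip(frequencies, mtf)))
```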
1.5 Summary and Discussion

The S cones are present at a much lower sampling density, and they are absent in the very center of the fovea. Because they are sparse, we can measure the S cone positions behaviorally using small points of light. The behavioral estimates of the S cones are also consistent with anatomical estimates of the S cone spacing. The wide spacing of the S cones can be understood in terms of the chromatic aberration of the eye. The eye is ordinarily in focus for the middle-wavelength part of the visual spectrum, and there is very little contrast beyond 2-3 cycles per degree in the short-wavelength part of the spectrum. The sparse S cone spacing is matched to the poor quality of the retinal image in the short-wavelength portion of the spectrum.

The L and M cones are tightly packed in the central fovea, forming a triangular grid
that efficiently samples the retinal image. Ordinarily, optical defocus protects us from aliasing in the fovea. Once aliasing between two signals occurs, the confusion cannot be undone: the two signals have created precisely the same spatial pattern of photopigment absorptions, so no subsequent processing, through cone to cone interactions or later neural interpolation, can undo the confusion. Optical defocus prevents high spatial frequencies that might alias from being imaged on the retina. By creating stimuli with a visual interferometer, we bypass the optical defocus and image patterns at very high spatial frequencies on the cone mosaic. From the aliasing properties of these patterns, we can deduce some of the properties of the L and M cone mosaics. The aliasing demonstrations show that the foveal sampling grid is regular and contains approximately 120 cones per degree of visual angle. These measurements, in the living human eye, are consistent with the anatomical measurements of the human eye reported by Curcio and her colleagues (Curcio et al., 1991). The precise arrangement of L and M cones within the human retina is unknown, though data on this point should arrive shortly (e.g., Bowmaker and Mollon, 1993). Current behavioral estimates of the relative number of L and M cones suggest that there are about twice as many L cones as M cones (Cicerone and Nerger, 1989).

The cone sampling grid becomes more coarse and irregular outside the fovea, where rods and other cells enter the spaces between the cones. In these portions of the retina, high frequency patterns presented through interferometry no longer appear as regular low frequency patterns. Rather, because of the disarray in the cone spacing, the high frequency patterns appear to be mottled noise. In the periphery, the cone spacing falls off rapidly enough that it should be possible to observe aliasing without the use of an interferometer (Yellott, 1982).
In analyzing photoreceptor sampling, we have ignored eye movements. In principle, the variation in receptor intensities during these small eye movements provides information that could permit us to discriminate between the members of an alias pair. (You can check this effect by studying the images you observe when you experiment with the sampling grids.) The effects of eye movements are often minimized in experiments by flashing the targets briefly. But even when one examines the interferometric pattern for substantial amounts of time, the aliasing persists. The information available from small eye movements could be very useful; but the analysis assuming a static eye offers a good account of current empirical measurements. This suggests that the nervous system does not integrate information across minute eye movements to improve visual resolution (Packer and Williams, 1992).
Figure 1.14: Choosing monitor phosphors. The panels plot relative power as a function of wavelength (400-700 nm) for two candidate blue phosphors, B1 and B2.
Exercises

1. Answer the following questions related to image properties on the retina.

(a) Use a diagram to explain why the retinal image does not change size when the pupil changes size.

(b) Compute the visual angle swept out by a building that is 200 meters tall seen from a distance of 400 meters.

(c) Suppose a lens has a focal length of 100 mm. Where will the image plane of a line one meter from the center of the lens be? Suppose the line is 5 mm high. Using a picture, show the size of the image.

(d) Use the lensmaker's equation (from Chapter ??) to calculate the actual height on the retina.

(e) Good quality printers generate output with 600 dots per inch. How many dots is that per degree of visual angle? (Assume that the usual reading distance is 12 inches.)

(f) Good quality monitors have approximately 1000 pixels on a single line. How many pixels is that per degree of visual angle? (Assume that the usual monitor distance is 0.4 meters and the width of a line is 0.2 meters.)

(g) Some monitors can only turn individual pixels on or off. It may be fair to compare such monitors with the printed page, since most black and white printers can only place a dot or not place one at each location. But it is not fair to compare printer output with monitors capable of generating different gray scale levels. Explain how gray scale levels can improve the accuracy of reproduction without increasing the number of pixels. Justify your answer using a matrix-tableau argument.
2. A manufacturer is choosing between two different blue phosphors in a display (B1 or B2). The relative energy at different wavelengths of the two phosphors is shown in Figure 1.14. Ordinarily, users will be in focus for the red and green phosphors (not shown in the graph) around 580 nm.

(a) Based on chromatic aberration, which of the two blue phosphors will yield a sharper retinal image? Why?

(b) If the peak phosphor values are 400 nm and 450 nm, what will be the highest spatial frequency imaged on the retina by each of the two phosphors? (Use the curves in Figure ??.)

(c) Given the highest frequency imaged at 450 nm, what is the Nyquist sampling rate required to estimate the blue phosphor image? What is the Nyquist sampling rate for a 400 nm light source?

(d) The eye's optics image light at wavelengths above 500 nm much better than wavelengths below that level. Using the curves in Figure 1.3, explain whether you think the S cones will have a problem due to aliasing of those longer wavelengths.

(e) (Challenge.) Suppose the eye is always in focus for 580 nm light. The quality of the image created by the blue phosphor will always be quite poor. Describe how you could design a new layout for the blue phosphor mosaic on the screen to take advantage of the poor short-wavelength resolution of the eye. Remember, you only need to match images after optical defocus.

3. Reason from physiology to behavior and back to answer the following questions.

(a) Based purely on the physiological evidence from procion yellow stains, is there any reason to believe that the cones in Figure 1.7 are the S cones?

(b) What evidence do we have that the measurements of Williams et al. are due to the positions of the S cones rather than to the spacing of neural units in the visual pathways that are sensitive to short-wavelength light?
4. Give a drawing or an explanation for each of the following questions on aliasing.

(a) Draw an example of aliasing for a set of sampling points that are evenly spaced, but do not use a sinusoidal input pattern.

(b) Consider the sensor sample positions in Figure 1.15, with the positions unevenly spaced, as shown. Draw the response of this system to a constant valued input signal.
Figure 1.15: Sample positions of a set of sensors, plotted by spatial position.

(c) Now, draw a picture of a stimulus that is non-uniform and that yields the same response as in the previous question.

(d) What rule do you use to make sure the stimuli yield equivalent responses?

(e) Suppose that we put a lens that strongly defocuses the stimuli prior to their arrival at the sensor positions. This defocus means that it will be impossible to generate patterns that vary rapidly across space. If this blur is introduced into the optical path, will you be able to deliver your stimulus to the sensor array? Explain.

(f) Suppose that somebody asks you to invest in a company. The main product is a convolution operation that is applied to the output of a discrete digital sensor array built into a still camera. The purpose of the filter is to eliminate aliasing due to the sensors' spatial sampling. How much would you be willing to invest in the company?

5. Perform the following aliasing calculations.

(a) In this chapter I asserted that $\cos(2\pi((N/2)-a)n/N) = \cos(2\pi((N/2)+a)n/N)$. Multiply out the arguments of the functions and write them both in the form $\cos(\pi n \mp 2\pi a n/N)$.

(b) Use the trigonometric identity $\cos(\alpha \pm \beta) = \cos\alpha\cos\beta \mp \sin\alpha\sin\beta$ to expand the two functions.

(c) What is the value of $\cos(\pi n)$? What is the value of $\sin(\pi n)$? Use these values to obtain the final equality.

(d) Suppose that we represent a signal using a vector with ten entries. Suppose the signal is sampled at five locations, and we describe the sampling operation using a sampling matrix consisting of zeros and ones. How many rows and columns would the sampling matrix have?

(e) Write out the sampling matrix for a one-dimensional sampling pattern whose sample positions are at 1, 3, 5, 7, 9.

(f) Write out the sampling matrix for a non-uniform, one-dimensional pattern in which the sample positions are spaced at locations 1, 2, 4, and 8.

6. Answer each of the following questions about the relationship between the sampling mosaic and the optics of the eye.
(a) From time to time, some investigators have thought that the long-wavelength photopigment peak was near 620 nm, not 580 nm. Using Figure ??, discuss what implication such a peak wavelength would have for the Nyquist sampling rate required of these receptors.

(b) In fact, as you can see from Figure 1.3, the M and L cones both have peak sensitivities in the range near 550 nm to 580 nm. What is required of their spacing in order to accurately capture the retinal image?

(c) We have been assuming that the sensors in our array are equally sensitive to the incoming signal. Suppose that we have a sensor array that consists of alternating S and L cones. Draw the response of this array to a uniform field consisting of 450 nm light. Now, draw the intensity pattern that would have the same effect when the light is 650 nm.

7. Here are two PostScript programs, written by Arturo Puente, to create squarewave patterns and sampling patterns. Use the programs to print out the grids and patterns, and then copy the printouts onto an overhead transparency. View the patterns through the grids to see the effects of aliasing.

%!PS-Adobe-1.0
% Description:
% Parameters:
/widthx1 10 def
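Only the opening lines of the PostScript listings survive in this copy. As a stand-in (not the original programs, just an illustrative sketch), comparable squarewave and sampling-grid patterns can be written out as plain PBM images and printed:

```python
def write_pbm(path, bits):
    """Write a binary image (rows of 0/1, where 1 = black) as a plain PBM file."""
    h, w = len(bits), len(bits[0])
    with open(path, "w") as f:
        f.write(f"P1\n{w} {h}\n")
        for row in bits:
            f.write(" ".join(str(b) for b in row) + "\n")

def squarewave(w, h, period):
    """Vertical black/white bars with the given period in pixels."""
    return [[(x // (period // 2)) % 2 for x in range(w)] for _ in range(h)]

def sampling_grid(w, h, spacing, hole=2):
    """An opaque sheet with small square holes punched at regular spacing."""
    return [[0 if (x % spacing < hole and y % spacing < hole) else 1
             for x in range(w)] for y in range(h)]

write_pbm("pattern.pbm", squarewave(200, 200, 10))
write_pbm("grid.pbm", sampling_grid(200, 200, 12))
```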
C W Db c b %c W
D
npoGqsrtqsru
Ff")(*$+-,'.3/213$Bv]&w9(*_ BBABBBBA
x
D
E13$G;69S]$'&5j,'.k9&N$U9j4lyBBABBBBA
z
X
{|_LZM7M/5M/ ,l9&N$U9j4l
BBABBBBA
w
c
!#IK7M;69&k}~,'Q@([_i9_3I$A(6,h_
BBABBBBA
W
z
&958/3,'&QR9;*(*%9/2(6,'_V}~9/9 ABBBBA
XD
xS
BBABBBBA
XC
z`$7M&9;0^](856Z:9&([/>(*$U5(*_`Ig9/BABBBBA
X%W
(*_k,'IK7M;69&k`$7M&,'_i59_i^H/213$BP:,=,';U~S]Z:,h/>1L$]58(65
BBBA
X%z
Db,h_i,'IK7M;89&idZ:9/2(69;UY:$IK$'ZM/>(["%$ \](*$;8^5:,'.L(6,h_(*_~")(6587:9;UIg,'& />$'?3 E13$'&N$ 1i9")$Av]$'$_OQ9_3S¬9^U"9_3I$]5M(*_,'7M&i7M_k^U$&58/9_k^U(*_i,'.3"%(8567:9;UI,'&/2$?,h"%$'&k/213$A;8956/ /2+~$_L/>S]>M")$S]$]9&5")$'_~/ ,U^9SK®%,'7M&i")(*$+-,'.3")(6587:9;]Ig,'& />$'?#(65MIK1k9_i(*_k&w9ZM(6^U;*S]¯_3$'+ &J$]587M;*/5/>1k9/3IK1k9_i$l,'7M&G,'")$&9;[;U")(*$+-5°,'Q$/2(*Q@$U5j58$$'Q±/ ,9&&(*")$ +~$$'²];*Sg{|_`/>1L$ v]$]([_3_3([_i,'.0/213(85IK1k9ZM/>$'&>®{3+~(*;*;=&N$'"%([$+³+`1i9/%(65I,'Q@QR,'_3;*S9IKI$ZM/2$]^lIg,'_LIK$&_L(*_i ")(6587:9;UIg,'&/2$?LE0,h+9&^5/>1L$A$'_i^U®{3+`(*;*;U([_3/2&K,U^U7MIK$5°,'Q$,'.L/>1L$Av]&,U9^U$&iIK;89(*Q5M/>1k9/L1k9")$ v]$$'_`QR9^U$9v,'7M/%/>1L$B&J$;69/2(6,'_k561L(*Zv]$/2+~$'$_~")(6587:9;UIg,h&/>$'?9_k^ZM$'&NI$ZM/2(6,'_i´$G+~(*;*; / 9²]$A7MZ/213$A(85°587M$,'.3Ig,'_3_L$IK/2(*_kI,'&/2$?g®I,'Q@ZM7M/ 9/>(8,'_3®L9_i^58$$'(*_i9]9(*_`(*_O/>1L$A;69/2$& IK1k9ZM/>$'&5°
Ff;69/>$'&9;U")(*$'+µ,'.0/213$ v]&9(*_`(6558²]$/2IK1L$]^(*_~\](67M&J$T E13$ 137MQR9_`Ig,h&/>$'?H(65:9jDQ@Q /213(*IK²5813$'$/¶,'.0_3$'7M&K,h_i5M+~(*/219587M&.¦9IK$l9&N$U9,'.icbb5°7:9&J$BIK$_L/>([Q@$/2$&w5°YA9/>1L$&k/21i9_ ;*(*_L(*_iA/213$l56²]7M;[;*®L95/213$ &N$'/>(*_k9j;*(*_L$]5M/>1L$A$S]$'®/>1L$A")(6587:9;UIg,'& />$'?H(65M;*(*²]$l9jIK& 7MQZM;*$]^5813$$'/ 58/>7M.·.$U^T([_3/ ,A/>1L$58²]7M;*;69IK1~;6,hIg9/>(8,'_V+~1L$&J$A/213$ .¦,';6^U$U^I,'&/2$?H.,'&QR5j9&(8^$A"%(856(*v];[$ .&,'Q-/213$ $?g/2$&(6,h&i(65Ig9;[;*$]^9¸¹%º»¼®+~1L(*;*$A$U9IK1@561k9;*;6,'+m.>7M&&,'+m/21i9/i58$Z:9&w9/>$U59jZ:9(*&,'. S]&(](65MIg9;*;[$]^9:¼»]½¿¾»¼E1L$ Z:9/2/>$'&_,'.k567M;[IK(9_i^S]&(^U(*.·.$'&kIg,'_k56(8^]$'&9v];*S9IK&,U55 58ZM$IK(*$U5°/213$ 137MQ9_`v]&9(*_`Ig,'_3/ 9(*_k5Q,h&N$l567M;*I(]/>1k9_,'/213$'&iZM&(*Q9/2$Av]&w9(*_i5E1L$&J$9&J$ 9;65,l58(6_3(*MI9_3/^U(*.·.$'&N$'_3IK$U5v]$'/>+`$$'_~137MQR9_~v]&9([_i58®%9;*/>1k,'7:1V/213$ v]&K,U9^¬,'7M/2;*(*_L$]5:,'. /213$587M;*Ig9;M9_i^S]&9;]Z:9/2/>$'&_i59&N$ 7:567:9;*;[SZM&N$U56$'_3/i9_k^H&N$'Ig,U_L(*%9v];*$T9IK&,U55j^U(*.·.$'&N$'_3/ ZM$],hZM;*$]E13$lS]&(9_i^587M;*IK(9&J$AI,'_3")$_L(*$_L/3;69_k^UQ9&²58®v]7M/3/213$'SZM&K,'v9v];[SO1k9")$A_i, .7M_LIK/2(6,'_i9;58(6_L(*MIg9_3I$] E13$ Q,U58/3")(658(*v];*$l567M;*I(9&N$ 7:56$U^95QR9&²]$&5/,AZ:9& />(*/2(6,'_~/>1L$A137MQR9_~v]&9([_~(*_L/,A.,'7M& ½ÁÀ¦à ¼hE13$A;8,'v]$]5:9&N$ Ig9;*;*$U^iĺÅÀgÆ]ÇÉÈg½ËÊÌÈgºÍËÃÇÉÈg½ËÊ)ÇÉÃwÎGÌÀgº>Ƚ=9_k^lÀ¾¾ÍËÌ%Í[ÇÉȽ/ ,^U$]58IK&(*v]$ />1L$(*& C
x
nkÏ~Ð#ÑAr't¨ÓÒÔ¬r'Ï~tnpoG¨r'£nkТ#¨:tÑM¨tuetqsrÐVr'£oGq Þ×Ü=ßÚ6Ö]Û0Û[ÜhÝiÙ
Õ3Ö]×°Ø[ÙÚ6Ö]Û0Û[ÜÝÙ ãää Ø[â0Ø[Ú6Ö]Û0Û[ÜhÝiÙ Õ0×Ø[áÖ]×8åOæ)Ø[çèiÖ]Û ä Ü×ÚJÙ°é
àÙáâ0Ü×Ö]Û0Û[ÜhÝiÙ \](67M&J$O LêUëÃG¾|ÀgºÇìÃí
(65j581i,'+`_(*_V;69/2$&w9;%")(*$'+T 958$]^¬,'_(*/ 5î,'")$&9;[;561k9ZM$®39_i9/ ,'Q@(658/ 5 ^U(*")(6^U$/>1L$
1L7MQ9_¬v]&9(*_(*_L/,.¦,'7M&&N$U(6,'_i5iIg9;*;[$]^`/213$`,'IKI(*ZM(*/ 9;*®]/>$'Q@Z:,'&w9;*®]Z:9&(*$'/9;¶9_i^ .&,'_L/9;U;8,'v]$]5 j958$]^~,'_`([/50(*_L/>$'&_i9;Ig,'_3_L$IK/2(6,'_k56®/213$GIg,'& />$'?
I9_Ov]$p.>7M&/>1L$&p^U(*")(6^U$U^#(*_3/ , Q9_LS9_k9/,hQ@(*Ig9;[;*S^U(658/2(*_3IK/:9&J$]95]4(6587:9;](*_LZM7M/0/,B/213$v]&9([_«9&&(*")$]5¶([_VZM&(*QR9&S~")(6587:9; Ig,'& />$'?e®L9&J$]9j4H®+~1L(*IK1~(85;6,hIg9/>$U^l(*_~/213$l,'IKIK(*ZM([/9;];6,hv]$] &J$;69/2(*")$AZ:,U58(*/2(6,'_k5îï58$$ \](67M&N$ ð9IK1~;8,'v]$AIg,h_3/ 9(*_i5Q9_3S¬^U(658/2(*_3IK/Lv]&9(*_~Ⱥ·Ãȼ¦®/21i9/ (65MIg,'_L/>(67:,h7:5î&K,h7MZ:5î,'.3I,'&/2(*Ig9;)_3$'7M&K,'_k5/21i9/k9ZMZM$]9&/,A.>7M_3IK/2(6,'_V([_9_~([_3/2$&&J$;69/2$]^ Q9_L_3$'&FIg,'&/2(*Ig9;9&J$]9j(65M(6^U$_L/>([M$]^l(*_58$")$'&9;U+9S58®/21i,'7:1VZM$'&N1k9Z:5/213$ Q,U58/ 58(6_3(*MI9_3/3(85v]ST(*/ 59_k9/,hQ@(*Ig9;]I,'_3_L$IK/2(6,'_k5+~([/>1,h/>1L$&iZ:9&/ 5,h.0/>1L$Av]&9([_i9IK1~v]&9([_ 9&J$]9jQ9²]$U59^U(658/2(*_3IK/2(*")$BZ:9/2/>$'&_,h.9_i9/ ,'Q@([Ig9;UIg,'_L_3$'IK/>(8,'_i5+`(*/21,'/213$'&iv]&9(*_9&J$]95 E13$ (*_3ZM7M/ 59& &(*")(*_iB/ ,,'_3$l9&N$U9jIg,'Q$A.&,'Q,h_3;*S9j.$'+-,h/>1L$&iZM;69IK$U5([_~/213$Av]&w9(*_3®%9_k^ /213$,h7M/>ZM7M/ 5$Q$&(*_iA.>&K,hQ-/>1k9/i9&J$]99&J$58$_L/L/ ,956ZM$'IK(*MI58$/¶,'.^U$U56/2(*_i9/2(6,'_9&N$U95° {|_O/>1L$BZM&(*QR9/>$'®/>1L$&J$]9/LZ:9&/k,h.3/213$A")(6587:9;58(6_i9;U.>&K,hQ-/>1L$A&J$/2(*_i99_k^H/>1L$A;69/2$&9; $_L(*IK7M;69/2$A_L7MIK;*$'7:5j9&&(["%$U59/¶958(*_i;*$l9&N$U9j+~(*/213([_V/>1L$,'IIK(*ZM(*/ 9;];6,'v]$l,'.3/213$ Ig,'&/2$? 
Ig9;*;[$]^HÈgºÅÃÈñGòK®),'&kÌ%ºÍ¿ÎGÈgº>¹lóÍÁ¼»È½=¾|ÀgºÇìÃíLE13(65M(65:9j;69&K$ Ig,'&/2(*Ig9;9&J$]9®I,'Q@ZM&(856(*_k &,'7:13;*Sôõ÷öløRôwùú3_L$7M&,'_i58®gQ9_LSQ,'&J$A/21i9_V/213$~ôwùû3_L$7M&,'_i5M(*_`/>1L$A;69/2$&9;M$_3([IK7M;69/2$ _37MI;*$7:5FH&J$]9j4HIg9_~v]$ (6^U$_L/>(*M$U^lv]S¬9jZM&K,'Q(*_3$'_3/k56/2&(69/2(6,'_VQR9^U$A7MZ,'.i9^]$'_i58$ Ig,';[;*$IK/2(6,'_,'.0Q@S]$';*(*_k9/>$U^¬9?L,'_k5+`(*/213(*_,h_3$,h.3/213$A;89S]$&5,'.3")(6587:9;]Ig,'&/2$?LE13$58/2&(69/2(6,'_ (65MIg,'$'?e/2$_k56(["%$A+~(*/219&J$]94H9_i^9ZMZM$]9&w5:95:9j+~1L(*/2$Av9_i^l/ ,A/>1L$A_k9²]$]^H$S]$ü¦ $I97:56$ ,'.L(*/5ZM&K,hQ@(*_L$_3I$®(*QZ:,'&/ 9_3/9_k9/,'Q(*Ig9;U;8,'Ig9/2(6,'_9_k^;89&K$l56(*$lïzB57:9&N$ IK$'_3/2(*Q@$'/>$'&5°ðì®L9&N$U9j4l1k95v]$'$_`/>1L$587MvýÅ$I/i,'.L(*_3/2$_k56$l56/27:^USgg´$G+~(*;[;Uv]$](*_`/>1L(65 IK1k9ZM/>$'&k+~(*/219j&J$")(*$'+µ,'.L/>1L$9_i9/ ,'Q(*Ig9;9_i^H$';*$I/>&,'ZM13S58(6,';8,U(*Ig9;%.$U9/>7M&J$]5,'.i9&J$]9j4H þ*ÿ "!# $%'&"!()* + ,()+()%'$-() () %'./+()./010)/2 354 617358:9&,'_i")(6587:9;U(*_3ZM7M/ E13$9_k9/,hQ@SK®g$;[$IK/2&K,hZM13S58(6,';6,US9_i^HIg,hQ@ZM7M/ 9/>(6,h_i9; ZM7M&Z:,U58$,h.3/213$]58$9&J$]959&J$A_k,'+³7M_k^]$'&9IK/2(*")$58/27:^]S9_k^H+~(*;*;v]$9_O(*Q@Z:,h&/9_L/3/ ,'ZM(*Iî.,'& 58/>7:^UST.¦,'&iQR9_3SS]$U9&5/ ,AIg,'Q$]´$ +`(*;*;U&J$")(*$'+µ5,'Q@$,'.3/213$ ZM&N$';*(*Q@([_i9&ST$?gZM$'&(*Q@$'_3/ 5 /21i9/31k9")$ v]$'$_~ZM$'&.¦,h&Q@$U^([_~/213$]58$A")(6587:9;9&J$]95:9/3/213$ $_i^,h.0/>1L(65MIK1i9ZM/2$& {|_~;69/2$& IK1k9ZM/>$'&5Ig,h_3IK$'&_3(*_kBQ,'/2(6,'_9_k^HIg,';6,h&>®g+~$ +~(*;[;U&N$'/>7M& _V/,AI,'_i58(6^U$&k/213$A.7M_3I/>(6,h_i9;]&,';*$ ,'.L/>1L$]58$B")(6567:9;M9&N$U9595M+~$';*;+ï R$'²](*®%%Wz®)b¯\]$';*;*$QR9_9_k^H49_~556$'_3®%%ð> ,U58/i,h.0+~1k9/L+`$A²]_i,h+µ9v,'7M/LIg,'&/2(*Ig9;)")(6587:9;9&N$U95I,'Q@$U5.>&K,'Q $?gZM$&([Q@$_L/9;M56/27:^U(*$]5 ,'.LIg9/i9_k^HQ,'_L²]$SgE13$&J$9&J$58(6_L(*MIg9_3/k^U(*.·.$&J$_LIK$]5(*_~/213$l9_i9/ ,'Q@S9_k^H.7M_3I/>(6,h_i9; ZM&,'ZM$&/2(*$U5j,'.0/213$ 
Ig,'&/2(*IK$U5j,'.i^]([.2.>$&J$_L/k58ZM$IK([$]5E1L$]58$^U(*.·.$'&N$'_3IK$U5Ig9_`v]$^U$QR,'_i58/2&9/>$U^ (*_56(*QZM;*$A$'?gZM$&(*Q$_L/9;Q9_3([ZM7M;69/>(8,'_i5g\,h&i$?L9Q@ZM;*$'®deZM&97M$B$/¶9;6]ï>%WWðe1i9")$ 581i,'+`_V/>1k9/L&J$QR,'"9;,'.3/213$ Ig9/3ZM& (*Q9&ST")(6587:9;UIg,'& />$'?V^,'$U5_k,'/3v];*([_i^l/>1L$AIg9/ />1L$ 9_3([Q9;hý*7MQ@Z:58®g& 7M_i58®%9_i^9ZMZM$U9&5_k,'&QR9;U/,A/213$ Ig9587:9;,'v58$&")$& ~7MQ@ZM1L&N$'S¬ï%Wc%ð 1i95:58/>7:^U(*$U^/213$ v]$1k9"%(8,'&G,'.i9jQR,'_3²]$'ST+~1i,=56$l9&N$U9j4l+95&J$QR,'")$]^{|_3(*/2(69;*;[S/>1L$ ;*$U56(6,h_9ZMZM$]9&J$]^H/ ,Av];*(*_k^l/>1L$AQ,h_3²]$STIg,hQ@ZM;*$'/>$';*Sg!H")$&i/2(*Q$®1i,h+~$")$'&>®/213$ Q,'_L²]$S &J$Ig,'")$'&N$U^5,'Q@$ ")(6567:9;].>7M_3IK/2(6,'_9_i^H+95:9v];*$ /,A+9;*²¬9&K,h7M_i^,'výÅ$'IK/ 56®eIK;*(*Qv¬9j/>&J$$'® 9_i^l$'"%$'_`M_k^9_i^lZM(*IK²7MZ«58Q9;*;=Ig9_i^USZM$;[;*$/ 5([_~13$'&kZM;69S¬9&J$]9{|_O137MQR9_3®/213$A;8,U5°5,'. 9&J$]9j4H(65j^U$'"¶958/ 9/>(*_kB/,l9;[;U")(6567:9;=.7M_3I/>(6,h_i $'Ig97:58$,'.0/213$U56$l^U(*.·.$&J$_LIK$]58®{i^U$U56IK& (*v]$ Q@$U9567M&J$Q$_L/5:,'.0/213$ 137MQ9_`v]&9(*_`+~13$'_3$'"%$'&kZ:,U556([v];*$®%9_k^QR9(*_3;[S{31i9")$ &N$U56/2&(*IK/2$]^ /213(65&N$'"%([$+³/ ,AZM&(*QR9/>$U
E13$'&N$ (659&J$]9/k^U$]9;M,h.3ZM&J$IK(658(6,'_~(*_~/213$ (*_3/2$&JIg,'_L_3$I/>(6,h_i5,'.LIg,'&/2(*Ig9;]")(6587:9;9&N$U95°E13$ 58ZM$IK(*MIjZ:9/>/2$&_,'.0Ig,'_L_3$'IK/>(8,'_i5&N$'IK$(*")$U^v]S¬9&J$]9j4Hj.&,'Q-/213$ />+,A&N$'/>([_i9$A")(69j/213$ ;69/2$&9;M$_3([IK7M;69/2$A_37MI;*$7:5&N$U567M;*/ 5([_~IK$'&/9([_~&J$]7M;69&([/>(*$U5j,'.3/>1L$9&JIK13(*/2$I/>7M&J$O,'.3ZM& (*Q9&S ")(6587:9;UIg,'&/2$?Lg´«$G&J$")(*$+ª/>1L$9_k9/,'Q(*Ig9;58/>&N7MIK/27M&N$O,'.k9&N$U9j4ljM&w56/ gE1L$_3®+~$A&J$")(*$'+ 1i,h+³/213$AZ:9/2/2$&_,h.3Ig,'_L_3$'IK/>(8,'_i5.>&K,hQ-/>1L$A/2+,A&J$/2(*_i9$ (*Q@Z:,=56$U59_,'")$&9;*;M,'&K]9_L(*%9/2(6,'_ ,'_`/>1L$A")(6587:9;](*_3.,'&Q9/2(6,'_~&N$'ZM&N$U56$'_3/2$]^l(*_~I,'&/2(*Ig9;9&N$U9j4l SUT/VXWY[Z/V\^]`_BaNY\VYcbed f(*²]$AI,'&/2$?
(*_$'_3$&w9;*®%9&N$U9j4l:(65j9;89S]$&J$]^56/2& 7MI/>7M&J$]\](87M&N$T D%9A561k,'+59IK&,U5°5
58$IK/2(6,'_,h.3/213$B")(6587:9;UIg,'& />$'?3de$")$&9;=Q9JýJ,h&i;69S]$&5MIg9_`v]$A(6^U$_L/>([M$]^l$]958(*;*SgF#&N$U94lj(85 58$]&J$]]9/2$]^(*_3/ ,56([?
;89S]$&5v956$U^,'_^U(*.·.$'&N$'_3IK$U5(*_`/>1L$A&J$;69/2(*")$^U$'_i58(*/>S,'.L_3$'7M&K,'_k56® 9?L,'_i59_i^58S]_i9Z:58$]59_i^H(*_L/>$'&NIg,h_3_3$'IK/2(6,'_i5¶/ , /213$ &N$U56/k,'.0/213$Av]&w9(*_iE13$l567MZM$'&MIK(69; ;69S]$'&pj1i95M")$&S.>$+³_L$7M&,'_i5v]7M/3QR9_3S¬9?L,'_i58®L^U$_i^U&([/>$U59_k^56S]_k9Z:56$U56®g+~1L(*IK1 Ig,';[;*$IK/2(*")$;[S@9&J$AI9;*;*$]^Æû]ºÅÀKÌ%Í¿½*g f9S]$&5MDO9_i^HXIg,h_i58(656/ 5î,'.k9^U$_i58$H9&&9S,'.3I$;*;Uv,=^]([$]5
z
nkÏ~Ð#ÑAr't¨ÓÒÔ¬r'Ï~tnpoG¨r'£nkТ#¨:tÑM¨tuetqsrÐVr'£oGq
jl®
xq Fq
¢¯ ®
£y¤¦¥ , , ¤¦§ , C5LïìvðE13$
,'&]9_3(*%9/2(6,'_@,'.)/213$_3$'7M&9;g(*_3ZM7M/ 5i9_i^`,'7M/2ZM7M/50/ ,#9&J$]9i4lG9&J$ 581i,'+`_i E13$Z:9& "¶,hIK$;*;[7M;69&l9_k^¬QR9_i,'IK$';*;*7M;69&G(*_LZM7M/5jQR9²]$Ig,'_3_L$IK/2(6,'_k5î(*_;69S]$&GcZM7M/ 59&J$ 58$_3//,,h/>1L$&MIg,'& />(*I9;39&J$]958®v9IK²/,j/213$;69/2$&9;%$'_3(*IK7M;89/>$_37MIK;*$'7:5¶9_k^O,h/>1L$&:587Mv]Ig,'&/2(*Ig9; _37MI;*$(6
ÒÔ K%Ô¬r'Ï~tÐH¨AnkÏ~£'r'tn#r§i¨:toG¤ÑM¨:£(*Ij(*_3/2$&JIg,'_L_3$I/>(6,h_i5E13$]58$G;69S]$&59ZMZM$]9&/,A&J$IK$'(*")$9^U(*&J$IK/ (*_LZM7M/L.>&K,'Q />1L$A(*_L/>$'&NIg9;89/>$U^T;69S]$'&5:,'.0/213$A;89/>$'&9;$_L(*IK7M;69/2$95M+~$';*;ïì\](*/>Z:9/2&(*I²`$'/k9;6 ® %zX¯`$_i^U&S¬9_k· ^ ¶k,U5813(8,'²9®Lc%ðÉ®)9_k^/213$l,'7M/2ZM7M/5.&,'Q-;69S]$'&5DO9_k^HXO9&N$l56$'_3/%/, ,'/213$'&iIg,'&/2(*Ig9;9&J$]95g fj9S]$'&5MDO9_i^lXT9&J$A1i9&^/,l^U(658/2(*_i7M(6581v958$]^,'_56(*QZM;*$ 13(856/ ,';6,U([Ig9;56/ 9(*_i5:,'.0/213$ Ig,'&/2$?L\]7M_3IK/2(6,'_k9;*;*SK®e;69S]$'&5>XO9&N$l,'./2$_&K,'7MZM$U^l/,U$'/>1L$& 9_i^58(*Q@ZM;[SIg9;*;*$U^H/>1L$A¼»Ìú ¸ ¾ÍÁÈg½g½Áȹú>¼p,h.3/213$AI,'&/2$?L f9S]$&kc1k95v]$'$_@567Mv^U(*")(6^U$U^([_3/ ,56$'"%$'&9;UZ:9&/ 5j950/213$B([_3/2$&JIg,'_3_L$IK/2(6,'_k5+~([/>1,'/>1L$& v]&9(*_9&N$U959_k^l;69S]$&5M1i9")$ v]$Ig,'Q$AIK;89&(*M$]^g f9S]$&kcZM7M/L/,A/213$ 7MZMZM$&k1k9;*.¶,'.0/213(65;69S]$&2®+~1L(*IK1`(65 Ig9;*;[$]^lcv < ¹T+`13(*;*$G/213$AZ:9& "¶,hIK$;*;[7M;69&i_3$'7M&K,'_k5QR9²]$AIg,'_L_3$'IK/>(8,'_i5(*_~/213$ ;6,'+~$'&i1i9;*.>® Ig9;*;[$]^lcN < º^ fj9S]$'&kc &N$'IK$(["%$U59j;89&K$ (*_3ZM7M/L.&,'Q-cv < ¹9_k^56$'_i^5M(*/5,'7M/2ZM7M/3/ ,,'/>1L$& Ig,'& />(*I9;9&N$U95°g f9S]$&kc Ig9_`v]$^U$M_L$]^9_k9/,'Q(*Ig9;*;[SOv]S/213$ ZM&N$U56$'_3IK$l,'.0/213$ ;69&K$ 58/>&(89/>(6,h_3®eIg9;*;*$U^H/>1L$ ¼Ç>ºÍÁÈîÀż Ä »iÃÆ]ÆÈgºÍÁ®+~1L(*IK1V(65MIg,'QZ:,U58$]^lQ9(*_L;*S¬,'.0I,'&/2(*Ig9;9?L,'_i5 f9S]$&kCI,'_3/ 9(*_i5&N$';69/2(*")$;*ST.$'+³IK$';*;=v,=^]([$]5I,'Q@Z:9&J$]^l/ ,A/>1L$587M&&K,h7M_i^U(*_i;69S]$'&5°{|/ 58$_i^5:9jQ9Jý6,'&p,'7M/2ZM7M/0/ , />1L$T587MZM$&(6,h&kIg,';*;[(*IK7M;*7:58®09A56/2& 7MIK/27M&J$
([_`/213$BQ(6^Uv]&9(*_k$ fj9S]$'&px (65:^U$_i58$A+`(*/21~IK$';*;65:9_i^56$'_i^5:9;69&$T,'7M/2ZM7M/3v9IK²/ ,A/>1L$A;69/2$&w9;$_L(*IK7M;69/2$B_37MI;*$7:5 ïÉE3,'S,'QR9®L%x%ðF5:9B$'_3$&w9;U/>1k,'7:1_i,h/k9v5,';*7M/2$B& 7M;[$®.¦,h&+9&^,h7M/>ZM7M/ 5/ ,A_3$+ Ig,'& />(*I9;9&N$U95/2$_k^/ ,AIg,'Q$A.>&K,'Q />1L$T567MZM$'&MIK(69;=;69S]$&5:9_i^l/2$&Q(*_i9/2$A([_~;69S]$'&kc%E1L$ .$'$]^Uv9IK²ZM&,ýÅ$'IK/2(6,'_i5/>$'_i^l/,AI,'Q@$ .&,'Q-/213$^U$'$ZV;69S]$'&5j9_k^H/>$'&Q@([_i9/2$B(*_O;69S]$&5 9_i^xBïìYB,hIK²];69_i^¬9_i^HP:9_k^US9JýÅ®%%W¯\]$';*;*$QR9_9_k^H"¶9_`5°58$_L®% ½´½ ð> E13$ +~(*&([_i^U(69&9Q±(*_~\](67M&J$T Dv¬581i,'+5/21i9/%/>1L$T58(6_i9;65M/ ,9_i^H.&,'Q9&J$]9j4H9&J$ Ig,'QZM;*$?`9_k^1L(613;[S@58ZM$I(*MIg!H_L$AQ@7:58/i587MZMZ:,U58$A/21i9/L/>1L$A(*_L/>$'&NIg,h_3_3$'IK/2(6,'_i5+~(*/213(*_ 9&J$]9j4HB9&N$l56ZM$'IK(*MIK®g/ ,U,UYB,h7:13;*ST/2+~$_L/>S]>M")$BZM$&JIK$_L/i,'.L/>1L$A_L$7M&,'_i5(*_9;*;=;69S]$&5:9&J$ (*_L13(*v](*/ ,'&SO(*_L/>$'&_3$'7M&K,'_k56®L9_k^H/>1L$(*&i(*_L/>$'&NI,'_3_L$IK/2(6,'_k5Q@7:58/3v]$H],'")$&_L$]^v]ST/>1L$ ZM&J$]58$_3I$,'.Lv](6,'IK1L$Q@([Ig9;]Q9&²]$'&5/21i9/%(6^U$_L/>([.ST+~1L(*IK1~_L$7M&,'_i5561k,'7M;6^lIg,'_L_3$'IK/i9_k^ 1i,h+TFH_i9/ ,'Q(*Ig9;UIK;895°58(*MIg9/2(6,'_«,'.L/>1L$AIK$';*;U/2S]ZM$]5+~(*/213(*_`/>1L$A")(6587:9;]Ig,'&/2$?g®L9_i^ (6^U$_L/>([MIg9/>(8,'_,'.0/213$ ;6,'Ig9;=IK(*&JIK7M(*/2&SK®e+~(*;*;=ZM&K,'")(6^U$B7:5M+~([/>1`Q9_3SQR,'&J$BIK;*7M$U59v,'7M/L/>1L$ .7M_LIK/2(6,'_i9;58(6_L(*MIg9_3I$T,'.3/213(85j9&J$]9 SUT/VX¾/Y¿T/ÀÁY[Z¿_ÃY\^VYÄbed
The structure of the anatomical pathways leading from the two retinae to the cortex defines many of the fundamental properties of area V1. Among the most significant properties is that area V1 in each hemisphere has only a restricted field of view: area V1 in the left (right) hemisphere only receives visual input concerning the right (left) half of the visual field. We can see how this arises by considering how retinal signals make their way to area V1.
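The crossing rule just described is simple enough to state as a one-line function. The sketch below is illustrative only; the function name and the sign convention are my own assumptions, not the text's:

```python
def hemisphere_for(visual_field_x):
    """Which hemisphere's area V1 represents a visual field location?

    Sign convention (an assumption for this sketch): negative x is the
    left visual field, positive x is the right visual field.

    Rule from the text: fibers carrying the left half of the visual
    field are sorted at the optic chiasm toward the right side of the
    brain, and vice versa.
    """
    if visual_field_x == 0:
        return "both (vertical meridian)"
    return "right" if visual_field_x < 0 else "left"

print(hemisphere_for(-5.0))  # right  (left visual field)
print(hemisphere_for(+3.0))  # left   (right visual field)
```

Each hemisphere's V1 thus has a restricted field of view, exactly as stated above.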
CHAPTER 6. THE CORTICAL REPRESENTATION
The optic tract fibers from the two retinae come together at the optic chiasm, as shown in Figure 6.3. There the fibers are sorted into two new groups that each connect to only one side of the brain. Axons from ganglion cells whose receptive fields are located in the left visual field send their outputs towards the lateral geniculate nucleus on the right side of the brain, while axons of ganglion cells with receptive fields in the right visual field communicate their output to the left side of the brain.

The fibers from the two eyes make their connections in different layers of the lateral geniculate nucleus. The parvocellular and magnocellular layers, which are numbered 1-6, receive input from the retina on the same, opposite, opposite, same, opposite, same side of the head, respectively. The connections of these layers for the left lateral geniculate nucleus are illustrated in Figure 6.3. Why this particular pattern of ocular connections exists is a mystery. The eye-of-origin for the intercalated layers, which fall between the parvocellular and magnocellular layers, has not yet been demonstrated.

The signals from the two eyes remain segregated as they arrive at the input layers of
area V1. One can observe this segregation by measuring the electrophysiological responses of the units in layer 4. The recording electrode travels within layer 4C as one records which eye drives the units; in layer 4C, the change from one eye to the other takes place over a distance of less than 0.5 mm. Above and below layer 4C, the signals from the two eyes converge onto single neurons, although there is still a tendency for individual neurons to receive input predominantly from one eye or the other, and this pattern is aligned with the input pattern. The transition between eye of origin is less abrupt in the superficial layers, perhaps extending over 1.0 mm. The relative segregation of information across the columns with respect to the eye of origin is called ocular dominance columns (Hubel and Wiesel, 1978; Bishop, 1984).

In addition to evidence from electrophysiological measurements, one can also use anatomical methods to visualize the ocular dominance columns and demonstrate their existence. After injection into one eye, the tritiated amino acid proline will be transported from the retina to the cortex across the synaptic connections.
Figure 6.3: These retinal regions make connections with separate layers in the left lateral geniculate nucleus. Neurons in the magnocellular and parvocellular layers of the lateral geniculate send their outputs to cortical layers 4Cα and 4Cβ, respectively. The signals from each eye are segregated into different bands within area V1. Signals from these bands converge on individual neurons in the superficial layers of the cortex.
Figure 6.4: The ocular dominance columns in area V1 can be visualized using a radioactive marker, tritiated proline. When the marker is injected into one eye it is transported via the lateral geniculate nucleus to the cortex. The radioactive uptake is revealed in this dark-field photograph. The light bands in this tangential section show the places where the radioactive marker was located, and thus reveal the ocular dominance columns (Source: Hubel, Wiesel, and Stryker, 1978).
The autoradiograph shows a pattern of light bands that mark regions receiving input from the injected eye; the intervening dark areas receive input from the opposite eye. In the monkey these bands each span approximately 0.4 mm, though in the human they span approximately one millimeter (Hubel et al., 1978; Horton and Hoyt, 1991).

In the superficial layers of area V1 many neurons respond to stimuli from both eyes; in the normal monkey, eighty percent of the neurons in the superficial layers of area V1 are binocularly driven. The development of the interconnections necessary to drive the binocular neurons depends upon experience during maturation. Hubel and Wiesel (1965) showed that artificially closing one eye, or cutting an ocular muscle, strongly affects the development of neurons in area V1. Specifically, the binocular neurons fail to develop. Behaviorally, if one eye is kept closed for a critical period during development, the animal will remain blind in this eye for the rest of its life. This is quite different from the result of closing an adult eye for a few months, which has no significant effect (Hubel, Wiesel and LeVay, 1977; Shatz and Stryker, 1978; Mitchell, 1988; Movshon and van Sluyters, 1981). In the cat, normal development of the ocular dominance columns, and presumably the binocular interconnections as well, depends upon neural activity originating in the two retinae (Stryker and Harris, 1986).

Parallel pathways. Information from different classes of retinal ganglion cells remains segregated along the path to the cortex. Neurons in the magnocellular layers receive fibers from the parasol cells; neurons in the parvocellular layers receive fibers from the midget ganglion cells. It is uncertain precisely which retinal ganglion cells project to the intercalated layers. The segregation of signals continues to the input of area V1. Within layer 4C, the upper half (4Cα) receives the axons from the magnocellular layers, while the lower half (4Cβ) receives the parvocellular input. The neurons in the intercalated layers send their output to the superficial layers 2 and 3.
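The segregation described in this passage amounts to a small lookup table. The dictionary below merely restates the text's wiring (the key and field names are informal labels of my own, not standard identifiers):

```python
# Ganglion-cell class -> geniculate layers -> area V1 target, as
# described in the text. The retinal source of the intercalated
# layers is uncertain, so that pathway is keyed by None.
PATHWAYS = {
    "parasol": {"lgn_layers": "magnocellular", "v1_target": "layer 4C-alpha"},
    "midget":  {"lgn_layers": "parvocellular", "v1_target": "layer 4C-beta"},
    None:      {"lgn_layers": "intercalated",
                "v1_target": "superficial layers 2 and 3"},
}

def v1_target(ganglion_class):
    """Return the area V1 destination for a retinal ganglion-cell class."""
    return PATHWAYS[ganglion_class]["v1_target"]

print(v1_target("parasol"))  # layer 4C-alpha
print(v1_target("midget"))   # layer 4C-beta
```

A table like this makes the parallel, non-mixing character of the two main streams explicit: each class keeps its own geniculate relay and its own cortical entry point.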
Retinotopy. The spatial position of the ganglion cell within the retina is preserved by the spatial organization of the neurons within the lateral geniculate nucleus layers. The back of the nucleus contains neurons whose receptive fields are near the fovea. As we measure towards the front of the nucleus, the receptive field locations become increasingly peripheral. This spatial layout is called retinotopic organization, because the topological organization of the receptive fields in the lateral geniculate parallels the organization in the retina.
The signals in area V1 are also retinotopically arranged. From electrophysiology in monkeys, one can measure the location of receptive fields with an electrode that penetrates tangentially through layer 4. The receptive fields along this path are located systematically from the fovea to the periphery. This trend is interrupted locally by small, abrupt jumps at the ocular dominance borders. Within the first ocular dominance column the receptive field center positions change smoothly; as one passes into the next ocular dominance region there is an abrupt shift of the receptive field positions, equal to about half of the space spanned by the receptive fields in the first column. Hubel and Wiesel (1977) describe this organization and refer to it as "two steps forward and one step back."

In the last fifteen years, it has become possible to estimate spatially localized activity in the human brain. Beginning with positron emission tomography (PET) studies, and more recently by using functional magnetic resonance imaging (fMRI), we can measure activity in volumes of the cortex as small as 10 cubic millimeters, containing a few hundred thousand neurons.

Human area V1 is located within the calcarine sulcus in the occipital lobe. The calcarine sulcus in my brain, and its retinotopic organization, is shown in Figure 6.5. Neurons with receptive fields in the central visual field are located in the posterior
calcarine sulcus, while neurons with receptive fields in the periphery are located in the anterior portions of the sulcus. At a given distance along the sulcus, the receptive fields are located along a semicircle in the visual field. Neurons with receptive fields on the upper, middle, and lower sections of the semicircle are found on the lower, middle, and upper portions of the calcarine, respectively (Holmes, 1918, 1945; Horton and Hoyt, 1991; Inouye, 1909).

Engel et al. (1994) measured the human retinotopic organization from fovea to periphery by using the stimulus shown in Figure 6.6a. The stimulus consisted of a series of slowly expanding rings; each ring was a collection of flickering squares. Each ring began as a small spot located at the fixation mark, and then it grew until it traveled beyond the edge of the visual field. As a ring faded from view, it was replaced by a new ring starting at the center. Because of the retinotopic organization of the calcarine, each ring causes a traveling wave of neural activity beginning in the posterior calcarine and traveling in the anterior direction.
By measuring the temporal phase of the activity at each point along the calcarine, one can therefore estimate the visual field eccentricity represented at that point. The phase of the response varies systematically with position, from the posterior to the anterior end of the sulcus, and in this way the retinotopic organization of human area V1 can be measured.