The Fourth International Workshop on Particle Physics and the Early Universe
Jihn E. Kim Pyungwon Ko Kimyeong Lee
Proceedings of the Fourth International Workshop on Particle Physics and the Early Universe
Jeju Island, Korea
4 - 8 September 2000
Editors
Jihn E. Kim Seoul National University, Korea
Pyungwon Ko Korea Advanced Institute of Science and Technology, Korea
Kimyeong Lee Korea Institute for Advanced Study, Korea
World Scientific
New Jersey • London • Singapore • Hong Kong
Published by World Scientific Publishing Co. Pte. Ltd. P O Box 128, Farrer Road, Singapore 912805 USA office: Suite IB, 1060 Main Street, River Edge, NJ 07661 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
COSMO-2000 Proceedings of the Fourth International Workshop on Particle Physics and the Early Universe Copyright © 2001 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN 981-02-4762-1
Printed in Singapore by Uto-Print
INTERNATIONAL ADVISORY COMMITTEE
R. Arnowitt, R. Barbieri, A. Bottino, D. Caldwell, A. De Rujula, S. Dimopoulos, M. Dine, A. Dolgov, J. Ellis, K. Enqvist, J. Frieman, F. Halzen, S. Hawking, G. Kane, T. W. B. Kibble, J. E. Kim, E. Kolb, P. Langacker, D. Lyth, R. N. Mohapatra, H. P. Nilles, J. Primack, G. G. Ross, L. Roszkowski, V. Rubakov, G. Senjanovic, A. Smirnov, A. Starobinsky, P. Steinhardt, G. Veneziano, A. Vilenkin, S. Weinberg, M. Yoshimura
LOCAL ORGANIZING COMMITTEE
K. Choi (Korea Advanced Institute of Science and Technology) J. E. Kim (Seoul National University), Chair P. Ko (Korea Advanced Institute of Science and Technology) K. Lee (Korea Institute for Advanced Study), Co-chair C. H. Lee (Hanyang University) I. Yi (Korea Institute for Advanced Study) P. Yi (Korea Institute for Advanced Study)
0.3 at the 2-σ level [9]. A weak lensing study of a supercluster [15] on a scale of 6 h^-1 Mpc yields a very low value of Ωm (~0.05). The matter density for the Corona Borealis supercluster (at a scale of ~20 h^-1 Mpc) has been found to be Ωm ~ 0.4 [28]. Under the assumption of a flat universe, global limits can also be placed on Ωm from studies of type Ia supernovae (see next section); currently the supernova results favor a value Ωm ~ 0.3. The measurement of the total matter density of the Universe remains an important and difficult problem, and all of the methods for measuring Ωm are based on a number of underlying assumptions. These include, for example, how the mass distribution traces the observed light distribution, whether clusters are representative of the Universe, the properties and effects of dust grains, and the evolution of the objects under study. The accuracy of any matter density estimate must ultimately be evaluated in the context of the validity of the underlying assumptions upon which the method is based, but it is non-trivial to assign a quantitative uncertainty in many cases. However, systematic effects (choices and assumptions) may be the dominant source of uncertainty.

3 ΩΛ
The cosmological constant Λ has come and gone many times in the history of cosmology. Skepticism about a non-zero value of the cosmological constant dominated until recently, due to the discrepancy of >120 orders of magnitude between current observational limits and estimates of the vacuum energy density based on current standard particle theory [5]. A non-zero value for Λ also begs an explanation for the coincidence that we happen to be living now at a special epoch when the cosmological constant has begun to affect the dynamics of the Universe (other than during a time of inflation). Despite these critical problems, there is no known physical principle that demands Λ = 0 [5]. Standard particle theory and inflation provide a physical interpretation of Λ, if not a mechanism for producing such a small value as current observations imply: it is the energy density of the vacuum [32]. The best evidence for a non-zero value of the vacuum energy density has come from the study of high-redshift supernovae. There are many advantages of using type Ia supernovae for measurements of ΩΛ. The dispersion in the nearby type Ia supernova Hubble diagram is very small (0.12 mag, or 6% in distance) [24]. Supernovae are bright and therefore can be observed to large distances. Potential effects due to evolution, chemical composition dependence, and changing dust properties are all amenable to empirical tests and calibration. There are two large teams studying type Ia supernovae at high redshift [24,20]. Both groups have found that the high-redshift supernovae are fainter (and therefore farther), on average, than implied by either an open (Ωm = 0.2) or a flat, matter-dominated (Ωm = 1) universe. The observed differences are ~0.25 and 0.15 mag, respectively [24,20], or equivalently ~13% and 8% in distance. The results are consistent with a non-zero and positive value ΩΛ ~ 0.7 and a small matter density, Ωm ~ 0.3, under the assumption that Ωm + ΩΛ = 1. If a flat universe is not assumed, the best fit [20] yields Ωm = 0.73, ΩΛ = 1.32. Both of the supernova teams are actively searching for possible systematic errors that might produce this observed effect, but none has been identified. Although the observed difference in the luminosities of high- and low-redshift supernovae provides at the moment strong evidence for a non-zero vacuum energy, it is important to keep in mind that other astrophysical factors which might produce this effect must be ruled out convincingly. This is a tall order. For example, for the known properties of dust in the interstellar medium, the ratio of total-to-selective absorption, R_B = A_B / E(B−V) (the value by which the colors are multiplied to correct the blue magnitudes), is ~4. Hence, very accurate photometry and colors are required: a relative error of only 0.03 mag in color could contribute 0.12 mag to the observed difference in magnitude, a large fraction of the observed difference. Other subtle astrophysical effects must also be ruled out; for example, whether there are intrinsic differences in the supernovae between high and low redshift. Furthermore, to date the high-redshift supernova ΩΛ studies have been based on a relative comparison with the same set of low-redshift supernovae. The nearby supernova searches are time consuming, since locally galaxies have large angular diameters on the sky and must be studied individually (unlike at high redshift, where many supernovae can be discovered in a single CCD field). At present, the evidence for ΩΛ comes from a differential comparison of the nearby sample of supernovae at z < 0.1 with those at z ~ 0.3-1.3. Hence, the absolute calibrations, completeness levels, and any other systematic effects pertaining to both datasets are critical. For several reasons, the search techniques and calibrations of the nearby and the distant samples are different.
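The distance-modulus arithmetic used throughout this section (0.25 mag ↔ ~13% in distance; a 0.25 mag zero-point shift ↔ the difference between H0 = 60 and 67 km/sec/Mpc) can be sketched directly. This is a minimal illustration, not code from any of the cited analyses:

```python
def mag_offset_to_distance_ratio(delta_m):
    # distance modulus m - M = 5*log10(d/10pc), so a magnitude offset
    # delta_m corresponds to a distance ratio of 10**(delta_m/5)
    return 10 ** (delta_m / 5.0)

ratio = mag_offset_to_distance_ratio(0.25)   # ~1.12, i.e. ~12-13% in distance
# H0 scales inversely with the adopted distances, so a 0.25 mag
# zero-point shift moves H0 = 60 to about 67 km/sec/Mpc:
h0_shifted = 60.0 * ratio
```

The same relation gives ~7-8% in distance for the 0.15 mag difference quoted above.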
Moreover, the intense efforts to search for high-redshift objects have now led to the situation where the nearby sample is smaller than the distant samples. While the different search strategies may not necessarily introduce systematic differences, increasing the nearby sample will provide an important check. Such searches are now underway. A 0.25 mag difference between the nearby and distant samples is large, but the history of measurements of H0 may provide an interesting context for comparison. In the case of H0 determinations, a difference of 0.25 mag in zero point corresponds only to the difference between 60 and 67 km/sec/Mpc! Current differences in the published values of H0 result from a number of arcane factors: the adoption of different calibrator galaxies, the adoption of different techniques for measuring distances, the treatment of reddening and metallicity, and differences in adopted photometric zero point. In fact, despite the considerable progress on the extragalactic distance scale and the Hubble
constant, recent H0 values tend to range from about 60 to 80 km/sec/Mpc (see below). The strongest evidence for a non-zero vacuum energy density currently comes from type Ia supernovae, but several more indirect lines of evidence also favor such a model: the observed large-scale distribution of galaxies, clusters, and voids, as well as the difference between the ages of the oldest stars and the expansion age (exacerbated if Ωm = 1). Further tests and limits on Λ may come from gravitational lens number density statistics [13,16,10], plus more stringent limits on the numbers of close-separation lenses. The number of strong gravitational lenses detected depends on the volume surveyed; hence, the probability that a quasar will be lensed is a very sensitive function of ΩΛ. In a flat universe with ΩΛ = 0, almost an order of magnitude fewer lenses are predicted than for a universe with ΩΛ = 1. Gravitational lens number density limits [16,10] favor a universe with ΩΛ less than about 0.7 (95% confidence) for Ωm + ΩΛ = 1. This method is sensitive to several factors [16,8]: the uncertainties in modelling the lensing galaxies (generally as isothermal spheres with core radii), the observed luminosity functions, the core radii of the galaxies, and the resulting magnification bias (which arises because the lensed quasar images are amplified and hence easier to detect than if there were no lensing) all need to be treated carefully. If the current results from supernovae are correct, then the numbers of close-separation lenses should be significantly larger than predicted for Λ = 0 models.

Complications for the lens number density statistics arise from a number of factors which are hard to quantify in an error estimate, and which become increasingly important for smaller values of Λ: for example, galaxies evolve (and perhaps merge) with time, galaxies contain dust, the properties of the lensing galaxies are not well known (in particular, the dark matter velocity dispersion is unknown), and the number of lensing systems for which this type of analysis has been carried out is still very small. However, the sample of known lens systems is steadily growing, and new limits from this method will be forthcoming.
4 H0
Obtaining an accurate value for the Hubble constant has proved an extremely challenging endeavor, a result primarily of the underlying difficulty of establishing accurate distances over cosmologically significant scales. Fortunately, the past 15 years have seen a series of substantive improvements leading toward the measurement of a more accurate value of H0. Indeed, it is quite likely that the 1-σ uncertainty in H0 is now approaching 10%, a significant advance over the factor-of-two uncertainty that lingered for decades. Briefly, the significant progress can be attributed mainly to the replacement of photographic cameras (used in this context from the 1920's to the 1980's) by solid-state detectors, as well as to both the development of several completely new methods, and the refinement of existing ones, for measuring extragalactic distances and H0 [11]. There are several routes to the measurement of H0; these fall into the following completely independent and very broad categories: 1) the gravitational lens time delay method, 2) the Sunyaev-Zel'dovich method for clusters, and 3) the extragalactic distance scale. In the latter category, there are several independent methods for measuring distances on the largest scales (including supernovae), but most of these methods share common, empirical calibrations at their base. In the future, another independent determination of H0 from measurements of anisotropies in the cosmic microwave background may also be feasible, if the physical basis for the anisotropies can be well established and the degeneracies amongst several parameters can be broken. Each of the above methods carries its own susceptibility to systematic errors, but the methods as listed here have completely independent systematics. The history of this field offers the following important message: systematic errors have dominated, and continue to dominate, the measurement of H0. It is therefore vital to measure H0 using a variety of methods, and to test for the systematics that are affecting each of the different kinds of techniques. Not all of these methods have yet been tested to the same degree. Important progress is being made on all fronts; however, some methods are still limited by sample size and small-number statistics. For example, method 1), the gravitational time delay method, has only two well-studied lens systems to date: 0957+561 and PG 1115.

The great advantage of both methods 1) and 2), however, is that they measure H0 at very large distances, independent of the need for any local calibration.
4.1 Gravitational Lenses
Refsdal [22,23] noted that the arrival times for the light from two gravitationally lensed images of a background point source depend on the path lengths and the gravitational potential traversed in each case. Hence, a measurement of the time delay and the angular separation for different images of a variable quasar can be used to provide a measurement of H0. This method offers tremendous potential because it can be applied at great distances and it is based on very solid physical principles [3]. A weakness of the method is that astronomical lenses are galaxies whose underlying (luminous or dark) mass distributions are not independently known, and furthermore they may be sitting in more complicated group or cluster potentials. A degeneracy exists between the mass distribution of the lens and the value of H0. Unfortunately, to date, there are very few systems known which have both a favorable geometry (for providing constraints on the lens mass distribution) and a variable background source (so that a time delay can be measured). The two systems studied well to date yield values of H0 in the approximate range of 40-70, converging around 65 km/sec/Mpc [25,17,34,35], with an uncertainty of ~20-30%.
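Refsdal's argument can be made concrete with a deliberately simplified toy model: a singular-isothermal-sphere lens in a flat, matter-dominated universe, for which the delay between images at angular radii θA and θB is cΔt = (1 + z_l)(D_l D_s / D_ls)(θA² − θB²)/2. Since every distance scales as 1/H0, the inferred H0 is inversely proportional to the measured delay. The redshifts, angles, and delay below are hypothetical inputs, and the SIS-plus-Einstein-de-Sitter assumptions are illustrative simplifications, not the modelling actually used for 0957+561 or PG 1115:

```python
from math import sqrt

C_KM_S = 299792.458          # speed of light, km/s

def d_comoving(z, h0):
    # comoving distance in a flat, matter-dominated universe (Mpc)
    return (2.0 * C_KM_S / h0) * (1.0 - 1.0 / sqrt(1.0 + z))

def h0_from_sis_delay(dt_days, z_l, z_s, theta_a, theta_b):
    """Invert the SIS time delay for H0 (toy model; angles in arcsec)."""
    arcsec = 1.0 / 206265.0  # radians per arcsec
    mpc_km = 3.0857e19       # km per Mpc
    h_try = 100.0            # trial H0; the predicted delay scales as 1/H0
    d_l = d_comoving(z_l, h_try) / (1.0 + z_l)
    d_s = d_comoving(z_s, h_try) / (1.0 + z_s)
    d_ls = (d_comoving(z_s, h_try) - d_comoving(z_l, h_try)) / (1.0 + z_s)
    # SIS delay: c*dt = (1+z_l) * (D_l*D_s/D_ls) * (th_a^2 - th_b^2) / 2
    dt_try = ((1.0 + z_l) * d_l * d_s / d_ls
              * ((theta_a * arcsec) ** 2 - (theta_b * arcsec) ** 2) / 2.0
              * mpc_km / C_KM_S / 86400.0)          # predicted delay in days
    return h_try * dt_try / dt_days
```

Doubling the measured delay halves the inferred H0, which is the essential content of the method; the lens-mass degeneracy discussed above enters through the model-dependent prefactor.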
4.2 Sunyaev-Zel'dovich Effect and X-Ray Measurements
The inverse-Compton scattering of photons from the cosmic microwave background off hot electrons in the X-ray gas of rich clusters results in a measurable decrement in the microwave background spectrum known as the Sunyaev-Zel'dovich (SZ) effect [29]. Given a spatial distribution of the SZ effect and a high-resolution X-ray map, the density and temperature distributions of the hot gas can be obtained; the mean electron temperature can be obtained from an X-ray spectrum. The method makes use of the fact that the X-ray flux is distance-dependent, whereas the Sunyaev-Zel'dovich decrement in the temperature is not. The advantages of this method are that it can be applied at large distances and that, in principle, it has a straightforward physical basis. Some of the main uncertainties result from potential clumpiness of the gas (which would result in reducing H0), projection effects (if the clusters observed are prolate, H0 could be larger), the assumption of hydrostatic equilibrium, details of the models for the gas and electron densities, and potential contamination from point sources. Published values of H0 based on the Sunyaev-Zel'dovich (SZ) method have ranged from ~40-80 km/sec/Mpc [2]. The most recent two-dimensional interferometry SZ data for well-observed clusters yield H0 ~ 60 ± 10 km/sec/Mpc. The systematic uncertainties are still large, but the near-term prospects for this method are improving rapidly as additional clusters are being observed, and higher-resolution X-ray and SZ data are becoming available [21,18].
4.3 The Extragalactic Distance Scale
One of the motivating reasons for building the Hubble Space Telescope (HST) was to allow an accurate measurement of the Hubble constant. A ten-year HST Key Project to measure the Hubble constant has just been completed. The Key Project was designed to use Cepheid variables to determine primary distances to a representative sample of nearby galaxies, with the goal of calibrating a number of methods for measuring relative distances to galaxies and measuring H0 to an accuracy of ±10%, including systematic errors. The excellent image quality of HST extends the limit out to which Cepheids can be discovered by a factor of ten over ground-based searches, and the effective search volume by a factor of a thousand. Furthermore, HST offers a unique capability in that it can be scheduled optimally and independently of the phase of the Moon, the time of day, or weather, and there are no seeing variations. In each nearby target spiral galaxy in the Key Project sample, Cepheid searches were undertaken in regions active in star formation but low in apparent dust extinction. Since each individual secondary method is likely to be affected by its own (independent) systematic uncertainties, to reach a final overall uncertainty of ±10% the numbers of calibrating galaxies for a given method were chosen initially so that the final (statistical) uncertainty on the zero point for that method would be only ~5%. Cepheid distances were obtained for 18 galaxies. Since the dominant sources of error are systematic in nature, the approach taken in the Key Project was to measure H0 by intercomparing several different methods, so that the systematic errors could be assessed and quantified explicitly. Calibrating five secondary methods with Cepheid distances, Freedman et al. [12] find H0 = 72 ± 3 (random) ± 7 (systematic) km/sec/Mpc. Type Ia supernovae are the secondary method which currently extends out to the greatest distances, ~400 Mpc.
All of the methods (Type Ia and II supernovae, the Tully-Fisher relation, surface brightness fluctuations, and the fundamental plane) are in extremely good agreement: four of the methods yield a value of H0 between 70 and 72 km/sec/Mpc, and the fundamental plane gives H0 = 82 km/sec/Mpc. Figure 1 displays the results graphically in a composite Hubble diagram. The Hubble line plotted in this figure has a slope of 72 km/sec/Mpc. As described in detail in Freedman et al. [12], the largest remaining sources of error result from (a) uncertainties in the distance to the Large Magellanic Cloud (the galaxy which provides the fiducial comparison for more distant galaxies), (b) the photometric calibration of the HST Wide Field and Planetary Camera 2, (c) the metallicity (elements heavier than helium) calibration of the Cepheid period-luminosity relation, and (d) cosmic scatter in the density (and therefore velocity) field that could lead to observed variations in H0 on very large scales. These systematic uncertainties affect the determination of H0 for all of the relative distance indicators, and they cannot be reduced by simply
Figure 1. Composite Hubble diagram of velocity versus distance for Type Ia supernovae (solid squares), the Tully-Fisher relation (solid circles), surface-brightness fluctuations (solid diamonds), the fundamental plane (solid triangles), and Type II supernovae (open squares). In the bottom panel, the values of H0 are shown as a function of distance. The Cepheid distances have been corrected for metallicity. The Hubble line plotted in this figure has a slope of 72 km/sec/Mpc, and the adopted distance to the LMC is taken to be 50 kpc.
combining the results from different methods: they dominate the overall error budget in the determination of H0.
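As a quick check on the error budget quoted above: if the random and systematic terms of the Key Project result are treated as independent and combined in quadrature (a common convention, though not necessarily the one adopted in [12]), H0 = 72 ± 3 ± 7 km/sec/Mpc corresponds to a total uncertainty of about ±8, i.e. roughly the targeted 10%:

```python
from math import sqrt

def total_uncertainty(random_err, systematic_err):
    # combine two independent error terms in quadrature
    return sqrt(random_err ** 2 + systematic_err ** 2)

# H0 = 72 +/- 3 (random) +/- 7 (systematic) km/sec/Mpc
sigma_h0 = total_uncertainty(3.0, 7.0)   # ~7.6 km/sec/Mpc
```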
5 t0
In standard Big Bang cosmology, the universe expands uniformly: in the nearby universe, v = H0 d, where v is the recession velocity of a galaxy at a distance d, and H0 is the Hubble constant, the expansion rate at the current epoch. The inverse Hubble constant H0^-1 sets the age of the universe, t0, and the size of the observable universe, R_obs = c t0, given a knowledge of the total energy density of the universe. In Big Bang cosmology, the Friedmann equation relates the density, geometry, and evolution of the universe:

(ȧ/a)^2 = (8πG ρm)/3 − k/a^2 + Λ/3,

where the average mass density is specified by ρm. The curvature term is specified by Ωk = −k/(a0^2 H0^2), and for the case of a flat universe (k = 0), Ωm + ΩΛ = 1. Given an independent knowledge of the other cosmological parameters (H0, Ωm, ΩΛ, and Ωk), a dynamical age of the Universe can be determined by integrating the Friedmann equation. Consider three different cosmological models. In the case of a flat, matter-dominated (Ωm = 1) universe, the age is given simply by

t0 = (2/3) H0^-1.

For an open, low-density (Ωm < 1) universe,

t0 = H0^-1 [Ωm / (2(1 − Ωm)^(3/2))] [(2/Ωm)(1 − Ωm)^(1/2) − cosh^-1(2/Ωm − 1)].

For Ωm = 0.3, t0 ≈ 0.81 H0^-1. Finally, for the case of a flat universe with ΩΛ > 0,

t0 = (2/3) H0^-1 ΩΛ^(-1/2) ln[(1 + ΩΛ^(1/2)) / (1 − ΩΛ)^(1/2)],

where, for Ωm = 0.3 and ΩΛ = 0.7, t0 ≈ 0.96 H0^-1.
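The three closed-form ages can be cross-checked by integrating the Friedmann equation numerically. The sketch below (an illustration, not code from the text) evaluates the dimensionless product H0 t0 = ∫0^1 da / [a E(a)], with E(a) = (Ωm a^-3 + Ωk a^-2 + ΩΛ)^(1/2), and recovers the factors 2/3, ~0.81, and ~0.96:

```python
from math import sqrt

def age_factor(om, ol, steps=100000):
    """Dimensionless age H0*t0 for a universe with matter and Lambda.

    H0*t0 = integral_0^1 da / (a * E(a)),
    E(a) = sqrt(om/a**3 + ok/a**2 + ol), with ok = 1 - om - ol.
    """
    ok = 1.0 - om - ol
    total = 0.0
    for i in range(steps):         # midpoint rule; integrand ~ sqrt(a) near a = 0
        a = (i + 0.5) / steps
        total += 1.0 / (a * sqrt(om / a**3 + ok / a**2 + ol))
    return total / steps

ages = [age_factor(1.0, 0.0),     # flat, matter-dominated: 2/3
        age_factor(0.3, 0.0),     # open, Omega_m = 0.3:   ~0.81
        age_factor(0.3, 0.7)]     # flat with Lambda:      ~0.96
# with 1/H0 = 977.8/72 ~ 13.6 Gyr, these factors give roughly
# 9, 11, and 13 Gyr respectively
```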
In principle, with an accurate measurement of H0 and an age of the universe, T0, measured independently of the expansion, the product H0 T0 can provide a powerful constraint on cosmology. Age-dating of the oldest known objects in the Universe has been carried out in a number of ways. The most reliable ages are generally believed to come from the application of theoretical models of stellar evolution to observations of the oldest clusters in the Milky Way, the globular clusters. For about 30 years, the ages of globular clusters remained reasonably stable, at about 15 billion years [31,6]; recently, however, these ages have been revised downward. The new Hipparcos calibration has led to a downward revision of the globular cluster ages from ~15 billion years to 12-13 billion years [7]. Ages can also be obtained from radioactive dating, or nucleocosmochronology [26]. Generally, these ages are consistent with the higher-accuracy globular cluster ages, ranging from about 10 to 20 billion years; the largest sources of uncertainty in these estimates are again systematic in nature. A further lower limit can be estimated from the cooling rates of white dwarfs [19]. Up until very recently, the strong motivation from inflationary theory for a flat universe, coupled with a strong theoretical preference for ΩΛ = 0, favored a matter-dominated Ωm = 1 universe. Such a model was consistent with the ages of globular clusters at ~15 Gyr [6]. However, for a value of H0 = 72 km/sec/Mpc, the Ωm = 1 model yields a very young expansion age of only 9 ± 1 Gyr, significantly younger than the earlier globular cluster age estimates. For H0 = 72 km/sec/Mpc and Ωm = 0.3, the age of the Universe increases from 9 to t0 = 11 Gyr.

If the high-redshift supernova data are confirmed, the implication of these results is that the deceleration of the Universe due to gravity is progressively being overcome by a cosmological constant term, and that the Universe is in fact accelerating in its expansion. Allowing for ΩΛ = 0.7, under the assumption of a flat (Ωm + ΩΛ = 1) universe, increases the expansion age further still, to t0 = 13.5 Gyr. A non-zero value of the cosmological constant thus helps to avoid a discrepancy between the expansion age and other age estimates. An expansion age of 13.5 ± 1.5 Gyr is consistent, to within the uncertainties, with the recent globular cluster ages.
6 Summary
Accumulating evidence for a universe of low matter density, the conflict in ages resulting for Ωm = 1, measurements of anisotropies in the cosmic microwave background, and the evidence from type Ia supernovae for an accelerating universe are all consistent with a cosmological model in which Ωm ~ 0.3, ΩΛ ~ 0.7, h = 0.65, and t = 13 Gyr. This model preserves a flat universe and is still consistent with inflation. The questions of the nature of both the dark matter and the dark (vacuum) energy remain with us. These unknowns rank as two of the most fundamental unsolved problems in cosmology. The progress in measuring cosmological parameters has been impressive; still, however, the accurate measurement of cosmological parameters remains a challenging task. Improvements in accuracy should be forthcoming in the next few years as measurements of CMB anisotropies (from balloons, and from space with MAP and Planck), the Sloan Digital Sky Survey, the Hubble Space Telescope, the Chandra X-ray Observatory, radio interferometry, gravitational lensing studies, weakly interacting massive particle (WIMP) cryogenic detectors, neutrino experiments, and the Large Hadron Collider (LHC) yield new results.

References
1. N. A. Bahcall & Fan, Proc. Nat. Acad. Sci. 95, 5956 (1998).
2. M. Birkinshaw, Phys. Rep. 000, 000 (1999).
3. R. Blandford & T. Kundic, in The Extragalactic Distance Scale, eds. M. Donahue & M. Livio (Cambridge University Press, 1997), pp. 60-75.
4. R. G. Carlberg et al., Astrophys. J. 462, 32 (1996).
5. Carroll, Press & Turner, Ann. Rev. Astron. Astrophys. 30, 499 (1992).
6. B. Chaboyer, P. Demarque, P. J. Kernan & L. M. Krauss, Science 271, 957 (1996).
7. B. Chaboyer, P. Demarque, P. J. Kernan & L. M. Krauss, Astrophys. J. 494, 96 (1998).
8. Y.-C. N. Cheng & L. M. Krauss, Astrophys. J. 514, 25 (1999).
9. A. Dekel, D. Burstein & S. D. M. White, in Critical Dialogs in Cosmology, ed. N. Turok (World Scientific, 1997).
10. E. E. Falco, C. S. Kochanek & J. A. Munoz, Astrophys. J. 494, 47 (1998).
11. W. L. Freedman, in Critical Dialogs in Cosmology, ed. N. Turok (World Scientific, 1997), p. 92.
12. W. L. Freedman et al., Astrophys. J. 000, 000 (2001), astro-ph/0012376.
13. M. Fukugita & E. Turner, Mon. Not. Royal Astr. Soc. 253, 99 (1991).
14. N. Kaiser & G. Squires, Astrophys. J. 404, 441 (1993).
15. N. Kaiser, Astrophys. J. 498, 26 (1998).
16. C. S. Kochanek, Astrophys. J. 466, 638 (1996).
17. L. V. E. Koopmans & C. D. Fassnacht, Astrophys. J. 527, 513 (1999).
18. B. S. Mason, S. T. Myers & A. C. S. Readhead, Astrophys. J. Lett. 000, 000 (2001), preprint.
19. T. D. Oswalt, J. A. Smith, M. A. Wood & P. Hintzen, Nature 382, 692 (1996).
20. S. Perlmutter et al., Astrophys. J. 517, 565 (1999).
21. E. D. Reese et al., Astrophys. J. 000, 000 (2000), astro-ph/9912071.
22. S. Refsdal, Mon. Not. Royal Astr. Soc. 128, 295 (1964).
23. S. Refsdal, Mon. Not. Royal Astr. Soc. 132, 101 (1966).
24. A. Riess et al., Astron. J. 116, 1009 (1998).
25. P. Schechter et al., Astrophys. J. Lett. 475, 85 (1997).
26. D. N. Schramm, in Astrophysical Ages and Dating Methods, eds. E. Vangioni-Flam et al. (Editions Frontieres: Paris, 1989).
27. I. Smail, R. S. Ellis, M. J. Fitchett & A. C. Edge, Mon. Not. Royal Astr. Soc. 273, 277 (1995).
28. T. Small, C.-P. Ma & W. Sargent, Astrophys. J. 492, 44 (1998).
29. R. A. Sunyaev & Y. B. Zel'dovich, Astrophys. & Space Sci. 4, 301 (1969).
30. J. Tonry & M. Franx, Astrophys. J. 515, 512 (1999).
31. D. A. VandenBerg, M. Bolte & P. B. Stetson, Ann. Rev. Astron. Astrophys. 34, 461 (1996).
32. S. Weinberg, Rev. Mod. Phys. 61, 1 (1989).
33. S. D. M. White, J. F. Navarro, A. E. Evrard & C. S. Frenk, Nature 366, 429 (1993).
34. L. L. R. Williams & P. Saha, Astron. J. 119, 439 (2000).
35. H. J. Witt, S. Mao & C. R. Keeton, Astrophys. J. 544, 98 (2000).
RELIC NEUTRALINOS: AN OVERVIEW*

A. BOTTINO
Dipartimento di Fisica Teorica, Universita di Torino and INFN, Sezione di Torino, Via P. Giuria 1, I-10125 Torino, Italy
e-mail: [email protected]
We first establish the sensitivity range of current experiments of direct search for WIMPs, once the uncertainties in the relevant astrophysical quantities are taken into account. We then analyse the discovery capabilities of these experiments, when their results are analysed in terms of relic neutralinos. We perform our analysis employing various supersymmetric schemes, and point out the main particle physics uncertainties which have to be taken into account for a correct comparison of theory with experimental data. We evaluate the local and the cosmological densities of the relevant neutralinos and prove that a part of the susy configurations probed by current WIMP experiments entail relic neutralinos of cosmological interest. However, no a priori cosmological constraint is imposed on the analysed supersymmetric configurations.
1 Introduction
It was already stressed in Ref. [1] how remarkable the discovery potential of the experiments of direct search for Weakly Interacting Massive Particles (WIMPs) is, when their data are interpreted in terms of relic neutralinos. This situation has progressively improved further with the current experiments [2,3,4] (for a review of WIMP direct searches see, for instance, Ref. [5]). In the light of these facts, in Ref. [6] we have analysed to what extent the supersymmetric parameter space is probed by WIMP direct searches at current sensitivities, taking into account the following relevant points: i) current uncertainties in astrophysical properties, ii) uncertainties in hadronic quantities, iii) new bounds from LEP searches for Higgs and supersymmetric particles, and iv) improved evaluations of cosmological parameters. Here we report the main results of Ref. [6], to which we refer for more details. Previous investigations of the possible interpretation of the annual-modulation effect [2] in terms of relic neutralinos were reported in Refs. [7,8,9,10]. For susy analyses of the experimental data of Ref. [2] by other authors see, for instance, Refs. [11,12,13,14,15,16].

* Based on work done in collaboration with F. Donato, N. Fornengo and S. Scopel.

Prior to the analysis of the experimental data of Ref. [2] specifically in terms of susy models, we determine the sensitivity range for current experiments of WIMP direct search, under some more general assumptions. To this purpose, we have, however, to specify the phase-space distribution function of the WIMPs in our halo and some generic property of the WIMP-nucleus cross section. Let us discuss first the phase-space distribution function. We assume here that it factorizes as ρW · f(v), though this is not the most general case [17]. In turn, ρW is written in terms of the local value of the non-baryonic dark matter density ρl, i.e. ρW = ξ ρl, where ξ = ρW/ρl. The range used here for ρl is 0.2 GeV cm^-3 ≤ ρl ≤ 0.7 GeV cm^-3, where the upper side of the range takes into account the possibility that the matter density distribution is not spherical, but is described by an oblate spheroidal distribution [17,18]. The default choice for f(v) is the one derived from the isothermal-sphere model, i.e. the isotropic Maxwell-Boltzmann distribution in the galactic rest frame. However, it has recently been shown that deviations from this standard scheme, due either to a bulk rotation of the dark halo [19,20] or to an asymmetry in the WIMP velocity distribution [21,22,23], influence the determination of the WIMP-nucleus cross sections from the experimental data quite sizeably. In a typical plot, where the WIMP-nucleus cross section is given as a function of the WIMP mass, the effect introduced by the mentioned deviations from the Maxwell-Boltzmann distribution is generically to elongate the contours towards larger values of the WIMP mass. This is the case for the annual-modulation region of the DAMA Collaboration [2]. In Ref. [9] it is shown that, by implementing the dark halo with a bulk rotation according to the treatment of Ref. [20], the annual-modulation region moves towards larger values of the WIMP mass, i.e. up to mW ~ 200 GeV. A similar effect is obtained by introducing an asymmetry in the WIMP velocity distribution f(v).
In most analyses in terms of relic neutralinos all these effects, which are extremely important when experimental results of WIMP direct detection are compared with theoretical models for specific candidates, have been overlooked. As for the WIMP-nucleus cross section, we assume that the coherent part is dominant over the spin-dependent one and that the WIMP couples equally to protons and neutrons. This entails that the WIMP-nucleus cross section may be expressed in terms of a WIMP-nucleon scalar cross section σ_scalar^(nucleon).
On the basis of the previous considerations, and taking into account the present experimental data [2,3], we derive that, in the m_W range of particular interest,

40 GeV < m_W < 200 GeV,   (1)

the sensitivity of current WIMP direct experiments [2,3] may be stated as

4 × 10^-10 nbarn < σ_scalar^(nucleon) < 2 × 10^-8 nbarn.   (2)
Eqs. (1)-(2) define in the m_W - σ_scalar^(nucleon) plane a region R which represents the current sensitivity region of WIMP direct searches. Notice that, in the case of the neutralino, the mass range of Eq. (1) is quite appropriate. In fact, the lower extreme is indicative of the LEP lower bound on the neutralino mass m_χ [24]. For the high side of the range we remark that, though a generic range for m_χ might extend up to about 1 TeV, requirements of no excessive fine-tuning [25] would actually favour an upper bound of order 200 GeV.
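Eqs. (1)-(2) amount to a simple rectangular region in the (m_W, σ) plane. A small sketch; the function name is ours and the units are those of the text (GeV and nbarn):

```python
def in_sensitivity_region(m_w_gev, sigma_nbarn):
    """Check whether a (WIMP mass, WIMP-nucleon scalar cross section) point
    lies inside the region R of Eqs. (1)-(2):
    40 GeV < m_W < 200 GeV and 4e-10 nbarn < sigma < 2e-8 nbarn."""
    return 40.0 < m_w_gev < 200.0 and 4e-10 < sigma_nbarn < 2e-8

print(in_sensitivity_region(100.0, 1e-9))   # True: inside both ranges
print(in_sensitivity_region(300.0, 1e-9))   # False: above the mass range
```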
2 Supersymmetric Models
The calculations presented in this paper are based on the Minimal Supersymmetric extension of the Standard Model (MSSM), in a number of different schemes. The essential elements of the MSSM are described by a Yang-Mills Lagrangian, by the superpotential, which contains all the Yukawa interactions between the standard and supersymmetric fields, and by the soft-breaking Lagrangian, which models the breaking of supersymmetry. Implementation of this model within a supergravity scheme leads naturally to a set of unification assumptions at a Grand Unification (GUT) scale, M_GUT: i) unification of the gaugino masses: M_i(M_GUT) = m_1/2; ii) universality of the scalar masses with a common mass denoted by m_0: m_i(M_GUT) = m_0; iii) universality of the trilinear scalar couplings: A^l(M_GUT) = A^d(M_GUT) = A^u(M_GUT) = A_0 m_0. This scheme is denoted here as universal SUGRA (or simply SUGRA). The relevant parameters of the model at the electroweak (EW) scale are obtained from their corresponding values at the M_GUT scale by running these down according to the renormalization group equations (RGE). By requiring that the electroweak symmetry breaking is induced radiatively by the soft supersymmetry breaking, one finally reduces the model parameters to five: m_1/2, m_0, A_0, tan β (= v_2/v_1) and sign μ (the v_i's are the Higgs vacuum expectation values and μ is the coefficient of the Higgs mixing term). In the present paper, these parameters are varied in the following ranges: 50 GeV < m_1/2 < 1 TeV, m_0 < 1 TeV, -3 < A_0 < +3, 1 < tan β < 50. In Ref. [26] relic neutralinos in the window m_0 ~ 1-3 TeV, allowed by fine-tuning arguments [25], are specifically considered.
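The radiative electroweak-symmetry-breaking requirement mentioned above is, at tree level, the Higgs-potential minimization condition that fixes |μ|^2, leaving only sign μ free. A schematic sketch of that standard textbook condition, with illustrative, hypothetical EW-scale soft Higgs masses (not values from this text):

```python
M_Z = 91.19  # GeV, assumed Z-boson mass

def mu_squared(m_h1_sq, m_h2_sq, tan_beta):
    """Tree-level MSSM electroweak-symmetry-breaking condition:
    mu^2 = (m_H1^2 - m_H2^2 tan^2(beta)) / (tan^2(beta) - 1) - m_Z^2 / 2,
    with the soft Higgs (mass)^2 evaluated at the EW scale after RGE running.
    Only the sign of mu remains undetermined, as stated in the text."""
    t2 = tan_beta ** 2
    return (m_h1_sq - m_h2_sq * t2) / (t2 - 1.0) - 0.5 * M_Z ** 2

# Radiative breaking typically drives m_H2^2 negative at the EW scale,
# which makes mu^2 > 0 attainable.  Illustrative inputs in GeV^2:
mu2 = mu_squared(m_h1_sq=250.0**2, m_h2_sq=-(300.0**2), tan_beta=10.0)
print(mu2 > 0)  # True for these inputs
```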
Models with unification conditions at the GUT scale represent an appealing scenario; however, some of the assumptions listed above, particularly ii) and iii), are not very solid, since, as was already emphasized some time ago [27], universality might occur at a scale higher than M_GUT ~ 10^16 GeV, e.g., at the Planck scale. More recently, the possibility that the initial scale for the RGE running, M_I, might be smaller than M_GUT ~ 10^16 GeV has been raised [15,28], on the basis of a number of string models (see for instance the references quoted in [15]). In Ref. [15] it is stressed that M_I might be anywhere between the EW scale and the Planck scale, with significant consequences for the size of the neutralino-nucleon cross section. An empirical way of taking into account the uncertainty in M_I consists in allowing deviations from the unification conditions at M_GUT. The properties of these non-universal SUGRA schemes are discussed in [6] and in the references quoted therein. Here, as an alternative to the universal SUGRA scheme, we only consider a phenomenological susy model whose parameters are defined directly at the electroweak scale. This effective scheme of MSSM, denoted here as effMSSM, provides, at the EW scale, a model described in terms of a minimum number of parameters: only those necessary to shape the essentials of the theoretical structure of an MSSM. A set of assumptions at the electroweak scale is implemented: a) all trilinear parameters are set to zero except those of the third family, which are unified to a common value A; b) all squark soft-mass parameters are taken degenerate: m_q̃_i = m_q̃; c) all slepton soft-mass parameters are taken degenerate: m_l̃_i = m_l̃; d) the U(1) and SU(2) gaugino masses, M_1 and M_2, are assumed to be linked by the usual relation M_1 = (5/3) tan^2 θ_W M_2 (this is the only GUT-induced relation we are using, since gaugino mass unification appears to be better motivated than scalar mass universality).
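The GUT-induced relation in assumption d) can be checked numerically; with sin^2 θ_W ≈ 0.231 (an assumed EW-scale value, not quoted in this text) it gives M_1 ≈ 0.5 M_2:

```python
def m1_from_m2(m2, sin2_theta_w=0.231):
    """Gaugino-mass relation quoted in the text: M1 = (5/3) tan^2(theta_W) M2.
    sin^2(theta_W) = 0.231 is an assumed electroweak-scale input."""
    tan2 = sin2_theta_w / (1.0 - sin2_theta_w)
    return (5.0 / 3.0) * tan2 * m2

print(round(m1_from_m2(200.0), 1))  # ~100 GeV for M2 = 200 GeV, i.e. M1 ~ M2/2
```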
As a consequence, the supersymmetric parameter space consists of seven independent parameters. We choose them to be M_2, μ, tan β, m_A, m_q̃, m_l̃, A, and vary these parameters in the following ranges: 50 GeV < M_2 < 1 TeV, 50 GeV < |μ| < 1 TeV, 80 GeV < m_A < 1 TeV, 100 GeV < m_q̃, m_l̃ < 1 TeV, -3 < A < +3, 1 < tan β < 50 (m_A is the mass of the CP-odd neutral Higgs boson). Effective MSSM schemes at the EW scale have been used in analyses of relic neutralinos, usually with the further assumption of slepton-squark mass degeneracy: m_q̃ = m_l̃ [7,10,12,29,30,31]. In the results displayed here in the form of scatter plots, only configurations with m_q̃ ≥ m_l̃ are shown. However, it is interesting to note that some configurations with inverse hierarchy m_q̃ < m_l̃ produce some increase in σ_scalar^(nucleon) at low m_χ values. The neutralino is defined as the lowest-mass linear superposition of photino (γ̃), zino (Z̃) and the two higgsino states (H̃_1^0, H̃_2^0): χ = a_1 γ̃ + a_2 Z̃ + a_3 H̃_1^0 + a_4 H̃_2^0.
A. Bottino, F. Donato, N. Fornengo, S. Scopel (2000)
Figure 4. Crosses (dots) denote configurations with Ω_χ h^2 ≥ 0.05 (Ω_χ h^2 < 0.05). The hatched region on the right is excluded by theory. The hatched region on the left is excluded by present data from LEP [24] and CDF [37]. The solid line represents the 95% C.L. bound reachable at LEP2, in case of non-discovery of a neutral Higgs boson.
account. As discussed in the Introduction, effects due to a possible bulk rotation of the dark halo or to an asymmetry in the WIMP velocity distribution would move this boundary towards higher values of m_χ. Our results in Figs. 6-7 show that the susy scatter plots reach up to the annual-modulation region of Ref. [2], even with the current stringent bounds from accelerators (obviously,
Figure 5. Same as in Fig. 4 for configurations in effMSSM. The dashed line denotes to which extent the scatter plot expands if set 2 for the quantities m_q⟨q̄q⟩ is used.
more easily in effMSSM than in SUGRA).

4 Conclusions
We have established the extent of the current sensitivity of WIMP direct searches, in terms of the WIMP-nucleon cross section and of the WIMP mass, by taking into account possible effects due to a halo bulk rotation and/or to asymmetries in the WIMP velocity distribution. To simplify the matter,
Figure 6. Scatter plot of σ_scalar^(nucleon) versus m_χ in the case of universal SUGRA. Set 1 for the quantities m_q⟨q̄q⟩ is employed. Crosses (dots) denote configurations with Ω_χ h^2 ≥ 0.05 (Ω_χ h^2 < 0.05). The dashed line delimits the upper frontier of the scatter plot when the inputs of Ref. [13] are used. The solid contour denotes the 3σ annual-modulation region of Ref. [2] (with the specifications given in the text).
we have also assumed that in the WIMP-nucleus cross section the coherent part is dominant over the spin-dependent one and that the WIMP couples equally to protons and neutrons (this is actually the case for neutralinos whose total elastic cross section off nuclei is in the current range of experimental sensitivity). We have then shown that the current direct experiments for WIMPs,
Figure 7. Same as in Fig. 6 in case of effMSSM
when interpreted in terms of relic neutralinos, are indeed probing regions of the supersymmetric parameter space compatible with all present bounds from accelerators, and that part of the susy configurations explored by WIMP experiments entails relic neutralinos of cosmological interest. However, neutralinos which might contribute only partially to the required amount of dark matter in the universe have also been included in our analysis. In the course of our analysis, we have also stressed the role of uncertainties in the determination of Higgs-quark-quark and neutralino-quark-squark couplings in the link between the WIMP-nucleon cross section and the neutralino
relic abundance.

Acknowledgements

This work was partially supported by the Research Grants of the Italian Ministero dell'Università e della Ricerca Scientifica e Tecnologica (MURST) within the Astroparticle Physics Project.

References
1. A. Bottino, F. Donato, G. Mignola, S. Scopel, P. Belli and A. Incicchitti: Phys. Lett. B 402, 113 (1997)
2. R. Bernabei et al. (DAMA Collaboration): Phys. Lett. B 480, 23 (2000); preprint ROM2F/2000-26, INFN/AE-00/10, http://mercury.lngs.infn.it/lngs/preprint
3. R. Abusaidi et al. (CDMS Collaboration): Phys. Rev. Lett. 84, 5699 (2000)
4. A comparative discussion of the experimental features and implications of the DAMA [2] and CDMS [3] experiments may be found in R. Bernabei et al.: preprint ROM2/2000-32, to appear in the Proceedings of the PIC20 Conference, http://www.lngs.infn.it/lngs/htexts/dama/dama7.html
5. A. Morales: Proceedings of TAUP99, Nucl. Phys. B (Proc. Suppl.) 87, 477 (2000)
6. A. Bottino, F. Donato, N. Fornengo, S. Scopel: hep-ph/0010203, ftp://wftp.to.infn.it/pub/apg
7. A. Bottino, F. Donato, N. Fornengo, S. Scopel: Phys. Lett. B 423, 109 (1998); Phys. Rev. D 59, 095003 (1999)
8. A. Bottino, F. Donato, N. Fornengo, S. Scopel: Phys. Rev. D 59, 095004 (1999)
9. P. Belli, R. Bernabei, A. Bottino, F. Donato, N. Fornengo, D. Prosperi, S. Scopel: Phys. Rev. D 61, 023512 (2000)
10. A. Bottino, F. Donato, N. Fornengo, S. Scopel: Phys. Rev. D 62, 056006 (2000)
11. R. Arnowitt, P. Nath: Phys. Rev. D 60, 044002 (1999)
12. V.A. Bednyakov, H.V. Klapdor-Kleingrothaus: Phys. Rev. D 62, 043524 (2000)
13. J. Ellis, A. Ferstl, K.A. Olive: Phys. Lett. B 481, 304 (2000)
14. E. Accomando, R. Arnowitt, B. Dutta, Y. Santoso: Nucl. Phys. B 585, 124 (2000)
15. E. Gabrielli, S. Khalil, C. Muñoz, E. Torrente-Lujan: hep-ph/0006266 v2
16. J. Ellis, A. Ferstl, K.A. Olive: hep-ph/0007113
17. J. Binney, S. Tremaine: Galactic Dynamics, Princeton University Press, Princeton, 1987
18. E.I. Gates, G. Gyuk, M.S. Turner: Astrophys. J. Lett. 449, L123 (1995)
19. M. Kamionkowski, A. Kinkhabwala: Phys. Rev. D 57, 3256 (1998)
20. F. Donato, N. Fornengo, S. Scopel: Astropart. Phys. 9, 303 (1999)
21. J.D. Vergados: Phys. Rev. Lett. 83, 3597 (1998); Phys. Rev. D 62, 023519 (2000); P. Ullio, M. Kamionkowski: hep-ph/0006183
22. N.W. Evans, C.M. Carollo, P.T. de Zeeuw: astro-ph/0008156
23. A.M. Green: astro-ph/0008318
24. I.M. Fisk and K. Nagai: talks at the XXXth Int. Conf. on High Energy Physics, Osaka, July 2000, http://www.ichep2000.hep.sci.osaka-u.ac.jp
25. V. Berezinsky, A. Bottino, J. Ellis, N. Fornengo, G. Mignola, S. Scopel: Astropart. Phys. 5, 1 (1996)
26. J.L. Feng, K.T. Matchev, F. Wilczek: Phys. Lett. B 482, 388 (2000)
27. N. Polonsky, A. Pomarol: Phys. Rev. Lett. 73, 2292 (1994) and Phys. Rev. D 51, 6532 (1995); M. Olechowski, S. Pokorski: Phys. Lett. B 334, 201 (1995); D. Matalliotakis, H.P. Nilles: Nucl. Phys. B 435, 115 (1995); A. Pomarol, S. Dimopoulos: Nucl. Phys. B 453, 83 (1995); H. Murayama: talk given at the 4th International Conference on Physics Beyond the Standard Model, Lake Tahoe, USA, 13-18 December 1994, hep-ph/9503392; J.A. Casas, A. Lleyda, C. Muñoz: Phys. Lett. B 389, 305 (1996)
28. S.A. Abel, B.C. Allanach, F. Quevedo, L.E. Ibáñez, M. Klein: hep-ph/0005260
29. A. Bottino, V. de Alfaro, N. Fornengo, G. Mignola, S. Scopel: Astropart. Phys. 1, 61 (1992)
30. L. Bergström, P. Gondolo: Astropart. Phys. 6, 263 (1996)
31. V. Mandic, A. Pierce, P. Gondolo, H. Murayama: hep-ph/0008022 v2
32. P.J. Donan (ALEPH Collaboration), March 2000, http://alephwww.cern.ch/ALPUB/seminar/lepc_mar2000/lepc2000.pdf
33. A. Bottino, V. de Alfaro, N. Fornengo, G. Mignola, M. Pignone: Astropart. Phys. 2, 67 (1994)
34. A. Bottino, F. Donato, N. Fornengo, S. Scopel: Astropart. Phys. 13, 215 (2000)
35. A. Corsetti, P. Nath: hep-ph/0003186
36. See, for instance, W. Freedman: talk at COSMO2K, Korea, September 2000
37. J.A. Valls (CDF Collaboration): FERMILAB-Conf-99/263-E CDF; http://fnalpubs.fnal.gov/archive/1999/conf/Conf-99-263-E.html
38. A.B. Lahanas, D.V. Nanopoulos, V.C. Spanos: hep-ph/0009065
39. Talks given by D. Schlatter (ALEPH Collaboration), T. Camporesi (DELPHI Collaboration), J.J. Blaising (L3 Collaboration), C. Rembser (OPAL Collaboration) at the special seminar at CERN on September 5, 2000 (see links to the LEP experiments at http://cern.web.cern.ch/CERN/Experiments.html)
NEUTRALINO-PROTON CROSS SECTION AND DARK MATTER DETECTION

R. ARNOWITT, B. DUTTA AND Y. SANTOSO

Center for Theoretical Physics, Department of Physics, Texas A&M University, College Station, TX 77843-4242, USA
We consider the neutralino-proton cross section for detection of Milky Way dark matter for a number of supergravity models with gauge unification at the GUT scale: models with universal soft breaking (mSUGRA), models with nonuniversal soft breaking, and string-inspired D-brane models. The parameter space examined includes m_1/2 < 1 TeV and tan β < 50, and the recent Higgs bound of m_h > 114 GeV is imposed. (For grand unified models, this bound is to be imposed for all tan β.) All coannihilation effects are included, as well as the recent NLO corrections to b → sγ for large tan β, and coannihilation effects are shown to be sensitive to A_0 for large tan β. In all models, current detectors are sampling parts of the parameter space, i.e. tan β ≳ 25 for mSUGRA, tan β ≳ 7 for nonuniversal models, and tan β ≳ 20 for D-brane models. Future detectors should be able to cover almost the full parameter space for μ > 0. For μ < 0, cancellations can occur for m_1/2 ≳ 450 GeV, allowing the cross sections to become ~ 10^-10 pb for limited ranges of tan β. (The positions of these cancellations are seen to be sensitive to the value of σ_πN.) In this case, the gluino and squarks lie above 1 TeV, but still should be accessible to the LHC if m_1/2 < 1 TeV.
1 Introduction
The existence of dark matter, which makes up about 30% of all the matter and energy in the universe, is well documented astronomically. However, what it is made of is unknown, and there have been many theoretical suggestions: wimps, axions, machos, etc. The Milky Way consists of perhaps 90% dark matter, and so is a convenient "laboratory" for the study of dark matter, particularly by direct detection with terrestrial detectors. We consider here the case of supersymmetric (SUSY) wimp dark matter and its detection by scattering off nuclear targets. In SUSY models with R-parity invariance, the wimp is almost always the lightest neutralino, χ_1^0, and for heavy nuclei the spin-independent scattering dominates the cross section. Since the neutron and proton cross sections in the nuclei are nearly equal, it is possible to extract the χ_1^0-p cross section, σ_{χ_1^0-p}, from the data (subject, of course, to astronomical uncertainties). Current detectors (DAMA, CDMS, UKDMC) are sensitive to cross sections

σ_{χ_1^0-p} ~ 1 × 10^-6 pb   (1)
with perhaps an improvement on this by one or two orders of magnitude in the near future. More long range, future detectors (GENIUS, Cryoarray) plan a significant increase in sensitivity, i.e. down to

σ_{χ_1^0-p} ≈ (10^-9 - 10^-10) pb   (2)
We discuss here how such sensitivities might relate to what is expected from supersymmetry models. We consider three SUSY models based on grand unification of the gauge coupling constants at the GUT scale of M_G = 2 × 10^16 GeV: 1. Minimal supergravity GUT models (mSUGRA) [1]. Here there are universal soft-breaking masses at the scale M_G. 2. Nonuniversal soft-breaking models [2]. Here the first two generations of squark and slepton soft-breaking masses are kept universal (to suppress flavor-changing neutral currents) and the gaugino masses are universal at M_G, while nonuniversalities are allowed in the Higgs soft-breaking masses and in the third-generation squark and slepton masses at M_G. 3. D-brane string models (based on type IIB orientifolds) [3,4]. Here the SU(2)_L-doublet scalar masses are different from the singlet masses at M_G, and the gaugino masses are similarly not degenerate. The three types of models have varying amounts of complexity in the soft-breaking parameters, and while the first two models arise from natural phenomenological considerations in supergravity theory, there are also string models that can realise such soft-breaking patterns. Though physically very different, all the models turn out to lead to qualitatively similar results: current detectors are sensitive to a significant part of the SUSY parameter space, and future detectors should be able to cover most of the parameter space except for some special regions where accidental cancellations can occur which make σ_{χ_1^0-p} anomalously small. Thus dark matter experiments offer significant tests of supersymmetry over the same time scale (the next 5-10 years) that accelerator experiments will.
While each of the above models contains a number of unknown parameters, theories of this type can still make relevant predictions for two reasons: (i) they allow for radiative breaking of SU(2) × U(1) at the electroweak scale (giving a natural explanation of the Higgs mechanism), and (ii) along with calculating σ_{χ_1^0-p}, the theory can calculate the relic density of χ_1^0, i.e. Ω_{χ_1^0} = ρ_{χ_1^0}/ρ_c, where ρ_{χ_1^0} is the relic mass density of χ_1^0 and ρ_c = 3H_0^2/8πG_N (H_0 is the Hubble constant and G_N is the Newton constant). Both of these greatly restrict the parameter space. In general one has Ω_{χ_1^0} h^2 ~ (∫_0^{x_f} dx ⟨σ_ann v⟩)^{-1}, where σ_ann is the neutralino annihilation cross section in the early universe, v is the relative velocity, x_f = kT_f/m_{χ_1^0}, T_f is the freeze-out temperature, ⟨...⟩ means thermal average and h = H_0/100 km s^-1 Mpc^-1. The fact that these conditions can naturally be satisfied for reasonable parts of the SUSY parameter space represents a significant success of the SUGRA models. In the following we will assume H_0 = (70 ± 10) km s^-1 Mpc^-1 and matter (m) and baryonic (b) relic densities of Ω_m = 0.3 ± 0.1 and Ω_b = 0.05. Thus Ω_{χ_1^0} h^2 = 0.12 ± 0.05. The calculations given below allow for a 2σ spread, i.e. we take [5]

0.02 < Ω_{χ_1^0} h^2 < 0.25.   (3)
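The inverse scaling of Ω h^2 with the annihilation cross section can be illustrated with the standard order-of-magnitude freeze-out relation Ω h^2 ≈ 3 × 10^-27 cm^3 s^-1 / ⟨σ_ann v⟩ (a textbook approximation, not a formula quoted in this paper):

```python
def omega_h2(sigma_v_cm3_s):
    """Order-of-magnitude freeze-out estimate of the relic density:
    Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma_ann v>.  This captures the
    inverse proportionality to the thermally averaged annihilation
    cross section stated in the text."""
    return 3e-27 / sigma_v_cm3_s

# A weak-scale annihilation cross section ~3e-26 cm^3/s gives
# Omega h^2 ~ 0.1, inside the conservative range of Eq. (3).
val = omega_h2(3e-26)
print(0.02 < val < 0.25)  # True
```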
It is clear that accurate determinations of the dark matter relic density will greatly strengthen the theoretical predictions, and already, analyses using combined data from the CMB, large scale structure, and supernovae suggest that the correct value of the relic density lies in a relatively narrow band in the center of the region of Eq. (3) [6]. We will here, however, use the conservative range given in Eq. (3).

2 Theoretical Analysis
In order to get accurate predictions of the maximum and minimum cross sections for a given model, it is necessary to include a number of theoretical corrections. We list here the main ones: (i) In relating the theory at M_G to phenomena at the electroweak scale, the two-loop gauge and one-loop Yukawa renormalization group equations (RGE) are used, iterating to get a consistent SUSY spectrum. (ii) QCD RGE corrections are further included below the SUSY breaking scale for contributions involving light quarks. (iii) A careful analysis of the light Higgs mass m_h is necessary (including two-loop and pole-mass corrections), as the current LEP limits impact sensitively on the relic density analysis. (iv) L-R mixing terms are included in the sfermion (mass)^2 matrices since they produce important effects for large tan β in the third generation. (v) One-loop corrections are included to m_b and m_τ, which are again important for large tan β. (vi) The experimental bounds on the b → sγ decay put significant constraints on the SUSY parameter space, and the theoretical calculations here include the leading order (LO) and NLO corrections. We have not in the following imposed b-τ (or t-b-τ) Yukawa unification or proton decay constraints, as these depend sensitively on unknown post-GUT physics. For example, such constraints do not naturally occur in string models where SU(5) (or SO(10)) gauge symmetry is broken by Wilson lines
at M_G (even though grand unification of the gauge coupling constants at M_G for such string models is still required). All of the above corrections are now under theoretical control. In particular, the b → sγ SUSY NLO corrections for large tan β have recently been calculated [7,8]. We find that the NLO corrections give significant contributions for large tan β for μ > 0. (We use here Isajet sign conventions for the μ parameter.) There have been a number of calculations of σ_{χ_1^0-p} in the literature [9,10,11,12,13,14,15], and we find we are in general numerical agreement in those regions of parameter space where the authors have taken into account the above corrections. Accelerator bounds significantly limit the SUSY parameter space. As pointed out in [16], the LEP bounds on the Higgs mass have begun to make a significant impact on dark matter analyses. Since at this time it is unclear whether the recently observed LEP events [17] represent a Higgs discovery, we will use here the current LEP lower bound of 114 GeV [18]. There are still some remaining errors in the theoretical calculation of the Higgs mass, however, as well as uncertainty in the t-quark mass, and so we will conservatively assume here for the light Higgs (h) that m_h > 110 GeV for all tan β. (For the MSSM, the Higgs mass constraint is significant only for tan β ≲ 9 (see e.g. Igo-Kemenes [18]), as Ah production with m_A ≈ m_Z can be confused with Zh production. However, in GUT models radiative breaking eliminates such regions of parameter space and the LEP constraint operates for all tan β.) LEP data also produce a bound on the lightest chargino (χ_1^±) of m_{χ^±} > 102 GeV [19]. For b → sγ we assume an allowed range of 2σ from the CLEO data [20]:
1.8 × 10^-4 < B(B → X_s γ) < 4.5 × 10^-4   (4)

The Tevatron gives a bound of m_g̃ > 270 GeV (for m_q̃ ≈ m_g̃) [21]. Theory allows one to calculate the χ_1^0-quark cross section, and we follow the analysis of [22] to convert this to χ_1^0-p scattering. For this one needs the π-N sigma term,

σ_πN = (1/2)(m_u + m_d)⟨p|ūu + d̄d|p⟩,   (5)
the quantity σ_0 = σ_πN - (m_u + m_d)⟨p|s̄s|p⟩, and the quark mass ratio r = m_s/[(1/2)(m_u + m_d)]. We use here σ_0 = 30 MeV [10], and r = 24.4 ± 1.5 [23]. Recent analyses, based on new π-N scattering data, give σ_πN = 65 MeV [24,25]. Older π-N data gave σ_πN = 45 MeV [26]. We will use in most of the analysis below the larger number. If the smaller number is used, it would have the overall effect in most of the parameter space of reducing σ_{χ_1^0-p} by about a factor of 3. However, in the special situation for μ < 0 discussed below, the positions of the cancellations are shifted. In general, σ_{χ_1^0-p} increases with tan β, and decreases with m_1/2 for large m_1/2. The maximum value of σ_{χ_1^0-p} arises then for large tan β and small m_1/2. This can be seen in Fig. 1, where (σ_{χ_1^0-p})_max is plotted vs. m_{χ_1^0} for tan β = 20, 30, 40 and 50. Fig. 2 shows Ω_{χ_1^0} h^2 for tan β = 30 when the cross section takes on its maximum value. Current detectors obeying Eq. (1) are then sampling the parameter space for large tan β, small m_{χ_1^0} and small Ω_{χ_1^0} h^2, i.e.

tan β ≳ 25, m_{χ_1^0} ≈ 90 GeV, Ω_{χ_1^0} h^2 ≈ 0.1   (6)
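The sensitivity to the choice of σ_πN can be traced through the strange-quark matrix element defined via σ_0 above. A sketch using only the two relations quoted in the text; note that the full cross section also receives non-strange contributions, which is why the reduction quoted in the text is about a factor of 3 rather than the square of this matrix-element ratio:

```python
def strange_scalar_density(sigma_pi_n, sigma0=30.0):
    """(m_u + m_d) <p|s-bar s|p> in MeV, from the definition
    sigma_0 = sigma_piN - (m_u + m_d)<p|s-bar s|p> with sigma_0 = 30 MeV."""
    return sigma_pi_n - sigma0

def m_s_ss(sigma_pi_n, sigma0=30.0, r=24.4):
    """m_s <p|s-bar s|p> = (r/2)(sigma_piN - sigma_0) in MeV, using
    r = m_s / [(1/2)(m_u + m_d)] = 24.4 from the text."""
    return 0.5 * r * (sigma_pi_n - sigma0)

# The two sigma_piN determinations quoted in the text:
new, old = 65.0, 45.0
ratio = strange_scalar_density(new) / strange_scalar_density(old)
print(round(ratio, 2))  # ~2.33: the strange matrix element more than doubles
```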
Further, as can be seen from Fig. 3, m_h does indeed exceed the current LEP bound over this entire region. As discussed in [11], coannihilation effects in the early universe can significantly influence the relic density calculation. To discuss the minimum cross section, it is convenient then to consider first m_{χ_1^0} ≲ 150 GeV (m_1/2 ≲ 350 GeV), where no coannihilation occurs. The minimum cross section occurs for small tan β. From Fig. 4 one sees

σ_{χ_1^0-p} ≈ 1 × 10^-9 pb; m_{χ_1^0} ≈ 140 GeV; tan β = 6   (7)

which would be accessible to detectors that are currently being planned (e.g. GENIUS). For larger m_{χ_1^0}, i.e. m_1/2 ≳ 350 GeV, the phenomenon of coannihilation can occur in the relic density analysis, since the light stau, τ̃_1 (and also ẽ_R, μ̃_R), can become degenerate with the χ_1^0. The relic density constraint can then be
Figure 1. (σ_{χ_1^0-p})_max for mSUGRA obtained by varying A_0 and m_0 over the parameter space for tan β = 20, 30, 40, and 50 [14]. The relic density constraint, Eq. (3), has been imposed.
Figure 2. Ω_{χ_1^0} h^2 for mSUGRA when σ_{χ_1^0-p} takes on its maximum value, for tan β = 30 [14].
satisfied in a narrow corridor of m_0 of width Δm_0 ≈ 25 GeV, the value of m_0 increasing as m_1/2 increases; this was examined for low and intermediate tan β in [11]. Since m_0 and m_1/2 increase as one progresses up the corridor, σ_{χ_1^0-p} will generally decrease. We consider first the case of μ > 0 [27]. Coannihilation effects generally begin for m_1/2 ≈ 400 GeV (m_{χ_1^0} ≈ 160 GeV), and it is of interest to see what
Figure 3. m_h for mSUGRA as a function of m_{χ_1^0} for tan β = 30, when σ_{χ_1^0-p} takes on its maximum value [14].
Figure 4. (σ_{χ_1^0-p})_min for mSUGRA plotted as a function of m_{χ_1^0} for μ > 0, tan β = 6.
occurs for large tan β. For large tan β, there is only a coannihilation region left in the parameter space, and the allowed regions, exhibiting the allowed narrow corridors of parameter space, are shown in Fig. 5 for tan β = 40. In this domain the lightest stau (τ̃_1) is the lightest slepton, due to the large L-R mixing in the (mass)^2 matrix, and so dominates the coannihilation effects. We note that the allowed corridors are sensitive to A_0, and large A_0 can allow large m_0 as m_1/2 increases. The thickness of the allowed corridors also decreases as A_0 increases. There is also a lower bound on m_1/2 for the allowed regions due to the b → sγ constraint, this bound decreasing with increasing A_0. (We note that this lower bound is sensitive to the NLO corrections discussed in Sec. 2 above.) Since larger A_0 allows for larger m_0 in the coannihilation region, the
Figure 5. Allowed corridors for mSUGRA in the m_0-m_1/2 plane satisfying the relic density constraint of Eq. (3) for μ > 0, tan β = 40 and (from bottom to top) A_0 = m_1/2, 2m_1/2, 4m_1/2 [27].
scattering cross section is a decreasing function of A_0. This is shown in Fig. 6, where σ_{χ_1^0-p} is plotted as a function of m_1/2 for tan β = 40 and A_0 = 2m_1/2, 4m_1/2.
Figure 6. σ_{χ_1^0-p} as a function of m_1/2 for mSUGRA, μ > 0, tan β = 40 and A_0 = 2m_1/2 (upper curve), 4m_1/2 (lower curve).
We consider next μ < 0. As discussed in [12], for low and intermediate tan β, an accidental cancellation can occur between the heavy and light Higgs amplitudes in the coannihilation region, which can greatly reduce σ_{χ_1^0-p}. We investigate here what happens at larger tan β, and what is the domain over which this cancellation occurs. In Fig. 7 we have plotted σ_{χ_1^0-p} in the large m_1/2 region, for tan β = 6 (short dash), 10 (solid), 20 (dot-dash), and 25 (dashed). One sees that the cross section dips sharply for tan β = 10, reaching a minimum at m_1/2 = 725 GeV, and then rises. Similarly, for tan β = 20 the minimum occurs at m_1/2 = 830 GeV, while for tan β = 25 it occurs at m_1/2 = 950 GeV. As a consequence, σ_{χ_1^0-p} will fall below the sensitivity of planned future detectors for m_1/2 ≳ 450 GeV in a restricted region of tan β, i.e.

σ_{χ_1^0-p} < 1 × 10^-10 pb for 450 GeV ≲ m_1/2 < 1 TeV; 5 ≲ tan β ≲ 30; μ < 0.   (8)

At the minima, the cross sections can become quite small, e.g. 1 × 10^-13 pb, without major fine tuning of parameters, corresponding to almost total cancellation. Further, the widths of the minima at fixed tan β are fairly broad. While in this domain proposed detectors would not be able to observe Milky Way wimps, mSUGRA would imply that the squarks and gluinos then would lie above 1 TeV, but at masses that would still be accessible to the LHC. Also, mSUGRA implies that this phenomenon can occur only in a restricted range of tan β, and for μ < 0.

... for μ > 0 with δ_3, δ_4, δ_1 < 0, δ_2 > 0. Lower curve is for tan β = 7 and the upper curve is for tan β = 12.
For μ < 0, cancellations can occur for m_1/2 ≳ 450 GeV. As in mSUGRA, these can produce sharp minima in the cross sections in the region tan β = 10-25.

5 D-Brane Models
Recent advances in string theory have stimulated again the building of string-inspired models. We consider here models based on Type IIB orientifolds where the full D = 10 space is compactified on a six-torus T^6 [3]. These models can contain 9-branes and 5-branes which can be used to embed the Standard Model. We consider here a model in which SU(3)_C × U(1)_Y is associated with one set of 5-branes, 5_1, and SU(2)_L is associated with a second intersecting set, 5_2 [4]. Strings beginning and ending on 5_1 will have massless modes carrying the SU(3)_C × U(1)_Y quantum numbers (i.e. the R quarks and R leptons), while strings starting on 5_2 and ending on 5_1 will have massless modes carrying the joint quantum numbers of the two branes (i.e. the quark, lepton and Higgs doublets). This then leads to the following soft-breaking pattern at M_G:

m̃_1 = m̃_3 = -A_0 = √3 cos θ_b Θ_1 e^{-iα_1} m_{3/2},
m̃_2 = √3 cos θ_b (1 - Θ_1^2)^{1/2} m_{3/2},   (14)

where the m̃_i are the gaugino masses, and

m^2 = (1 - (3/2) sin^2 θ_b) m_{3/2}^2 for q_L, l_L, H_1, H_2,
m^2 = (1 - 3 sin^2 θ_b) m_{3/2}^2 for u_R, d_R, e_R.   (15)

Thus the SU(2) doublets are all degenerate at M_G but are different from the singlets. We note that Eq. (15) implies sin^2 θ_b < 1/3. Thus the maximum cross sections will arise for large θ_b and large tan β. This is illustrated in Fig. 10, where σ_{χ_1^0-p} is plotted as a function of m_{χ_1^0} for μ > 0, tan β = 20, and θ_b = 0.2. Thus we see that current detectors obeying the bound of Eq. (1) are sampling the parameter space for

tan β ≳ 20   (16)
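A numerical sketch of the pattern of Eqs. (14)-(15), with the phases set to zero and illustrative values of m_{3/2}, θ_b, Θ_1 that are ours, not taken from the text:

```python
import math

def dbrane_soft_masses(m32, theta_b, theta1):
    """Soft-breaking pattern of Eqs. (14)-(15) with phases set to zero:
    gaugino masses m1 = m3 and m2, plus the scalar (mass)^2 for the
    SU(2) doublets (q_L, l_L, H1, H2) and singlets (u_R, d_R, e_R)."""
    m1 = math.sqrt(3.0) * math.cos(theta_b) * theta1 * m32          # = m3 = -A0
    m2 = math.sqrt(3.0) * math.cos(theta_b) * math.sqrt(1.0 - theta1**2) * m32
    m2_doublet = (1.0 - 1.5 * math.sin(theta_b) ** 2) * m32**2
    m2_singlet = (1.0 - 3.0 * math.sin(theta_b) ** 2) * m32**2
    return m1, m2, m2_doublet, m2_singlet

m1, m2, md2, ms2 = dbrane_soft_masses(m32=500.0, theta_b=0.2, theta1=0.8)
# The doublet and singlet (mass)^2 are split, and positivity of the
# singlet (mass)^2 requires sin^2(theta_b) < 1/3, as noted in the text.
print(md2 > ms2, ms2 > 0)  # True True
```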
We note that when tan β is close to its minimum value, m_{χ_1^0} is also close to its current LEP bound of m_{χ_1^0} > 37 GeV [29,30]. The minimum value of σ_{χ_1^0-p}
Figure 10. σ_{χ_1^0-p} for the D-brane model for μ > 0, θ_b = 0.2 and tan β = 20. The gap in the curve is due to excessive early universe annihilation through s-channel Z and h poles.
will occur at low θ_b, low tan β, and large m_{3/2} (i.e. large m_{χ_1^0}). In the large m_{χ_1^0} region, coannihilation can occur between the sleptons and the neutralino in a fashion similar to the SUGRA models, with the effective slepton m_0 parameter and effective neutralino m_1/2 parameter given by

m_0 = (1 - 3 sin^2 θ_b)^{1/2} m_{3/2},  m_1/2 = √3 cos θ_b Θ_1 m_{3/2}.   (17)
Figure 11. Minimum σ_{χ_1^0-p} for the D-brane model for μ > 0 and tan β = 6.
Fig. 11 exhibits the minimum cross section for μ > 0 as a function of the neutralino mass. One sees that

σ_{χ_1^0-p} ≈ 1 × 10^-9 pb for μ > 0   (18)
which is accessible to planned detectors. We note also that coannihilation is possible between the light chargino and the neutralino. However, this occurs in only a very small region of parameter space. As in mSUGRA, a cancellation of matrix elements can occur for μ < 0, allowing the cross sections to fall below the sensitivities of planned future detectors. This is exhibited in Fig. 12, where σ_{χ_1^0-p} is plotted for tan β = 6 (solid curve), 12 (dot-dash curve), and 20 (dashed curve). (The tan β = 6 curve terminates at low m_{χ_1^0} due to the m_h constraint, while the higher tan β curves terminate at low m_{χ_1^0} due to the b → sγ constraint. The upper bound on m_{χ_1^0} corresponds to the edge of the parameter space considered.)

In the above analysis we have imposed the recent Higgs bound of m_h > 114 GeV [17] (which for GUT models holds for all tan β), and have included the recent theoretical determination of the large tan β corrections to the NLO b → sγ decay [7,8], both of which produce significant effects in limiting the SUSY parameter space. Despite the physical differences between the different models, the resulting general picture is somewhat similar. Thus current detectors obeying Eq. (1) are sensitive to significant parts of the parameter space. For mSUGRA they are sampling the regions where tan β ≳ 25. The nonuniversal models can have cross sections a factor of 10 or larger (with an appropriate choice of nonuniversalities) and so can sample the parameter space with tan β ≳ 7. The D-brane models require tan β ≳ 20. Coannihilation effects play a crucial role for large m_{χ_1^0} in all the models, and for large tan β they are sensitive to the value of A_0. Large A_0 leads to coannihilation corridors where m_0 can get quite large, thus lowering the value of the σ_{χ_1^0-p} cross section. For μ > 0, the cross sections will generally still be accessible to planned future detectors obeying Eq. (2), i.e.

σ_{χ_1^0-p} ≳ 1 × 10^-10 pb for m_1/2 < 1 TeV, μ > 0   (19)
However, in all models, a special cancellation of the Higgs amplitudes can occur for μ < 0, allowing the cross section to fall below the above bound when m_{1/2} ≳ 450 GeV. For mSUGRA, these cancellations produce minima where the cross section essentially vanishes for a range of m_{1/2} when 8 ≲ tan β ≲ 30, for m_{1/2} < 1 TeV (see Fig. 7), with similar results holding for the nonuniversal models. The cancellations for the D-brane models occur for 10 ≲ tan β ≲ 15. We note that at fixed tan β the cancellations can occur over a wide range of m_{1/2}; e.g., for mSUGRA with tan β = 10, σ_{χ̃⁰₁-p} < 10⁻¹⁰ pb for 400 GeV < m_{1/2} < 1 TeV^31. In such regions of parameter space, dark matter detectors would not be able to observe Milky Way dark matter. However, these regions of parameter space would imply that gluinos and squarks lie above 1 TeV, but they should still be accessible to the LHC if the parameter space is bounded by m_{1/2} < 1 TeV. Thus other experimental consequences of the models would be observable.

References
1. A.H. Chamseddine, R. Arnowitt and P. Nath, Phys. Rev. Lett. 49, 970 (1982); R. Barbieri, S. Ferrara and C.A. Savoy, Phys. Lett. B 119, 343 (1982); L. Hall, J. Lykken and S. Weinberg, Phys. Rev. D 27, 2359 (1983); P. Nath, R. Arnowitt and A.H. Chamseddine, Nucl. Phys. B 227, 121 (1983).
2. For previous analyses of nonuniversal models see: V. Berezinsky, A. Bottino, J. Ellis, N. Fornengo, G. Mignola and S. Scopel, Astropart. Phys. 5, 1 (1996); Astropart. Phys. 6, 333 (1996); P. Nath and R. Arnowitt, Phys. Rev. D 56, 2820 (1997); R. Arnowitt and P. Nath, Phys. Lett. B 437, 344 (1998); A. Bottino, F. Donato, N. Fornengo and S. Scopel, Phys. Rev. D 59, 095004 (1999); R. Arnowitt and P. Nath, Phys. Rev. D 60, 044002 (1999).
3. L. Ibanez, C. Munoz and S. Rigolin, Nucl. Phys. B 536, 29 (1998).
4. M. Brhlik, L. Everett, G. Kane and J. Lykken, Phys. Rev. D 62, 035005 (2000).
5. While the lower bound of Eq. (13) is somewhat lower than other estimates, it allows us to consider the possibility that not all the dark matter consists of neutralinos, i.e. the dark matter might be a mix of neutralinos, machos, axions, etc. Further, the minimum values of σ_{χ̃⁰₁-p} are not particularly sensitive to the lower bound on Ω_{χ̃⁰₁}h².
6. M. Fukugita, hep-ph/0012214.
7. G. Degrassi, P. Gambino and G. Giudice, hep-ph/0009337.
8. M. Carena, D. Garcia, U. Nierste and C. Wagner, hep-ph/0010003.
9. A. Bottino et al. in ref. 2.
10. A. Bottino, F. Donato, N. Fornengo and S. Scopel, Astropart. Phys. 13, 215 (2000).
11. J. Ellis, T. Falk, K.A. Olive and M. Srednicki, Astropart. Phys. 13, 181 (2000).
12. J. Ellis, A. Ferstl and K.A. Olive, Phys. Lett. B 481, 304 (2000).
13. J. Ellis, T. Falk, G. Ganis and K.A. Olive, Phys. Rev. D 62, 075010 (2000).
14. E. Accomando, R. Arnowitt, B. Dutta and Y. Santoso, Nucl. Phys. B 585, 124 (2000).
15. R. Arnowitt, B. Dutta and Y. Santoso, hep-ph/0005154.
16. J. Ellis, G. Ganis, D. Nanopoulos and K. Olive, hep-ph/0009355.
17. L3 Collaboration (M. Acciarri et al.), CERN-EP-2000-140, hep-ex/0011043; ALEPH Collaboration (R. Barate et al.), CERN-EP-2000-138, hep-ex/0011045.
18. P. Igo-Kemenes, talk presented at ICHEP 2000, Osaka, Japan, July 27-August 2, 2000.
19. I. Trigger, OPAL Collaboration, talk presented at DPF 2000, Columbus, OH; T. Alderweireld, DELPHI Collaboration, talk presented at DPF 2000, Columbus, OH.
20. M. Alam et al., Phys. Rev. Lett. 74, 2885 (1995).
21. D0 Collaboration, Phys. Rev. Lett. 83, 4937 (1999).
22. J. Ellis and R. Flores, Phys. Lett. B 263, 259 (1991); Phys. Lett. B 300, 175 (1993).
23. H. Leutwyler, Phys. Lett. B 374, 163 (1996).
24. M. Olsson, hep-ph/0001203.
25. M. Pavan, R. Arndt, I. Strakovsky and R. Workman, nucl-th/9912034, Proc. of the 8th International Symposium on Meson-Nucleon Physics and the Structure of the Nucleon, Zuoz, Switzerland, Aug. 1999.
26. J. Gasser and M. Sainio, hep-ph/0002283.
27. R. Arnowitt, B. Dutta and Y. Santoso, hep-ph/0010244.
28. E. Accomando, R. Arnowitt and B. Dutta, Phys. Rev. D 61, 075010 (2000).
29. ALEPH Collaboration (R. Barate et al.), hep-ex/0011047.
30. The above LEP bound is model dependent and holds for the MSSM. We have checked, however, that it still applies for the D-brane model.
31. We have considered here only the spin-independent cross section. As discussed in ref. 32, when the above cancellation is almost complete, the true lower bound on σ_{χ̃⁰₁-p} would be set by the spin-dependent part of the cross section. Precisely when this would occur depends on the nuclei used in the target detector.
32. V. Bednyakov and H. Klapdor-Kleingrothaus, hep-ph/0011233.
WHAT IS THE DARK MATTER OF THE UNIVERSE?

GORDON L. KANE
Randall Laboratory of Physics, University of Michigan, Ann Arbor, MI 48109-1120
E-mail: gkane@umich.edu
Suppose the lightest superpartner (LSP) is observed at colliders, and WIMPs are detected in explicit experiments. We point out that one cannot immediately conclude that the cold dark matter (CDM) of the universe has been observed, and we determine what measurements are necessary before such a conclusion is meaningful. We discuss the analogous situation for neutrinos and axions. In both cases there may be no way to determine the actual relic density. It is important to examine this issue for any CDM candidate.
1 IS THE LSP THE COLD DARK MATTER?
Let us assume that one day superpartners are found at colliders, and the LSP escapes the detectors. In addition, WIMP signals are seen in the "direct" underground detectors (DAMA, CDMS, and others), and perhaps in other large underground detectors and space-based detectors. Has the cold dark matter (CDM) of the universe been observed? Maybe, but those signals do not demonstrate that. The only way to know whether the CDM has been detected is to calculate its contribution Ω_LSP to the relic density Ω, and show that Ω_LSP ≈ 0.3. In fact, to some extent a large scattering cross section, which makes direct detection easier, is correlated with a large annihilation cross section, which reduces the relic density, so naively direct detection is consistent with a small relic density. The calculation of the relic density depends on knowing tan β and various of the supersymmetry soft-breaking Lagrangian parameters, including some of the phases. We expect the LSP to be the lightest neutralino (though the gravitino is a possibility, as are sneutrinos if the LSP relic density is small; we assume conservation of R-parity or an equivalent quantum number). The lightest eigenvalue of the neutralino mass matrix will be a linear combination of the four neutralino symmetry eigenstates. The coefficients in the "wave function" that specifies the linear combination each depend on the entries in the neutralino mass matrix, M₁, M₂, μ, and tan β

[...] where σ(i + j → ã + ⋯) is the scattering cross section for particles i, j into final states involving axinos, v_rel is their relative velocity,
n_i is the i-th particle number density in the thermal bath, Γ(i → ã + ⋯) is the decay width of the i-th particle, and ⟨⋯⟩ stands for thermal averaging. (Averaging over initial spins and summing over final spins is understood.) Note that on the r.h.s. we have neglected inverse processes since they are suppressed by n_ã. The main axino production channels are the scatterings of (s)particles described by a dimension-five axino-gaugino-gauge boson term in the Lagrangian (1). Because of the relative strength of α_s, the most important contributions will come from 2-body strongly interacting processes into final states involving axinos, i + j → ã + ⋯. (Scattering processes involving electroweak interactions are suppressed by both the strength of the coupling and a smaller number of production channels².) The cross section can be written as

σ_n(s) = (α_s³ / (f_a/N)²) f_n(s),   (3)
where √s is the center-of-mass energy and n = A, …, K refers to the different channels which are listed in Table I of Ref. 2. The diagrams listed in the Table are analogous to those involving gravitino production and we use the same classification. This analogy should not be surprising since both particles are neutral Majorana superpartners. In addition to scattering processes, axinos can also be produced through decays of heavier superpartners in the thermal plasma. At temperatures T ≳ m_g̃ these are dominated by the decays of gluinos into LSP axinos and gluons. The relevant decay width is given by

Γ(g̃^b → ã + g^b) = [α_s² m_g̃³ / (128 π³ (f_a/N)²)] (1 − m_ã²/m_g̃²)³,   (4)
and one should sum over the colour index b = 1, …, 8. At lower temperatures m_χ < T_R ≲ m_g̃, neutralino decays to axinos also contribute, while at higher temperatures they are sub-dominant. They only become important when the axino yield

Y_ã^TP = n_ã / s,   (5)

where s = (2π²/45) g_*S T³ is the entropy density (normally g_*S = g_* in the early Universe), becomes too small to be cosmologically interesting. The results are presented in Fig. 1 for representative values of f_a = 10¹¹ GeV and m_q̃ = m_g̃ = 1 TeV.

Figure 1. Y_ã^TP as a function of T_R for representative values of f_a = 10¹¹ GeV and m_q̃ = m_g̃ = 1 TeV.

The respective contributions due to scattering as well as gluino and neutralino decays are marked by dashed, dash-dotted and dotted lines. It is clear that at high enough T_R, much above m_q̃ and m_g̃, scattering processes involving such particles dominate the axino production. For T_R ≫ m_q̃, m_g̃, Y_ã^scat grows linearly with T_R. In contrast, the decay contribution above the gluino mass threshold, Y_ã^dec ≈ 5 × 10⁻⁴ M_P Γ_g̃ / m_g̃², remains independent of T_R. At T_R roughly below the mass of the squarks and gluinos, their thermal population starts to become strongly suppressed by the Boltzmann factor e^{−m/T}, hence the distinct knee in the scattering contribution in Fig. 1. It is in this region that gluino decays (dash-dotted line) given by Eq. (4) become dominant before they also become suppressed by the Boltzmann factor due to the gluino mass. For m_χ < T_R < m_q̃, m_g̃, the axino yield is well approximated by Y_ã^TP ≈ Y_ã^dec ≃ 5 × 10⁻⁴ (M_P Γ_g̃ / T_R²) e^{−m_g̃/T_R}, and depends sensitively on the reheating temperature. At still lower temperatures the population of strongly interacting sparticles becomes so tiny that at T_R ~ m_χ neutralino decays start playing some role before they too become Boltzmann suppressed. We indicate this by plotting in Fig. 1 the contribution of the lightest neutralino (dotted line). It is clear that the values of Y_ã^TP in this region are so small that, as we will see later, they will not play any role in the further discussion. We therefore do not present the effect of the decays of the heavier neutralinos. Furthermore, model-dependent dimension-four operators will change the axino production cross section at lower T_R ~ M_SUSY but will be suppressed at high temperatures. We have not studied this point yet. We emphasize that axinos produced in this way are already out of equilibrium. Their number density is very much smaller than n_γ (except for T_R ~ 10⁹ GeV and above) and cross sections for axino re-annihilation into other particles are greatly suppressed. This is why in Eq. (2) we have neglected such processes. Nevertheless, even though axinos never reach equilibrium, their number density may be large enough to give Ω_ã ~ 1 for large enough axino masses (keV to GeV range), as we will see later.
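As a rough numerical check, the gluino width of Eq. (4) and the low-T_R decay yield Y_ã^dec ≃ 5 × 10⁻⁴ (M_P Γ_g̃/T_R²) e^{−m_g̃/T_R} quoted above can be evaluated directly; the sketch below assumes α_s ≈ 0.1 near the TeV scale and the reference values f_a/N = 10¹¹ GeV, m_g̃ = 1 TeV (the function and constant names are ours, purely illustrative):

```python
import math

M_P = 1.2e19     # Planck mass [GeV] (illustrative value)
ALPHA_S = 0.1    # strong coupling near the TeV scale (assumed)
F_A_N = 1e11     # f_a / N [GeV]

def gamma_gluino(m_gluino, m_axino):
    """Gluino decay width g~ -> axino + gluon of Eq. (4), in GeV."""
    return (ALPHA_S**2 * m_gluino**3 / (128.0 * math.pi**3 * F_A_N**2)
            * (1.0 - m_axino**2 / m_gluino**2)**3)

def y_dec(t_reheat, m_gluino=1000.0, m_axino=1.0):
    """Decay yield Y ~ 5e-4 (M_P Gamma / T_R^2) exp(-m_gluino/T_R),
    valid in the window m_chi < T_R < m_gluino."""
    return (5e-4 * M_P * gamma_gluino(m_gluino, m_axino) / t_reheat**2
            * math.exp(-m_gluino / t_reheat))
```

The Boltzmann factor dominates, so the yield falls steeply as T_R drops below the gluino mass, reproducing the sharp drop of the dash-dotted curve in Fig. 1.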
3.2 Non-Thermal Production
The mechanism for non-thermal production (NTP) that we will consider works as follows. Consider some lightest ordinary superpartner (LOSP). Because axino LSP couplings to everything else are suppressed by 1/f_a, as the Universe cools down, all heavier SUSY partners will first cascade-decay to the LOSP. The LOSPs then freeze out of thermal equilibrium and subsequently decay into axinos. A natural (although not unique) candidate for the LOSP is the lightest neutralino. For example, in models employing full unification of superpartner masses (like the CMSSM/mSUGRA), the mechanism of radiative electroweak symmetry breaking typically implies μ² ≫ M₁², where M₁ is the bino mass parameter. As a result, the bino-like neutralino often emerges as the lightest ordinary superpartner⁹,¹⁰,¹¹.
In the following we will assume that the LOSP is the neutralino. It can decay to the axino and a photon, χ → ãγ, with the rate¹,²

Γ(χ → ãγ) = [α_em² C_aχγ² / 128π³] [m_χ³ / (f_a/N)²] (1 − m_ã²/m_χ²)³.   (6)

Here α_em is the electromagnetic coupling strength and C_aχγ = (C_aYY / cos θ_W) Z₁₁, with Z₁₁ standing for the bino part of the lightest neutralino. (We use the basis χ_i = Z_i1 B̃ + Z_i2 W̃₃ + Z_i3 H̃_b⁰ + Z_i4 H̃_t⁰ (i = 1, 2, 3, 4) of the respective fermionic partners (denoted by a tilde) of the electrically neutral gauge bosons B and W₃, and the MSSM Higgs bosons H_b⁰ and H_t⁰.) The corresponding lifetime can be written as

τ(χ → ãγ) ≃ 0.33 sec (1/(C_aYY Z₁₁)²) ((f_a/N)/10¹¹ GeV)² (100 GeV/m_χ)³ (1 − m_ã²/m_χ²)⁻³.   (7)

For large enough neutralino mass, an additional decay channel into an axino and a Z boson opens up, but it is always subdominant relative to χ → ãγ because of both the phase-space suppression and the additional factor of tan² θ_W. As a result, even at m_χ ≫ m_Z, m_ã one finds Γ(χ → ãZ) ≃ Γ(χ → ãγ)/3.35. It is also clear that the neutralino lifetime rapidly decreases with its mass (∝ 1/m_χ³). On the other hand, if the neutralino is not mostly a bino, its decay will be suppressed by the Z₁₁ factor in C_aχγ. Other decay channels are the decays into an axino and Standard Model fermion pairs through a virtual photon or Z, but they are negligible compared with the previous ones. We will discuss them later since, for a low neutralino mass, i.e., a long lifetime, they can, even if subdominant, produce dangerous hadronic showers during and after nucleosynthesis. Additionally, in the DFSZ type of models there exist additional Higgs-higgsino-axino couplings, which may open up other channels². These are model-dependent and I will not discuss them here.
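Plugging the width of Eq. (6) into τ = ħ/Γ reproduces the 0.33 sec prefactor of Eq. (7) for a 100 GeV bino (Z₁₁ ≈ 1, C_aYY = 1). A minimal sketch; the values α_em ≈ 1/128 and cos θ_W ≈ 0.877 and all names below are our illustrative choices:

```python
import math

HBAR = 6.582e-25  # GeV * s

def width_chi_to_axino_photon(m_chi, m_axino, f_a_n=1e11,
                              c_ayy=1.0, z11=1.0,
                              alpha_em=1.0 / 128, cos_thw=0.877):
    """Width of chi -> axino + photon, Eq. (6), in GeV.
    C_achi_gamma = (C_aYY / cos(theta_W)) * Z11 as defined in the text."""
    c = (c_ayy / cos_thw) * z11
    return (alpha_em**2 * c**2 * m_chi**3
            / (128.0 * math.pi**3 * f_a_n**2)
            * (1.0 - m_axino**2 / m_chi**2)**3)

def lifetime_sec(m_chi, m_axino, **kw):
    """Neutralino lifetime tau = hbar / Gamma, in seconds."""
    return HBAR / width_chi_to_axino_photon(m_chi, m_axino, **kw)
```

For m_χ = 100 GeV and m_ã = 1 GeV this gives τ ≈ 0.33 s, and the lifetime falls as 1/m_χ³, as stated in the text.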
3.3 Constraints
Several nontrivial conditions have to be satisfied in order for axinos to be a viable CDM candidate. First, we expect their relic abundance to be large enough, Ω_ã h² ≳ 0.2. This obvious condition will have a strong impact on the other bounds. Next, the axinos generated through both TP and NTP will in most cases be initially relativistic. We will therefore require that they become non-relativistic, or cold, well before the era of matter dominance.
Furthermore, since NTP axinos will be produced near the time of BBN, we will require that they do not contribute too much relativistic energy density to radiation during BBN. Finally, the decay products associated with axino production will often result in electromagnetic and hadronic showers which, if too large, would cause too much destruction of light elements. In deriving all of these conditions, except for the first one, the lifetime of the parent LOSP will be of crucial importance. A detailed discussion of the bounds would take more time, and space, than is available; I will therefore merely summarize the relevant results. First, the condition that the axinos give the dominant contribution to the matter density at the present time can be expressed as m_ã Y_ã ≃ 0.72 eV (Ω_ã h²/0.2), which applies to both TP and NTP relics. It is worth mentioning here that, for an initial thermal population of axinos, the yield at decoupling is approximately Y_ã ≈ Y_eq ≈ 2 × 10⁻³, which gives m_ã ≃ 0.36 keV (Ω_ã h²/0.2). This is an updated value for the RTW bound. Next, we want to determine the temperature of the Universe at which the axinos become non-relativistic. In nearly all cases axinos are initially relativistic and, due to expansion, become non-relativistic at some later epoch which depends on their mass and production mechanism. In the case of TP, axinos are not in thermal equilibrium but, since they are produced in kinetic equilibrium with the thermal bath, their momenta will have a thermal spectrum. They will become non-relativistic when the thermal bath temperature reaches the axino mass, T_NR ≃ m_ã. NTP axinos generated through out-of-equilibrium neutralino decays will be produced basically monochromatically, all with the same energy, roughly given by m_χ/2, unless they are nearly mass-degenerate with the neutralinos. This is so because the neutralinos, when they decay, are themselves already non-relativistic.
Thus, due to momentum red-shift, axinos will become non-relativistic only at a later time, when 2p(T_NR) ≃ m_ã. The temperature T_NR can be expressed as T_NR ≃ 4.2 × 10⁻⁵ m_ã C_aYY Z₁₁ (m_χ/100 GeV)^{1/2} (10¹¹ GeV/(f_a/N)). This epoch has to be compared to the matter-radiation equality epoch, given by T_eq ≃ 1.1 eV (Ω_ã h²/0.2), which holds for both thermal and non-thermal production. In the TP case one can easily see that T_NR ≫ T_eq is satisfied for any interesting range of m_ã. In the case of NTP the condition T_NR ≫ T_eq is satisfied for

m_ã ≫ 27 keV (1/(C_aYY Z₁₁)) (100 GeV/m_χ)^{1/2} ((f_a/N)/10¹¹ GeV) (Ω_ã h²/0.2).   (8)

If axinos were lighter than the bound (8), then the point of radiation-matter equality would be shifted to a later time, around T_NR. Note that in this case axinos would constitute not cold, but warm or hot dark matter. In the NTP case discussed here other constraints will however require the axino mass to be larger than the above bound, so that we can discard this possibility. BBN predictions provide further important constraints on axinos as relics. In the case of non-thermal production most axinos will be produced only shortly before nucleosynthesis and, being still relativistic, may contribute too much to the energy density during the formation of light elements. In order not to affect the Universe's expansion during BBN, the axino contribution to the energy density should satisfy ρ_ã/ρ_ν < δN_ν, where ρ_ν is the energy density of one neutrino species. Agreement with observations of light elements requires δN_ν ≃ 0.2-1. This leads to²
m_ã ≳ 181 keV (1/δN_ν) (1/(C_aYY Z₁₁)) (100 GeV/m_χ)^{1/2} ((f_a/N)/10¹¹ GeV) (Ω_ã h²/0.2).   (9)

Finally, photons and quark pairs produced in NTP decays of neutralinos, if produced during or after BBN, may lead to a significant depletion of the primordial elements. One often applies a crude constraint that the lifetime should be less than about 1 second, which in our case would provide a lower bound on m_χ. First, photons produced in the reaction χ → ãγ carry a large amount of energy, roughly m_χ/2. If the decay takes place before BBN, the photon will rapidly thermalize via multiple scatterings off background electrons and positrons. The process will be particularly efficient at plasma temperatures above 1 MeV, which is the threshold for background e⁺e⁻ pair annihilation and which, incidentally, coincides with a time of about 1 second. But a closer examination¹² shows that scattering with the high-energy tail of the CMBR also thermalizes photons very efficiently, and so the decay lifetime into photons can be as large as 10⁴ sec. By comparing this with Eq. (7) we find that, in the gaugino regime, this can be easily satisfied for m_χ < m_Z. It is only in the nearly pure higgsino case, with a mass of tens of GeV, that the bound would become constraining. We are not interested in such light higgsinos for other reasons, as will be explained later. A much more stringent constraint comes from considering hadronic showers from qq̄ pairs. These will be produced through virtual photon and Z exchange and, above the kinematic threshold for χ → ãZ, also through the exchange of a real Z boson. Here the discussion is somewhat more involved and the resulting constraint strongly depends on m_χ. One can show² that at the end one gets roughly m_ã ≳ 360 MeV for m_χ ≲ 60 GeV, which gives the strongest bound so far. However, the bound on m_ã decreases nearly linearly with m_χ and disappears completely for m_χ ≳ 150 GeV. In summary, a lower bound m_ã ≳ O(300 keV) arises from either requiring the axinos to be cold at the time of matter dominance or that they do not contribute too much to the relativistic energy density during BBN. The constraint from hadronic destruction of light elements can be as strong as m_ã ≳ 360 MeV (in the relatively light bino case), but it is highly model-dependent and disappears for larger m_χ.

3.4 Relic Abundance from Thermal and Non-Thermal Production
In the TP case the axino yield is primarily determined by the reheating temperature. For large enough T_R (T_R ≫ m_q̃, m_g̃), it is proportional to T_R/f_a². In contrast, the NTP axino yield is for the most part independent of T_R (so long as T_R ≫ T_f, the neutralino freeze-out temperature). In the NTP case, the yield of axinos is just the same as that of the decaying neutralinos. This leads to¹ Ω_ã h² = (m_ã/m_χ) Ω_χ h², where Ω_χ h² stands for the abundance that the neutralinos would have today had they not decayed into axinos. In order to be able to compare both production mechanisms, we will therefore fix the neutralino mass at some typical value. Furthermore, we will map out a cosmologically interesting range of axino masses for which Ω_ã^NTP h² ~ 1. Our results are presented in Fig. 2 for the case of a nearly pure bino. We also fix m_χ = 100 GeV and f_a = 10¹¹ GeV. The dark region is derived in the following way. It is well known that Ω_χ h² can take a wide range of values spanning several orders of magnitude. In the framework of the MSSM, which we have adopted, global scans give Ω_χ h² ≲ 10⁴ in the bino region at m_χ ≃ 100 GeV. (This limit decreases roughly linearly (on a log-log scale) down to ~10³ at m_χ ≃ 400 GeV.) For m_χ = 100 GeV we find that the expectation Ω_ã^NTP h² ~ 1 gives

10 MeV ≲ m_ã < m_χ.   (10)

We note, however, that the upper range of Ω_χ h² corresponds to SM superpartner masses in the multi-TeV range; restricting them to more natural values would push the lower end of the range up to m_ã ≳ 1 GeV. Still, for the sake of generality, we have kept the much more generous bound (10) in Fig. 2, but marked the low range of m_ã with a light grey band to indicate the above point. Likewise, for reheating temperatures just above T_f, standard estimates of Ω_χ h² become questionable. We have therefore indicated this range of T_R again with a light grey colour. It has also been recently pointed out¹³ that, even in the case of very low reheating temperatures T_R below the LOSP freeze-out temperature, a significant population of LOSPs will be generated during the reheating phase. Such LOSPs would then also decay into axinos as above.
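The NTP bookkeeping here is simple enough to sketch directly: one axino per decayed neutralino gives Ω_ã h² = (m_ã/m_χ) Ω_χ h², and inverting this relation with the maximal scan value Ω_χ h² ~ 10⁴ recovers the lower end of Eq. (10). The function names below are ours, purely illustrative:

```python
def omega_axino_ntp_h2(m_axino_gev, m_chi_gev, omega_chi_h2):
    """NTP relic abundance: each decaying neutralino leaves one axino,
    so the energy density just rescales by the mass ratio."""
    return (m_axino_gev / m_chi_gev) * omega_chi_h2

def min_interesting_axino_mass(m_chi_gev, omega_chi_h2_max, target_h2=1.0):
    """Smallest m_axino (GeV) giving Omega_a h^2 ~ target for the largest
    neutralino abundance allowed by the MSSM scans quoted in the text."""
    return target_h2 * m_chi_gev / omega_chi_h2_max
```

With m_χ = 100 GeV and Ω_χ h² = 10⁴ this returns 0.01 GeV = 10 MeV, the lower end of Eq. (10).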
Figure 2. The thick solid line gives the upper bound from thermal production on the reheating temperature as a function of the axino mass. The dark region is the region where non-thermal production can give cosmologically interesting results (Ω_ã^NTP h² ~ 1), as explained in the text. We assume a bino-like neutralino with m_χ = 100 GeV and f_a = 10¹¹ GeV. The region T_R ≳ T_f is somewhat uncertain and has been denoted with light-grey colour. A sizeable abundance of neutralinos (and therefore axinos) is expected also for T_R < T_f but has not been calculated. The vertical light-grey band indicates that the low range of m_ã corresponds to allowing SM superpartner masses in the multi-TeV range, as discussed in the text. The division into hot, warm and cold dark matter by the axino mass, shown in the lower left part, is for axinos from non-thermal production.
We have not considered such cases in our analysis and accordingly left the region T_R < T_f blank, even though in principle we would expect some sizeable range of Ω_ã h² there. We can see that for large T_R the TP mechanism is more important than the NTP one, as expected. Note also that in the TP case the cosmologically favoured region (0.2 ≲ Ω_ã h² ≲ 0.4) would form a very narrow strip (not indicated in Fig. 2) just below the Ω_ã^TP h² = 1 boundary. In contrast, the NTP mechanism can give the cosmologically interesting range of the axino relic abundance for a relatively wide range of m_ã, so long as T_R ≲ 5 × 10⁴ GeV. Perhaps in this sense the NTP mechanism can be considered somewhat more robust.
3.5 Conclusions
The intriguing possibility that the axino is the LSP and dark matter possesses a number of very distinct features which make this case very different from those of both the neutralino and the gravitino. In particular, the axino can be a cold DM WIMP for a rather wide range of masses in the MeV to GeV range and for relatively low reheating temperatures T_R ≲ 5 × 10⁴ GeV. As T_R increases, thermal production of axinos starts dominating over non-thermal production, and the axino typically becomes a warm DM relic with a mass broadly in the keV range. In contrast, the neutralino is typically a cold DM WIMP. Low reheating temperatures would favour baryogenesis at the electroweak scale. They would also alleviate the nagging "gravitino problem". If additionally it is the axino that is the LSP and the gravitino is the NLSP, the gravitino problem is resolved altogether for both low and high T_R. Phenomenologically, one faces a well-justified possibility that the bound Ω_χ h² < 1, which is often imposed in constraining the SUSY parameter space, may be readily avoided. In fact, the range Ω_χ h² ≫ 1 (and with it typically large masses of superpartners) would now be favoured if the axino is to be a dominant component of DM in the Universe. Furthermore, the lightest ordinary superpartner could be either neutral or charged but would appear stable in collider searches. The axino, with its exceedingly tiny coupling to other matter, will be a real challenge to experimentalists. It is much more plausible that a supersymmetric particle and the axion will be found first. Unless the neutralino (or some other WIMP) is detected in DM searches, the axino will remain an attractive and robust candidate for solving the outstanding puzzle of the nature of dark matter in the Universe.
Acknowledgements

I am greatly indebted to Jihn E. Kim and the other members of the Local Organizing Committee for setting up an inspiring meeting in a beautiful location, in the spirit of the COSMO workshops.

References
1. L. Covi, J.E. Kim and L. Roszkowski, Phys. Rev. Lett. 82, 4180 (1999).
2. L. Covi, H.B. Kim, J.E. Kim and L. Roszkowski, hep-ph/0101009.
3. J.E. Kim, Phys. Rev. Lett. 43, 103 (1979); M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. B166, 493 (1980).
4. M. Dine, W. Fischler and M. Srednicki, Phys. Lett. B104, 199 (1981); A.P. Zhitnitskii, Sov. J. Nucl. Phys. 31, 260 (1980).
5. J.E. Kim, Phys. Rep. 150, 1 (1987); M.S. Turner, Phys. Rep. 197, 67 (1990); G.G. Raffelt, Phys. Rep. 198, 1 (1990); P. Sikivie, hep-ph/0002154.
6. K. Tamvakis and D. Wyler, Phys. Lett. B112, 451 (1982).
7. K. Rajagopal, M.S. Turner and F. Wilczek, Nucl. Phys. B358, 447 (1991).
8. E.J. Chun, J.E. Kim and H.P. Nilles, Phys. Lett. B287, 123 (1992).
9. P. Nath and R. Arnowitt, Phys. Rev. Lett. 69, 725 (1992).
10. R.G. Roberts and L. Roszkowski, Phys. Lett. B309, 329 (1993).
11. G.L. Kane, C. Kolda, L. Roszkowski and J.D. Wells, Phys. Rev. D49, 6173 (1994).
12. J. Ellis et al., Nucl. Phys. B373, 399 (1992).
13. G.F. Giudice, E.W. Kolb and A. Riotto, hep-ph/0005123.
SIGNATURE FOR SIGNALS FROM THE DARK UNIVERSE

R. BERNABEI, P. BELLI, R. CERULLI, F. MONTECCHIA
Dip. di Fisica, Università di Roma "Tor Vergata" and INFN Sez. Roma2, I-00133 Rome, Italy; E-mail: bernabei@roma2.infn.it

M. AMATO, G. IGNESTI, A. INCICCHITTI, D. PROSPERI
Dip. di Fisica, Università di Roma "La Sapienza" and INFN Sez. Roma, I-00185 Rome, Italy

C.J. DAI, H.L. HE, H.H. KUANG, J.M. MA
IHEP, Chinese Academy, P.O. Box 918/3, Beijing 100039, China
The DAMA experiment is located at the Gran Sasso National Laboratories of the I.N.F.N. and is searching for particle Dark Matter by using various scintillators as target-detector systems. In particular, the results obtained by analysing in terms of the WIMP annual modulation signature the data collected with the highly radiopure ~100 kg NaI(Tl) set-up during four annual cycles (total statistics of 57986 kg·day) are reviewed here.
1 Introduction

DAMA is searching for WIMPs mainly by detecting their elastic scattering on highly radiopure scintillator target-nuclei. Moreover, several results on different topics (such as ββ decays, electron stability, nucleon stability, PEP-violating processes, etc.) have also been achieved¹. Studies on possible future experiments with very large mass, very highly radiopure NaI(Tl) have also been performed². The main DAMA activities are focused on: i) the ≈2 l liquid Xenon pure scintillator; ii) the CaF₂(Eu) prototypes; iii) the ~100 kg NaI(Tl) set-up. Interesting results have been obtained with the different set-ups; however, here they will only be briefly mentioned, referring to the bibliography, while particular attention will be devoted to the WIMP annual modulation signature³, investigated by means of the ~100 kg NaI(Tl) set-up.
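The annual modulation signature exploited here expects the WIMP interaction rate to vary as S(t) = S₀ + S_m cos(ω(t − t₀)), with a one-year period and phase t₀ around June 2, when the Earth's orbital velocity adds maximally to the Sun's motion through the halo. A schematic sketch of this standard expectation (the numerical values and names below are illustrative, not DAMA's):

```python
import math

T_YEAR = 365.25   # days
T0 = 152.5        # day of year near June 2 (phase of maximum rate)

def expected_rate(t_days, s0=1.0, sm=0.02):
    """Single-hit counting rate (arbitrary units) under the annual
    modulation signature S(t) = S0 + Sm * cos(omega * (t - T0))."""
    omega = 2.0 * math.pi / T_YEAR
    return s0 + sm * math.cos(omega * (t_days - T0))
```

The rate peaks at t ≈ t₀ and reaches its minimum half a year later; the small ratio S_m/S₀ (a few per cent) is what makes the large exposure of the ~100 kg set-up necessary.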
2 The LXe set-up
We pointed out the interest in using LXe pure scintillators in WIMP searches as early as ref. 4. Several prototype detectors have been built and used since the 80s; the final choice was to realize a pure liquid Xenon scintillator directly collecting the emitted UV light. A high-purity OFHC Copper vessel without any additional material (wave shifters, wires, etc.) assures both a highly radiopure container and the absence of highly degassing/soluble materials inside it, which could reduce the light collection and the quenching factor valueᵃ. Moreover, to assure both the purity and the radiopurity of the Xenon, Kr-free Xenon enriched in ¹²⁹Xe at 99.5% has been used so far, while the purification line uses special getters activated at high temperature and a low-temperature trap. During the last years several upgrades have been performed, improving the experimental sensitivity. The competitiveness of this set-up has been discussed in detail elsewhere. First results on WIMP-¹²⁹Xe elastic scattering were published in 1996⁴. Then, after the measurement of the quenching factor and of the pulse shape discrimination capability both with an Am-B source and with a neutron beam at ENEA-Frascati (a 40 cc devoted set-up has been used for this purpose)⁴, significantly improved results have been achieved thanks to a much reduced counting rate and to the pulse shape discrimination of the events⁴. The same set-up has been used to investigate WIMP-¹²⁹Xe inelastic scattering (exciting the 39.58 keV level), obtaining both model-independent and model-dependent exclusion plots⁴. Furthermore, new competitive limits on charge non-conserving processes and on nucleon and di-nucleon decay have been obtained with the same set-up¹. Since August 2000 the LXe set-up has been running filled with Kr-free Xenon gas enriched at ~68.8% in ¹³⁶Xe⁵, which had been stored underground for about 15 years. A significant upgrade of the purification/filling/recovery line and of the shield has been carried out; now both gases can be alternatively used in the apparatus.
3 The CaF₂(Eu)
The interest in developing CaF₂(Eu) radiopure detectors was based on the presence of Fluorine, which offers a large cross section for spin-dependent coupled WIMPs, and on the presence of Ca isotopes, which allow the study of double beta decay processes. The most recently released results are available in ref. 6 (see also references therein), where - in addition to better limits in the SD exclusion plot - new and improved limits on the ββ processes in the ⁴⁰Ca and ⁴⁶Ca isotopes have been achieved. A new prototype detector has been stored deep underground for several months and measurements are foreseen in the R&D installation at the end of 2000.

ᵃThe quenching factor, considered in dark matter searches, is defined for a scintillator as the ratio of the amount of light induced by a recoiling nucleus to the amount of light induced by an electron of the same kinetic energy.
4 The ~100 kg highly radiopure NaI(Tl) set-up
A detailed description of the ~ 100 kg highly radiopure Nal(Tl) set-up and of its performances is given in ref. 7 as well as the stability control of the various parameters, the noise rejection, the efficiency, the calibrations, the higher energy stability, the total hardware rate, etc.. Nine 9.70 kg Nal(Tl) detectors have been especially built for the experiment on the W I M P annual modulation signature by means of a joint effort, with Crismatec company. The materials used for these detectors have been selected - as well as those for the P M T s by measuring sample radiopurity with low background germanium detectors deep underground in the low background facility of the G r a n Sasso National Laboratory 7 . As regards the samples of powders, their U / T h content was measured in Ispra with a mass spectrometer, while their K content was determined in the chemical department of the University of Rome "La Sapienza" with an atomic absorption spectrometer. A single growth has been used for all the crystals. The crystals are enclosed in a low radioactive copper box inside a low radioactive shield made by 10 cm of copper and 15 cm of lead; the lead is surrounded by 1.5 m m Cd foils and about 10 cm of polyethylene/paraffin. T h e copper box is maintained in high purity (HP) Nitrogen atmosphere by continuously flushing high purity Nitrogen gas. Each detector is viewed through 10 cm long light guides by two low background EMI9265B53/FL - 3" diameter - P M T s working in coincidences; the hardware threshold for each P M T is at single photoelectron level. T h e 9.70 kg detectors have tetrasil-B light guides directly coupled to the bare crystals (acting also as windows). Other four crystals of 7.05 kg - originally developed for other purposes - are used as veto of the other detectors and for special triggers; they have tetrasil-B windows and are coupled to the P M T s in one case by tetrasil-B and in the others by noUV-plexiglass light guides. 
All the crystals have surfaces polished with the same procedure and are wrapped in a TETRATEC-4 (Teflon) diffuser, as are the light guides. On the top of the shield a glove-box, maintained in the same Nitrogen atmosphere as the Cu box containing the detectors, is directly connected to it through 4 Cu thimbles in which source holders can be inserted to calibrate all the detectors at the same time without allowing them to come into direct contact with environmental air. The glove-box is equipped with a compensation chamber. When the source holders are not inserted, Cu bars completely fill the thimbles.
In the production runs, the knowledge of the energy scale is assured by periodical calibration with a ²⁴¹Am source and by monitoring (in the production data summed every ~ 7 days) the position and resolution of the ²¹⁰Pb peak (46.5 keV), which is present at the level of a few cpd/kg in the measured energy distributions because of a surface contamination by environmental Radon which occurred during the first period of the detectors' storage underground. The trigger is issued when at least one crystal fires; then, the pulse shape profile is acquired by a 200 MSample/s Lecroy Transient Digitizer if the event is a single hit in the lowest energy region; ADC values are recorded for every event over the whole energy scale 7. A hardware/software monitoring of the running conditions is operating; in particular, several probes are read out by CAMAC and stored with the production data. Moreover, self-controlled computer processes automatically check the stability parameters and manage alarms 7. As regards the rejection of obvious noise events (sharply decreasing with the increase of the number of available photoelectrons) present below ~ 8-10 keV, a relatively large number of photoelectrons/keV is available: about 5.5-7.5, depending on the detector. This allows this noise to be rejected effectively by exploiting the different timing structure of the noise (PMT fast signals with decay times of order of tens of ns) and of the scintillation pulses (signals with decay times of order of hundreds of ns). The large difference in decay times, the good performances of the electronic chain and the relatively large number of available photoelectrons allow an effective noise rejection 7. As mentioned in ref. 7, several variables can be built by using the pulse information recorded over 3250 ns by the Transient Digitizer. In particular, for each considered energy bin, we plot the Y = Area(from 0 ns to 50 ns)/Area(from 0 ns to 100 ns) value versus the X = Area(from 100 ns to 600 ns)/Area(from 0 ns to 600 ns) value calculated for each event. In this X, Y plane the slow scintillation pulses are grouped roughly around (X ~ 0.7, Y ~ 0.5), well separated from the noise population, which is grouped around small X and high Y values (see ref. 7). To select the scintillation pulses an acceptance window in X, Y is applied. Since the statistical spread of the two populations in the X, Y plane becomes larger when the number of available photoelectrons decreases, and the S/N also decreases, smaller acceptance windows become necessary to maintain the same full noise rejection power (as necessary also e.g. for correct pulse shape discrimination, PSD, analyses). According to standard procedures, the window acceptance for scintillation pulses is determined by applying the same window to the scintillation data induced - in the same energy bin - by an external source of suitable strength 7.

The measured energy counting rate has been published for various energy intervals 1,7; we only recall here that the low energy counting rate considered in the studies for the WIMP search refers to events where only one detector of the many actually fires, since the WIMP multi-scattering probability is negligible. In practice, each detector has all the others as veto. The quenching factor and the discrimination capability of these detectors have been measured at ENEA-Frascati, and a pulse shape analysis of a statistics of 4123.2 kg·day (DAMA/NaI-0) has been performed by applying the pulse shape discrimination technique to reject electromagnetic background 7, determining an upper limit on recoils. A statistics of 14962 kg·day has also been analysed in terms of a possible diurnal variation of the low energy rate as a function both of the sidereal time (to search for relatively high cross section WIMPs 8) and of the solar time (to verify the absence of possible diurnal effects of unknown nature in the data) 7. Furthermore, competitive results on several different topics, such as the search for neutral SIMPs and nuclearites, the investigation of the electron stability and that of PEP violating processes, have also been achieved by exploiting the data collected in various energy regions 1. For all these results we refer to the quoted bibliography, while the following sections will be devoted to summarizing the studies on the annual modulation search 7.
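The pulse-shape variables described above can be illustrated with a minimal numerical sketch (not DAMA code), assuming X = Area(100-600 ns)/Area(0-600 ns) and Y = Area(0-50 ns)/Area(0-100 ns), a 200 MSample/s digitizer (5 ns per sample) and single-exponential pulses with decay times of the orders of magnitude quoted in the text:

```python
import numpy as np

DT_NS = 5.0  # sampling step in ns for a 200 MSample/s digitizer

def area(pulse, t0_ns, t1_ns):
    """Approximate integral of the digitized pulse between t0_ns and t1_ns."""
    return pulse[int(t0_ns / DT_NS):int(t1_ns / DT_NS)].sum() * DT_NS

def xy_variables(pulse):
    x = area(pulse, 100, 600) / area(pulse, 0, 600)  # fraction in the tail
    y = area(pulse, 0, 50) / area(pulse, 0, 100)     # steepness of the front
    return x, y

t = np.arange(0, 3250, DT_NS)          # ns, full 3250 ns digitizer window
scintillation = np.exp(-t / 250.0)     # decay time of order hundreds of ns
pmt_noise = np.exp(-t / 30.0)          # decay time of order tens of ns

x_s, y_s = xy_variables(scintillation)  # slow pulse: sizeable X, moderate Y
x_n, y_n = xy_variables(pmt_noise)      # fast noise: small X, high Y
```

With these assumed decay times the two synthetic populations land on opposite sides of the X, Y plane, reproducing the qualitative separation on which the acceptance window relies.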
4.1 The investigation of the WIMP annual modulation signature
As we have clearly pointed out 7, the annual modulation signature is a well distinguished one, requiring the presence not of a "generic" rate variation but of a variation satisfying all the following specifications: i) presence of a correlation with a cosine function; ii) proper period (1 year); iii) proper phase (about June 2nd); iv) only in a well defined low energy region (where WIMP induced recoils can be significantly present); v) only for single "hit" events; vi) with a modulation amplitude in the region of maximal sensitivity not exceeding 7%. The careful verification of the satisfaction of all these requirements is realized by DAMA thanks to: i) the collection of the whole energy spectrum from the single photoelectron threshold to the MeV region; ii) the continuous monitoring and control of several parameters; iii) many consistency checks and statistical tests 7. Note that to mimic the WIMP annual modulation signature a systematic effect should not only be quantitatively significant, but also able to satisfy all six requirements, as a WIMP induced effect does. Four years of data taking for annual modulation studies have been released so far, namely DAMA/NaI-1, 2, 3 and 4, for a total statistics of 57986 kg·day (the largest statistics ever collected in the field of WIMP search) as reported in Table 1; there also the DAMA/NaI-0 running period is listed, which offered - as
Table 1. Released data sets 7; from 1 to 4 they refer to different annual cycles.

period             statistics (kg·day)
DAMA/NaI-1         4549
DAMA/NaI-2         14962
DAMA/NaI-3         22455
DAMA/NaI-4         16020
Total statistics   57986
+ DAMA/NaI-0       limits on recoil fraction by PSD
mentioned above - by means of a PSD study, an upper limit on the recoil rate, which has been properly included in the final result of the global model dependent analysis given later 7.

4.2 The model independent approach and the investigation of possible systematic effects
An immediate evidence of the presence of modulation in the lowest energy region of the experimental data is given in Fig. 1, where the model independent residual rate for the cumulative 2-6 keV energy interval is shown as a function of the time 7. The χ² test of the data of Fig. 1 disfavors the hypothesis of unmodulated behaviour, giving a probability of 4·10⁻⁴, while fitting these residuals with the function A·cos ω(t − t₀) (obviously integrated over each of the considered time bins), a period T = 2π/ω = (1.00 ± 0.01) year (when fixing t₀ at 152.5 days) and a phase t₀ = (144 ± 13) days (when fixing T at 1 year) are obtained. The modulation amplitude as a free parameter gives A = (0.022 ± 0.005) cpd/kg/keV and A = (0.023 ± 0.005) cpd/kg/keV, respectively. Similar results, but with slightly larger errors, are obtained for T and t₀ in case all the parameters are kept free. As is evident, the period and the phase fully agree with the ones expected for a WIMP induced effect. In the following we summarize the investigation of possible systematics able to mimic such a signature (that is, not only quantitatively significant but also able to satisfy the six requirements given above), considering here in particular the data of the two last running periods, since similar arguments for the DAMA/NaI-1 and DAMA/NaI-2 data have already been discussed elsewhere 7 and at many conferences and seminars. As mentioned above, a stringent noise rejection is performed. However, to quantitatively investigate the role of a possible noise tail in the data after rejection on the annual modulation result, the hardware rate, R_Hj, of each detector above the single photoelectron threshold can be considered.
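The cosine fit described above can be sketched numerically. This is a minimal illustration on synthetic data (the amplitude, phase and scatter are hypothetical values chosen to resemble the quoted numbers, not DAMA data); with the period fixed at 1 year the fit is linear in the two quadrature amplitudes:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 365.25                      # days, fixed period
omega = 2 * np.pi / T
t = np.arange(0.0, 1500.0, 30.0)        # ~ 4 annual cycles, monthly bins
A_true, t0_true = 0.022, 152.5          # cpd/kg/keV, days (illustrative)
r = A_true * np.cos(omega * (t - t0_true))
r += rng.normal(0.0, 0.010, t.size)     # hypothetical statistical scatter

# A*cos(w*(t-t0)) = a*cos(w*t) + b*sin(w*t) with a = A*cos(w*t0),
# b = A*sin(w*t0): solve by linear least squares.
M = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
(a, b), *_ = np.linalg.lstsq(M, r, rcond=None)
A_fit = np.hypot(a, b)                  # recovered modulation amplitude
t0_fit = (np.arctan2(b, a) / omega) % T  # recovered phase, in days
```

The recovered A_fit and t0_fit land near the injected values within the statistical uncertainty, mirroring how period and phase are extracted from the residual rate.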
Figure 1. Model independent residual rate in the 2-6 keV cumulative energy interval as a function of the time elapsed since January 1st of the first year of data taking (DAMA/NaI-1 to 4). The expected behaviour of a WIMP signal is a cosine function with minimum roughly at the dashed vertical lines and maximum roughly at the dotted ones.
The distribution of the relative variations of R_Hj around its mean value shows a gaussian behaviour with σ = 0.6% and 0.4% for DAMA/NaI-3 and DAMA/NaI-4, respectively, values well in agreement with those expected on the basis of simple statistical arguments. Moreover, by fitting its time behaviour in both data periods including a WIMP-like modulation term, a modulation amplitude compatible with zero, (0.04 ± 0.12)·10⁻² Hz, is obtained. From this value, considering also the typical noise contribution to the hardware rate (~ 0.10 Hz) of the 9 detectors, the upper limit on the noise relative modulation amplitude has been derived: ≲ 1.8·10⁻³ (90% C.L.) 7. This shows that even in the worst hypothetical case of a 10% contamination of residual noise - after rejection - in the counting rate, the noise contribution to the modulation amplitude in the lowest energy bins would be < 1.8·10⁻⁴ of the total counting rate; that is, a possible noise modulation could account for only < 1% of the annual modulation amplitude observed in ref. 7. In conclusion, a hypothetical tail of residual noise after rejection can be excluded.

As regards the possible role of Radon gas, we recall that in our set-up the detectors have been continuously isolated from environmental air for several years; the different levels of closures are sealed and maintained in an HP Nitrogen atmosphere. However, the environmental Radon level in the installation is continuously monitored and acquired with the production data; the results of the measurements are at the level of sensitivity of the radonmeter used. Moreover, fitting the behaviour of the environmental Radon level with time according to a WIMP-like modulation, the amplitudes (0.14 ± 0.25) Bq/m³ and (0.12 ± 0.20) Bq/m³ are found in the two periods respectively, both consistent with zero. Further arguments are given in ref. 7. In conclusion, considering the results of the Radon measurements and the fact that - in every case - a modulation induced by Radon would fail some of the six requirements of the annual modulation signature (which are instead verified in the production data), a Radon effect can be excluded.

As regards the role of possible temperature variations, we recall that the installation where the ~ 100 kg NaI(Tl) set-up is operating is air-conditioned. The operating temperature of the detectors in the Cu box is read out by a probe and is stored with the production data 7. In particular, sizeable temperature variations could only induce a light output variation, which is negligible considering: i) that around our operating temperature the average slope of the light output is < −0.2%/°C; ii) the energy resolution of these detectors in the keV range; iii) the role of the intrinsic and routine calibrations 7. In addition, every possible effect induced by temperature variations would fail at least some of the six requirements needed to mimic the annual modulation signature; therefore, a temperature effect can be excluded.

In long term running conditions, the knowledge of the energy scale is assured by periodical calibration with a ²⁴¹Am source and by continuously monitoring within the same production data (grouping them every ~ 7 days) the position and resolution of the ²¹⁰Pb peak (46.5 keV) 7.
The distribution of the relative variations of the calibration factor (the proportionality factor between the area of the recorded pulse and the energy), estimated - without applying any correction - from the position of the ²¹⁰Pb peak for all the 9 detectors during both the DAMA/NaI-3 and the DAMA/NaI-4 running periods, has been investigated. From the measured variations of this factor an upper limit of < 1% of the modulation amplitude measured at very low energy has been obtained. Since the results of the routine calibrations are obviously properly taken into account in the data analysis, such a result allows one to conclude that the energy calibration factor for each detector is known with an uncertainty ≪ 1% within every 7 days interval. Moreover, the variation of the calibration factor for each detector, within each interval of ~ 7 days, would give rise to an additional energy spread (σ_cal) besides the detector energy resolution (σ_res). The effect of similar variations of the calibration factor on the relative modulation amplitude is < 1.6·10⁻⁴ and gives an upper limit of < 1% of the modulation amplitude measured at very low energy in ref. 7.
The behaviour of the efficiencies during the whole data taking periods has also been investigated. Their possible time variation depends essentially on the stability of the cut efficiencies; the latter are regularly measured by dedicated calibrations 7. These routine efficiency measurements are performed roughly every 10 days, collecting each time typically 10⁴-10⁵ events per keV. In particular, we have investigated the percentage variations of the efficiency values, e.g. in the (2-8) keV energy interval; they show a gaussian distribution with σ = 0.6% and 0.5% for DAMA/NaI-3 and DAMA/NaI-4, respectively. Moreover, we have verified that the time behaviour of these percentage variations does not show any modulation with the period and phase expected for a possible WIMP signal. In particular, in the (2-4) keV energy interval a modulation amplitude (taking the two periods together) equal to (1.0 ± 1.0)·10⁻³ is found, while in the (4-6) keV interval the result is (0.1 ± 0.7)·10⁻³; both are consistent with zero. Similar results are obtained in other energy bins. In this way, also the unlikely idea of a possible role played by the efficiency values in the observed effect has been ruled out 7.

In order to verify the absence of any significant background modulation, the measured energy distribution in energy regions not of interest for WIMP-nucleus elastic scattering has been investigated 7. In fact, the background in the lowest energy region is expected to be essentially due to Compton electrons, X-rays and/or Auger electrons, muon induced events, etc., which are strictly correlated with the events in the higher energy part of the spectrum; therefore, if a modulation detected in the lowest energy region were due to a background modulation (and not to a possible real signal), an equal or higher (sometimes much higher) modulation in the highest energy region should also be present.
For this purpose, we have considered the rate integrated above 90 keV, R90, as a function of the time. The distributions of the percentage variations of R90 with respect to its mean value for all the crystals during the whole DAMA/NaI-3 and DAMA/NaI-4 running periods show cumulative gaussian behaviours with σ ~ 1%, well accounted for by the statistical spread expected from the sampling time used. This result excludes any significant background variation. Moreover, including a WIMP-like modulation in the analysis of the time behaviour of R90, amplitudes compatible with zero are found in both running periods: −(0.11 ± 0.33) cpd/kg and −(0.35 ± 0.32) cpd/kg. This excludes the presence of a background modulation in the whole energy spectrum at a level much lower than the effect found in the lowest energy region; in fact - otherwise, considering the R90 mean values - the modulation term should be of order of tens of cpd/kg, that is ~ 100 σ away from the measured value. A similar analysis performed in other energy regions, such as e.g. the one just above the
first pole of the Iodine form factor, leads to the same conclusion.

The results given above already account also for the background component due to environmental neutrons; in any case, a further independent analysis has also been performed. As regards thermal neutrons, the reactions ²³Na(n,γ)²⁴Na and ²³Na(n,γ)²⁴ᵐNa (cross sections for thermal neutrons equal to 0.10 and 0.43 barn, respectively 9) have been investigated. The capture rate turns out to be ~ 0.2 captures/day/kg, since the thermal neutron flux has been measured to be 1.08·10⁻⁶ neutrons·cm⁻²·s⁻¹ 10,b. Cautiously assuming a 10% modulation of the thermal neutron flux, the corresponding modulation amplitude in the lowest energy region has been calculated by a Monte Carlo program to be < 10⁻⁵ cpd/kg/keV, that is < 0.05% of the modulation amplitude we found in the lowest energy interval of the production data. In addition, a similar contribution cannot anyhow mimic the annual modulation signature, since it would fail some of the six requirements necessary to mimic a WIMP signal. A similar analysis can also be carried out for the fast neutron case. From the fast neutron flux measured at the Gran Sasso underground laboratory, 0.9·10⁻⁷ neutrons·cm⁻²·s⁻¹ 11, the differential counting rate above 2 keV has been estimated by Monte Carlo to be ~ 10⁻³ cpd/kg/keV. Therefore, cautiously assuming - also in this case - a 10% modulation of the fast neutron flux, the corresponding modulation amplitude is < 0.5% of the modulation amplitude found in the lowest energy interval 7. Moreover, also in this case some of the six requirements mentioned above would fail.

As regards possible side reactions, the only process which has been found as a possibility is the muon flux modulation reported by the MACRO experiment 12.
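The quoted thermal-neutron capture rate can be cross-checked with a back-of-the-envelope calculation (flux × summed cross section × number of Na nuclei per kg of NaI; the Avogadro constant and NaI molar mass are standard values, not taken from the text):

```python
N_A = 6.022e23            # Avogadro's number, mol^-1
M_NAI = 149.89            # g/mol, molar mass of NaI
flux = 1.08e-6            # thermal neutrons cm^-2 s^-1 (value quoted above)
sigma = (0.10 + 0.43) * 1e-24   # cm^2: both capture channels summed

n_na_per_kg = 1000.0 / M_NAI * N_A          # one Na nucleus per NaI unit
rate = flux * sigma * n_na_per_kg * 86400.0  # captures per day per kg

print(round(rate, 2))  # -> 0.2, matching the ~ 0.2 captures/day/kg quoted
```

The agreement confirms that the quoted rate follows directly from the measured flux and the tabulated cross sections.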
In fact, MACRO has observed that the muon flux shows a nearly sinusoidal time behaviour with a one year period and a maximum in the summer, with an amplitude of ~ 2%; this muon flux modulation is correlated with the temperature of the atmosphere. However, it can easily be calculated that this effect would give in our set-up modulation amplitudes ≪ 10⁻⁴ cpd/kg/keV, that is much smaller than what we observe. Moreover, some of the six requirements necessary to mimic the signature would also fail. Thus, it can be safely ignored 7. The search for other possible side reactions able to mimic the signature has not offered so far any candidate.

As a result of the model independent approach and of the full investigation of known systematic effects, the presence of an annual modulation compatible with WIMPs in the Galactic halo is supported by the data independently of
b Consistent upper limits on the thermal neutron flux have been obtained with the ~ 100 kg DAMA NaI(Tl) set-up considering these same capture reactions.
their nature and coupling with ordinary matter. In the next section a particular particle candidate will be investigated; for that, a model is needed as well as an effective energy and time correlation analysis. We take this occasion to remark that a large scenario exists in the model dependent analyses, not only because various candidates with different couplings could be considered, but also because of the large uncertainties affecting several parameters and aspects of each given model.
4.3 Results of a model dependent analysis
Properly considering the time occurrence and the energy of each event, an energy and time correlation analysis of the data between 2 and 20 keV has been performed, according to the method described in ref. 7. This allows one to effectively test the possible presence in the rate of a contribution having the typical features of a given WIMP candidate. In particular we have considered a particle with a dominant spin-independent scalar interaction (as is also possible for the neutralino 13). A detailed discussion is available in ref. 7; here the main result is outlined. In the minimization procedure by the standard maximum likelihood method 7 the WIMP mass has been varied from 30 GeV up to 10 TeV; the lower bound accounts for results achieved at accelerators. The calculations have been performed according to the same astrophysical, nuclear and particle physics considerations as in ref. 7 and to the 90% C.L. recoil limit of ref. 7 (DAMA/NaI-0). Since the analysis of each data cycle independently 7 gives consistent results, a global analysis has been performed, properly including also the known uncertainties on the astrophysical local velocity, v₀ 15. According to ref. 15, the minimization procedure has been repeated by varying v₀ from 170 km/s to 270 km/s, analysing also the case of possible bulk halo rotation. Obviously, the positions of the minima of the log-likelihood function consequently vary 15; for example, in this model framework for v₀ = 170 km/s the minimum is at M_W ≈ 72 GeV and ξσ_p = (5.7 ± 1.1)·10⁻⁶ pb, while for v₀ = 220 km/s it is at M_W ≈ 43 GeV and ξσ_p = (5.4 ± 1.0)·10⁻⁶ pb. The results obtained in this model framework are summarized in Fig. 2, where the regions allowed at 3σ C.L. are shown.

95% discrimination against surface electron-recoil backgrounds.
recoils with > 99.5% efficiency and surface events with > 95% efficiency 17,18. CDMS detectors that sense athermal phonons provide further surface-event rejection based on the differing phonon pulse shapes of bulk and surface events. This phonon-based surface-event rejection is > 99.7% efficient above 20 keV 20,21. The 1-cm-thick, 7-cm-diameter detectors are stacked 3 mm apart with no intervening material. This close packing enables the annular outer ionization electrodes to shield the disk-shaped inner electrodes from low-energy electron sources on surrounding surfaces. The probability that a surface event will multiply scatter is also increased. The low expected rate of WIMP interactions necessitates operation of the detectors underground in a shielded, low-background environment 22,23. Key to the success of the experiment at its current shallow site is a > 99.9% efficient plastic-scintillator veto that detects muons, allowing rejection of events due to muon-induced particles. The measured event rate below 100 keV due to photons is roughly 60 keV⁻¹ kg⁻¹ d⁻¹ overall and 2 keV⁻¹ kg⁻¹ d⁻¹
anticoincident with the veto. Neutrons with energies capable of producing keV nuclear recoils are produced by muons interacting inside and outside the veto ("internal" and "external" neutrons, respectively). The dominant, low-energy (< 50 MeV) component of these neutrons is moderated by a 25-cm thickness of polyethylene between the outer lead shield and the cryostat 24. Essentially all remaining internal neutrons are tagged as muon-coincident by the scintillator veto. However, relatively rare, high-energy external neutrons may punch through the polyethylene and yield secondary neutrons capable of producing keV nuclear recoils. A large fraction of the high-energy external neutrons are vetoed: ~ 40% due to neutron-scintillator interactions, as well as an unknown fraction due to hadronic showers associated with the primary muon. This unknown fraction, combined with a factor of ~ 4 uncertainty in the production rate, makes it difficult to accurately predict the absolute flux of unvetoed external neutrons. Two methods are used to measure this flux of unvetoed external neutrons. First, CDMS detectors may consist of one of two different target materials: Ge, which is more sensitive to WIMPs, or Si, which is more sensitive to neutrons. The neutron rate is therefore measured using the Si detectors (accounting for the possible fraction of the total nuclear-recoil rate that may be due to WIMPs) and then subtracted from the combined rate of neutrons plus WIMPs in Ge. Second, the rate of neutrons scattering in multiple detectors yields a clean measurement of the neutron background, since WIMPs interact too weakly to multiply scatter. Monte Carlo simulations are then used to determine the implied rate of neutron single-scatter events.
It is important to note that such normalization-independent predictions of the simulation, including the relative rates of single scatters and multiple scatters, the relative rates in Si and Ge detectors, and the shapes of the nuclear-recoil spectra, are insensitive to reasonable changes in the neutron spectrum. In this way, the rate of neutrons in Si and the rate of multiple-scatter nuclear recoils in Ge each yield independent estimates of the rate of background single-scatter neutrons in Ge. The neutron Monte Carlo simulation assumes the production spectrum given by Khalchukov et al. 25 and propagates the neutrons through the shield to the detectors using the MICAP and FLUKA hadronic interaction simulation packages and cross-sections from Mughabghab et al. 26 The accuracy of the simulation's propagation of neutrons is confirmed by the agreement of the simulated and observed recoil-energy spectra due to muon-coincident and calibration-source neutrons, as shown in Figure 2. In both cases, agreement between the Monte Carlo and the data is good even with no free parameters. In particular, the ratio of singles to multiples agrees to better than 20% for both samples.
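The normalization-independent use of the simulation described above can be sketched as follows. The Monte Carlo counts here are hypothetical placeholders (only the 4 observed Ge multiple scatters come from the text); the point is that the singles-to-multiples ratio cancels the unknown absolute neutron flux:

```python
# Hypothetical simulated event counts, in an arbitrary normalization;
# only their ratio matters, so the unknown absolute flux cancels out.
mc_singles = 900.0       # simulated single-scatter neutrons (placeholder)
mc_multiples = 300.0     # simulated multiple-scatter neutrons (placeholder)
observed_multiples = 4   # Ge multiple-scatter nuclear recoils (from the text)

ratio = mc_singles / mc_multiples   # insensitive to the production rate
expected_singles = ratio * observed_multiples  # here 3 x 4 = 12 events
```

The same cancellation applies to the Si-to-Ge rate ratio, which is why the two methods give independent estimates of the single-scatter neutron background in Ge.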
Figure 2. Observed (solid) and simulated (dashed) neutron spectra, coadded over all four Ge detectors, with no free parameters. The upper histograms include all nuclear recoils whose energy is fully contained in a detector's inner electrode. The lower histograms include all hits from multiple-scatter nuclear-recoil events for which at least one scatter is fully contained in a detector's inner electrode. Left (neutron calibration): calibration in situ with an external ²⁵²Cf neutron source. Right (muon-coincident neutrons): neutrons tagged as muon-coincident by the scintillator veto during low-background running.
3 Results from the 1998 Si and 1999 Ge Data Runs
Two data sets are used in this analysis: one consisting of 33 live days taken with a 100 g Si ZIP detector between April and July 1998, and another taken later with Ge BLIP detectors. The Si run yields a 1.6 kg d exposure after cuts. The total low-energy electron surface-event rate is 60 kg⁻¹ d⁻¹ between 20 and 100 keV. As shown in Figure 3, four nuclear recoils are observed in the Si data set. Based on a separate electron calibration, the upper limit on the expected number of unrejected surface events above 20 keV is 0.26 events (90% CL). These nuclear recoils also cannot be due to WIMPs: whether their interactions with target nuclei are dominated by spin-independent or spin-dependent couplings, WIMPs yielding the observed Si nuclear-recoil rate would cause an unacceptably high number of nuclear recoils in the Ge data set discussed below. Therefore, the Si data set, whose analysis is described elsewhere 19,20,28, measures the unvetoed neutron background. Between November 1998 and September 1999, 96 live days of data were obtained using 3 of 4 165 g Ge BLIP detectors. The top detector of the 4-detector stack is discarded because it displays a high rate of veto-anticoincident low-energy electron surface events, 230 kg⁻¹ d⁻¹ between 10-
Figure 3. Ionization yield (Y) vs. recoil energy for veto-anticoincident data taken with the 1998 Si ZIP detector. Four events (circled) lie within the nuclear-recoil acceptance region (dark curves), above the 15 keV analysis threshold (dashed line). The expected position of nuclear recoils (light curve) is also shown.
100 keV, as compared to 50 kg⁻¹ d⁻¹ for the other detectors (see Fig. 4). This detector suffered additional processing steps that may have contaminated its surface and damaged its electrodes. Data-quality, nuclear-recoil acceptance, and veto-anticoincidence cuts reduce the exposure (mass × time) by 45%. To take advantage of close packing, the analysis is restricted to events fully contained in the inner electrodes, reducing the exposure further by a factor of (at most) 2.47 to yield a final Ge exposure of 10.6 kg d 18,29. Analysis is in progress on the set of events only partially contained in the inner electrodes; including these events will increase the total Ge exposure to ~ 17 kg d 29. At the experiment's current shallow site, most of the events are induced by muons and tagged by the muon veto. As shown in Figure 4, the observed rates of single-scatter inner-electrode-contained electron-recoil background events coincident and anticoincident with the veto are 20 keV⁻¹ kg⁻¹ d⁻¹ and 1 keV⁻¹ kg⁻¹ d⁻¹, respectively. Since the veto efficiency is > 99.9%, the muon-induced veto-anticoincident event rate is negligible; the dominant muon-anticoincident electron-recoil background is due to radioactivity. The surface electron-recoil background rate is ~ 0.3 keV⁻¹ kg⁻¹ d⁻¹. Since the discrimination efficiency based on ionization yield is > 95%, the expected rate of muon-anticoincident electron recoils passing the nuclear-recoil cut is < 0.02 keV⁻¹ kg⁻¹ d⁻¹. The nuclear-recoil cut should eliminate nearly all the remaining electron-recoil background events, leaving a spectrum dominated by nuclear-recoil events.
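The exposure bookkeeping above can be checked arithmetically (the 45% cut reduction and the 2.47 containment factor are the values quoted in the text):

```python
# Cross-check of the final Ge exposure: 3 detectors x 165 g x 96 live days,
# reduced by the quoted cut efficiency and inner-electrode containment.
n_det = 3                # uncontaminated Ge BLIP detectors used
mass_kg = 0.165          # mass per detector
live_days = 96

raw = n_det * mass_kg * live_days      # kg d before cuts
after_cuts = raw * (1 - 0.45)          # data-quality/acceptance/veto cuts
inner_only = after_cuts / 2.47         # inner-electrode containment factor

print(round(inner_only, 1))  # -> 10.6, matching the quoted final exposure
```

The quoted 10.6 kg d thus follows directly from the stated detector masses, live time and reduction factors.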
Figure 4. Left: Histograms of single-scatter events observed in the inner electrodes of the 3 uncontaminated Ge detectors (solid), including (from top to bottom) all events, veto-anticoincident events, and veto-anticoincident low-yield (surface electron-recoil) events. The peak at 10.4 keV in the veto-anticoincident spectrum is caused by decay of Ge isotopes to Ga. The rate of veto-anticoincident, single-scatter, low-yield (surface electron-recoil) events in the contaminated Ge detector (light dashes) is ~ 5× higher than the rate in the other detectors. Right: Unvetoed nuclear recoils observed in the inner electrodes of the 3 uncontaminated Ge detectors (solid histogram, left-hand scale). The peak-normalized nuclear-recoil efficiency (dashed curve, right-hand scale) is nearly constant above the 10 keV analysis threshold (shaded).
Figure 5 shows a plot of ionization yield vs. recoil energy for the Ge single scatters, as well as a scatter plot of ionization yields for the Ge multiple scatters. Bulk electron recoils lie at ionization yield Y ≈ 1. Low-energy electron events form a distinct band at Y ~ 0.75, leaking into the nuclear-recoil acceptance region below 10 keV. Imposing an analysis threshold of 10 keV simplifies the analysis by rendering low-energy electron misidentification negligible. This threshold is well above the hardware trigger threshold, which is < 2 keV in recoil energy and is 100% efficient by 5 keV. As shown in Figure 4, the relative efficiency of the software cuts for nuclear recoils in Ge is nearly constant above 10 keV. However, below 10 keV this efficiency drops off sharply, and therefore becomes somewhat uncertain. The 10 keV analysis threshold has the added benefit of minimizing the uncertainty on the efficiency. The nuclear-recoil efficiency is determined in situ using calibration-source neutrons; comparison to the simulation indicates this efficiency is accurate to < 20%. Furthermore, the constant source of "internal" neutrons tagged by the muon veto provides an excellent check of the accuracy and stability of the efficiency of all hardware and software cuts together. Because of the shallow depth of the current site, the rate of these neutron events is about 100× the rate of the veto-anticoincident "external" neutrons. Figure 6 shows the 1999 muon-coincident-neutron rates as a function of time. The rate is within a few percent of predictions from Monte Carlo simulations. The rate is also stable
Figure 5. Left: Ionization yield (Y) vs. recoil energy for veto-anticoincident single scatters contained in the inner electrodes of the 3 uncontaminated Ge detectors. Thirteen events (circled) lie within the nominal 90% nuclear-recoil acceptance region (dashed curves), above both the 10 keV analysis threshold (dashed line) and the threshold for separation of the ionization signal from amplifier noise (dot-dashed curve). The expected position of nuclear recoils (solid curve) is also shown. The presence of 3 events just above the acceptance region is compatible with 90% acceptance. Right: Scatter plot of ionization yields for multiple scatters in the top/middle (crosses), middle/bottom (×'s), or top/bottom (diamonds) uncontaminated Ge detectors with at least 1 inner-electrode scatter and both scatters between 10 and 100 keV. Four events (circled) are tagged as nuclear recoils in both detectors. Bulk recoils and surface events lie at Y ≈ 1 and Y ~ 0.75, respectively.
to better than 15%, marginally consistent with statistical fluctuations, and good enough to induce negligible errors on our results. Thirteen unvetoed nuclear recoils are observed between 10 and 100 keV. The observation of 4 Ge multiple-scatter nuclear recoils (Fig. 5) indicates that many if not all of the unvetoed nuclear recoils are caused by neutrons rather than WIMPs, since the WIMP multiple-scatter rate is negligible. It is also highly unlikely that these events are misidentified low-energy electron events. Both plots in Figure 5 demonstrate excellent separation of low-energy electron events from nuclear recoils. In particular, no multiple scatter looks like a nuclear recoil in one detector but an electron recoil in the other. Quantitatively, analysis using events due to electrons emitted by the contaminated detector yields an upper limit of 0.03 misidentified multiple-scatter low-energy electron events (90% CL). All other pieces of evidence are also consistent with the neutron interpretation. First, the 4 nuclear recoils observed in the Si data set cannot be interpreted as WIMPs or surface events. Second, there is reasonable agreement between predictions from the Monte Carlo simulation and the relative
rates of the observed neutron events.

Figure 6. Rate of muon-coincident single-scatter nuclear recoils, coadded over the 4 Ge detectors, as a function of raw live days, shown with 1σ statistical uncertainties. A constant-rate fit gives mean = 22.17 ± 0.56, with χ² = 21.45 for 14 dof (CL = 0.91).

An upper limit w90 on the expected number of WIMP events w is set with a likelihood-ratio test:31 for each WIMP mass M and each w, the likelihood ratio R = C/Ĉ is compared to its 90% critical value R90. Here n̂ is the value of n, the expected number of neutron background events, that
maximizes the likelihood C for the given parameters M and w and the observations. Ĉ is the maximum of the likelihood for any physically allowed set of parameters. The 90% CL region excluded by the observed data set consists of all parameter space for which the observed likelihood ratio R_data < R90. The 90% CL excluded region is projected into two dimensions conservatively by excluding only points excluded for all possible values of n. Results depend only weakly on WIMP mass, with 90% upper limit w90 ≈ 8 events. As a side note, the Bayesian approach with uniform prior probabilities gives nearly identical results. Standard (but probably over-simplifying) assumptions are used in order to scale w90 to a limit on the spin-independent WIMP-nucleon elastic-scattering cross section σ. First, w90 is converted to a WIMP-Ge cross section following Lewin and Smith,9 assuming a WIMP characteristic velocity v0 = 220 km s⁻¹, Galactic escape velocity v_esc = 650 km s⁻¹, mean Earth velocity v_E = 232 km s⁻¹, and local WIMP density ρ = 0.3 GeV c⁻² cm⁻³. The resulting WIMP-Ge cross section is scaled to a target-independent result for the spin-independent WIMP-nucleon cross section σ using the Helm spin-independent form factor and assuming A² scaling with target nuclear mass. This scaling is valid for the models of supersymmetric WIMPs currently favored.6,10,11,12 The resulting upper limit (shown in Figure 7) excludes new parameter space for WIMPs with M > 10 GeV c⁻², some of which is allowed by supersymmetry.10,11 Because all the nuclear recoils may be neutron scatters, σ = 0 is not excluded. Because the number of multiple scatters observed is larger than expected, the limit from this analysis is ~50% better than the experiment's expected sensitivity.
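The A² scaling with the reduced-mass factor can be sketched numerically (a minimal illustration under stated assumptions; the Helm form factor and the halo velocity integral of Lewin and Smith are omitted, the nuclear masses are rough, and the function names are not from the CDMS analysis):

```python
# Sketch of the spin-independent A^2 scaling:
#   sigma_A = sigma_n * A^2 * (mu_A / mu_n)^2,
# where mu is the WIMP-target reduced mass. Form factors and the halo
# velocity integral are omitted; masses are approximate.

M_NUCLEON = 0.938  # GeV/c^2

def reduced_mass(m1, m2):
    """Reduced mass of a two-body system (same units in, same units out)."""
    return m1 * m2 / (m1 + m2)

def wimp_nucleon_sigma(sigma_nucleus, m_wimp, a_mass_number):
    """Scale a WIMP-nucleus cross section down to the WIMP-nucleon one."""
    m_nucleus = a_mass_number * 0.9315      # rough nuclear mass in GeV/c^2
    mu_a = reduced_mass(m_wimp, m_nucleus)
    mu_n = reduced_mass(m_wimp, M_NUCLEON)
    return sigma_nucleus / (a_mass_number**2 * (mu_a / mu_n)**2)
```

For a WIMP much heavier than the nucleus the reduced-mass ratio saturates near the nuclear-to-nucleon mass ratio, so the coherent enhancement approaches A⁴; this is why heavy targets such as Ge are favored for spin-independent searches.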
These data exclude, at > 75% CL, the entire region allowed at 3σ by the DAMA/NaI-1 to 4 annual modulation signal alone (i.e., the region given by the v0 = 220 km s⁻¹ curve in Figure 4a of Bernabei et al.30). In order to determine the probability of compatibility of the two experiments, it is better to perform a goodness-of-fit test than to compare the overlap of confidence regions. A likelihood ratio test indicates the CDMS data and DAMA's model-independent signal (as shown in Fig. 2 of Bernabei et al.30) are incompatible at 99.98% CL in the asymptotic limit; work is in progress to determine the probability of compatibility without relying on the asymptotic approximation. The best simultaneous fit to this DAMA data and the CDMS data together predicts too little annual modulation for DAMA and too many events for CDMS, as shown in Figure 8. Although they lack theoretical support, non-A² scaling or a dark matter halo significantly different from the one assumed 9 may allow the two results to be compatible.
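The asymptotic likelihood-ratio test invoked here can be illustrated with a toy version (hypothetical counts, not the CDMS or DAMA data): two Poisson measurements are tested for a common mean, and −2 ln λ is referred to a χ² distribution with one degree of freedom.

```python
import math

def log_like(counts, means):
    """Poisson log-likelihood up to a data-dependent constant
    (the log n! terms cancel in any ratio taken at fixed data)."""
    return sum(n * math.log(mu) - mu for n, mu in zip(counts, means))

def lr_statistic(n1, n2):
    """-2 ln(lambda) for H0: both (positive) counts share one mean,
    vs. two free means. Asymptotically chi^2 with 1 dof under H0."""
    mu0 = 0.5 * (n1 + n2)                 # MLE of the common mean
    l0 = log_like([n1, n2], [mu0, mu0])
    l1 = log_like([n1, n2], [n1, n2])     # each mean fit to its own count
    return 2.0 * (l1 - l0)

def chi2_sf_1dof(x):
    """P(chi^2 with 1 dof > x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2.0))
```

A goodness-of-fit test of this kind asks how probable the joint data are under the best common model, which is a different (and usually more stringent) question than whether two confidence regions happen to overlap.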
Figure 7. The 90% CL upper limit on the spin-independent WIMP-nucleon cross section from this analysis (solid curve), as a function of WIMP mass. Also shown are combined upper limits from Ge diode experiments 33,34 (dot-dashed curve), and the upper limit from DAMA's pulse-shape analysis 32 (dashed curve). Because more multiple scatters are observed than are expected, the limit from this analysis is lower than the CDMS expected (median) sensitivity given the observed neutron background (dots). The DAMA 3σ allowed region (shaded) is also shown.

An increase of about 2× in both Ge and Si will result in faster accumulation of statistics, and better measurement and subtraction of the neutron background. The deep site for the experiment, at the Soudan Mine in Minnesota, has been under construction since October 1998 and should allow data-taking ("first dark") in late 2001. At Soudan, the experiment's dominant background of neutrons will drop from a rate of ~1 kg⁻¹ d⁻¹ to ~1 kg⁻¹ y⁻¹, as the greater depth of the site decreases the muon flux by over four orders of magnitude. This removal of the neutron background should fully test the discrimination ability of the detectors. The first detectors used at Soudan will be the new ZIP detectors from the final run at Stanford. Earlier detectors have already demonstrated sufficient discrimination to reduce the background rate to < 10 events kg⁻¹ y⁻¹, assuming fairly modest cleanliness requirements. Such a rate is within a factor of a few of the CDMS Soudan goal, as shown in Figure 9. Over the following few years, the number of detectors will be gradually increased to 42, the maximum that will fit in the cold volume, providing a total detector mass of > 5 kg Ge and > 2 kg Si.
At Soudan, CDMS should provide tremendous new reach for WIMP direct detection, improving search sensitivity by two orders of magnitude.
Acknowledgments We thank Paul Luke of LBNL for his advice regarding surface-event rejection. We thank the engineering and technical staffs at our respective institutions for invaluable support. This work is supported by the Center for Particle Astrophysics, an NSF Science and Technology Center operated by the University of California, Berkeley, under Cooperative Agreement No. AST-91-20005, by the
Figure 9. Projected CDMS sensitivities at the shallow Stanford site (dot-dashed curve) and at the deep Soudan site (dashed curve), along with the exclusion limit from this analysis (solid curve), and the DAMA 3σ allowed region (dark shaded region) also shown in the previous plot. Other projected sensitivities (dots), taken from the web-based WIMP dark matter plotter,35 correspond, from highest to lowest, to the Heidelberg, CRESST, and GENIUS experiments. Future experimental data runs should be sensitive to large regions of the parameter space of minimal supersymmetric models 36 (light shaded region) and mSUGRA models 12 (medium shaded region).
National Science Foundation under Grant No. PHY-9722414, by the Department of Energy under contracts DE-AC03-76SF00098, DE-FG03-90ER40569, DE-FG03-91ER40618, and by Fermilab, operated by the Universities Research Association, Inc., under Contract No. DE-AC02-76CH03000 with the Department of Energy.

References
1. V. Trimble, Annu. Rev. Astron. Astrophys. 25, 425 (1987).
2. M. Srednicki, Eur. Phys. J. C 15, 143 (2000).
3. J.R. Primack, these proceedings.
4. B.W. Lee and S. Weinberg, Phys. Rev. Lett. 39, 165 (1977).
5. P.J.E. Peebles, Principles of Physical Cosmology (Princeton University Press, Princeton, NJ, 1993).
6. G. Jungman, M. Kamionkowski, and K. Griest, Phys. Rep. 267, 195 (1996).
7. M.W. Goodman and E. Witten, Phys. Rev. D 31, 3059 (1985).
8. J.R. Primack, D. Seckel, and B. Sadoulet, Annu. Rev. Nucl. Part. Sci. 38, 751 (1988).
9. J.D. Lewin and P.F. Smith, Astropart. Phys. 6, 87 (1996).
10. A. Bottino, these proceedings.
11. R. Arnowitt, these proceedings.
12. A. Corsetti and P. Nath, hep-ph/000316.
13. T. Shutt et al., Phys. Rev. Lett. 69, 3531 (1992).
14. T. Shutt et al., Phys. Rev. Lett. 69, 3425 (1992).
15. K.D. Irwin et al., Rev. Sci. Instrum. 66, 5322 (1995).
16. R. Gaitskell et al., in Proceedings of the Seventh International Workshop on Low Temperature Detectors, ed. S. Cooper (Max Planck Institute of Physics, Munich, 1997).
17. T. Shutt et al., Nucl. Instrum. Meth. A 444, 340 (2000).
18. S.R. Golwala, Ph.D. thesis, University of California, Berkeley, 2000.
19. R.M. Clarke et al., in Proceedings of the Second International Workshop on the Identification of Dark Matter, ed. N.J.C. Spooner and V. Kudryavtsev (World Scientific, Singapore, 1999). Note that ZIPs are referred to as FLIPs in this and other references.
20. R.M. Clarke, Ph.D. thesis, Stanford University, 1999.
21. R.M. Clarke et al., Appl. Phys. Lett. 76, 2958 (2000).
22. J.D. Taylor et al., Adv. Cryo. Eng. 41, 1971 (1996).
23. A. Da Silva et al., Nucl. Instrum. Meth. A 364, 578 (1995).
24. A. Da Silva et al., Nucl. Instrum. Meth. A 354, 553 (1995).
25. F.F. Khalchukov, A.S. Mal'gin, V.G. Ryassny, and O.G. Ryazhskaya, Nuovo Cimento 6C, 320 (1983).
26. S.F. Mughabghab, M. Divadeenam, and N.E. Holden, Neutron Cross Sections (Academic Press, New York, 1981).
27. R. Abusaidi et al., Phys. Rev. Lett. 84, 5699 (2000).
28. R. Abusaidi et al., in preparation.
29. R. Abusaidi et al., in preparation.
30. R. Bernabei et al., Phys. Lett. B 480, 23 (2000).
31. G.J. Feldman and R.D. Cousins, Phys. Rev. D 57, 3873 (1998).
32. R. Bernabei et al., Phys. Lett. B 389, 757 (1996).
33. L. Baudis et al., Phys. Rev. D 59, 022001 (1999).
34. A. Morales et al., hep-ex/0002053, submitted to Phys. Lett. B.
35. R. Gaitskell and V. Mandic, http://dmtools.berkeley.edu/limitplots/.
36. P. Gondolo, private communication.
LARGE N COSMOLOGY

S. W. HAWKING
Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB3 0WA, United Kingdom
The large N approximation should hold in cosmology even at the origin of the universe. I use AdS/CFT to calculate the effective action and obtain a cosmological model in which inflation is driven by the trace anomaly. Despite having ghosts, this model can agree with observations.
1 Large N Universe
Inflation in the very early universe seems the only natural explanation of many observed features of our universe, particularly the recent measurements of a Doppler peak in the microwave background fluctuations. It is usually assumed that inflation is caused by a scalar field that slowly rolls down an effective potential. But this poses the awkward question: why did the scalar field start out high in the potential? No satisfactory answer to this has been given. The new inflationary scenario, in which the scalar field is left exposed on a mountain peak, is now not believed. The chaotic inflation scenario seems to lead to inflation at the Planck scale, at which all bets are off. And the no boundary proposal doesn't predict enough inflation. Instead, I want to go back to an earlier model, in which inflation was driven by the trace anomaly of a large number of matter fields. I shall show how the bad features of this model can be overcome. The standard model of particle physics contains nearly a hundred fields. If, as we suspect, the standard model is embedded in a supersymmetric theory, the number of fields would be at least double, and maybe very much higher. Thus the large N approximation should hold in cosmology, even at the origin of the universe. In the large N approximation, one performs the path integral over the matter fields in a given background, to obtain an effective action, which is a functional of the background metric,

S = -\frac{1}{16\pi G} \int d^4x \, \sqrt{g} \, R + W(g),    (1)

where

e^{-W(g)} = \int d[\phi] \, e^{-S[\phi, g]}.    (2)
One then argues that the effect of gravitational fluctuations is small in comparison to the large number of matter fluctuations. Thus one can neglect graviton loops, and look for a stationary point of the combined gravitational action and the effective action for the matter fields. This is equivalent to solving the Einstein equations,

R_{ij} - \frac{1}{2} R \, g_{ij} = 8\pi G \, \langle T_{ij} \rangle,    (3)

with the source being the expectation value of the matter energy momentum tensor. Finally, one can calculate linearized fluctuations about this stationary point metric, and check they are small. This is confirmed observationally by measurements of the cosmic microwave background, which indicate that the primordial metric fluctuations were of the order of 10^{-5}.

2 Trace Anomaly
The large N approximation was first applied to cosmology in the 70s, particularly by the Russians. One of the main motivations was to obtain a model of the universe without an initial singularity. Instead, Grishchuk and Zeldovich proposed that the universe was in a de Sitter phase for an infinite time, before exiting to a decelerating expansion. This model was developed in more detail by Starobinsky.1 He assumed there were a large number of conformally invariant matter fields, which would give the energy momentum tensor a trace anomaly that was a known function of the local curvature. In a de Sitter background, the trace free part of the energy momentum tensor would be zero, by symmetry. Thus the energy momentum tensor would be proportional to the metric, and de Sitter space would be a stationary point of the combined action. However, in order to get the universe to exit the de Sitter phase, Starobinsky had to assume there were also non conformally invariant matter fields, that added a non conformally invariant local term to the effective action,

\langle T \rangle = a F - c G + d \nabla^2 R,    (4)

with F = C_{ijkl} C^{ijkl} (the Weyl tensor squared), G = R_{ijkl} R^{ijkl} - 4 R_{ij} R^{ij} + R^2 (proportional to the Euler density), and

a = \frac{1}{120 (4\pi)^2} \left( N_s + 6 N_F + 12 N_V \right),
where N_s, N_F and N_V are the numbers of real scalars, Dirac fermions and vectors, respectively. I must admit I did not take this Russian model very seriously at the time. This was before we realized the importance of exponential expansion, or inflation, in solving the fine tuning problems of the Hot Big Bang, like horizons and space curvature. Also, why should the universe have expanded for an infinite time in a de Sitter phase, before becoming unstable and exiting inflation? What was the clock that told the instability to turn on? However, we can now recognize this initial de Sitter phase as corresponding to the quantum creation of the universe from nothing, via an instanton which was the Euclidean four sphere.

3 Quantum Creation of the Universe
Moreover, the AdS/CFT correspondence now provides us with a way of calculating the effective action of matter fields on backgrounds without symmetry. This was not available in the early days, so Starobinsky had to neglect non local terms in the effective action. I shall therefore re-appraise the Starobinsky model in the light of modern knowledge. My talk will be based on joint work 2 with Harvey Reall and Thomas Hertog, at Cambridge.

4 AdS/CFT
The AdS/CFT prescription for calculating the effective action on a background metric is performed in Euclidean space, like all good quantum field theory calculations. One takes the four dimensional metric to be the boundary of a solution of the Einstein equations, with a negative cosmological constant, in five dimensions. One takes the action of this solution, adds counter terms that depend on the geometry of the boundary, and takes the limit in which the AdS length scale and the five dimensional Newton's constant go to zero,

W[h] = -\frac{1}{16\pi G_5} \int d^5x \, \sqrt{g} \left( R + \frac{12}{l^2} \right) + \text{surface counter terms}.    (5)

I shall be concerned with universes which, in the first approximation, are homogeneous and isotropic in the space directions. This means that the Euclidean metrics have an O(4) isometry group, acting on three spheres. The five dimensional solution will have the same symmetry, which means that it will be a Euclidean Schwarzschild-AdS metric. If one adds the requirement that the three spheres shrink to zero size, as they do in instantons for open inflation, the Schwarzschild mass must be zero. Thus the five dimensional metric must be pure AdS.

Figure 1. Quantum creation of the universe.

5 O(4) Metrics
One can now calculate the combined four dimensional gravitational and matter field effective actions. The only O(4) stationary point metrics are flat space and the four sphere,

ds^2 = d\sigma^2 + b(\sigma)^2 \, d\Omega_3^2.    (6)
The latter can be regarded as the Euclidean version of de Sitter space, where the cosmological constant is provided by the trace anomaly of a large N conformal field theory. These two solutions are the final and initial stages of an open inflationary model. However, in order to get a solution that interpolates between the two, one has to add an R squared term to the gravitational action, as Starobinsky discovered. This can be justified as the local counter term
Figure 2. AdS/CFT: the 4-d metric h is the boundary of a 5-d solution with negative cosmological constant.
in the effective action of non conformally invariant fields. With this addition, the expansion changes from exponential to matter dominated, in a time scale that depends on the coefficient of the R squared term.

6 Combined Action
There are thus seven contributions to the combined gravitational and effective matter action,

S = -\frac{1}{16\pi G} \int d^4x \, \sqrt{h} \, R + W[h],    (7)

where

W[h] = -\frac{1}{16\pi G_5} \int d^5x \, \sqrt{g} \left( R + \frac{12}{l^2} \right) - \frac{1}{8\pi G_5} \int d^4x \, \sqrt{h} \, K + \frac{3}{8\pi G_5 l} \int d^4x \, \sqrt{h} + \frac{l}{32\pi G_5} \int d^4x \, \sqrt{h} \, R + \frac{N^2}{32\pi^2} \left( \log l + \beta \right) \int d^4x \, \sqrt{h} \left( R_{ij} R^{ij} - \frac{1}{3} R^2 \right) + \alpha N^2 \int d^4x \, \sqrt{h} \, R^2.    (8)
First, there is the five dimensional gravitational action. This depends on the five dimensional Newton's constant, G_5, and the AdS length scale, l. These are not physical quantities, but are auxiliary variables introduced for the AdS/CFT calculation. The AdS length scale, l, acts like the cut off for the large N matter quantum fluctuations. One is therefore interested in the limit in which l and G_5 go to zero, with l^3/G_5 \propto N^2 held fixed. The second term is the Gibbons-Hawking term, trace K, on the four sphere boundary. The third, fourth, and fifth terms are the counter terms for AdS/CFT. They are the area, the Ricci scalar, and a curvature squared term of the boundary, respectively. They remove the quartic, quadratic and log divergences in the five dimensional action, as l and G_5 are taken to zero. Thus they correspond to counter terms in the large N conformal field theory, in four dimensions. Even supersymmetric Yang-Mills theory is not finite in a general curved background. This is reflected in the fact that the third AdS/CFT counter term, proportional to the curvature squared of the boundary, is not covariantly defined. Instead, one has to introduce an arbitrary constant, β. The sixth term in the combined action is the square of the Ricci scalar of the boundary. This is interpreted as the local counter term in the effective action of non conformally invariant matter fields on the background, so its coefficient, α, is undetermined. We do not have a good way of calculating the non local part of the effective action of non conformally invariant fields. However, it is reasonable to suppose it is not too different from the non local part of the effective action of conformally invariant fields. I shall therefore assume it can be absorbed into redefinitions of the number of fields, and the conformally invariant counter term, in the AdS/CFT correspondence. The seventh, and final, term in the combined action is the four dimensional gravitational action. This occurs with a Newton's constant, G_4, that is not related to G_5 and l. I shall use Planck units, in which G_4 = 1. The model then depends on three parameters: N, the number of matter fields, and β and α, the conformal and non conformal counter terms. N is fixed by physics, but β and α seem able to be given arbitrary values.

7 Stationary Point Metric (Four Sphere of Radius ~ √N)
The combined action is stationary under all perturbations, h_{ij}, of the metric of the boundary, if the boundary is a four sphere with radius, r, of order N^{1/2}. This corresponds to a de Sitter solution, where the cosmological constant is provided by the trace anomaly of large numbers of matter fields,

ds^2 = d\sigma^2 + r^2 \sin^2\!\left( \frac{\sigma}{r} \right) d\Omega_3^2,    (9)
where r = \sqrt{NG/4\pi}. For large N, the radius of the de Sitter space will be large in Planck units. Thus gravitational fluctuations will be small, confirming the consistency of the large N approximation. Even though they are small, gravitational fluctuations about the de Sitter instanton are important, because they give rise to galaxy formation, and the anisotropies in the cosmic microwave background. One can calculate the two point function, ⟨hh⟩, of the metric fluctuations as follows. First, one picks a perturbation, h_{ij}, of the metric of the four sphere. Second, one solves the five dimensional Einstein equations, with negative cosmological constant, for the region inside the perturbed four sphere. Because of the perturbations, this will no longer be pure AdS. Third, one calculates the action of the perturbed solution, including surface terms. Because a ball of five dimensional AdS bounded by the round four sphere is a solution, the action won't have a term linear in h_{ij}. Thus the leading term will be quadratic, of the form h M h. The inverse of the operator M gives the two point function for the metric perturbations, h_{ij}, for four dimensional gravity coupled to large N matter. One can decompose the perturbations in the bulk and boundary into harmonics under the isometry group, O(5), of the Euclidean de Sitter solution. The harmonics can be divided into scalar, transverse vector, and transverse traceless tensor. The vector harmonic perturbations are pure gauge, so they don't affect the action. The gauge freedom leads to closed loops of Faddeev-Popov ghosts, but they can be neglected in the large N approximation. The scalar and transverse traceless tensor perturbation equations in the bulk can be written in the synchronous gauge, h_{5\mu} = 0. There are no scalar solutions in the bulk that are regular at the origin of the five dimensional space.
However, one has to allow for the boundary four sphere to be at a variable distance from the origin, so this is a scalar degree of freedom. One gets a radial equation for each transverse traceless mode, in terms of the distance from the center of the five dimensional ball.

8 Transverse Traceless Mode and Instability of de Sitter
These radial equations can be solved with hypergeometric functions. Writing the AdS metric as

ds^2 = l^2 \left[ dy^2 + \sinh^2 y \left( d\sigma^2 + \sin^2\sigma \, d\Omega_2^2 \right) \right],    (10)

one expands δg_{ij} in tensor spherical harmonics,

\delta g_{ij}(y, x) = \sum_{p=2}^{\infty} f_p(y) \, H^{(p)}_{ij}(x),    (11)

and solves for f_p(y) to get

f_p(y) \propto \sinh^{p+2}\! y \; F(\cdots),    (12)

where F is a hypergeometric function.
Thus one can solve the five dimensional Einstein equations in the ball bounded by the perturbed four sphere. This gives the quantum effective action of the large N matter fields on the perturbed four sphere. The scalar part of the perturbed combined action is

S = \frac{1}{8\pi} \int d^4x \, \sqrt{\gamma} \; \phi \left( 2\alpha \nabla^2 - 1 \right) \left( \nabla^2 + 4 \right) \phi,    (13)
where γ_{ij} and ∇ are the metric and connection on the unit four sphere. Note that it depends on α, the non conformal counter term, but not on β, the counter term for conformally invariant fields. If α = 0, the action looks like that of a tachyon. However, it is the same action, though with opposite sign, as that for conformal factor perturbations of de Sitter space with a cosmological constant. We know that conformal factor fluctuations of this system are not physical degrees of freedom, but are eliminated by the constraints. It therefore seems clear that the second factor in the large N scalar action does not correspond to a physical mode. On the other hand, if α is negative, the first factor in the scalar action will correspond to a physical degree of freedom, which behaves like a massive scalar. For perturbations around flat space, this mass will be real, with m² of the order of 1/(−α). However, around the de Sitter large N instanton, m² will be 1/2α, and so will be tachyonic. This means that the de Sitter phase will be unstable, as Starobinsky discovered. The growth time of the instability will be 12α times the radius of the four sphere, so the total expansion factor during the de Sitter phase is of order

\exp\!\left( 12 |\alpha| \right).    (14)
Since α is a counter term, there is no reason it should not be negative and quite large. If it is less than minus ten, there will be enough inflation to solve the flatness problem. A large negative α will also give scalar fluctuations in the microwave background that are small. The second order variation of the combined action, for transverse traceless h_{ij}, is shown on the slide.
9 Transverse Traceless Action

\delta^2 S \propto \sum_p h_p^2 \, M(p, \alpha, \beta),    (15)

where h_p is the amplitude of the level p harmonic,

M(p, \alpha, \beta) = y(p) + p^2 + 3p + 6 + 2\beta \, p(p+1)(p+2)(p+3) - 4\alpha \, p(p+3),    (16)

and

y(p) = p(p+1)(p+2)(p+3) \left[ \psi(\cdots) + \psi(\cdots) - \psi(\cdots) - \psi(\cdots) \right] + p^4 + 2p^3 - 5p^2 - 10p - 6, \qquad \psi(z) = \frac{d}{dz} \log \Gamma(z).    (17)
Here p is the level number of the harmonic. The second order variation goes like p^4 \log p at large p. This should be compared with p^2 for de Sitter with gravity and a cosmological constant. The effect of the large N matter fields is to suppress short scale metric fluctuations: the matter field effective action makes spacetime stiffer. It causes metric fluctuations to go to zero below the Planck length, rather than forming a foam like structure, as is normally thought. To understand the transverse traceless fluctuations physically, it is helpful to write the propagator, or two point function, as an integral over a variable q, rather than a discrete sum over harmonics labeled by p,

\langle \delta h_{ij}(x) \, \delta h_{kl}(y) \rangle \propto \int dq \, (\text{continuum of modes}) + \text{discrete poles}.    (18)
Then poles in the integrand can be interpreted as particles. The transverse traceless propagator has one pole at q = 0. For this value of q, the vector harmonics mix with the transverse traceless harmonics. This means that one can gauge three of the five transverse traceless components to zero, leaving two, which is what one expects for the graviton. The other poles in the complex q plane have the full five components that one expects for a massive spin two particle. The positions of these poles will depend on the counter terms α and β. For β sufficiently large, there will be no tachyons, but there will be ghosts: poles with negative residues. Ghost poles in the propagator are normally taken to be a fatal flaw in a field theory. The reason is that they seem to indicate that one could have asymptotic ingoing and outgoing states which had negative norm. This would mean that the evolution from the initial state to the final state is not
described by a unitary S matrix. Unitarity is usually taken to be an essential property of any respectable field theory. Some recent work I have done, however, shows that one can still make sense of a quantum field theory even if it has ghosts. One no longer specifies the initial and final states as elements of a Hilbert space, so the issue of unitarity does not arise. Instead, the initial and final states should be described as density matrices, obtained by tracing out over the time derivative of the field, which cannot be measured. I will describe ghost busting in more detail elsewhere. It is enough to say here that unitarity is not defined in the early universe, because there is no asymptotic region. At late times, the departures from strict unitarity would be so small that we would not notice them, even if we could observe graviton graviton scattering, which we are never likely to be able to do. What we can observe, however, are the fluctuations in the microwave background. These can be calculated in the large N theory, even though it has ghosts. In the usual scalar field driven model of inflation, there is a long slow roll down, during which the horizon size gradually increases. The amplitude of the microwave fluctuation is essentially the amplitude when the mode leaves the horizon, near the end of inflation. In scalar field driven inflation, the slow roll down means that the horizon at the end of inflation is much larger than the initial horizon. The fluctuations in the microwave background are therefore roughly one over the horizon radius at the end of inflation. In trace anomaly driven inflation, on the other hand, there is not a slow roll down. The solution changes from accelerating, exponential expansion to a decelerating phase, with only a small increase in horizon size. The amplitude of the fluctuations will therefore be determined in the initial de Sitter stage.

10 Tensor Fluctuations in the Microwave Background
If the gravitational action were all that contributed, the amplitude of the tensor fluctuations would be of the order of one over the horizon size. Since the horizon size will be roughly √N, the tensor fluctuations would be 1/√N. Starobinsky visited DAMTP in Cambridge in 1980, and took the opportunity of being outside Soviet scientific censorship to publish a remarkable paper in Physics Letters. In it, he showed how to calculate tensor fluctuations of inflationary models, before the term "inflation" had even been applied to the early universe. Starobinsky concluded that the trace anomaly inflationary model had to be abandoned, because it would require 10^{10} matter fields to reduce the tensor fluctuations to the observational limit.
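The counting behind this conclusion can be restated compactly (a back-of-envelope rearrangement of the numbers quoted above, in Planck units):

```latex
% Horizon size of the anomaly-driven de Sitter phase: 1/H ~ sqrt(N),
% so the naive tensor amplitude at horizon exit is
\[
  \delta h \sim H \sim \frac{1}{\sqrt{N}},
  \qquad
  \delta h \lesssim 10^{-5}
  \;\Longrightarrow\;
  N \gtrsim 10^{10}.
\]
```

This is the estimate that ignores the coupling to the matter effective action; the next paragraph explains why matter loops change it.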
Starobinski assumed that the amplitude of the tensor fluctuations, was not significantly changed by the coupling to the matter effective action. However, this assumption can now be examined, using ADS, CFT. It turns out that matter loops can greatly reduce the fluctuations, so that they can be compatible with the observations, with only a realistic number of matter fields. For example, one can have iV = 104, and j3 = 103. The large N trace anomaly inflationary model, requires tuning of the counter terms, a and (3, to fit the observations. But then, any other explanation of the microwave fluctuations, also involves tuning. No one can predict an amplitude of 1 0 - 5 in a natural way. Maybe we have to resort to the anthropic principle. I wouldn not claim that this model is the last word on the very early universe. For one thing, it takes no account of extra dimensions, except as a mathematical device, to calculate quantities in four dimensions. It may be that this is right. We have no experimental evidence of physical extra dimensions. On the other hand, there is a whole web of mathematical dualities, involving six or seven extra dimensions. Nevertheless, the model I have just described, contains features that I expect to be part of the final answer. 11
Features That Should Be Present in the Ultimate Model
The first of these is that gravity is only one of a large number of physical fields. Thus even at the origin of the universe, the large N approximation should be valid, and gravity should be described by a background metric with small fluctuations, in agreement with the observations of the microwave background. Second, the natural candidate for the background metric of the initial instanton that began the universe is a round sphere, at least in the four dimensions we observe. The analytic continuation of a Euclidean sphere is Lorentzian de Sitter space. Classically, this expands exponentially from the infinite past to the infinite future. Thus it deserves the title "eternal inflation" more than the misguided scenario in which inflation is supposed to result from the quantum fluctuations of a single scalar field. Such eternal inflation would occur at the Planck scale, at which the division into background metric and fluctuation breaks down, so all bets are off. In large N inflation, on the other hand, the horizon size is large compared to the Planck length. Thus the background + perturbation split is well defined. However, the inflation does not go on for ever in either direction of time. The infinite past of the classical de Sitter solution is replaced by the quantum creation of the universe from nothing, mediated by the Euclidean four sphere, as shown in Fig. 1.
If the background metric expanded like de Sitter into the infinite future, it wouldn't contribute to the amplitude for a universe like we observe. This amplitude will come from metrics that exit inflation. One really does not care if there is also an amplitude for eternal inflation. The third feature of this large N model that I expect to be in the final theory is ghosts. In 1985, it was claimed that perturbative string theory was the only theory of quantum gravity that was both finite and unitary. However, perturbative string theory is not much good in cosmology, or indeed for anything except calculating graviton-graviton scattering. Instead, most of what is now called string theory uses the supergravity theories that were so rubbished in the late 80s. The fact that supergravity probably has higher loop divergences is quietly glossed over. Somehow, it is felt that the finite loop behavior of string theory will eliminate ghosts and guarantee unitarity in supergravity. However, it is known that the loop expansion of string theory does not converge. If it did, we would be living in ten dimensional Minkowski space, and physics would just be the S matrix, and very dull. It would not describe the origin of the universe, or black holes. So I do not think one can rely on perturbative string theory to save you from ghosts in cosmology. As I said earlier, one should not be afraid of higher derivative ghosts. They are gentle, harmless creatures, with whom one can live quite comfortably. Ghosts are not compatible with strict unitarity, but the way they prevent it is quite benign. The existence of ghosts means that one cannot prepare a system in a pure quantum state, or measure whether a given state is pure. Thus the initial and final situations have to be described by density matrices, and there is no S matrix. One can never produce a negative norm state containing just ghosts. Ghosts are gregarious, and demand the company of real particles.
However, they appear only at high energy, so their effect will be insignificant in normal particle scattering, which will appear to be unitary to a high degree of accuracy. I have told you how the universe began, how structure developed, and how to deal with ghosts. I think that is enough for one talk. Thank you for listening.

References
1. A. A. Starobinsky, Phys. Lett. B91, 99 (1980).
2. S. W. Hawking, T. Hertog and H. S. Reall, Phys. Rev. D63, 083504 (2001).
OBSERVATIONAL CONSTRAINTS ON MODELS OF INFLATION*

DAVID LYTH

Department of Physics, Lancaster University, Lancaster LA1 4YB, U.K.
Present data require a spectral index n > 0.95 at something like the 1-σ level. If this lower bound survives, it will constrain 'new' and 'modular' inflation models, while raising it to 1.00 would rule out all of these models plus many others.
1 Introduction
Inflation is supposed to set the initial conditions for the subsequent Hot Big Bang 1,2. It does this job in two parts. During the first few e-folds, at perhaps the Planck scale, it generates a Universe which is almost perfectly homogeneous and isotropic at the classical level, which is spatially flat and free from unwanted relics. Then, during the last fifty or so e-folds, when the rate of expansion is at least five orders of magnitude below the Planck scale, inflation generates a primordial curvature perturbation, whose spectrum is rather flat on cosmological scales. Any kind of inflation will do for the first job, and we have no way of discovering from observation which kind it is. In contrast, by the time that the second job is being done, the flat spectrum strongly suggests that we should be dealing with the slow-roll paradigm. Moreover, different models within that paradigm give different values for the shape of the spectrum, so that observation provides discrimination between different models.

2 Inflation and the spectral index of the primordial curvature perturbation
Let us begin by recalling the history of the Universe, as summarized in Table 1. The curvature perturbation is generated when cosmological scales leave the horizon during inflation. Until these scales re-enter the horizon, long after inflation, it is time-independent (frozen in); this is the object that I am calling the primordial curvature perturbation. The freezing-in of the curvature perturbation on super-horizon scales is a direct consequence of the lack of causal

*Updated version of a talk at COSMO2K, to appear in the proceedings.
Table 1. A brief history of the Universe.

(energy density)^{1/4}
10^18 GeV?   Inflation begins
10^13 GeV??  Primordial curvature perturbation freezes
             Inflation ends soon afterwards
             We don't know what happens next, until ...
1 MeV        Nucleosynthesis
1 keV        Primordial curvature perturbation unfreezes
             Matter becomes clumpy
             Radiation becomes anisotropic
10^-3 eV     Present epoch
interactions on such scales, under the sole assumption of energy conservation 3, and independently of whether Einstein gravity is valid. This is extremely fortunate, since we know essentially nothing about the Universe while cosmological scales are outside the horizon. The spatial Fourier components of the primordial curvature perturbation are uncorrelated (a Gaussian perturbation), which means that its stochastic properties are completely determined by its spectrum P_R(k), defined essentially as the mean-square value of the spatial Fourier component with comoving wavenumber k. The spectral index
n(k) = 1 + d log P_R / d log k    (1)
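Definition (1) is easy to check numerically: for a power-law spectrum P_R ∝ k^(n-1), a finite-difference logarithmic derivative recovers n at every scale. A minimal sketch, with our own function names and an illustrative value n = 0.95:

```python
import numpy as np

def spectral_index(k, P):
    """n(k) = 1 + d ln P / d ln k, estimated by finite differences."""
    return 1.0 + np.gradient(np.log(P), np.log(k))

k = np.logspace(-4, 0, 200)   # comoving wavenumbers (arbitrary units)
n_true = 0.95
P = k ** (n_true - 1.0)       # power-law spectrum P_R ∝ k^(n-1)

n_est = spectral_index(k, P)
print(n_est[100])             # ≈ 0.95, scale-independent as expected
```

Because ln P is exactly linear in ln k for a power law, the finite-difference estimate is scale-independent, which is the "practically scale-invariant n" case discussed below.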
defines the shape of the spectrum. A special case, predicted by most inflation models, is that of a practically scale-invariant n, giving P_R ∝ k^(n-1). The most special case, predicted only by rather special models of inflation, is that of a spectral index practically indistinguishable from 1, giving a practically scale-invariant P_R. By the time that cosmological scales re-enter the horizon, long after nucleosynthesis, we know the content of the Universe; there are photons, three types of neutrino with (probably) negligible mass, the baryon-photon fluid, the (non-baryonic) dark matter, and the cosmological constant. The primordial curvature perturbation is associated with perturbations in the densities of each of these components, which all vanish on a common spatial slicing (an adiabatic density perturbation). It is also associated with anisotropies in the momentum distributions. Using well-understood coupled equations, encapsulated say in the CMBfast package, the perturbations and anisotropies can be evolved forward to the present time, if we have a well-defined cosmological model. Here we will make the simplest assumption, namely the ΛCDM cosmology; the Universe is spatially flat, and the non-baryonic dark matter is cold (CDM). Flatness is the naive prediction of inflation, and there is no definite evidence against CDM. I would like to report the result of a recent fit 4 of the parameters of the ΛCDM model. The data set consisted of the following.

• The normalization (2/5)P_R^{1/2} = 1.94 × 10^-5 from COBE data on the cmb anisotropy.
• Boomerang and Maxima data at the first and second peaks of the cmb anisotropy.

• Hubble parameter h = 0.65 ± 0.075, total density Ω_0 = 0.35 ± 0.075, baryon density Ω_B h² = 0.019 ± 0.002.

• Slope of galaxy correlation functions Γ = 0.23 ± 0.035.

• RMS matter density contrast σ_8 = 0.56 ± 0.059 in a sphere of radius 8 h^-1 Mpc.

The epoch of reionization was calculated, assuming that a fraction f > 10^-4 has collapsed. The result (for f ~ 10^-2) is

n = 0.99 ± 0.05.    (2)

This is higher than that of Kinney et al. 5 (n = 0.93 ± 0.05) and of Tegmark et al. 6 (n = 0.92 ± 0.04). Probably, this is because the former do not include σ_8 or Γ, while the latter do not include σ_8 and also have a lower Γ. Also, both have reionization redshift z_R ~ 0. We shall see that the tighter lower bound on n implied by our analysis is significant in the context of some models of inflation. (These are the only two analyses so far which include most of the relevant data, including the crucial nucleosynthesis constraint. A recent analysis 7 omitting the latter gives n = 1.03 ± 0.08.)
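The comparison among the three quoted fits can be made concrete by asking how far the scale-invariant value n = 1 sits from each central value, in units of the quoted 1-σ error. A small check using only the numbers above (the labels are ours):

```python
# (central value, 1-sigma error) for the three spectral-index fits quoted above
fits = {
    "this analysis":  (0.99, 0.05),
    "Kinney et al.":  (0.93, 0.05),
    "Tegmark et al.": (0.92, 0.04),
}

# Distance of the scale-invariant value n = 1 from each fit, in sigma units
distance = {name: abs(1.0 - n0) / sig for name, (n0, sig) in fits.items()}

for name, d in distance.items():
    print(f"{name}: n = 1 lies {d:.1f} sigma from the central value")
```

The fit reported here is comfortably consistent with n = 1 (0.2 σ away), while the other two analyses place it 1.4 σ and 2.0 σ away, which is why the lower bound on n differs between them.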
3 Comparison with models of slow-roll inflation
The near scale-independence of the primordial curvature perturbation presumably requires slow-roll inflation, in which the potential V satisfies the flatness conditions M_P |V'/V| ≪ 1 and M_P² |V''/V| ≪ 1.
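The flatness conditions can be evaluated explicitly for any given potential. As an illustration (our choice of example, not a model from the talk), take the quadratic potential V = m²φ²/2 in units where M_P = 1:

```python
# Slow-roll flatness parameters for V(phi) = m^2 phi^2 / 2, with M_P = 1.
# For this potential V'/V = 2/phi and V''/V = 2/phi^2, so both conditions
# epsilon << 1 and |eta| << 1 hold for phi >> 1 (super-Planckian field values).

def slow_roll(V, dV, d2V, phi):
    eps = 0.5 * (dV(phi) / V(phi)) ** 2   # epsilon = (M_P^2 / 2) (V'/V)^2
    eta = d2V(phi) / V(phi)               # eta = M_P^2 V''/V
    return eps, eta

m = 1e-5                                   # inflaton mass (illustrative value)
V   = lambda phi: 0.5 * m**2 * phi**2
dV  = lambda phi: m**2 * phi
d2V = lambda phi: m**2

eps, eta = slow_roll(V, dV, d2V, phi=15.0)
print(eps, eta)   # both ~ 0.009: slow roll is valid at phi = 15 M_P
```

For this potential ε = η = 2/φ², so slow roll holds while φ is a few Planck masses or more and fails as φ approaches 1, ending inflation.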
In order to avoid the overproduction of gravitinos, the reheating temperature T_R must be lower than 10^9 GeV, which requires the small coupling λ ≲ 10^-5. The small coupling λ is naturally understood in 't Hooft's sense provided that HH is even under the Z₂ symmetry in eq. (7).

3 New inflation in supergravity with a chaotic initial condition
In this section, we propose new inflation with a chaotic initial condition as an alternative scenario, because it predicts a low reheating temperature straightforwardly and can easily generate density fluctuations with a tilted spectrum.

3.1 Model
We introduce the inflaton chiral superfield Φ(x, θ) and assume that the Kähler potential K(Φ, Φ*) is invariant under a Nambu-Goldstone-like shift symmetry. If this Nambu-Goldstone-like symmetry is exact, however, χ (= √2 Im Φ) cannot have any potential, and chaotic inflation would be impossible no matter how large an initial value it could take. Hence this global symmetry must be explicitly broken. For this purpose, we introduce the following superpotential, which breaks the shift symmetry with a dimensionless coupling parameter g':

W = v²X − g'XΦ² = v²X(1 − gΦ²),    (28)

where we have introduced another chiral superfield X(x, θ). Here v is a scale generated dynamically, and g = g'v^-2. The above superpotential W is invariant under a U(1)_R × Z₂ symmetry. Under the U(1)_R symmetry, X(θ) → e^{-2iα} X(θ e^{iα}), Φ(θ) → Φ(θ e^{iα}). [...] χ can take a value much larger than unity without costing exponentially large potential energy. Then,
the amplitudes of both φ and X soon become smaller than unity due to this steep potential, and the exponential factor can be Taylor expanded around the origin. This is the situation we deal with hereafter. Then the scalar potential is dominated by

V ≃ λ|χ|⁴,    (38)

with λ = g²v⁴. Thus chaotic inflation can set out around the Planck epoch. [...] chaotic inflation, the mass squared of [...] λ^{-1/6}, when the universe is in a self-reproduction stage of eternal inflation. Hence let us consider the regime χ < λ^{-1/6}, where (42) holds. Then we can estimate the root-mean-square (RMS) fluctuation in
X using the Fokker-Planck equation, and find that ⟨(ΔX)²⟩ asymptotically approaches a value of order λ^{2/3}, which is much less than unity because λ must be a tiny number, as will be shown later. From (43) and (44), the amplitude of X becomes much smaller than unity by the time [...]. Thereafter (41) no longer holds; X oscillates rapidly around the origin, and its amplitude decreases even more. Thus our approximation that both φ and X are much smaller than unity is consistent throughout the chaotic inflation regime. As χ becomes of order unity, chaotic inflation ends and the field oscillates coherently with mass squared m_χ² ≃ 2gv⁴. Since g must take a value slightly larger than unity, as shown later, the energy density of this oscillation soon becomes less than the vacuum energy density ~ v⁴. At this stage
[...]    (54)

roughly generate three minor changes associated with k_i. The first change is as follows: during chaotic inflation, the mass squared m_X² of X acquires additional terms, [...]
• ν_μ–ν_τ oscillation 2 with Δm²_atm ≃ 3 × 10^-3 eV², sin² 2θ_atm ≃ 1.    (8)
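For orientation, the parameters in (8) can be fed into the standard two-flavour oscillation probability P = sin² 2θ · sin²(1.27 Δm²[eV²] L[km] / E[GeV]); this formula and the constant 1.27 are textbook material rather than part of this talk. A minimal sketch:

```python
import math

def p_oscillation(delta_m2_eV2, sin2_2theta, L_km, E_GeV):
    """Two-flavour oscillation probability, standard 1.27 unit convention."""
    return sin2_2theta * math.sin(1.27 * delta_m2_eV2 * L_km / E_GeV) ** 2

# Atmospheric parameters quoted in (8)
dm2, s22 = 3e-3, 1.0

# First oscillation maximum occurs when 1.27 * dm2 * L / E = pi/2:
E = 1.0                                    # neutrino energy in GeV (illustrative)
L_max = (math.pi / 2) / (1.27 * dm2) * E   # baseline of first maximum, ~ 410 km
print(L_max, p_oscillation(dm2, s22, L_max, E))
```

With maximal mixing (sin² 2θ = 1) the disappearance probability reaches unity at the first maximum, which for 1 GeV neutrinos occurs at a few hundred kilometres, consistent with the atmospheric-neutrino interpretation.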
As for the solar neutrino anomaly, four different oscillation scenarios are possible 3, though the large mixing angle (LMA) MSW oscillation is favored by the recent Super-Kamiokande data:

SMA MSW : Δm²_sol ~ 5 × 10^-6 eV², sin² 2θ_sol ~ 5 × 10^-3,
LMA MSW : Δm²_sol ~ 2 × 10^-5 eV², sin² 2θ_sol ~ 0.8,
LOW MSW : Δm²_sol ~ 10^-7 eV², sin² 2θ_sol ~ [...]

[...] λ^k),    (11)
II. Bi-maximal mixing with LOW MSW or LMA VAC solar neutrino oscillation:

m₂/m₃ ~ λ⁴ or λ⁵,  (|s₂₃|, |s₁₂|, |s₁₃|) ~ (1/√2, 1/√2, λ^k),    (12)

III. Single-maximal mixing with SMA MSW solar neutrino oscillation:

m₂/m₃ ~ λ,  (|s₂₃|, |s₁₂|, |s₁₃|) ~ (1/√2, λ², λ^k),    (13)

where λ = sin θ_C ≃ 0.2 for the Cabibbo angle θ_C, and

m₃ ~ 5 × 10^-2 eV,  k ≥ 1    (14)
in all cases. These neutrino results can be compared with the following quark and charged lepton masses and mixing angles:

(m_t, m_c, m_u) ~ 180 (1, λ⁴, λ⁸) GeV,
(m_b, m_s, m_d) ~ 4 (1, λ², λ⁴) GeV,
(m_τ, m_μ, m_e) ~ 1.8 (1, λ², λ⁵) GeV,
(sin θ_23, sin θ_12, sin θ_13) ~ (λ², λ, λ³),    (15)
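The λ-power patterns in (15) can be checked at the order-of-magnitude level by substituting λ = 0.2; note that the exponents below follow our reading of the partly garbled source. A quick numerical check:

```python
# Quark and charged-lepton mass patterns from (15), with lambda = sin(theta_C) ~ 0.2
lam = 0.2

up_type     = [180 * x for x in (1, lam**4, lam**8)]   # (m_t, m_c, m_u) in GeV
down_type   = [4   * x for x in (1, lam**2, lam**4)]   # (m_b, m_s, m_d) in GeV
charged_lep = [1.8 * x for x in (1, lam**2, lam**5)]   # (m_tau, m_mu, m_e) in GeV

print(up_type)       # top-quark row: charm ~ 0.29 GeV, up ~ 5e-4 GeV
print(down_type)     # bottom row: strange ~ 0.16 GeV, down ~ 6e-3 GeV
print(charged_lep)   # tau row: muon ~ 0.072 GeV, electron ~ 6e-4 GeV
```

The resulting values land within a factor of a few of the measured masses, which is the sense in which a single expansion parameter λ organizes both the quark and lepton sectors.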