2006 IEEE Nuclear and Space Radiation Effects Conference Short Course Notebook
July 17, 2006 Ponte Vedra Beach, Florida
Modeling the Space Radiation Environment and Effects on Microelectronic Devices and Circuits
Sponsored by: IEEE/NPSS Radiation Effects Committee
Supported by: Defense Threat Reduction Agency Air Force Research Laboratory Sandia National Laboratories NASA Living With a Star Program Jet Propulsion Laboratory NASA Electronic Parts and Packaging Program
Approved for public release; distribution is unlimited.
Copyright © 2006 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. For all other copying, reprint, or replication permission, write to Copyrights and Permissions Department, IEEE Publishing Services, 445 Hoes Lane, Piscataway, NJ 08855-1331.
Table of Contents

Section I     Introduction
              Dr. Robert Reed, Vanderbilt University Institute for Space and Defense Electronics

Section II    Modeling the Space Radiation Environment
              Dr. Mike Xapsos, NASA Goddard Space Flight Center

Section III   Space Radiation Transport Models
              Dr. Giovanni Santin, ESA/ESTEC and Rhea System SA

Section IV    Device Modeling of Single Event Effects
              Prof. Mark Law, University of Florida

Section V     Circuit Modeling of Single Event Effects
              Jeff Black and Dr. Tim Holman, Vanderbilt University Institute for Space and Defense Electronics
2006 IEEE NSREC Short Course
Section I: Introduction
Dr. Robert Reed Vanderbilt University Institute for Space and Defense Electronics
Approved for public release; distribution is unlimited
Introduction

This Short Course Notebook contains manuscripts prepared in conjunction with the 2006 Nuclear and Space Radiation Effects Conference (NSREC) Short Course, which was held July 17, 2006 in Ponte Vedra Beach, Florida. This was the 27th Short Course offered at the NSREC. This Notebook is intended to be a useful reference for radiation effects experts and beginners alike. The topics chosen each year are timely and informative, and are covered in a greater level of detail in this forum than is possible by individual contributed papers. The title of this year’s short course is “Modeling the Space Radiation Environment and Effects on Microelectronic Devices and Circuits.” This one-day Short Course will provide a detailed discussion of the methods used by radiation effects engineers to model the space radiation environment and some of its effects on modern devices and circuits. The remarkable advances in modern device technology offer specific challenges for high-fidelity radiation effects modeling. These include the need for improved modeling of the variability of space radiation, the transport of the environment through spacecraft structures and chip packaging, and detailed single event effects modeling at the device and circuit level. This notebook has four technical sections on different aspects of the problem. The first section focuses on methods used to predict the space radiation environment. The next section provides a detailed discussion of the basic interactions of radiation with matter and describes existing radiation transport computer codes and methods. The next two sections are focused on Single Event Effects (SEE) modeling. The first will focus on the proper use of Technology Computer Aided Design (TCAD) tools to model charge transport after a single ionizing event; the second and final Notebook section provides details on modeling SEEs in integrated circuits.
Each attendee received a complimentary CD-ROM that contains an archive of IEEE NSREC Short Course Notebooks (1980-2006). This collection covers 27 years of the one-day tutorial courses presented yearly at the NSREC. It serves as a valuable reference for students, engineers and scientists. The Short Course Notebook is divided into five sections as follows: Section I (this section) provides the motivation for the selection of the Short Course topics and the biographies of the instructors. Section II, by Dr. Mike Xapsos, will discuss recent developments in modeling the trapped particle, galactic cosmic ray and solar particle radiation environments. The metrics for describing the effect these radiations have on electronic devices and circuits will be introduced. These include ionizing dose, displacement damage dose and linear energy transfer (LET). A substantial portion of the course will be devoted to the recent application of models for characterization of radiation environments. The origins of the methods will be described leading up to the environment applications. Comparisons with traditional models will be shown. Example results for different phases of the solar cycle and for missions ranging from low earth orbit out to interplanetary space will be presented. Section III, by Dr. Giovanni Santin, will provide a review of the physical interactions of the space radiation environment with matter and models used to compute the environment local to the microelectronic circuit. The first portion will be devoted to defining the important physical processes that must be included when modeling the transport of the space radiation environment through spacecraft materials. Then Dr. Santin will provide an overview of the current techniques and tools that are available for transport modeling. The last portion will focus on the application and validation of GEANT4 for use in transport modeling of the space environment, with emphasis on the effects on microelectronic devices.
Section IV, by Prof. Mark E. Law, will discuss using device and process simulation tools effectively to model single event upset behaviors. Modeling single event upset provides many challenges to TCAD tools. In this course, practical pitfalls will be described and techniques will be discussed to avoid these problems. Several issues can create problems. First, numerical approximations must be understood and controlled by the user. Second, the device geometry, doping, and materials need to be set up correctly. Third, physical transport models have specific limitations for application to single event simulations. Most TCAD models have been tuned to MOS device transport, and may not be appropriate for bulk charge removal in a single event case. A complex simulation example illustrating these points and good practice will be presented. Section V, by Jeff Black and Dr. Tim Holman, will discuss the various tools currently available for simulating single event effects at the circuit level. Circuit level simulation can be performed more efficiently than TCAD simulation at a cost of reduced simulation fidelity. This course will provide an understanding of circuit simulation fidelity and how to make use of the results in design and analysis tasks using modern technology. They will provide an overview of single event effect mechanisms with emphasis on the circuit structures responsible for charge generation. They will also provide a classification of circuit simulation tools and describe the simulation challenges and potential pitfalls. The bulk of the course will cover the tools available for circuit simulation, defining the circuits and stimulus inputs, setting up the simulations, and analyzing the results. An example of single event effects simulation will be shown for each class of circuit simulator.
I would like to personally thank each of the Short Course Instructors, Mike Xapsos, Giovanni Santin, Mark Law, Jeff Black, and Tim Holman, for their substantial efforts to guarantee the success of the 2006 NSREC Short Course. The preparation of these manuscripts and the presentations given at the conference involve a great deal of personal time and sacrifice. I think I can speak for all of those that attended the Short Course and read these notes when I say THANK YOU. I would also like to thank Lew Cohn for his efforts in reviewing the manuscripts and ensuring that the Short Course Notebooks were printed in a timely manner, and the DTRA print office for printing the Notebooks. In addition, I would like to thank Dale Platteter for his efforts in publishing the CD-ROM. Finally, I would like to thank the team of people that served as reviewers for these notes and the presentations.
Dr. Robert A. Reed NSREC Short Course Chairman ISDE Vanderbilt University
Biographies

Dr. Robert Reed Short Course Chairman Vanderbilt University Institute for Space and Defense Electronics Robert A. Reed received his M.S. and Ph.D. degrees in Physics from Clemson University in 1993 and 1994. After completion of his Ph.D. he worked as a post-doctoral fellow at the Naval Research Laboratory and later worked for Hughes Space and Communication. From 1997 to 2004, Robert was a research physicist at NASA Goddard Space Flight Center where he supported NASA space flight and research programs. He is currently a Research Associate Professor at Vanderbilt University. His radiation effects research activities include topics such as single event effects and displacement damage basic mechanisms and on-orbit performance analysis and prediction techniques. He has authored over 70 papers on various topics in the radiation effects area. He was awarded the 2004 Early Achievement Award from IEEE/NPSS and the 2000 Outstanding Young Alumni Award from Clemson University. Robert has been involved in the NSREC community since 1992, serving as 2004 Finance Chairman, 2002 Poster Session Chairman, and 2000 Short Course Instructor.
Dr. Mike Xapsos NASA Goddard Space Flight Center Mike Xapsos is a research physicist in the Radiation Effects and Analysis Group at NASA Goddard Space Flight Center where he oversees its work on the space radiation environment. This involves developing models of the environment and using models and tools to determine radiation requirements for NASA missions. He is the Project Scientist for the Space Environment Testbeds (SET) Project and the Radiation Lead for the Solar Dynamics Observatory (SDO) Mission. Prior to joining NASA in 2001 he worked in the Radiation Effects Branch at the Naval Research Laboratory, where he also researched problems in device radiation physics. He holds a Bachelor’s degree in physics and chemistry from Canisius College and a PhD degree in physics from the University of Notre Dame. He has held the position of Guest Editor for the IEEE Transactions on Nuclear Science, Technical Program Chairman for the IEEE Nuclear and Space Radiation Effects Conference, and is currently General Chairman for the Single Event Effects Symposium. He has published over 75 technical papers and holds one US patent.
Dr. Giovanni Santin ESA/ESTEC and Rhea System SA Giovanni Santin is an analyst in the Space Environments and Effects Analysis section at the European Space Agency (ESA/ESTEC), on loan from RHEA Tech Ltd in support of ESA programs. He is a specialist in radiation transport codes for Monte-Carlo simulations. His current research interests are in development and use of radiation environment models, radiation effects modeling for manned and unmanned missions, radiation analysis engineering tools and radiation monitors. He holds a Bachelor’s degree in physics and a PhD degree in physics from the University of Trieste, Italy. Prior to joining ESA with RHEA in 2002 he worked on experimental particle physics at CERN for the University of Geneva and on medical physics at the University of Lausanne. In addition to his research in space environment, he is involved in medical physics research, mainly in developments for PET and SPECT and in dosimetry for radiation therapy.
Professor Mark E. Law University of Florida Mark Law is a professor and chair of Electrical and Computer Engineering at the University of Florida. He received the B.S. Cpr.E. degree from Iowa State University in 1982 and the Ph.D. degree from Stanford University in 1988. His current research interests are in integrated circuit process modeling, characterization, and device modeling. Dr. Law was named a National Science Foundation Presidential Faculty Fellow in 1992, College of Engineering Teacher of the Year in 1996-97, and a UF Research Fellow in 1998. He was editor-in-chief of the IEEE Journal on Technology Computer Aided Design. He is currently the vice president for technical activities of the IEEE Electron Device Society. He chaired the 1997 Simulation of Semiconductor Process and Devices Meeting, the 1999 and 2002 silicon front-end processing symposium of the Materials Research Society, the 2005 Ultra-Shallow Junctions workshop and chaired the 2000 International Electron Devices Meeting. He was named an IEEE Fellow in 1998 for his contributions to integrated circuit process modeling and simulation.
Jeffrey D. Black Dr. W. Timothy Holman Vanderbilt University Institute for Space and Defense Electronics Jeffrey D. Black is a Senior Research Engineer in the Institute for Space and Defense Electronics (ISDE) at Vanderbilt University. He received his BSEE at the United States Air Force Academy in 1988 and his MSEE at the University of New Mexico in 1991. He is currently pursuing his PhD at Vanderbilt University. Jeff’s areas of specialty and interest are single event effects and mitigation approaches. Prior to joining ISDE in 2004, Jeff worked for Mission Research Corporation, now ATK Mission Research, in Albuquerque, NM. Jeff is just completing his three year term as Secretary of the Radiation Effects Steering Group. He has enjoyed serving the NSREC community in various positions. Dr. W. Timothy Holman is a member of the Institute for Space and Defense Electronics and a research associate professor in the Department of Electrical and Computer Engineering at Vanderbilt University. His current research is focused on radiation effects in analog and mixed-signal circuits, and the design of radiation-hardened mixed-signal circuits in CMOS and BiCMOS technologies. In addition to his research, Dr. Holman has developed new methods for video-based delivery of educational material that are used to produce archival CD-ROM copies of the NSREC short course for attendees each year.
2006 IEEE NSREC Short Course
Section II: Modeling the Space Radiation Environment
Michael Xapsos NASA Goddard Space Flight Center Greenbelt, MD 20771
Approved for public release; distribution is unlimited
Modeling the Space Radiation Environment
Michael Xapsos, NASA Goddard Space Flight Center
NSREC 2006 Short Course

Outline

I.   Introduction
II.  The Solar Activity Cycle
III. The Earth’s Trapped Radiation Environment
     A. The Magnetosphere and Trapped Particle Motion
     B. Characteristics of Trapped Protons
     C. The AP-8 Model
     D. Recent Developments in Trapped Proton Models
     E. Characteristics of Trapped Electrons
     F. The AE-8 Model
     G. Recent Developments in Trapped Electron Models
IV.  Galactic Cosmic Rays
     A. General Characteristics
     B. Galactic Cosmic Ray Models
V.   Solar Particle Events
     A. General Characteristics
     B. Solar Proton Models
        1. The Maximum Entropy Principle and the Distribution of Solar Proton Event Magnitudes
        2. Cumulative Fluence During Solar Maximum
        3. Cumulative Fluence During Solar Minimum
        4. Extreme Value Theory and Worst Case Events
        5. Self-Organized Criticality and the Nature of the Energy Release Process
           a) Rescaled Range Analysis
           b) Fractal Behavior
           c) Power Function Distribution
     C. Solar Heavy Ion Models
VI.  Future Challenges
VII. References
I. Introduction
There are a number of environmental hazards that spacecraft must be designed for, including low-energy plasma, particle radiation, neutral gas particles, ultraviolet and x-ray radiation, micrometeoroids and orbital debris. This manuscript is focused on the hazards the space environment presents to devices and integrated circuits. Hence it is mainly concerned with three categories of high-energy particle radiation in space. The first is particles trapped by planetary magnetic fields, such as the earth’s Van Allen Belts. The second is the comparatively low-level flux of ions that originate outside of our solar system, called galactic cosmic rays. The third is bursts of radiation emitted by the sun, characterized by high fluxes of protons and heavy ions, referred to as solar particle events. In order to have reliable, cost-effective designs and implement new space technologies, the radiation environment must be understood and accurately modeled. Underestimating the radiation levels leads to excessive risk and can result in degraded system performance and loss of mission lifetime. Overestimating the radiation levels can lead to excessive shielding, reduced payloads, over-design and increased cost. The last ten years or so have been a renaissance period in space radiation environment modeling, for a number of reasons. There has been a growing need for some time now to replace the long-time standard AP-8 and AE-8 radiation belt models, which are based on data that badly need to be updated. A growing number of interplanetary exploration initiatives, particularly manned initiatives to the moon and Mars, are driving the development of improved models of the galactic cosmic ray and solar particle event environments.
Improved radiation detectors and other technologies such as those operating on the Advanced Composition Explorer (ACE) and the Solar, Anomalous and Magnetospheric Particle EXplorer (SAMPEX) satellites have led to unprecedented measurement accuracy and resolution of space radiation properties. Finally, the pervasive use of commercial-off-the-shelf (COTS) microelectronics in spacecraft to achieve increased system performance must be balanced by the need to accurately predict their complex responses in space. The main objective of this section of the short course is to present recent developments in modeling the trapped particle, galactic cosmic ray and solar particle event radiation environments for radiation effects applications. This will start with background information and initial reviews of the traditional models before proceeding to the newer models. In the case of solar particle event models a number of probabilistic methods not commonly found in the literature have recently been applied. An overview of the origins and backgrounds of these methods will be given leading up to the environment applications. Comparisons between various models will be shown for different phases of the solar cycle and for missions ranging from low earth orbit out to interplanetary space. As galactic cosmic rays and solar particles enter and interact with the earth’s upper atmosphere, showers of secondary particles are produced. Secondary neutrons are the most important contributor to single event effects at altitudes below about 60,000 feet. Discussions of the atmospheric and terrestrial neutron environments can be found elsewhere [Ba97], [Ba05].
II. The Solar Activity Cycle
The sun is both a source and a modulator of space radiations. Understanding its cyclical activity is an important aspect of modeling the space radiation environment. The solar activity cycle is approximately 11 years long. During this period there are typically 7 years during solar maximum, when activity levels are high, and 4 years during solar minimum, when activity levels are low. In reality the transition between solar maximum and solar minimum is a continuous one, but it is often considered to be abrupt for convenience. At the end of each 11-year cycle the magnetic polarity of the sun reverses and another 11-year cycle follows. Thus, strictly speaking, the total activity cycle is approximately 22 years long. Of the space radiations considered here, the magnetic polarity apparently only affects the galactic cosmic ray fluxes [Ba96a], and not the trapped particle or solar particle event fluxes. Thus, solar activity is often viewed on an approximately 11-year cyclical basis. Two common indicators of this approximately 11-year periodic solar activity are sunspot numbers and solar 10.7 cm radio flux (F10.7). The most extensive record is that of observed sunspot numbers, which dates back to the 1600s. This record is shown in Figure 1. The numbering of sunspot cycles began in 1749, and the sun is currently near the end of solar cycle 23. The record of F10.7 began part way through solar cycle 18 in the year 1947 and is shown in Figure 2.
Figure 1. The observed record of yearly averaged sunspot numbers.
Figure 2. Measured values of solar 10.7 cm radio flux.

Although sunspot numbers and F10.7 are commonly accepted indicators of solar activity, quantitative relations to measured radiation events and fluxes are not necessarily straightforward. Solar particle events are known to occur with greater frequency and intensity during the declining phase of solar maximum [Sh95]. Trapped electron fluxes also tend to be higher during the declining phase [Bo03]. Trapped proton fluxes in low earth orbit (LEO) reach their maximum during solar minimum, but exactly when this peak is reached depends on the particular location [Hu98]. Galactic cosmic ray fluxes are also at a maximum during solar minimum but in addition depend on the magnetic polarity of the sun [Ba96a]. There has been considerable effort put into forecasting long-term solar cycle activity. A review of a number of the methods is presented by Hathaway [Ha99]. These include regression methods, which involve fitting a function to the data as the cycle develops. Also discussed are precursor methods, which estimate the amplitude of the next cycle based on some type of correlation with prior information. These methods can also be combined. In addition, physically based methods are being developed based on the structure of the magnetic field within the sun and heliosphere [Sc96], [Di06]. However, accurate methods for predicting future solar cycle activity levels prior to the start of the cycle have thus far been elusive. A potential breakthrough, however, has recently been reported that uses a combination of computer simulation and observations of the solar interior from instrumentation onboard the Solar and Heliospheric Observatory (SOHO) [Di06]. Given the current state of this modeling, probabilistic models of solar activity can be useful. Such a model of F10.7 is shown in Figure 3 [Xa02]. This also illustrates the general behavior of the observed cyclical properties, at least over recent cycles.
The greater the peak activity of a cycle, the faster the rise-time to the peak level. Furthermore the cyclical activity is asymmetric such that the descending phase of the cycle is longer than the ascending phase.
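The asymmetry just described, a fast rise to the peak followed by a slower decline, can be illustrated with a toy activity curve. This sketch is purely illustrative and is not the probabilistic model of [Xa02]; the rise time, fall time, and solar-minimum baseline are assumed round numbers.

```python
import math

def idealized_cycle(t_years, peak, t_rise=4.0, t_fall=7.0, base=70.0):
    """Illustrative asymmetric solar-activity curve (F10.7-like units):
    a Gaussian rise over ~t_rise years to `peak`, then a slower Gaussian
    fall over ~t_fall years, sitting on a solar-minimum baseline `base`."""
    # Use a narrower width before the peak than after it, so the
    # ascending phase is shorter than the descending phase.
    width = t_rise / 2.0 if t_years < t_rise else t_fall / 2.0
    return base + (peak - base) * math.exp(-((t_years - t_rise) / width) ** 2)

# The curve peaks at t = t_rise years into the cycle...
level_at_peak = idealized_cycle(4.0, peak=200.0)
# ...and two years after the peak the activity is still higher than it
# was two years before the peak, reflecting the slower decline.
rising, falling = idealized_cycle(2.0, 200.0), idealized_cycle(6.0, 200.0)
```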
Figure 3. Probabilistic model of F10.7. The various curves are labeled as a function of confidence level that the activity shown will not be exceeded [Xa02].
III. The Earth’s Trapped Radiation Environment
This section leads up to recent modeling developments for trapped protons and trapped electrons geared toward radiation effects applications. Initially a review of background information and related physical processes will be given. Further background information can be found in [Ba97], [Ma02] and [Wa94].

A. The Magnetosphere and Trapped Particle Motion
The earth’s magnetosphere consists of both an external and an internal magnetic field. The external field is the result of the solar wind, a plasma of ionized gas that is continually emitted by the sun. The internal or geomagnetic field originates primarily from within the earth and is approximately a dipole field. As shown in Figure 4, the solar wind and its embedded magnetic field tend to compress the geomagnetic field. During moderate solar wind conditions, the magnetosphere terminates at roughly 10 earth radii on the sunward side. During turbulent magnetic storm conditions it can be compressed to about 6 earth radii. The solar wind generally flows around the geomagnetic field, and consequently the magnetosphere stretches out to a distance of possibly 1000 earth radii in the direction away from the sun.
Figure 4. The earth’s magnetosphere.

Figure 5 shows the geomagnetic field, which is approximately dipolar for altitudes of up to about 4 or 5 earth radii. It turns out that the trapped particle populations are conveniently mapped in terms of the dipole coordinates approximating the geomagnetic field. This dipole coordinate system is not aligned with the earth’s geographic coordinate system. The axis of the magnetic dipole field is tilted about 11 degrees with respect to the geographic North-South axis, and its origin is displaced by a distance of more than 500 km from the earth’s geocenter. The standard method is to use McIlwain’s (B,L) coordinates [Mc61]. Within this dipole coordinate system, L represents the distance from the origin in the direction of the magnetic equator, expressed in earth radii. One earth radius is 6371 km. B is simply the magnetic field strength. It describes how far away from the magnetic equator a point is along a magnetic field line. B-values are a minimum at the magnetic equator and increase as the magnetic poles are approached.
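The (B,L) geometry can be sketched numerically for an ideal centered dipole, where a field line satisfies r = L cos²λ and the field magnitude is B = (B0/r³)√(1 + 3 sin²λ), with λ the magnetic latitude. This is a minimal sketch of the idealized dipole only, not the real geomagnetic field or the McIlwain algorithm, and the equatorial surface field B0 ≈ 3.1 × 10⁻⁵ T is an approximate value.

```python
import math

B0 = 3.1e-5  # T; approximate equatorial field strength at the earth's surface

def dipole_BL(r_re, mag_lat_deg):
    """(B, L) for a point in an ideal centered-dipole field.
    r_re        : radial distance in earth radii
    mag_lat_deg : magnetic latitude in degrees
    Returns (B in tesla, L in earth radii)."""
    lam = math.radians(mag_lat_deg)
    # Field magnitude falls off as 1/r^3 and grows toward the poles.
    B = (B0 / r_re**3) * math.sqrt(1.0 + 3.0 * math.sin(lam)**2)
    # Dipole field-line equation r = L cos^2(lam) inverted for L.
    L = r_re / math.cos(lam)**2
    return B, L

# On the magnetic equator, L equals the radial distance and B is at its
# minimum along the field line; off the equator, the same radial distance
# lies on a field line with a larger L.
B_eq, L_eq = dipole_BL(2.0, 0.0)
B_off, L_off = dipole_BL(2.0, 30.0)
```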
Figure 5. The internal magnetic field of the earth is approximately a dipole field.

Next the basic motion of a trapped charged particle in this approximately dipolar field will be discussed. Charged particles become trapped because the magnetic field can constrain their motion. As shown in Figure 6, the motion a charged particle makes in this field is to spiral around and move along the magnetic field line. As the particle approaches the polar regions the magnetic field strength increases and causes the spiral to tighten. Eventually the field strength is sufficient to force the particle to reverse direction. Thus, the particle is reflected between so-called “mirror points” and “conjugate mirror points”. Additionally there is a slower longitudinal drift of the path around the earth that is westward for protons and eastward for electrons. Once a complete azimuthal rotation is made around the earth, the resulting toroidal surface that has been traced out is called a drift shell. A schematic of such a drift shell is shown in Figure 7.
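The scale of the spiral motion can be checked with the standard relativistic gyroradius r = p⊥/(qB). The sketch below uses textbook constants and assumes, for simplicity, that all of the proton's momentum is perpendicular to the field; the field value chosen is only an order-of-magnitude figure for the inner belt.

```python
import math

Q = 1.602176634e-19     # C, elementary charge
C = 2.99792458e8        # m/s, speed of light
MP_MEV = 938.272        # MeV, proton rest energy
MEV_TO_J = 1.602176634e-13

def proton_gyroradius(kinetic_mev, B_tesla):
    """Relativistic gyroradius (m) of a proton of the given kinetic energy,
    assuming its momentum is entirely perpendicular to B."""
    # Relativistic momentum from kinetic energy: (pc)^2 = E(E + 2 m c^2).
    pc_mev = math.sqrt(kinetic_mev * (kinetic_mev + 2.0 * MP_MEV))
    p = pc_mev * MEV_TO_J / C          # momentum in kg m/s
    return p / (Q * B_tesla)

# A 10 MeV proton in a field of order 1e-5 T (roughly inner-belt strength)
# gyrates with a radius of tens of kilometers -- small compared with the
# field-line scale, which is why the guiding-center picture of spiral,
# bounce, and drift motion applies.
rg = proton_gyroradius(10.0, 1e-5)
```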
Figure 6. Motion of a charged trapped particle in the earth’s magnetic field.
Figure 7. Illustration of the geometry of drift shells.

B. Characteristics of Trapped Protons
Some of the characteristics of trapped protons and their radiation effects are summarized in Table 1. The L-shell range is from L = 1.15 at the inner edge of the trapped environment out beyond geosynchronous orbits to an L-value of about 10. Trapped proton energies extend up to a few 100’s of MeV, at which point the fluxes begin to fall off rapidly. The energetic trapped proton population with energies > 10 MeV is confined to altitudes below 20,000 km, while I-8 II-8
protons with energies of about 1 MeV or less are observed at geosynchronous altitudes and beyond. The maximum flux of energetic protons occurs at an L-value of around 1.8 and exceeds 10^5 p/(cm^2-s). Close to the inner edge, proton fluxes are modulated by the atmospheric density. They can decrease by as much as a factor of 2 to 3 during solar maximum due to atmospheric expansion and the resulting losses caused by scattering processes. Trapped protons can cause Total Ionizing Dose (TID) effects, Displacement Damage (DD) effects and Single Event Effects (SEE). The metric used for TID studies is ionizing dose, defined as the energy deposited per unit mass of the material that comprises the sensitive volume. The unit commonly employed is the "rad", where 1 rad = 100 erg/g. One metric for proton-induced displacement damage is the equivalent fluence at a given proton energy, often taken as 10 MeV [An96]. A quantity analogous to the ionizing dose, called the displacement damage dose (DDD), is also used to study displacement effects in materials [Ma99], [Wa04]. It is defined as the energy that goes into displaced atoms per unit mass of the material that comprises the sensitive volume. The units are analogous to those of ionizing dose except that only the nonionizing energy component is counted. Finally, it is noted that studies of proton-induced SEE commonly use the proton energy incident on the sensitive device volume as the relevant parameter. Most proton-induced SEE result from target recoil products produced by interactions with the incident proton. The incident proton energy strongly influences these recoil products, which is why results are commonly presented in terms of it.

Table 1. Trapped Proton Characteristics.

  L-Shell Values:      1.15 – 10
  Energies:            Up to 100's of MeV
  Fluxes* (> 10 MeV):  Up to ~10^5 cm^-2 s^-1
  Radiation Effects:   Total Ionizing Dose (TID); Displacement Damage (DD); Single Event Effects
  Metrics:             Dose for TID; 10 MeV equivalent fluence and Displacement Damage Dose for DD

  * long-term average
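The rad definition above is straightforward to apply numerically. The sketch below is illustrative only; the energy and mass values in the example are made up:

```python
MEV_TO_ERG = 1.602e-6      # 1 MeV expressed in erg
ERG_PER_G_PER_RAD = 100.0  # by definition, 1 rad = 100 erg/g

def dose_rad(deposited_energy_mev, sensitive_mass_g):
    """Ionizing dose (rad) from energy deposited in a sensitive volume."""
    return deposited_energy_mev * MEV_TO_ERG / (sensitive_mass_g * ERG_PER_G_PER_RAD)

# e.g., 1e7 MeV deposited in a 1 mg sensitive volume is about 160 rad
```

The same bookkeeping applies to displacement damage dose, with the deposited energy restricted to the nonionizing component.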
C. The AP-8 Model
The well-known AP-8 trapped proton model is the eighth version of a model development effort led by James Vette. Over the years these empirical models have been indispensable for spacecraft designers and for the radiation effects community in general. The trapped particle models are static maps of the particle population during solar maximum and solar minimum. They are mapped in a dipole coordinate system such as the (B,L) coordinates described in section IIIA. A spacecraft orbit is calculated with an orbit generator. The orbit coordinates are then transformed to (B,L) coordinates and the trapped particle radiation environment determined. Models such as this are implemented in the SPace ENVironment Information System (SPENVIS) suite of programs [http]. Details of the AP-8 model and its predecessors can be found in [Sa76], [Ve91] and [Ba97].
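For a centered dipole, the L-shell label follows directly from the dipole field-line equation r = L cos^2(lambda). The sketch below is illustrative only: a production (B,L) transformation uses a full geomagnetic field model and the McIlwain algorithm, not this closed form.

```python
import math

def dipole_L(r_earth_radii, mag_lat_deg):
    """L-shell through a point at radial distance r (earth radii) and
    magnetic latitude lambda, from the dipole field-line equation
    r = L * cos^2(lambda)."""
    c = math.cos(math.radians(mag_lat_deg))
    return r_earth_radii / (c * c)

# On the geomagnetic equator the L-value equals the radial distance;
# off the equator the same radial distance maps to a higher L-shell.
```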
Figure 8 is a contour plot of the trapped proton population with energies > 10 MeV shown in a dipole coordinate system. The x-axis is the radial distance along the geomagnetic equator in units of earth-radii, while the y-axis is the distance along the geodipole axis, also in units of earth-radii. Thus, a y-value of zero represents the geomagnetic equator [Da96]. A semicircle with a radius of one centered at the point (0,0) represents the earth's surface on this plot. This format conveniently condenses a large quantity of information and gives an overview of the particle population on a single plot.
Figure 8. The trapped proton population with energies > 10 MeV as predicted by the AP-8 model for solar maximum conditions. From SPENVIS, [http].

For spacecraft with orbits lower than about 1000 km, the so-called "South Atlantic Anomaly" (SAA) dominates the radiation environment. This anomaly arises because the earth's geomagnetic and rotational axes are tilted and shifted relative to each other, as discussed in section IIIA. Thus, part of the inner edge of the proton belt reaches lower altitudes, as shown in Figure 9. This occurs in the geographic region south and east of Brazil. Figure 10 shows the SAA as a contour plot of > 10 MeV proton fluxes at a 500 km altitude in geographic coordinates.
Figure 9. The South Atlantic Anomaly [Da96].
Figure 10. Contour plot of proton fluxes > 10 MeV in the SAA at a 500 km altitude during solar maximum. From SPENVIS, [http].

The main difference between the solar maximum and solar minimum maps is seen at low altitudes, where the fluxes are lower during solar maximum. The reason is that the atmosphere expands as a result of heating during solar maximum, so trapped protons are lost to scattering processes at a higher rate.

D. Recent Developments in Trapped Proton Models
This section discusses some of the measurements and modeling efforts that have been performed in an attempt to provide a more up-to-date and dynamic description of the trapped proton population. The advantages of the AP-8 model are its long heritage of use and its rather complete description of trapped protons in terms of energies and geographic location. However, it is based on data taken mainly in the 1960's and early 1970's, so a serious concern is whether it still accurately represents the trapped proton environment today. The PSB97 model, developed at the Belgian Institute for Aeronomy (BIRA) and the Aerospace Corporation, is a LEO model for the solar minimum time period [He99]. It is based on data from the Proton/Electron Telescope (PET) onboard SAMPEX. A notable feature of this model is its broad proton energy range, which extends from 18.5 to 500 MeV. One of the significant extensions of this model beyond AP-8 is that it accounts for the secular variation of the geomagnetic field. This variation results because the center of the geomagnetic dipole field drifts away from the geocenter of the Earth at about 2.5 km per year and the magnetic moment decreases with time [http]. The overall effect is to draw the SAA slowly inward toward the earth. A comparison of measurements of the SAA made for > 18.5 MeV protons at an altitude of 500 km is shown in Figure 11 [He99]. Compared to the AP-8 model of magnetic field epoch 1960, the PSB97 model of magnetic field epoch 1995 shows that the SAA has a higher peak flux and has drifted westward. It also indicates that the SAA covers a broader geographic region.
Figure 11. Comparison of the SAA during solar minimum for > 18.5 MeV protons at a 500 km altitude for the time period of the AP-8 model (left) and for the modern SAMPEX/PET measurements (right) [He99].

The Low Altitude Trapped Radiation Model (LATRM), formerly called NOAAPRO, is a LEO proton model developed at the Boeing Company [Hu98]. It is based on 17 years of data taken by the TIROS/NOAA satellites. It accounts for the secular variation of the geomagnetic field using an internal field model. One of the important new features of this model was to account for a continuous solar cycle variation of the trapped proton flux, as opposed to AP-8, which transitions discontinuously between the solar maximum and solar minimum periods. This was done using F10.7 as a proxy for the atmospheric density, which controls the proton flux at low altitudes. Figure 12 shows the proton flux for different L-values superimposed upon F10.7 for the period of time the model is based on. It is seen that the flux is anti-correlated with F10.7 due to the greater losses of protons to the atmosphere during solar maximum. The proton flux also shows a phase lag that is dependent on L. Using these empirical relations, the LATRM is able to describe the trapped proton variations over the complete solar cycle as well as make projections into the future.
Figure 12. Comparison of the trapped proton flux (points) for low L-values to F10.7 (dotted curve) [Hu98].

The Combined Release and Radiation Effects Satellite PROton (CRRESPRO) trapped proton model is based on data collected over a 14-month period during the solar maximum of solar cycle 22 [Gu96]. Although the population of trapped protons in the region of the inner belt is fairly stable, measurements from this satellite demonstrated significant temporal and spatial variability of trapped particles. In particular, they showed that the greatest time variations of trapped protons occur in Medium Earth Orbit (MEO). CRRESPRO consists of both a quiet and an active model of trapped protons during solar maximum, ranging over L-values from 1.15 to 5.5. The quiet model is for the mission period prior to a large geomagnetic storm that occurred in March 1991 and the active model is for the mission period afterward. Figure 13 shows the CRRESPRO quiet and active models along with AP-8, demonstrating the formation of a second, stable proton belt for L-values between 2 and 3. The belt was particularly apparent in the 20 to 70 MeV energy range [Gu96]. Although the flux levels began to decay immediately, they were still measurable on the Russian METEOSAT after about 2 years [Di97].
Figure 13. CRRESPRO quiet and active models compared to AP-8 for 55 MeV differential proton fluxes [Gu96].

Recently, the Trapped Proton Model-1 (TPM-1) was developed by Huston [Hu02]. This model combines many of the features of LATRM and CRRESPRO. It covers the geographic region from about 300 km out to nearly geosynchronous orbit for protons in the 1.5 to 81.5 MeV energy range. It models the continuous variation of fluxes over the solar cycle, and also contains a model of both quiet and active conditions as observed onboard CRRES. TPM-1 has a time resolution of 1 month, which is a significant improvement over AP-8. The AP-8 model should be used only for long-term average fluxes. As discussed above, the TPM-1 and PSB97 models have a number of advantages over the AP-8 model. In addition, these models are based on relatively modern instrumentation compared to AP-8. Thus, it is interesting to examine how representative the AP-8 model is of the current trapped proton environment. Figure 14 shows a comparison of the fluxes calculated for an orbit similar to that of the International Space Station for TPM-1 (quiet conditions), PSB97 and AP-8. All results are for the solar minimum time period. Comparing TPM-1 to AP-8, it is clear there is a significant difference in the hardness of the energy spectra. TPM-1 calculates lower fluxes than AP-8 at low energies and higher fluxes at high energies. Examining the results calculated with PSB97, the overlap with the TPM-1 model in their common energy range of about 20 to 80 MeV is excellent. Thus, it appears that significant discrepancies now exist with the AP-8 model for LEO. A combination of TPM-1 and PSB97, including an update of data taken with the SAMPEX/PET instrument, would result in a fairly complete trapped proton model.
Figure 14. Comparison of fluxes predicted by three trapped proton models for an orbit similar to the International Space Station during solar minimum [La05].

Figures 15 and 16 are comparisons of the TPM-1 and CRRESPRO models, both quiet and active, and the AP-8 model for solar maximum conditions. This comparison is for a 2000 km x 26,750 km elliptical orbit with a 63.4 degree inclination. In this case AP-8 predicts significantly higher fluxes over nearly the full energy spectrum, although the difference is smaller for the TPM-1 active calculation. The examples discussed in this section show that these types of comparisons are highly orbit dependent and must be considered on a case-by-case basis. An excellent summary of a number of model comparisons for common orbits is given in Lauenstein and Barth [La05].
Figure 15. Comparison of trapped proton models for an elliptical orbit during quiet conditions and the solar maximum time period [La05].
Figure 16. Comparison of trapped proton models for an elliptical orbit during active conditions and the solar maximum time period [La05].

E. Characteristics of Trapped Electrons
Some of the characteristics of trapped electrons are summarized in Table 2. There is both an inner and an outer zone, or belt, of trapped electrons. These two zones are very different, so their characteristics are listed separately. The inner zone extends out to an L-value of about 2.8. The electron energies range up to approximately 4.5 MeV. The flux reaches a peak near L = 1.5, where the value is about 10^6 cm^-2 s^-1 for > 1 MeV electrons. These fluxes gradually increase by a factor of 2 to 3 during solar maximum. This electron population, though, tends to remain relatively stable. The outer zone has L-values ranging between about 2.8 and 10. The electron energies are generally less than approximately 10 MeV. Here the region of peak flux is between L-values of 4.0 and 4.5, and the long-term average value for > 1 MeV electrons is roughly 3 x 10^6 cm^-2 s^-1. This zone is very dynamic, and the fluxes can vary by orders of magnitude from day to day. The distribution of trapped particles is continuous throughout the inner and outer zones. However, between the two high-intensity zones is a region where the fluxes are at a local minimum during quiet periods. This is known as the slot region. The exact location and extent of the slot region depend on electron energy, but it lies between L-values of 2 and 3. The slot region is an attractive one for certain types of missions due to the increased spatial coverage compared to missions in LEO. However, the radiation environment of this region is very dynamic.
Trapped electrons contribute to TID effects, displacement damage effects and charging/discharging effects. As discussed previously, the metric for describing TID effects is dose. In a fashion analogous to protons, the metric for electron-induced displacement damage is either the 1 MeV equivalent electron fluence or the displacement damage dose. It should be noted, though, that the application of the displacement damage dose concept is not as straightforward for electrons as it is for protons [Wa04]. Finally, charging/discharging effects can be either spacecraft surface charging, caused primarily by low energy electrons, or deep dielectric charging, caused by high energy electrons. A key parameter for these analyses is the potential difference induced by charging between a dielectric and a conductive surface.

Table 2. Trapped Electron Characteristics.

              L-Shell Values   Energies        Fluxes* (> 1 MeV)
  Inner Zone  1 to 2.8         Up to 4.5 MeV   10^6 cm^-2 s^-1
  Outer Zone  2.8 to 10        Up to 10 MeV    3 x 10^6 cm^-2 s^-1

  * long-term average
F. The AE-8 Model
The long-time standard model for trapped electrons has been the AE-8 model [Ve91], [Ve91a], [Ba97]. It consists of two static flux maps of trapped electrons – one for solar maximum and one for solar minimum conditions. Due to the variability of the outer zone electron population, the AE-8 model is valid only for averages over long periods of time. Fig. 17 is a contour plot of the trapped electron population with energies > 1 MeV shown in dipole coordinates. The structure of the inner and outer zones is clearly seen. Since AE-8 is based on an internal magnetic field model, results are shown only out to geosynchronous altitudes, but the trapped electron population exists well beyond this. An interesting feature of the outer belt is that it extends down to low altitudes at high latitudes.
Figure 17. The electron population with energies > 1 MeV as predicted by the AE-8 model for solar maximum conditions. From SPENVIS, [http].
G. Recent Developments in Trapped Electron Models
If only the trapped particle populations are considered, radiation effects in the inner zone are often dominated by trapped protons, while those in the outer zone are often dominated by trapped electrons. Thus, recent trapped electron models have focused on the outer zone. A feature of the outer zone is its high degree of variability and dynamic behavior. This results from geomagnetic storms and substorms, which cause major perturbations of the geomagnetic field. For example, processes such as coronal mass ejections and solar flares cause disturbances in the solar wind, which subsequently interacts with the earth's magnetosphere. Energy is extracted from the solar wind, stored and dissipated, resulting in the injection and redistribution of electrons into the magnetosphere. Although the physical details of the injection mechanisms are not completely understood, recent measurements from the Upper Atmosphere Research Satellite (UARS) illustrate the high degree of variability of electron flux levels prior to and after such storms. Figure 18 shows the electron energy spectra for 3.25 < L ≤ 3.5 after long-term decay from a prior storm (day 235) and two days after a large storm (day 244), compared to the average flux level over a 1000 day period [Pe01]. At 1 MeV, for example, the difference in the one-day averaged differential fluxes over this 9-day period is about 3 orders of magnitude.
Figure 18. Total electron flux before and after a geomagnetic storm compared to a long-term average as measured onboard the UARS [Pe01].

Due to the volatile nature of the outer zone, it seems reasonable to resort to probabilistic methods in order to improve on the AE-8 model. The average flux measured during a period of time will approach the long-term average as the measurement period increases. This is illustrated in Figure 19, which is a statistical model of the median, 10th and 90th percentile fluxes measured in geostationary orbit by instrumentation onboard METEOSAT-3 [Da96]. The abscissa is the time period of the measurement and ranges from about one day to a little over one year. This figure indicates that about a month of data in the 200 to 300 keV energy range must be accumulated in this orbit in order to approximate the median flux. It turns out that even longer periods are needed for higher energy electrons and for orbits with lower L-values. These types of calculations can also be used to put a constraint on the period of time over which a long-term model such as AE-8 should be used. A conservative rule of thumb is that AE-8 should not be applied to a period any shorter than 6 months. A model such as that shown in Fig. 19 is also useful for estimating worst-case fluxes averaged over different time scales.
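The convergence idea behind this kind of statistical model can be made concrete with a back-of-the-envelope calculation. Assuming statistically independent daily fluxes (an optimistic assumption, since real daily fluxes are autocorrelated and actual convergence is slower), the relative standard error of an N-day average falls as 1/sqrt(N):

```python
import math

def days_to_converge(daily_rel_std, target_rel_error):
    """Days of averaging needed before the relative standard error of the
    mean flux drops below the target, for independent daily samples:
    sigma_mean = sigma_daily / sqrt(N)."""
    return math.ceil((daily_rel_std / target_rel_error) ** 2)

# Daily fluxes varying by a factor of a few (relative std ~ 3) need about
# a month of averaging to estimate the mean to within ~50%; tightening
# the target to ~10% pushes the requirement to many hundreds of days.
```

With a daily relative standard deviation of 3 and a 50% target, this gives 36 days, consistent with the month-scale convergence seen in Figure 19.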
Figure 19. Statistical model of the median, 10th and 90th percentile fluxes in geostationary orbit for approximately 200 to 300 keV electrons [Da96].

Instrumentation onboard UARS was used to construct a probabilistic model during the declining phase of solar cycle 22 [Pe01]. Figure 20 shows the probability of encountering a daily-averaged, > 1 MeV trapped electron flux for a given L-value. Note that such a probability plot indicates both the most frequently occurring flux value and its variation for a given L. The values of L covered in this work range from about 2 to 7. Note that for L-shells between 2 and 3, corresponding to the slot region, the highest probabilities correspond to the lowest observed fluxes. However, the overall range of possible flux values spans several orders of magnitude, indicating the volatility of the region. Figure 20 shows the highest fluxes are between L-values of 3.5 and 4.5. For L > 4.5 the fluxes decrease steadily with increasing L.
Figure 20. Probability plot of encountering a given > 1 MeV electron flux at a given L-value during the declining phase of solar cycle 22 [Pe01].

The observations made of the slot region with instrumentation onboard the UARS satellite are consistent with recent results obtained from the TSX5 mission over an approximately 4 year period [Br04]. Figure 21 shows a cumulative probability plot of daily averaged > 1.2 MeV electron fluxes in this region. The distribution shows the probability that a daily averaged flux exceeds the threshold flux shown on the x-axis. The well-known "Halloween-2003" storm occurred during this mission and is shown for reference along with results for the AE-8 model. Interestingly, these measurements show that the AE-8 model results were exceeded every day during the 4-year mission.
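The cumulative (exceedance) probability plotted this way is simple to compute from a daily flux record. The sketch below uses made-up daily fluxes, not TSX5 data:

```python
def exceedance_probability(daily_fluxes, thresholds):
    """Empirical probability that a daily-averaged flux exceeds each
    threshold -- the quantity plotted on a cumulative probability curve."""
    n = len(daily_fluxes)
    return [sum(1 for f in daily_fluxes if f > t) / n for t in thresholds]

# Hypothetical daily fluxes in cm^-2 s^-1 (illustrative values only)
fluxes = [3e2, 1e3, 5e3, 2e4, 8e4, 3e5]
probs = exceedance_probability(fluxes, [1e2, 1e4, 1e6])
# probs -> [1.0, 0.5, 0.0]: every day exceeded 1e2, half exceeded 1e4,
# and none exceeded 1e6
```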
Figure 21. Cumulative probability plot of > 1.2 MeV electron fluxes observed in the slot region during the 4-year TSX5 mission [Br04].

The statistical models discussed above give results for both an average flux and some indicator of the dispersion that can be used for determining a worst-case flux. Probabilistic approaches also exist that focus only on worst-case scenarios. One such method is that of extreme value statistics. Extreme value methods are discussed in section VB4. These methods have been used to study daily-averaged fluxes of > 2 MeV electrons measured by the GOES satellites. It has been estimated from about one solar cycle of data that the largest observed flux, on March 28, 1991 (8 x 10^4 cm^-2 s^-1 sr^-1), would be exceeded once every 20 years [Ko01]. Although this result in itself is of minimal use for radiation effects applications, the overall utility of such an approach for analyzing trapped electron flux variations is relatively unexplored.

The FLUx Model for Internal Charging (FLUMIC) [Wr00] software tool was developed as a worst-case daily flux model of the outer belt to be used with the deep dielectric charging model DICTAT. The model is based on data from several satellites in the > 0.2 to > 5.9 MeV range taken between 1987 and 1998. It uses fits to the most intense electron enhancements over this time period to account for properties such as energy spectra and solar cycle and seasonal dependence. The result is a model of the highest fluxes of penetrating electrons expected during a mission.

Another general approach to describing the trapped electron fluxes in the outer belt is to relate them to the level of disturbance of the geomagnetic field. There are several geomagnetic indices that could possibly be used as a basis for this. Brautigam, Gussenhoven and Mullen developed a quasi-static model of outer zone electrons ordered by a 15-day running average of the geomagnetic activity index, Ap [Br92]. The Ap index is an indicator of the general level of global geomagnetic disturbance. The daily outer zone electron energy spectra during the CRRES mission were separated according to the Ap index and averaged, thus producing flux profiles based on geomagnetic activity. The result is the basis for the CRRESELE model. An example of this is shown in Fig. 22 for 0.95 MeV differential electron fluxes for 6 levels of geomagnetic activity [Gu96]. It is seen that the flux changes are much larger for the smaller L-shell values shown. The current CRRESELE model, which is valid for solar maximum, features flux profiles for 6 levels of geomagnetic activity, an average profile, and a worst-case profile encountered during the mission.
Figure 22. Differential electron energy spectra centered at 0.95 MeV during the CRRES mission for 6 different 15-day running average values of the Ap geomagnetic index. As conditions become more disturbed, the fluxes increase [Gu96].

Spurred on by the CRRESELE model, the European Space Agency funded an effort to further develop models of outer zone electrons based on geomagnetic activity indices [Va96]. The CRRES data were used to train neural networks using the geomagnetic index Kp as input. This is another general indicator of the global geomagnetic disturbance, similar to the Ap index. Thirty networks were trained to estimate flux intensities at 5 energies and 6 L-values during the CRRES time period. A simulated database of electron flux intensities was subsequently generated dating back to 1932, when the Kp index was first tracked. The validity of using 14 months of data to generate a simulated catalog of 60+ years of fluxes in this manner is unknown. The goal of this effort was to use the simulated fluxes to develop improved models. Currently there exists the ESA-SEE1 model that was developed from this effort. It represents an average flux map of trapped electrons during solar minimum and was intended as a replacement for AE-8 during this time period.

The initial version of the Particle ONERA-LANL Environment (POLE) model for the geostationary electron environment was developed in 2003 [Bo03]. It is based on 25 years (1976-2001) of Los Alamos satellite data and is the most detailed model available of trapped electron data over the course of a solar cycle. It provides mean, worst-case and best-case fluxes with a time resolution of one year. The initial model covered the energy range of 30 keV to 2.5 MeV. A recent update has extended the upper energy range to 5.2 MeV and added three more years of data [Si06]. Figure 23 shows the evolution of the mean electron flux over about 2.5 solar cycles for the complete electron energy range of the satellite data. It is seen that the lower energies show relatively little variation with time, while the higher energies tend to reach their maximum flux during the declining phase of the solar cycle.
Figure 23. Time and energy dependence of the mean electron flux at geostationary altitudes over about 2.5 solar cycles [Si06].

It is interesting to see how these more recent models compare with the traditional AE-8 model for common orbits. Keep in mind that AE-8 is supposed to represent the average flux for the period and orbit of interest. Figure 24 is a comparison of the average electron fluxes as a function of energy for POLE, CRRESELE and AE-8 predictions for a geostationary orbit. Figure 25 is a similar comparison except that worst-case predictions are presented. Results for the FLUMIC model, a worst-case model, are also shown in Figure 25. It is seen that, generally, the predicted fluxes for AE-8 are rather high compared to the average fluxes predicted by the other models, except at the very lowest and very highest energies. For the worst-case predictions in Figure 25, there is a rather large spread in the results at low energies, but the predictions converge for energies beyond about 1 MeV.
Figure 24. Model comparisons for average electron fluxes of POLE and CRRESELE at geostationary altitudes to AE-8 [La05].
Figure 25. Model comparisons for worst case electron fluxes of POLE, CRRESELE and FLUMIC at geostationary altitudes to AE-8 [La05].

Finally, Fig. 26 compares the POLE model at solar maximum and solar minimum with AE-8 for a geostationary orbit. There is no distinction between these two periods in the AE-8 model at geostationary altitudes. The POLE model shows little difference between solar maximum and solar minimum at low energies but shows higher fluxes during solar minimum at higher energies.
Figure 26. Comparison of the POLE model to AE-8 at geostationary altitudes for solar maximum and solar minimum conditions [La05].
It is seen that in geostationary orbit, the predictions of AE-8 are generally higher than the average flux predictions of more recent models. In fact, AE-8 is more similar to some of the worst-case flux models than to the average flux models. Other comparisons for elliptical MEO are shown in Lauenstein [La05].
IV. Galactic Cosmic Rays

A. General Characteristics
Galactic cosmic rays (GCR) are high-energy charged particles that originate outside of our solar system and are believed to be remnants from supernova explosions. Some general characteristics are listed in Table 3. They are composed mainly of hadrons, the abundances of which are listed in the table. A more detailed look at the relative abundances is shown in Figure 27. All naturally occurring elements in the Periodic Table (up through uranium) are present in GCR, although there is a steep drop-off in abundance for atomic numbers higher than iron (Z = 26). Energies can be as high as 10^11 GeV, although the acceleration mechanisms that produce such high energies are not understood. Fluxes are generally a few cm^-2 s^-1 and vary with the solar cycle. Typical GCR energy spectra for a few of the major elements during solar maximum and solar minimum are shown in Figure 28. It is seen that the spectra tend to peak around 1 GeV per nucleon. The flux of ions with energies less than about 10 GeV per nucleon is modulated by the magnetic field in the sun and solar wind. During the high-activity solar maximum period there is significantly more attenuation of the flux, resulting in the spectral shapes shown in Figure 28.

Table 3. Characteristics of Galactic Cosmic Rays.

  Hadron Composition:  87% protons; 12% alphas; 1% heavier ions
  Energies:            Up to 10^11 GeV
  Flux:                1 to 10 cm^-2 s^-1
  Radiation Effects:   SEE
  Metric:              LET
Figure 27. Abundances of GCR up through Z = 28.
Figure 28. GCR energy spectra for protons, helium, oxygen and iron during solar maximum and solar minimum conditions [Ba96a].

SEE are the main radiation effects caused by GCR in microelectronics and photonics. The metric traditionally used to describe heavy ion induced SEE is linear energy transfer (LET). LET is the energy lost by the ionizing particle per unit path length in the sensitive volume. For SEE studies the energy loss per unit path length is usually divided by the material density, so that path length is expressed as an areal density; the units of LET commonly used are then MeV-cm^2/mg. For SEE analyses, energy spectra such as those shown in Figure 28 are often converted to LET spectra. Such integral LET spectra for solar maximum and solar minimum conditions are shown in Figure 29. These spectra include all elements from protons up through uranium. The ordinate gives the flux of particles that have an LET greater than the corresponding value shown on the abscissa. Given the dimensions of the sensitive volume, this allows the flux of particles that deposit a given amount of charge or greater to be calculated in a simple approximation.
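The charge estimate behind this simple approximation rests on the fact that in silicon about 3.6 eV is expended per electron-hole pair, so roughly 22.5 MeV of deposited energy corresponds to 1 pC of charge. The sketch below applies this common rule of thumb (illustrative only; real SEE analysis also accounts for track structure and collection efficiency):

```python
SI_DENSITY_MG_CM3 = 2320.0  # density of silicon in mg/cm^3
MEV_PER_PC = 22.5           # ~3.6 eV per e-h pair => ~22.5 MeV per pC in Si

def deposited_charge_pc(let_mev_cm2_per_mg, path_um):
    """Charge (pC) liberated along a chord of the sensitive volume by an
    ion of the given LET (MeV-cm^2/mg) traversing path_um micrometers of
    silicon, ignoring energy-loss straggling."""
    path_cm = path_um * 1e-4
    energy_mev = let_mev_cm2_per_mg * SI_DENSITY_MG_CM3 * path_cm
    return energy_mev / MEV_PER_PC

# e.g., an LET of 10 MeV-cm^2/mg over a 2 um chord deposits ~4.6 MeV,
# i.e. roughly 0.2 pC of collected charge
```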
Figure 29. Integral LET spectra for GCR during solar maximum and solar minimum.

The LET spectra shown in Figure 29 are applicable to geosynchronous and interplanetary missions where there is no geomagnetic attenuation. The earth's magnetic field, however, provides significant protection. Because of the basic interaction of charged particles with a magnetic field, incident charged particles tend to follow the geomagnetic field lines. Near the equator the field lines tend to be parallel to the earth's surface, so all but the most energetic ions are deflected away. In the polar regions the field lines tend to point toward the earth's surface, which allows much deeper penetration of the incident ions. The effect of the geomagnetic field on the incident GCR LET spectrum during solar minimum is discussed for various orbits in [Ba97].
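The geomagnetic filtering described above is commonly summarized by a cutoff rigidity, the minimum magnetic rigidity a particle needs to reach a given location. For a centered dipole, Störmer theory gives the frequently quoted vertical cutoff approximation R_c ≈ 14.9/L^2 GV. A small sketch (illustrative only; accurate cutoffs require particle tracing through a realistic field model):

```python
def vertical_cutoff_gv(L):
    """Approximate vertical geomagnetic cutoff rigidity (GV) on the
    L-shell, from Stormer dipole theory: R_c ~ 14.9 / L^2."""
    return 14.9 / (L * L)

# Near the equator (L ~ 1) only rigidities above ~15 GV arrive vertically,
# while at high latitudes (large L) the cutoff drops toward zero and
# nearly the full GCR spectrum penetrates.
```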
B. Galactic Cosmic Ray Models
The original Cosmic Ray Effects on MicroElectronics (CREME) suite of programs by Adams [Ad87] was developed specifically for microelectronics applications. It turned out to be a very useful and popular tool and has been updated since then. CREME96 is the most recent version [Ty97] and uses the GCR model of Moscow State University (MSU) [Ny96a]. In principle the MSU model is similar in approach to a GCR model that was developed independently at NASA by Badhwar and O'Neill [Ba96a]. Both models are based on the diffusion-convection theory of solar modulation [Pa85]. This is used to describe the penetration of cosmic rays into the heliosphere from outside and their transport to near earth at 1 Astronomical Unit (AU). The solar modulation is used as a basis to describe the variation of GCR energy spectra over the solar cycle, as shown in Figure 28. However, the implementation of the solar modulation theory in the two models is different. The Badhwar and O'Neill model estimates the modulation level from GCR measurements at 1 AU. Correlations to ground-based neutron monitor counting rates are then made to establish long-term predictive capability. The MSU model is less direct and uses multi-parameter fits to ultimately relate solar cycle variations in GCR intensity to observed sunspot numbers. Comparisons of the GCR proton and alpha particle spectra of the two models above, plus that used in the QinetiQ Atmospheric Radiation Model (QARM), show discrepancies among all three models for narrow time ranges [Le06]. Examples of this are shown in Figure 30 for protons. This is not surprising considering that the details of the solar modulation implementations are different. However, the models give similar predictions for the total fluence over the course of a solar cycle.
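A widely used simplification of the diffusion-convection theory (not the specific implementation of either model above) is the force-field approximation, which compresses the modulation into a single potential φ. For an ion of charge number Z and mass number A, the spectrum at 1 AU is related to the local interstellar spectrum J_LIS by:

```latex
J(E) \;=\; J_{\mathrm{LIS}}(E+\Phi)\,
\frac{E\,(E+2E_{0})}{(E+\Phi)\,(E+\Phi+2E_{0})},
\qquad
\Phi = \frac{Z}{A}\,e\phi ,
```

where E is the kinetic energy per nucleon and E_0 is the nucleon rest energy (about 938 MeV). Typical literature values of φ are a few hundred MV near solar minimum, rising to roughly 1000 MV or more near solar maximum, which reproduces the low-energy suppression seen in Figure 28.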
Figure 30. GCR proton energy spectra predicted by the MSU, Badhwar and O'Neill, and QARM models for two different dates [Le06].

The recent high-quality measurements of GCR heavy ion energy spectra taken on the ACE satellite make possible a stringent test of the GCR models. Comparisons of model results and the ACE data for the 1997 solar minimum period are shown in Figure 31 for 4 of the major elements in the energy range of about 50 to a few hundred MeV per nucleon. It is seen that both models yield good results for heavy ions. Over the range of data shown, the NASA model of Badhwar and O'Neill tends to have a more accurate spectral shape, while the MSU model tends to show a smaller root-mean-square deviation from the data.
Figure 31. Comparison of the NASA model of Badhwar and O'Neill and the MSU model to measurements made with instrumentation onboard the ACE satellite during 1997 [Da01].

A recent development led by the California Institute of Technology is to use a transport model of GCR through the galaxy preceding the penetration and subsequent transport in the heliosphere [Da01]. During the initial propagation of GCR through the galaxy, use is made of knowledge of the astrophysical processes that determine the composition and energy spectra of GCR. Comparisons of the fitted model spectra to the ACE satellite measurements are shown in Figure 32. The elements C and Fe are GCR primaries, while B, Sc, Ti and V are GCR secondaries produced by fragmentation of primaries on interstellar H and He. The goal of this new approach is to provide an improved description of GCR composition and energy spectra throughout the solar cycle.
Figure 32. The new approach of the California Institute of Technology to describe GCR energy spectra compared to the ACE data during 1997 [Da01].
V. Solar Particle Events

A. General Characteristics
It is believed that there are two categories of solar particle events and that each one accelerates particles in a distinct manner. Solar flares result when the localized energy storage in the coronal magnetic field becomes too great and causes a burst of energy to be released. They tend to be electron rich, last for hours, and have an unusually high 3He content relative to 4He. A Coronal Mass Ejection (CME), on the other hand, is a large eruption of plasma (a gas of free ions and electrons) that drives a shock wave outward and accelerates particles. CMEs tend to be proton rich, last for days, and have a small 3He content relative to 4He. A review article by Reames gives a detailed account of the many observed differences between solar flares and CMEs [Re99]. CMEs are the type of solar particle events that are responsible for the major disturbances in interplanetary space and the major geomagnetic disturbances at earth when they impact the magnetosphere. The total mass of ejected plasma in a CME is generally around 10^15 to 10^17 grams. Its speed can vary from about 50 to 1200 km/s with an average of around 400
km/s. It can take anywhere from about 12 hours to a few days to reach the earth. Table 4 lists some further general characteristics of CMEs.

Table 4. Characteristics of CMEs.
Hadron composition: 96.4% protons, 3.5% alphas, ~0.1% heavier ions
Energies: up to ~GeV/nucleon
Integral fluence (>10 MeV/nucleon): >10^9 cm^-2
Peak flux (>10 MeV/nucleon): >10^5 cm^-2 s^-1
Radiation effects: TID, DD, SEE
All naturally occurring chemical elements ranging from protons to uranium are present in solar particle events. They can cause permanent damage such as TID and DD, due mainly to the proton and possibly the alpha particle component. Just because the heavy ion content is a small percentage does not mean it can be ignored. Heavy ions, as well as protons and alpha particles in solar particle events, can cause both transient and permanent SEE. Figures 33 and 34 illustrate the periodic yet statistical nature of solar particle events. They are plots of the daily solar proton fluences measured by the IMP-8 and GOES series of spacecraft over an approximately 28 year period. Figure 33 shows > 0.88 MeV fluences while Figure 34 shows > 92.5 MeV fluences. The solar maximum and solar minimum time periods are shown in the figures to illustrate the dependence on solar cycle for both low-energy and high-energy protons.
Figure 33. Daily fluences of > 0.88 MeV protons due to solar particle events between approximately 1974 and 2002.
Figure 34. Daily fluences of > 92.5 MeV protons due to solar particle events between approximately 1974 and 2002. The available solar particle data that cover the largest period of time are for protons. Since the available solar heavy ion data are not nearly as extensive, solar proton models and solar heavy ion models will be discussed separately.

B. Solar Proton Models
Sections B1 – B5 describe the application of probabilistic models to solar proton event data, including the origin of the models. This will be done in a sequence that emphasizes the construction of a set of tools that are useful to the design engineer starting from the basics. Section B1 describes the distribution of event magnitudes. B2 and B3 describe modeling cumulative fluences over the course of a mission. B4 discusses worst-case events during a mission. Finally, B5 describes a model that has implications for the energy release and predictability of events. It indicates a potential new direction toward a physically based model for solar proton events.

1. The Maximum Entropy Principle and the Distribution of Solar Proton Event Magnitudes

Given that the occurrence of solar particle events is a stochastic phenomenon, it is important to accurately model the distribution of event magnitudes. However, in general it can be rather difficult to select a probability distribution for the situation where the data are limited. A number of empirical distributions have been assumed to represent the event magnitudes. For example, lognormal distributions [Ki74], [Fe91] and
power function distributions [Ga96], [Ny99] have been used. The lognormal distribution describes the large events well but underestimates the probability of smaller events. On the other hand, power functions describe the smaller events well but overestimate the probability of larger events. This section describes a method for making arguably the best selection of a probability distribution for a limited set of data that is compatible with known information about the distribution. The Maximum Entropy Principle was developed by E.T. Jaynes [Ja57] using the concept of entropy originated by Shannon [Sh49]. Jaynes showed in his studies of statistical mechanics that the usual statistical distributions of the theory could be derived by what became known as the Maximum Entropy Principle. This led Jaynes to re-interpret statistical mechanics as a form of statistical inference rather than a physical theory. It established the principle as a procedure for making an optimal selection of a probability distribution when the data are incomplete. Entropy is defined mathematically the same way as in statistical mechanics but for this purpose it is a measure of the probability distribution's uncertainty. The principle states that the distribution that should be selected is the one that maximizes the entropy subject to the constraints imposed by available information. This choice results in the least biased distribution in the face of missing information. Choosing the distribution with the greatest entropy avoids the arbitrary introduction or assumption of information that is not available. It can therefore be argued that this is the best choice that can be made using the available data. The probability distribution's entropy, S, is defined [Ja57], [Ka89]

S = −∫ p(M) ln[p(M)] dM
(1)
where p(M) is the probability density of the random variable M. For the case of solar particle event fluences, M is conveniently taken as the base 10 logarithm of the event fluence. A series of mathematical constraints are imposed upon the distribution, drawing from known information. In this case the constraints are [Xa99]:

a) The distribution can be normalized.
b) The distribution has a well-defined mean.
c) The distribution has a known lower limit in the event fluence. This may correspond to a detection threshold, for example.
d) The distribution is bounded and consequently infinitely large events are not possible.

The resulting system of equations is used along with equation (1) to find the solution p(M) that maximizes S. This has been worked out for many situations [Ka89] and can also be solved using the Lagrange multiplier technique [Tr61]. Using this procedure the following result for solar proton event fluences has been obtained for the solar maximum time period [Xa99]:
N = Ntot (φ^−b − φmax^−b) / (φmin^−b − φmax^−b)

(2)
where N is the number of events per solar maximum year having a fluence greater than or equal to φ, Ntot is the total number of events per solar maximum year having a fluence greater
than or equal to φmin, −b is the index of the power function, and φmax is the maximum event fluence. Equation (2) is a truncated power function in the event fluence. It behaves like a power function with an index of −b for fluences well below φmax and falls off rapidly as φ approaches φmax. Figure 35 shows the integral distribution of > 30 MeV solar proton event data compared to the best fit to equation (2). The data are from the 21 solar maximum years during solar cycles 20 – 22. It is seen that the probability distribution derived from the maximum entropy principle describes the data quite well over its entire range. This strong agreement indicates that this probability distribution captures the essential features of a solar proton event magnitude distribution. It is a power function for small event sizes and falls off rapidly for very large events. The interpretation of the maximum fluence parameter φmax is interesting in itself and will be discussed further in section B4.
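As a quick numerical illustration, the truncated power function of equation (2) can be evaluated directly. This is a sketch in Python; the parameter values (Ntot, b, φmin, φmax) are illustrative placeholders, not the fitted values from [Xa99]:

```python
def events_per_year(phi, n_tot=6.3, b=0.36, phi_min=1e5, phi_max=1.3e10):
    """Truncated power function of eq. (2): expected number of events per
    solar-maximum year with fluence >= phi (cm^-2). Parameter values are
    illustrative placeholders, not the fitted values of [Xa99]."""
    num = phi ** (-b) - phi_max ** (-b)
    den = phi_min ** (-b) - phi_max ** (-b)
    return n_tot * num / den
```

Note the two limiting behaviors: at the threshold φmin the function returns the full event rate Ntot, and it vanishes at the upper limit φmax, reproducing the rapid fall-off for very large events.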
Figure 35. Comparison of the maximum entropy theory result for the distribution to 3 solar cycles of data during solar maximum [Xa99].
2. Cumulative Fluence During Solar Maximum

During a space mission the solar particle event fluence that accumulates during the solar maximum time period is often the dominant contribution to the total fluence. Thus, much prior work focuses on this period of the solar cycle. A solar cycle typically lasts about 11 years. A commonly used definition of the solar maximum period is the 7-year period that spans a starting
point 2.5 years before and an ending point 4.5 years after a time defined by the maximum sunspot number in the cycle [Fe93]. The remainder of the cycle is considered solar minimum. Once the initial or underlying distribution of event sizes during solar maximum such as that shown in Figure 35 is known, it can be used to determine the accumulated fluence for a period of time during solar maximum. Due to the stochastic nature of the events, confidence level approaches are often used so that risk-cost-performance tradeoffs can be evaluated by the designer. The first such model was based on King’s analysis of >10 to >100 MeV protons during solar cycle 20 [Ki74], [St74]. One “anomalously large” event, the well-known August 1972 event, dominated the fluence of this cycle so the model predicts the number of such events expected for a given mission length at a specified confidence level. Using additional data, a model from JPL emerged in which Feynman et al. showed that the magnitude distribution of solar proton events during solar maximum is actually a continuous distribution between small events and the extremely large August 1972 event [Fe90]. Under the assumptions that this underlying distribution can be approximated by a lognormal distribution and that the occurrence of events is a Poisson process, the JPL Model uses Monte Carlo simulations to calculate the cumulative fluence during a mission at a given confidence level [Fe90], [Fe93]. An example of this is shown in Figure 36 for > 30 MeV protons. Thus, according to this model, there is approximately a 10% probability of exceeding a proton fluence of 10^10 cm^-2 for a 3-year period during solar maximum. This corresponds to a 90% confidence level that this fluence will not be exceeded.
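The Monte Carlo procedure behind this kind of confidence-level curve can be sketched in a few lines: draw a Poisson number of events for the mission, draw each event fluence from a lognormal distribution, and read off a percentile of the simulated mission totals. This is a simplified illustration, not the JPL91 code; all parameter values are supplied by the caller as assumptions.

```python
import math
import random

def mission_fluence_at_confidence(mu, sigma, events_per_year, years,
                                  conf=0.90, trials=20000, seed=1):
    """Monte Carlo sketch of a JPL-style calculation. The event count is
    Poisson(events_per_year * years); each event fluence is lognormal,
    with mu and sigma the mean and std. dev. of log10(fluence). Returns
    the cumulative fluence not exceeded at the given confidence level."""
    rng = random.Random(seed)
    lam = events_per_year * years
    totals = []
    for _ in range(trials):
        # Poisson draw by CDF inversion (guard bounds the loop)
        n, p, u = 0, math.exp(-lam), rng.random()
        c = p
        while u > c and n < 1000:
            n += 1
            p *= lam / n
            c += p
        totals.append(sum(10 ** rng.gauss(mu, sigma) for _ in range(n)))
    totals.sort()
    return totals[int(conf * trials)]
```

Raising the confidence level simply moves the readout further into the upper tail of the simulated distribution, which is why the 90% curve in Figure 36 lies well above the median mission fluence.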
Figure 36. JPL91 solar proton fluence model for > 30 MeV protons. The misprint of x-axis units has been corrected from the original reference [Fe93].
More recently, several different techniques have been used to demonstrate that the cumulative fluence distribution during solar maximum is consistent with a lognormal distribution for periods of time up to at least 7 years [Xa00]. This was shown using the Maximum Entropy Principle, Bootstrap-like methods [Ef93] and by Monte Carlo simulations using the initial distribution shown in Figure 35. Thus the cumulative fluence distribution is known once the parameters of the lognormal distribution are determined. These parameters depend on the proton energy range and the mission duration. They have been determined from the available satellite data and well-known relations for Poisson processes. Figure 37 shows examples of the annual proton fluences for >1, >10 and >100 MeV protons plotted on lognormal probability paper. This paper is constructed so that if a distribution is lognormal, it will appear as a straight line. It further illustrates that the cumulative fluences are well described by lognormal distributions. The fitted data can also be used to determine the lognormal parameters for different periods of time and are used in the ESP Model [Xa99a].
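The "straight line on lognormal probability paper" test amounts to plotting the sorted base 10 logarithms of the fluences against standard normal quantiles of the plotting positions. A minimal sketch with synthetic data (the lognormal parameters here are illustrative, not fitted values):

```python
import math
import random
from statistics import NormalDist

def probability_paper_points(samples):
    """Return (normal quantiles, sorted log10 samples). A lognormal
    sample falls on a nearly straight line in these coordinates."""
    xs = sorted(math.log10(s) for s in samples)
    n = len(xs)
    nd = NormalDist()
    qs = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]
    return qs, xs

# synthetic annual fluences from an assumed lognormal distribution
rng = random.Random(0)
sample = [10 ** rng.gauss(9.0, 0.8) for _ in range(200)]
q, x = probability_paper_points(sample)
```

A high linear correlation between the two returned lists is the numerical analog of the straight lines seen in Figure 37.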
Figure 37. Cumulative annual solar proton event fluences during solar maximum periods for 3 solar cycles plotted on lognormal probability paper. The straight lines are results for the ESP model [Xa00]. Figure 38 shows a representative comparison of the models discussed above. In addition it shows an update of the ESP Model, called PSYCHIC [Xa04], in which the data were extended to cover the time period from 1966 to 2001 and the proton energy range extended to over 300
MeV. Results shown are for the 90% confidence level and for a mission length of two solar maximum years. In all cases the energy range shown corresponds to the data range on which the statistical models are based, i.e. no extrapolations are used. Thus, the model differences seen are an indicator of model uncertainties. The spectral shape for the King Model is based on the August 1972 event and is therefore somewhat different than the other model results. The JPL91, ESP, and PSYCHIC models all agree reasonably well for their common 1 to 60 MeV energy range. Note that extrapolation of the JPL91 Model beyond 60 MeV results in an overestimate of the mission fluence. A significant advantage of the PSYCHIC model is its broad energy range and incorporation of several sources of satellite data.
Figure 38. Comparison of different models of cumulative solar proton event fluence during solar maximum for a 2 year period and the 90% confidence level [Xa04].
3. Cumulative Fluence During Solar Minimum

It has often been assumed that the solar particle event fluence during the solar minimum time period can be neglected. However, for missions that are planned mostly or entirely during solar minimum it is useful to have guidelines for solar particle event exposures, especially considering the current frequent use of COTS microelectronics, which can exhibit rather low total dose failure levels.
Due to the relative lack of events during solar minimum, confidence level based models are difficult to construct for this period. However, recent solar minimum time periods have been analyzed to obtain three average solar proton flux levels that allow varying degrees of conservatism to be used [Xa04]. These flux levels are included in the PSYCHIC model and are shown in Figure 39. First there is the average flux vs. energy spectrum over all three solar minimum periods that occurred between 1966 and 2001. A more conservative choice is the highest flux level of the three periods or “worst solar minimum period”. Finally, the most conservative choice is the “worst solar minimum year”. This corresponds to the highest flux level over a one year solar minimum time period. It is the one-year interval beginning April 23, 1985 and ending April 22, 1986. Once the choice of a flux-energy spectrum is made the cumulative fluence-energy spectrum is calculated using the mission time period during solar minimum.
Figure 39. Solar proton flux vs. energy spectra for the 3 solar minimum model spectra in the PSYCHIC model. Also shown for comparison purposes is the average proton flux during solar maximum [Xa04]. For comparison purposes, Figure 39 also shows the average solar proton flux during solar maximum for the time period 1966 to 2001. It can be concluded that during the solar minimum time period the event frequencies are generally lower, event magnitudes are generally smaller and the energy spectra are generally softer. Physically this is consistent with the fact that the sun is in a less disturbed state during solar minimum.
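The last step above is simple arithmetic: once a solar minimum flux-energy spectrum is chosen, the cumulative fluence at each energy is the flux multiplied by the mission time spent in solar minimum. A worked example with a hypothetical flux value:

```python
def solar_min_fluence(avg_flux_cm2_s, mission_days):
    """Cumulative fluence (cm^-2) from a chosen solar-minimum average
    flux (cm^-2 s^-1) over the mission days spent in solar minimum."""
    return avg_flux_cm2_s * mission_days * 86400.0

# e.g. a hypothetical 0.05 cm^-2 s^-1 average flux over a 2-year mission
fluence = solar_min_fluence(0.05, 730)
```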
4. Extreme Value Theory and Worst Case Events

An important consideration for spacecraft designers is the worst-case solar particle event that occurs during a mission. One approach is to design to a well-known large event such as that which occurred in October 1989 [Ty97], or a hypothetical one such as a composite of the February 1956 and August 1972 events [An94]. Energy spectra of some of the most severe solar proton events during solar cycles 19-22 are shown in Figure 40. In addition, there are event classification schemes in which the magnitudes range from “small” to “extremely large” that can be helpful for design purposes [St96], [Ny96].
Figure 40. Some of the most severe solar proton event energy spectra in solar cycles 19-22 [Wi99]. However, more useful information can be provided to the designer if a confidence level associated with the worst case event is known for a given mission length. The designer can then more systematically balance risk-cost-performance tradeoffs for the mission in a manner similar to what is done for cumulative fluences. Once the initial probability distribution such as that shown in Figure 35 is determined it becomes possible to construct such a statistical model using extreme value theory. In the usual central value statistics, the distribution for a random variable is characterized by its mean value and a dispersion indicator such as the standard deviation. Extreme value statistics, pioneered by Gumbel [Gu58], focuses on the largest or smallest values taken on by the distribution. Thus, the “tails” of the distribution are the most significant. For the present
applications the concern is with the largest values. An abbreviated description of a few useful relations from extreme value theory is given here. Further detail can be found elsewhere [Gu58], [An85], [Ca88]. Suppose that a random variable, x, is described by a probability density p(x) and corresponding cumulative distribution P(x). These are referred to as the “initial” distributions. If a number of observations, n, are made of this random variable, there will be a largest value within the n observations. The largest value is also a random variable and therefore has its own probability distribution. This is called the extreme value distribution of largest or maximum values. These probability distributions can be calculated exactly. The probability density is
f_max(x; n) = n [P(x)]^(n−1) p(x)

(3)

and the cumulative distribution is

F_max(x; n) = [P(x)]^n

(4)
An example of the characteristics of such a distribution is shown in Fig. 41 for n-values of 10 and 100 compared to the initial distribution (n = 1), taken to be Gaussian. Note that as the number of observations increases the distributions become more highly peaked and skewed to the right.
Figure 41. Extreme value distributions for n-values of 10 and 100 compared to the initial Gaussian distribution [Bu88].
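Equations (3) and (4) can be evaluated directly for a Gaussian initial distribution like that in Figure 41. A sketch using Python's statistics.NormalDist:

```python
from statistics import NormalDist

def extreme_pdf(x, n, dist=NormalDist()):
    """Exact density of the largest of n observations (eq. 3):
    f_max(x; n) = n * P(x)**(n-1) * p(x)."""
    return n * dist.cdf(x) ** (n - 1) * dist.pdf(x)

def extreme_cdf(x, n, dist=NormalDist()):
    """Exact cumulative distribution of the maximum (eq. 4):
    F_max(x; n) = P(x)**n."""
    return dist.cdf(x) ** n
```

Scanning extreme_pdf over a grid for n = 1, 10, 100 reproduces the behavior in the figure: the mode of the maximum moves to the right and the peak sharpens as n grows.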
As n becomes large, the exact distribution of extremes may approach a limiting form called the asymptotic extreme value distribution. If the form of the initial distribution is not known but sufficient experimental data are available, the data can be used to derive the asymptotic extreme value distribution by graphical or other methods. For practical applications there are 3 asymptotic extreme value distributions of maximum values: the type I (Gumbel), type II and type III distributions. Examples of extreme value modeling of environmental phenomena such as floods, wave heights, earthquakes and wind speeds can be found in a number of places [Gu58], [An85], [Ca88]. This modeling was first applied to radiation effects problems by Vail, Burke and Raymond in a study of high density memories [Va83]. It has turned out to be a very useful tool for studying the response of large device arrays to radiation. One reason is that the array of devices will fail over a range of radiation exposures and it is important to determine at what point the first failure is likely to occur. Other radiation effects applications have been found for arrays of gate oxides [Va84], [Xa96], sensor arrays [Bu88], [Ma89] and EPROMs [Mc00]. For the application to solar particle events the interest is in the worst-case event that will occur over a period of T solar maximum years. Since the number of events that can occur over this period is variable, the expression for the extreme value distribution must take this into account. Assuming that event occurrence is a Poisson process [Fe93], it can be shown that the cumulative, worst-case distribution for T solar maximum years is [Xa98a]

F_max(M; T) = exp{−N_tot T [1 − P(M)]}
(5)
where P(M) is the initial cumulative distribution, which is closely related to equation (2) [Xa99]. Figure 42 shows results for worst-case event fluences for mission lengths of 1, 3, 5 and 10 solar maximum years. The ordinate represents the probability that the worst-case event encountered during a mission will exceed the > 30 MeV proton fluence shown on the abscissa. Also shown in the figure by the vertical line denoted by “Design Limit” is the maximum event fluence parameter, φmax. As will be discussed next, this parameter can be used as an upper limit guideline. Results analogous to these have also been obtained for peak solar proton fluxes during events [Xa98], which are very relevant for SEE. The event fluence magnitudes are discussed here because of the interesting comparison that can be made with historical data to help validate the model.
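Combining the truncated power function of equation (2) with equation (5) gives the worst-case exceedance probability directly, since Ntot[1 − P(φ)] is just the expected number of events per year larger than φ. In this sketch the parameter values are illustrative placeholders rather than the fitted ones:

```python
import math

def p_worst_exceeds(phi, T, n_tot=6.3, b=0.36, phi_min=1e5, phi_max=1.3e10):
    """Probability that the worst event in T solar-maximum years exceeds
    fluence phi, assuming Poisson occurrence (eq. 5) and the truncated
    power-function initial distribution (eq. 2). N(>phi) is the expected
    yearly rate of events larger than phi, so F_max = exp(-N(>phi)*T)
    and the exceedance probability is 1 - F_max. Parameters are
    illustrative placeholders."""
    n_gt = n_tot * (phi ** (-b) - phi_max ** (-b)) / \
           (phi_min ** (-b) - phi_max ** (-b))
    return 1.0 - math.exp(-n_gt * T)
```

As in Figure 42, the exceedance probability falls with increasing fluence, rises with mission length, and drops to zero at the upper limit parameter φmax.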
Figure 42. Probability model for worst-case event fluences expected during the indicated time periods during solar maximum [Xa99]. A unique feature of this model is the upper limit parameter for a solar proton event fluence, φmax. For the case of > 30 MeV protons this turns out to be 1.3 x 10^10 cm^-2. However, this is a fitted parameter that was determined from limited data. There must be some amount of uncertainty associated with the parameter. Thus, it should not be interpreted as an absolute upper limit. One method of estimating its uncertainty is the parametric “bootstrap” technique [Ef93]. This method attempts to assess the uncertainty of the parameter due to the limited nature of the data. The idea is to randomly select event fluences according to the distribution given by equation (2) until the number of events in the measured distribution has been simulated. The equation is then fitted to the simulated data, and the parameters extracted. The procedure is repeated, and each time the parameters have different values. After a number of simulations, the standard deviation of the parameter of interest can be determined. This technique showed the upper limit parameter plus one standard deviation equaled 3.0 x 10^10 cm^-2 [Xa99]. A reasonable interpretation for the upper limit fluence parameter is that it is the best value that can be determined for the largest possible event fluence, given limited data. It is not an absolute upper limit but is a practical and objectively determined guideline for use in limiting design costs. Constraints on the upper limit of solar proton event sizes can be put on models as a result of studies of historical-type evidence. Relatively small fluctuations of 14C observed in tree rings over a long period of time [Li80] and measured radioactivity in lunar rocks brought back during the Apollo missions [Re97] are consistent with the upper limit parameter but are not especially restrictive.
The strictest constraint to date comes from analysis of approximately 400 years of
the nitrate record in polar ice cores [Mc01]. The largest event reported was estimated to be 1.9 x 10^10 cm^-2 for > 30 MeV protons. This was the Carrington event that occurred in September 1859. Fig. 43 shows a bar graph of the upper limit parameter, φmax, for > 30 MeV protons including the one standard deviation uncertainty that was estimated from the parametric bootstrap method. This is compared with the reported value for the Carrington event. It is seen that these quantities agree well within the uncertainties. Also shown for reference is the value for the October 1989 solar particle event that is commonly used as a worst-case event.
Figure 43. Comparison of the > 30 MeV solar proton event fluences of the October 1989 event, the 1859 Carrington event as determined from ice core analysis [Mc01], and the model upper limit parameter plus one standard deviation shown by the error bar [Xa99].
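The parametric bootstrap described above can be sketched as follows. To keep the example short, the statistic extracted from each simulated data set is simply the largest simulated event rather than a full nonlinear refit of equation (2), so this only illustrates the mechanics of the method; all parameter values are placeholders.

```python
import math
import random

def sample_event(rng, b=0.36, phi_min=1e5, phi_max=1.3e10):
    """Draw one event fluence from the truncated power-function
    distribution of eq. (2) by inverting its CDF (placeholder params)."""
    a_min, a_max = phi_min ** (-b), phi_max ** (-b)
    u = rng.random()
    return (a_min - u * (a_min - a_max)) ** (-1.0 / b)

def bootstrap_spread(n_events=150, trials=500, seed=2):
    """Parametric bootstrap sketch: regenerate the event set many times
    and report the mean and standard deviation of the statistic
    extracted from each trial (here, the largest simulated event)."""
    rng = random.Random(seed)
    maxima = []
    for _ in range(trials):
        events = [sample_event(rng) for _ in range(n_events)]
        maxima.append(max(events))
    mean = sum(maxima) / trials
    var = sum((m - mean) ** 2 for m in maxima) / (trials - 1)
    return mean, math.sqrt(var)
```

The spread returned by such a procedure is the sense in which the φmax error bar in Figure 43 reflects the limited size of the observed data set.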
5. Self-Organized Criticality and the Nature of the Energy Release Process Organizations such as NASA, ESA and others have put substantial resources into studies of the sun’s properties as related to the occurrence of solar particle events. One of the main goals is to find a reliable predictor of events. Despite this significant international effort, solar particle events can occur suddenly and without obvious warning. In addition to potential problems with electronic systems and instrumentation, this is an especially serious concern for new space initiatives that plan to send manned spacecraft to the moon, Mars or interplanetary space. Thus, there is strong motivation to develop predictive methods for solar particle events. It is hoped that the apparent stochastic character can be overcome and predictability achieved if precursor phenomena such as x-ray flares or magnetic topology signatures can be properly interpreted or if the underlying mechanisms are identified. This section discusses the very basic
question of whether the nature of the energy release process for solar particle events is deterministic or stochastic. In other words, is it possible to predict the time of occurrence and magnitude of solar particle events or are probabilistic methods necessary? The self-organized criticality (SOC) model is a phenomenological model originated by Bak, Tang and Wiesenfeld [Ba87] that can give insight into the basic nature of a system. It postulates that a slow continuous build-up of energy in a large interactive system causes the system to evolve to a critical state. A minor, localized disturbance can then start an energy-releasing chain reaction. Chain reactions and therefore energy-releasing events of all sizes are an integral part of the dynamics, leading to a “scale invariant” property for event sizes. This scale invariance results in power function distributions for the density functions of event magnitudes and waiting times between events. As a result of this basic nature it is generally assumed in the literature that accurate predictions of the magnitude and time of occurrence of such events are not possible. A system in a SOC state is therefore generally assumed to be probabilistic in nature. Applications for the theory of SOC have been found in natural phenomena such as earthquakes, avalanches and rainfall. A useful conceptual aid is the sandpile. If sand is dropped one grain at a time to form a pile, the pile soon becomes large enough that grains may slide down it, thus releasing energy. Eventually the slope of the pile is steep enough that the amount of sand added is balanced, on average, by the amount that slides down the pile. The system is then in the critical state. As single grains of sand are subsequently added, a broad range of consequences is possible. Nothing may happen or an avalanche of any size up to a “catastrophic” one may occur.
The dynamics of this interactive system do not allow accurate predictions of when an avalanche will occur or how large it will be. It has recently been shown that the energy release due to solar particle events is consistent with the dynamics of a SOC system [Xa06]. This was based on three analyses of 28 years of solar proton data taken by the IMP-8 and GOES series of satellites. The first is rescaled range (R/S) analysis, a method used to determine if events show long-term correlation. The second is a demonstration of fractal properties of event sizes, which suggests “scale invariant” behavior. The third is an analysis of the integral distribution of fluence magnitudes, which is shown to be a power function. These are hallmark features of systems that exhibit self-organized criticality.
a) Rescaled Range Analysis

Rescaled range (R/S) analysis, originated by Hurst [Hu65], is a method that indicates whether or not events show long-term correlation. The original goal of Hurst was to provide a basis for estimating the optimum size of water storage reservoirs. An optimum size was taken as a reservoir that never ran dry or overflowed. The analysis was based on a history of floods and droughts in the region of interest over a period of many years. For a period of years beginning at time t the cumulative input to the reservoir is

Y_{t+τ} = Σ_{i=t}^{t+τ} X_i

(6)
where the Xi are the observed inputs for a given time interval, i.e. the daily or monthly input. The cumulative deviation for the total observation period of τ years is then
ΔY_{t+τ} = Σ_{i=t}^{t+τ} (X_i − Ȳ_{t+τ})

(7)
where Ȳ_{t+τ} is the mean value of the stochastic quantity X_i. Thus, the cumulative deviation represents the difference between the actual cumulative input to the reservoir at a given time and a cumulative calculation based on the average inflow over the total time period of interest. This analysis permits identification of the maximum cumulative input and the value of the minimum cumulative store, thereby enabling identification of the optimum size of the reservoir. The difference between the maximum and minimum values is customarily identified as the range. In order to compare results for different rivers Hurst rescaled the range by dividing it by the standard deviation of the inputs over the period of the record, τ. It turns out that this rescaled range is given by

R/S = a τ^H
(8)
where a and H are constants [Pe02]. The latter constant is called the Hurst coefficient. It is known that if the inputs are completely random and uncorrelated, the rescaled range should vary as the square root of the elapsed time, i.e. H would equal 0.5. Contrary to this expectation, Hurst found that the rescaled range varied as the 0.7 to 0.8 power of the elapsed time, indicating that the events showed long-term correlation. He found that many other natural phenomena such as rainfall, temperatures, pressures and sunspot numbers had power indices in the same range. In Figure 44 a plot analogous to that used by Hurst to describe flood and drought periods is shown for solar proton daily fluences for the year 1989. The quantity shown on the ordinate is the cumulative deviation expressed in equation (7) and can also be termed the net proton fluence. It is the analog of the reservoir level in Hurst’s analysis. A negative slope on this plot indicates a lack of solar proton events (a “solar proton drought”). When an event occurs there is a rapid increase in the net proton fluence, producing the jagged appearance of the plot. This indicates a build-up of energy with time that is released in bursts.
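The cumulative deviation of equation (7), i.e. the "reservoir level" curve plotted in Figure 44, is just a running sum of deviations from the record mean. A sketch:

```python
def cumulative_deviation(x):
    """Running sum of deviations from the record mean (eqs. 6-7): the
    'net fluence' or reservoir-level curve. Quiet stretches slope
    downward; events produce sudden upward jumps."""
    mean = sum(x) / len(x)
    out, run = [], 0.0
    for xi in x:
        run += xi - mean
        out.append(run)
    return out
```

By construction the curve returns to zero at the end of the record, since the deviations from the record mean sum to zero; the jagged excursions in between are what carry the information about droughts and bursts.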
Figure 44. Cumulative deviation plot of daily solar proton fluences in 1989 [Xa06]. The difference between the maximum and minimum values in Figure 44 is conventionally referred to as the range. When divided by the standard deviation it is the rescaled range. To carry out a complete R/S analysis a number of samples covering different time periods in the total record are used to determine a series of rescaled range values. When R/S values are amenable to this analysis, they yield a straight line when plotted as a function of the period on a log-log scale. As seen in Figure 45 the solar proton data are well described by rescaled range analysis. The power index, H, has been determined using equation (8) to obtain a result of 0.70 [Xa06]. This is typical of those for other natural phenomena and indicates long-term correlation between solar particle events. This can be interpreted as a consequence of the fact that the amount of energy stored in the system, i.e. the sun’s corona, is dependent on the system’s past history.
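A complete R/S analysis can be sketched as follows: compute R/S over non-overlapping windows of several sizes, then fit the log-log slope of equation (8). For an uncorrelated white-noise input the estimate should land near H = 0.5 (small-sample bias tends to push it somewhat higher), in contrast to the 0.70 found for the solar proton data.

```python
import math
import random

def rescaled_range(x):
    """R/S for one record: range of the cumulative deviation divided by
    the standard deviation of the inputs (eqs. 7-8)."""
    n = len(x)
    mean = sum(x) / n
    run, devs = 0.0, []
    for xi in x:
        run += xi - mean
        devs.append(run)
    sd = math.sqrt(sum((xi - mean) ** 2 for xi in x) / n)
    return (max(devs) - min(devs)) / sd

def hurst_exponent(series, windows=(8, 16, 32, 64, 128)):
    """Least-squares slope of log(R/S) vs log(tau), with R/S averaged
    over non-overlapping windows of each size."""
    pts = []
    for w in windows:
        rs = [rescaled_range(series[i:i + w])
              for i in range(0, len(series) - w + 1, w)]
        pts.append((math.log(w), math.log(sum(rs) / len(rs))))
    xm = sum(p[0] for p in pts) / len(pts)
    ym = sum(p[1] for p in pts) / len(pts)
    return (sum((p[0] - xm) * (p[1] - ym) for p in pts)
            / sum((p[0] - xm) ** 2 for p in pts))

# white noise has no long-term correlation, so H should be near 0.5
rng = random.Random(3)
h_white = hurst_exponent([rng.gauss(0.0, 1.0) for _ in range(2048)])
```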
Figure 45. Rescaled range analysis of > 0.88 MeV protons for 1989 [Xa06].
b) Fractal Behavior

A significant feature of a system in a SOC state is that when its features are viewed on a different scale the character of the appearance does not change. This is closely related to Mandelbrot’s concept of fractal geometry [Ma83], a formulation of the complexity of natural patterns observed in nature, which tend to have similar features regardless of the scale on which they are viewed. Well-known examples are coastlines, snowflakes and galaxy clusters. Figure 46 shows the net proton fluence computed from monthly fluences, as compared to Figure 44, which is based on daily fluences. If the axis units were not visible it would not be possible to distinguish the two figures. For this reason processes of this type have been described in the literature by terms such as “scale invariant”, “self-similar” and “fractal” [Ba96], [Je98], [Sc91]. This scale invariance is further evidence of a SOC system, and suggests the possibility of power function behavior in the fluence magnitudes. In fact, it has been suggested that a fractal can be thought of as a snapshot of a SOC process [Ba91].
Figure 46. Cumulative deviation plot for > 0.88 MeV protons for the time period 1973 to 2001 [Xa06].
c) Power Function Distribution

A necessary characteristic of SOC phenomena is that the number density distribution of event magnitudes is a power function [Ba96], [Je98], [Pe02]. An integral distribution of monthly solar proton fluences for a 28-year period is shown in Figure 47. The ordinate represents the number of occurrences when the monthly fluence exceeds that shown on the abscissa. It is seen that this distribution is a straight line on a semi-logarithmic plot that spans about 4 orders of magnitude. The number density function is [Xa06]

   dN/dΦ = −29.4/Φ     (9)
In this case the density function turns out to be exactly proportional to the reciprocal of the fluence. Thus, the solar event data can be represented by a power function of a type commonly referred to as 1/f [Ba87]. It can therefore be viewed as 1/f noise, also known as flicker noise. It is well known that this type of noise results when the dynamics of a system is strongly influenced by past events. Additionally, it reinforces the results of section B5a. An especially compelling argument can therefore be made that solar particle events are a SOC phenomenon [Xa06].
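Integrating equation (9) makes the connection with the straight line of Figure 47 explicit. Taking the largest fluence in the record, Φ_max, as the upper integration limit (an illustrative choice of endpoint, not from [Xa06]):

```latex
N(>\Phi) \;=\; \int_{\Phi}^{\Phi_{\max}} \frac{29.4}{\Phi'}\, d\Phi'
        \;=\; 29.4 \,\ln\!\left(\frac{\Phi_{\max}}{\Phi}\right)
```

so N(>Φ) is linear in ln Φ with slope −29.4, i.e. a straight line on a semi-logarithmic plot, consistent with Figure 47.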
Figure 47. Integral distribution of monthly solar proton fluences > 1.15 MeV, from 1973 to 2001 [Xa06].

The general behavior of a SOC system is that of a non-equilibrium system driven by a slow, continuous energy input that is released in sudden bursts with no typical size, as indicated by the power function distribution of equation (9). Although research involving SOC is still a developing field and there is much yet to be learned about the sun's dynamics [Lu93], [Bo99], [Ga03], these results strongly suggest that it is not possible to predict that a solar particle event of a given magnitude will occur at a given time. They also suggest a direction toward a more physically based model involving a description of the energy storage and release processes in the solar structure. Such a model might explain useful probabilistic trends, such as why larger and more frequent solar proton events are observed during the declining phase of the solar cycle than during the rising phase [Sh95].
C. Solar Heavy Ion Models
Solar heavy ion models are generally not as advanced as solar proton models due to the large number of heavy ion species, which complicates measurements of individual species. For microelectronics applications, solar heavy ion models are needed primarily to assess SEE. In an attempt to model worst-case events, the original CREME model [Ad87] and subsequently the CHIME model [Ch94] scaled heavy ion abundances to protons for individual events. However, this assumption that the events with the highest proton fluxes should also be heavy ion rich turned out to be inconsistent with subsequent data [Re99] and led to worst-case event models that were too conservative [Mc94]. Modifications of the original CREME code were made in the MACREE model [Ma95] to define a less conservative worst-case solar particle event. MACREE
gives the option of using a model based on the measured proton and alpha particle spectra for the well-known October 1989 event and an abundance model that is 0.25 times the CREME abundances for atomic numbers Z > 2.

A model that originated at JPL [Cr92] characterizes the distribution of 1 to 30 MeV per nucleon alpha particle event fluences using a lognormal distribution in order to assign confidence levels to the event magnitudes. The alpha particle data are based on measurements from the IMP-8 satellite for solar maximum years between 1973 and 1991. For ions heavier than Z = 2 an abundance model is used and the fluxes are scaled to the alpha particle flux for a given confidence level [Mc94].

The current version of the widely used CREME code, CREME96, uses the October 1989 event as a worst-case scenario. It provides 3 levels of solar particle intensity [Ty97]: the "worst week", "worst day" and "peak flux" models, which are based on proton measurements from the GOES-6 and -7 satellites and heavy ion measurements from the University of Chicago Cosmic Ray Telescope (CRT) on the IMP-8 satellite. The most extensive heavy ion measurements in the model are for C, O and Fe ions [Ty96]; the energy spectra of these 3 elements extend out to roughly 1 GeV per nucleon. The remaining elemental fluxes are determined from a combination of measurements limited to 1 or 2 energy bins and abundance ratios.

Comparisons to the CREME96 worst-case models have been made with data taken by the Cosmic Radiation Environment DOsimetry (CREDO) experiment onboard the Microelectronics and Photonics Test Bed (MPTB) between 2000 and 2002 [Dy02]. The data show that 3 major events during this time period approximately equaled the "worst day" model. An example is shown in Figure 48 for an event that occurred in November 2001.
Figure 48. Comparison of a solar heavy ion event that occurred in November 2001 with the CREME96 “worst day” model. The progression of daily intensities is indicated with the peak intensity occurring on day 2929 of the mission.
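The lognormal characterization used in the JPL approach above can be illustrated with a short sketch that computes the event fluence not exceeded at a given confidence level. The function name and the numerical parameter values are hypothetical; they are not taken from [Cr92] or [Mc94].

```python
from statistics import NormalDist

def event_fluence_at_confidence(mu, sigma, confidence):
    """Event fluence Phi not exceeded with probability `confidence`,
    assuming log10(Phi) ~ Normal(mu, sigma): Phi = 10**(mu + sigma*z),
    where z is the standard normal quantile of the confidence level."""
    z = NormalDist().inv_cdf(confidence)
    return 10.0 ** (mu + sigma * z)

# e.g. a 90% confidence-level alpha particle event fluence
# (mu and sigma here are illustrative numbers, not model values)
phi_90 = event_fluence_at_confidence(mu=7.0, sigma=1.0, confidence=0.90)
```

A heavy-ion flux at the same confidence level would then be obtained by scaling this value by the adopted abundance ratio relative to alpha particles.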
The above models can be used to calculate worst-case SEE rates induced by heavy ions. Another quantity of interest is the average SEE rate during a mission, which requires models of cumulative solar heavy ion fluence. Tylka et al. used a Monte Carlo procedure similar to the JPL91 solar proton model [Fe93] to predict cumulative fluences for certain elements during a mission at a specified confidence level [Ty97a]. This was done for 2 broad energy bins each for alpha particles, the CNO group and Fe, based on University of Chicago CRT data taken between 1973 and 1996. The new PSYCHIC model [Xa06a] is based on approximately 1 to 200 MeV per nucleon alpha particle measurements taken onboard the IMP-8 and GOES series of satellites between 1973 and 2001. For Z > 2 heavy ions the energy spectra and abundances relative to alpha particles are determined from measurements taken between 1997 and 2005 by the Solar Isotope Spectrometer (SIS) instrument on the ACE spacecraft for the major elements C, N, O, Ne, Mg, Si, S and Fe. The remaining, less prevalent elements are scaled according to an abundance model using the measured energy spectra of the major elements.
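The Monte Carlo approach for cumulative mission fluence can be sketched as follows. This is a simplified illustration of the JPL91-style procedure, not the actual [Ty97a] code: the Poisson assumption for the number of events, the lognormal event-fluence distribution and all parameter values are ours.

```python
import math
import random

def mission_fluence_at_confidence(events_per_year, mission_years,
                                  mu, sigma, confidence,
                                  trials=20000, seed=1):
    """Draw the number of events in a mission from a Poisson distribution,
    draw each event fluence from a lognormal (log10 fluence ~ Normal(mu,
    sigma)), sum them, and take the requested percentile of the mission
    totals over many trials. All parameters are illustrative."""
    rng = random.Random(seed)
    lam = events_per_year * mission_years
    totals = []
    for _ in range(trials):
        # Poisson sample via Knuth's multiplication method (fine for modest lam)
        n, p, limit = 0, 1.0, math.exp(-lam)
        while p > limit:
            p *= rng.random()
            n += 1
        n -= 1
        totals.append(sum(10.0 ** rng.gauss(mu, sigma) for _ in range(n)))
    totals.sort()
    return totals[int(confidence * (trials - 1))]
```

The returned value is the mission-integrated fluence not exceeded at the requested confidence level; running with a higher confidence on the same trial set always yields a larger (more conservative) fluence.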
VI. Future Challenges
There are many challenges ahead in attempting to model the space radiation environment. A first goal should be to produce more dynamical and more physical models of the environment; the resulting increased understanding should allow more accurate projections to be made for future missions. For trapped particle radiation, this would initially mean developing descriptions, or particle maps, for the various climatological conditions that occur throughout the solar cycle, covering the full range of particle energies and geomagnetic coordinates spanned by the AP-8 and AE-8 models. Ultimately, it would mean developing an accurate description of the source and loss mechanisms of trapped particles, including the influence of magnetic storms on the particle populations.

Galactic cosmic ray models are closely tied to solar activity levels, which modulate the fluxes of the incoming ions. Challenges for these models are to incorporate an improved description of the solar modulation potential and to develop cosmic ray transport models that incorporate knowledge of astrophysical processes.

Solar particle events demonstrate a strongly statistical character. A major challenge for these models is to develop a description of the energy storage and release processes in the solar structure, which would provide a more detailed probabilistic view of the cyclical dependence of event frequencies and magnitudes.

Developing and implementing a strategy to deal with the radiation environment for manned and robotic space missions is critical for new interplanetary exploration initiatives. Getting astronauts safely to Mars and back will be the greatest exploration challenge of our lifetimes. It will involve planning and implementing strategies for the interplanetary radiation environment to an unprecedented degree.
The lack of predictability of solar particle events underscores the importance of establishing a measurement system in the inner heliosphere for the early detection and warning of events [Xa06]. Once an event is detected, accurate predictions must be made of the transport process to the Earth, Mars and possibly beyond so that properties such as time of arrival, duration, intensity and energy spectrum can be transmitted well ahead of the arrival time. The current GCR models depend on knowing the solar activity levels in order to predict GCR fluxes. Thus, the lack of an established method for predicting future solar cycle activity is a
serious concern that must be addressed for new exploration initiatives. Especially disconcerting are the occasional large drops in solar activity from one cycle to the next, as seen in Figure 1. Such a drop translates to a substantial increase in GCR flux, which would be a serious problem for a long-term manned mission that happened to occur during an unfavorable cycle. Thus, in spite of the progress made in modeling the space radiation environment over the last decade or so, much work remains to be done.
VII. References

[Ad87] J.H. Adams, Jr., Cosmic Ray Effects on Microelectronics, Part IV, NRL Memorandum Report 5901, Naval Research Laboratory, Washington DC, Dec. 1987.
[An85] A.H-S. Ang and W.H. Tang, Probability Concepts in Engineering Planning and Design, Vol. II, Wiley, NY, 1985.
[An94] B.J. Anderson and R.E. Smith, Natural Orbital Environment Guidelines for Use in Aerospace Vehicle Development, NASA Technical Memorandum 4527, Marshall Space Flight Center, Alabama, June 1994.
[An96] B.E. Anspaugh, GaAs Solar Cell Radiation Handbook, JPL Publication 96-9, 1996.
[Ba96a] G.D. Badhwar and P.M. O'Neill, "Galactic Cosmic Radiation Model and Its Applications", Adv. Space Res., Vol. 17, No. 2, (2)7-(2)17 (1996).
[Ba87] P. Bak, C. Tang and K. Wiesenfeld, "Self-Organized Criticality: An Explanation of 1/f Noise", Phys. Rev. Lett., Vol. 59, 381-384 (1987).
[Ba91] P. Bak and K. Chen, "Self-Organized Criticality", Scientific American, Vol. 264, 46-53 (Jan. 1991).
[Ba96] P. Bak, How Nature Works – The Science of Self-Organized Criticality, Springer-Verlag, NY, 1996.
[Ba97] J.L. Barth, "Modeling Space Radiation Environments" in 1997 IEEE NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Ba05] R. Baumann, "Single-Event Effects in Advanced CMOS Technology" in 2005 NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Bo99] G. Boffetta, V. Carbone, P. Giuliani, P. Veltri and A. Vulpiani, "Power Laws in Solar Flares: Self-Organized Criticality or Turbulence?", Phys. Rev. Lett., Vol. 83, 4662-4665 (1999).
[Bo03] D.M. Boscher, S.A. Bourdarie, R.H.W. Friedel and R.D. Belian, "A Model for the Geostationary Electron Environment: POLE", IEEE Trans. Nucl. Sci., Vol. 50, 2278-2283 (Dec. 2003).
[Br92] D.H. Brautigam, M.S. Gussenhoven and E.G. Mullen, "Quasi-Static Model of Outer Zone Electrons", IEEE Trans. Nucl. Sci., Vol. 39, 1797-1803 (Dec. 1992).
[Br04] D.H. Brautigam, K.P. Ray, G.P. Ginet and D. Madden, "Specification of the Radiation Belt Slot Region: Comparison of the NASA AE8 Model with TSX5/CEASE Data", IEEE Trans. Nucl. Sci., Vol. 51, 3375-3380 (Dec. 2004).
[Bu88] E.A. Burke, G.E. Bender, J.K. Pimbley, G.P. Summers, C.J. Dale, M.A. Xapsos and P.W. Marshall, "Gamma Induced Dose Fluctuations in a Charge Injection Device", IEEE Trans. Nucl. Sci., Vol. 35, 1302-1306 (1988).
[Ca88] E. Castillo, Extreme Value Theory in Engineering, Academic Press, Boston, 1988.
[Ch94] D.L. Chenette, J. Chen, E. Clayton, T.G. Guzik, J.P. Wefel, M. Garcia-Munoz, C. Lopate, K.R. Pyle, K.P. Ray, E.G. Mullen and D.A. Hardy, "The CRRES/SPACERAD Heavy Ion Model of the Environment (CHIME) for Cosmic Ray and Solar Particle Effects on Electronic and Biological Systems in Space", IEEE Trans. Nucl. Sci., Vol. 41, 2332-2339 (1994).
[Cr92] D.R. Croley and M. Cherng, "Procedure for Specifying the Heavy Ion Environment at 1 AU", JPL Interoffice Memorandum 5215-92-072, July 1992.
[Da96] E.J. Daly, J. Lemaire, D. Heynderickx and D.J. Rodgers, "Problems with Models of the Radiation Belts", IEEE Trans. Nucl. Sci., Vol. 43, No. 2, 403-414 (April 1996).
[Da01] A.J. Davis et al., "Solar Minimum Spectra of Galactic Cosmic Rays and Their Implications for Models of the Near-Earth Radiation Environment", J. Geophys. Res., Vol. 106, No. A12, 29,979-29,987 (Dec. 2001).
[Di97] E.I. Diabog, Report on the International Workshop for Physical and Empirical Models at Dubna, http://wings.machaon.ru/inp50/english/dubna.html, April 1997.
[Di06] M. Dikpati et al., Geophys. Res. Lett. (online), March 3, 2006.
[Dy02] C.S. Dyer, K. Hunter, S. Clucas, D. Rodgers, A. Campbell and S. Buchner, "Observation of Solar Particle Events from CREDO and MPTB During the Current Solar Maximum", IEEE Trans. Nucl. Sci., Vol. 49, 2771-2775 (2002).
[Ef93] B. Efron and R.J. Tibshirani, An Introduction to the Bootstrap, Chapman & Hall, NY, 1993.
[Fe90] J. Feynman, T.P. Armstrong, L. Dao-Gibner and S.M. Silverman, "New Interplanetary Proton Fluence Model", J. Spacecraft, Vol. 27, 403-410 (1990).
[Fe93] J. Feynman, G. Spitale, J. Wang and S. Gabriel, "Interplanetary Fluence Model: JPL 1991", J. Geophys. Res., Vol. 98, 13281-13294 (1993).
[Ga96] S.B. Gabriel and J. Feynman, "Power-Law Distribution for Solar Energetic Proton Events", Solar Phys., Vol. 165, 337-346 (1996).
[Ga03] S.B. Gabriel and G.J. Patrick, "Solar Energetic Particle Events: Phenomenology and Prediction", Space Sci. Rev., Vol. 107, 55-62 (2003).
[Gu58] E. Gumbel, Statistics of Extremes, Columbia University Press, NY, 1958.
[Gu96] M.S. Gussenhoven, E.G. Mullen and D.H. Brautigam, "Improved Understanding of the Earth's Radiation Belts from the CRRES Satellite", IEEE Trans. Nucl. Sci., Vol. 43, No. 2, 353-368 (April 1996).
[Ha99] D.H. Hathaway, R.M. Wilson and E.J. Reichmann, "A Synthesis of Solar Cycle Prediction Techniques", J. Geophys. Res., Vol. 104, No. A10, 22375-22388 (1999).
[He99] D. Heynderickx, M. Kruglanski, V. Pierrard, J. Lemaire, M.D. Looper and J.B. Blake, "A Low Altitude Trapped Proton Model for Solar Minimum Conditions Based on SAMPEX/PET Data", IEEE Trans. Nucl. Sci., Vol. 46, 1475-1480 (Dec. 1999).
[http] http://www.spenvis.oma.be/
[Hu65] H.E. Hurst, Long Term Storage: An Experimental Study, Constable & Co., Ltd., London, 1965.
[Hu98] S.L. Huston and K.A. Pfitzer, "A New Model for the Low Altitude Trapped Proton Environment", IEEE Trans. Nucl. Sci., Vol. 45, 2972-2978 (Dec. 1998).
[Hu02] S.L. Huston, Space Environments and Effects: Trapped Proton Model, Boeing Final Report NAS8-98218, Huntington Beach, CA, Jan. 2002.
[Ja57] E.T. Jaynes, "Information Theory and Statistical Mechanics", Phys. Rev., Vol. 106, 620-630 (1957).
[Je98] H.J. Jensen, Self-Organized Criticality, Cambridge University Press, Cambridge, UK, 1998.
[Ka89] J.N. Kapur, Maximum Entropy Models in Science and Engineering, John Wiley & Sons, Inc., NY, 1989.
[Ki74] J.H. King, "Solar Proton Fluences for 1977-1983 Space Missions", J. Spacecraft, Vol. 11, 401-408 (1974).
[Ko01] H.C. Koons, "Statistical Analysis of Extreme Values in Space Science", J. Geophys. Res., Vol. 106, No. A6, 10915-10921 (June 2001).
[La05] J-M. Lauenstein and J.L. Barth, "Radiation Belt Modeling for Spacecraft Design: Model Comparisons for Common Orbits", 2005 IEEE Radiation Effects Data Workshop Proceedings, pp. 102-109, IEEE Operations Center, Piscataway, NJ, 2005.
[Le06] F. Lei, A. Hands, S. Clucas, C. Dyer and P. Truscott, "Improvements to and Validations of the QinetiQ Atmospheric Radiation Model", accepted for publication in IEEE Trans. Nucl. Sci., June 2006 issue.
[Li80] R.E. Lingenfelter and H.S. Hudson, "Solar Particle Fluxes and the Ancient Sun" in Proc. Conf. Ancient Sun, edited by R.O. Pepin, J.A. Eddy and R.B. Merrill, Pergamon Press, London, pp. 69-79 (1980).
[Lu93] E.T. Lu, R.J. Hamilton, J.M. McTiernan and K.R. Bromund, "Solar Flares and Avalanches in Driven Dissipative Systems", Astrophys. J., Vol. 412, 841-852 (1993).
[Ma89] P.W. Marshall, C.J. Dale, E.A. Burke, G.P. Summers and G.E. Bender, "Displacement Damage Extremes in Silicon Depletion Regions", IEEE Trans. Nucl. Sci., Vol. 36, 1831-1839 (1989).
[Ma95] P.P. Majewski, E. Normand and D.L. Oberg, "A New Solar Flare Heavy Ion Model and its Implementation through MACREE, an Improved Modeling Tool to Calculate Single Event Effect Rates in Space", IEEE Trans. Nucl. Sci., Vol. 42, 2043-2050 (1995).
[Ma83] B.B. Mandelbrot, The Fractal Geometry of Nature, W.H. Freeman & Co., NY, 1983.
[Ma99] C.J. Marshall and P.J. Marshall, "Proton Effects and Test Issues for Satellite Designers – Part B: Displacement Effects" in 1999 IEEE NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Ma02] J. Mazur, "The Radiation Environment Outside and Inside a Spacecraft" in 2002 IEEE NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Mc61] C.E. McIlwain, "Coordinates for Mapping the Distribution of Magnetically Trapped Particles", J. Geophys. Res., Vol. 66, 3681-3691 (1961).
[Mc01] K.G. McCracken, G.A.M. Dreschoff, E.J. Zeller, D.F. Smart and M.A. Shea, "Solar Cosmic Ray Events for the Period 1561-1994, 1. Identification in Polar Ice", J. Geophys. Res., Vol. 106, 21585-21598 (2001).
[Mc94] P.L. McKerracher, J.D. Kinnison and R.H. Maurer, "Applying New Solar Particle Event Models to Interplanetary Satellite Programs", IEEE Trans. Nucl. Sci., Vol. 41, 2368-2375 (1994).
[Mc00] P.J. McNulty, L.Z. Scheick, D.R. Roth, M.G. Davis and M.R.S. Tortora, "First Failure Predictions for EPROMs of the Type Flown on the MPTB Satellite", IEEE Trans. Nucl. Sci., Vol. 47, 2237-2243 (2000).
[Ny96] R.A. Nymmik, "Models Describing Solar Cosmic Ray Events", Radiat. Meas., Vol. 26, 417-420 (1996).
[Ny96a] R.A. Nymmik, M.I. Panasyuk and A.A. Suslov, "Galactic Cosmic Ray Flux Simulation and Prediction", Adv. Space Res., Vol. 17, No. 2, (2)19-(2)30 (1996).
[Ny99] R.A. Nymmik, "Probabilistic Model for Fluences and Peak Fluxes of Solar Energetic Particles", Radiat. Meas., Vol. 30, 287-296 (1999).
[Pa85] E.N. Parker, "The Passage of Energetic Charged Particles Through Interplanetary Space", Planet. Space Sci., Vol. 13, 9-49 (1985).
[Pe01] W.D. Pesnell, "Fluxes of Relativistic Electrons in Low Earth Orbit During the Decline of Solar Cycle 22", IEEE Trans. Nucl. Sci., Vol. 48, 2016-2021 (Dec. 2001).
[Pe02] O. Peters, C. Hertlein and K. Christensen, "A Complexity View of Rainfall", Phys. Rev. Lett., Vol. 88(1), 018701 (2002).
[Re97] R.C. Reedy, "Radiation Threats from Huge Solar Particle Events", in Proc. Conf. High Energy Radiation Background in Space, edited by P.H. Solomon, NASA Conference Publication 3353, pp. 77-79 (1997).
[Re99] D.V. Reames, "Particle Acceleration at the Sun and in the Heliosphere", Space Sci. Rev., Vol. 90, 413-491 (1999).
[Sa76] D.M. Sawyer and J.I. Vette, AP-8 Trapped Proton Environment for Solar Maximum and Solar Minimum, NSSDC/WDC-A-R&S 76-06, NASA Goddard Space Flight Center, Greenbelt, MD, Dec. 1976.
[Sc91] M. Schroeder, Fractals, Chaos and Power Laws, W.H. Freeman & Co., NY, 1991.
[Sc96] K.H. Schatten, D.J. Myers and S. Sofia, "Solar Activity Forecast for Solar Cycle 23", Geophys. Res. Lett., Vol. 23, 605-608 (1996).
[Sh49] C.E. Shannon and W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, 1949.
[Sh95] M.A. Shea and D.F. Smart, "A Comparison of Energetic Solar Proton Events During the Declining Phase of Four Solar Cycles (Cycles 19-22)", Adv. Space Res., Vol. 16, No. 9, (9)37-(9)46 (1995).
[Si06] A. Sicard-Piet, S. Bourdarie, D. Boscher and R.H.W. Friedel, "A Model of the Geostationary Electron Environment: POLE, from 30 keV to 5.2 MeV", accepted for publication in IEEE Trans. Nucl. Sci., June 2006 issue.
[St74] E.G. Stassinopoulos and J.H. King, "Empirical Solar Proton Models for Orbiting Spacecraft Applications", IEEE Trans. Aerospace and Elect. Sys., Vol. 10, 442-450 (1974).
[St96] E.G. Stassinopoulos, G.J. Brucker, D.W. Nakamura, C.A. Stauffer, G.B. Gee and J.L. Barth, "Solar Flare Proton Evaluation at Geostationary Orbits for Engineering Applications", IEEE Trans. Nucl. Sci., Vol. 43, 369-382 (April 1996).
[Tr61] M. Tribus, Thermostatics and Thermodynamics, D. Van Nostrand Co., Inc., NY, 1961.
[Ty96] A.J. Tylka, W.F. Dietrich, P.R. Boberg, E.C. Smith and J.H. Adams, Jr., "Single Event Upsets Caused by Solar Energetic Heavy Ions", IEEE Trans. Nucl. Sci., Vol. 43, 2758-2766 (1996).
[Ty97] A.J. Tylka et al., "CREME96: A Revision of the Cosmic Ray Effects on Microelectronics Code", IEEE Trans. Nucl. Sci., Vol. 44, 2150-2160 (1997).
[Ty97a] A.J. Tylka, W.F. Dietrich and P.R. Boberg, "Probability Distributions of High-Energy Solar-Heavy-Ion Fluxes from IMP-8: 1973-1996", IEEE Trans. Nucl. Sci., Vol. 44, 2140-2149 (1997).
[Va83] P.J. Vail, E.A. Burke and J.P. Raymond, "Scaling of Gamma Dose Rate Upset Threshold in High Density Memories", IEEE Trans. Nucl. Sci., Vol. 30, 4240-4245 (1983).
[Va84] P.J. Vail and E.A. Burke, "Fundamental Limits Imposed by Gamma Dose Fluctuations in Scaled MOS Gate Insulators", IEEE Trans. Nucl. Sci., Vol. 31, 1411-1416 (1984).
[Va96] A.L. Vampola, Outer Zone Energetic Electron Environment Update, European Space Agency Contract Report, Dec. 1996; http://spaceenv.esa.int/R_and_D/vampola/text.html.
[Ve91] J.I. Vette, The NASA/National Space Science Data Center Trapped Radiation Environment Program (1964-1991), NSSDC 91-29, NASA/Goddard Space Flight Center, National Space Science Data Center, Greenbelt, MD, Nov. 1991.
[Ve91a] J.I. Vette, The AE-8 Trapped Electron Environment, NSSDC/WDC-A-R&S 91-24, NASA Goddard Space Flight Center, Greenbelt, MD, Nov. 1991.
[Wa94] M. Walt, Introduction to Geomagnetically Trapped Radiation, Cambridge University Press, Cambridge, 1994.
[Wa04] R.J. Walters, S.R. Messenger, G.P. Summers and E.A. Burke, "Solar Cell Technologies, Modeling and Testing" in 2004 IEEE NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Wi99] J.W. Wilson, F.A. Cucinotta, J.L. Shinn, L.C. Simonsen, R.R. Dubbed, W.R. Jordan, T.D. Jones, C.K. Chang and M.Y. Kim, "Shielding from Solar Particle Event Exposures in Deep Space", Radiat. Meas., Vol. 30, 361-382 (1999).
[Wr00] G.L. Wrenn, D.J. Rodgers and P. Buehler, "Modeling the Outer Belt Enhancements of Penetrating Electrons", J. Spacecraft and Rockets, Vol. 37, No. 3, 408-415 (May-June 2000).
[Xa96] M.A. Xapsos, "Hard Error Dose Distributions of Gate Oxide Arrays in the Laboratory and Space Environments", IEEE Trans. Nucl. Sci., Vol. 43, 3139-3144 (1996).
[Xa98] M.A. Xapsos, G.P. Summers and E.A. Burke, "Probability Model for Peak Fluxes of Solar Proton Events", IEEE Trans. Nucl. Sci., Vol. 45, 2948-2953 (1998).
[Xa98a] M.A. Xapsos, G.P. Summers and E.A. Burke, "Extreme Value Analysis of Solar Energetic Proton Peak Fluxes", Solar Phys., Vol. 183, 157-164 (1998).
[Xa99] M.A. Xapsos, G.P. Summers, J.L. Barth, E.G. Stassinopoulos and E.A. Burke, "Probability Model for Worst Case Solar Proton Event Fluences", IEEE Trans. Nucl. Sci., Vol. 46, 1481-1485 (1999).
[Xa99a] M.A. Xapsos, J.L. Barth, E.G. Stassinopoulos, E.A. Burke and G.B. Gee, Space Environment Effects: Model for Emission of Solar Protons (ESP) – Cumulative and Worst Case Event Fluences, NASA/TP-1999-209763, Marshall Space Flight Center, Alabama, Dec. 1999.
[Xa00] M.A. Xapsos, G.P. Summers, J.L. Barth, E.G. Stassinopoulos and E.A. Burke, "Probability Model for Cumulative Solar Proton Event Fluences", IEEE Trans. Nucl. Sci., Vol. 47, No. 3, 486-490 (June 2000).
[Xa02] M.A. Xapsos, S.L. Huston, J.L. Barth and E.G. Stassinopoulos, "Probabilistic Model for Low-Altitude Trapped-Proton Fluxes", IEEE Trans. Nucl. Sci., Vol. 49, 2776-2781 (Dec. 2002).
[Xa04] M.A. Xapsos, C. Stauffer, G.B. Gee, J.L. Barth, E.G. Stassinopoulos and R.E. McGuire, "Model for Solar Proton Risk Assessment", IEEE Trans. Nucl. Sci., Vol. 51, 3394-3398 (2004).
[Xa06] M.A. Xapsos, C. Stauffer, J.L. Barth and E.A. Burke, "Solar Particle Events and Self-Organized Criticality: Are Deterministic Predictions of Events Possible?", accepted for publication in IEEE Trans. Nucl. Sci., June 2006 issue.
[Xa06a] M.A. Xapsos et al., submitted to the 2006 NSREC (Ponte Vedra Beach, FL).
2006 IEEE NSREC Short Course
Section III: Space Radiation Transport Models
Giovanni Santin*
European Space Agency
*on loan from RHEA System SA

Dennis Wright
Makoto Asai
Stanford Linear Accelerator Center
Approved for public release; distribution is unlimited
Space Radiation Transport Models
Giovanni Santin, European Space Agency and RHEA System SA
Dennis Wright, Makoto Asai, Stanford Linear Accelerator Center
NSREC 2006 Short Course

Outline

Introduction
I. Space Radiation Transport: Physics
   A. Radiation in Space: Types and Energy Ranges
   B. In the Spacecraft / In the Devices
   C. Particle Interactions: Fundamental Forces
   D. Particle Interactions: Cross Section, Probabilities, Mean Free Path
   E. Electromagnetic Interactions
      1. Photon processes
      2. Charged particles: electrons and positrons
      3. Charged particles: protons and ions
      4. Energy loss
      5. Straggling
   F. Nuclear Interactions
      1. Nucleon-nuclei processes
   G. Interplay of Processes
      1. Electromagnetic showers
      2. Electrons and positrons in matter
      3. Bremsstrahlung and ionization from low penetrating radiation
      4. Proton range and straggling
      5. Knock-on particles and fragments from nuclear interactions
      6. Shower of lattice-atom displacements
II. Radiation Transport Techniques
   A. Analytical / Monte Carlo
   B. Single Particle / Collective Effects
   C. Look-Up Tables / Sectoring Analysis
   D. Forward / Reverse Transport
III. In Depth: Monte Carlo Techniques
   A. General Concepts
      1. Monte Carlo for elementary particle transport
      2. Random generators
   B. Variance Reduction
   C. Interfaces
   D. Output: Tallies
IV. Radiation Transport Tools
   A. Historical Tools
      1. ETRAN / ITS
      2. SHIELDOSE
   B. Present
      1. GEANT4
      2. MCNPX
      3. FLUKA
      4. PHITS
      5. NOVICE
      6. PENELOPE
      7. EGS
      8. BRYNTRN / HZETRN
      9. SRIM / TRIM
   C. Future
V. GEANT4 Applications and Physics Validation for Space Environment Analyses
   A. Introduction and Kernel
   B. Geometry
   C. Physics Processes
      1. Electromagnetic
      2. Decay
      3. Hadronic processes and models
   D. Physics Validation
      1. Electromagnetic
      2. Hadronic
   E. Geant4-based Radiation Analysis and Tools
      1. Sector Shielding Analysis Tool (SSAT)
      2. PLANETOCOSMICS
      3. Multi-Layered Shielding Simulation Software (MULASSIS)
      4. GEANT4 Radiation Analysis for Space (GRAS)
      5. Monte Carlo Radiative Energy Deposition (MRED) – Vanderbilt
Conclusion
References
Introduction
Knowledge of the potential impact of the radiation environment on evolving space-borne devices relies on precise analysis tools for understanding and predicting the basic effects of the particle environment on new technologies. In addition to cumulative effects such as dose, single event effects (SEE) in modern microelectronics are often a major cause of spacecraft failures or anomalies [Da04][Ba04]. Simulations improve the understanding of the phenomena underlying the interaction of particle radiation with spacecraft devices. They can thus play a major role both in understanding system performance in space and in improving the design of flight components. Engineering design margins are a crucial issue for missions flying commercial off-the-shelf (COTS) technology and sensitive detectors. A complete space-qualification ground test procedure for all new components raises costs, without being able to cover, in energy and species, the entire range of the particle radiation population in space. The availability of reliable simulation tools could lower costs by complementing a more limited set of experimental tests, while still giving enough confidence in the component's behavior in space. Several issues affect the effectiveness and reliability of transport tools for describing the radiation environment local to the sensitive devices. The main ones are linked to the fundamental physics modeling, but the transport algorithms utilizing these models present some delicate aspects too. The purpose of this lecture is to introduce some of the basic concepts in space particle radiation transport. Section I gives a short summary of the main fundamental physics interaction types encountered by particle radiation in matter (which translate both into shielding mechanisms and into radiation-induced effects in devices).
Sections II and III provide an introduction to radiation transport and more detail on Monte Carlo techniques, their application to particle transport, and some of the issues related to Monte Carlo algorithms. Section IV gives an overview of historical and present radiation transport tools, and some projections of trends for the years to come. Finally, Section V focuses on GEANT4, a modern and promising MC toolkit, which has found several space application cases in recent years. The main features of the toolkit are outlined before concentrating on the physics models and on some validation examples. The section ends with a number of GEANT4-based tools for space applications and related results.
I. Space Radiation Transport: Physics
A. Radiation in Space: Types and Energy Ranges
The first lecture introduces the characteristics of the radiation environment in great detail. As a short explanation of some of the requirements for transport tools, we briefly mention here the main components of the radiation environment, in terms of the great variety of particle species and the wide energy spectrum [He68][Da88]. The trapped radiation environment of the radiation belts includes protons with energies up to several hundred MeV, and two electron belts, whose spectrum extends to energies of a few MeV. The combination of their motions in the Earth's magnetic field (gyration about field lines, bouncing between the magnetic mirrors, and drift around the Earth) makes the particle field at the spacecraft effectively isotropic. There are a few exceptions, notably low-altitude protons, which can cause differences of a factor of three or more in fluxes arriving from different azimuths. During Solar Particle Events (SPE) large fluxes of energetic protons and other particles are produced. This component of the environment is event driven, with occasional high fluxes over short periods, and unpredictable in time of occurrence, magnitude, and duration. The composition, mainly protons and alphas, but also heavier ions, electrons, neutrons and gammas, varies greatly between events. Cosmic radiation, which originates outside the solar system, includes heavy, energetic (HZE) ions and has a spectrum that varies approximately as E^-2.5. Particles with energies in excess of 10^20 eV have been detected on Earth. Because of their energy spectrum and their charge, despite the low intensity (about 2 to 6 particles/cm²·s) they are difficult to stop and cause intense ionization along their tracks (and occasionally high-energy nuclear fragments). Other environment components (energetic and low-energy plasma, atomic oxygen, debris) are neglected here.
Figure 1 Simplified diagram of typical particle radiation spectra from the main space environment sources.
It is worth mentioning that ground-testing procedures usually employ mono-energetic beams of relatively low energy, whereas in space, spacecraft are exposed to continuous spectra, and
ground-testing facilities have limited or no access to the energies of the cosmic ray environment that has just been introduced. Only by complementing testing with detailed simulations can a complete coverage of the expected effects to devices in space be obtained.
B. In the Spacecraft / In the Devices
When considering the impact of the space environment on spacecraft devices, several sources of radiation have to be considered. The first component, briefly summarized in the previous section, is the external particle field, whose models are presented in great detail in the first lecture. The particle environment local to the device differs from the external one, and is given by the superposition of:
a. the external field, modified or "attenuated" by the presence of shielding,
b. secondary radiation (such as delta electrons, nuclear fragments and subsequent de-excitation products) produced by the interaction of the external field with the spacecraft structure or within the device itself,
c. natural radioactivity due to unstable isotopes present in the materials used for the spacecraft or the payloads, or induced radioactivity, due to the formation of unstable isotopes as a result of the interaction of the radiation with the materials.
It is clear that all these particle fields need to be correctly modeled and transported in a realistic spacecraft model to assess their effects on sensitive devices, and that the description of the physics involved plays a fundamental role in this process. The interaction of the external radiation with matter is usually described in terms of shielding, slow-down, or protection; the net effect is indeed generally energy degradation and consequently decreased fluxes after shielding. When considered from the point of view of the interaction of the radiation within the sensitive devices, the same physics processes play an active role, as they are responsible for charge injection, activation, excitation, or device degradation. The fundamental processes in the devices are thus of the same type as in the spacecraft structure.
In the next sections we briefly introduce the possible interaction types for the main radiation species in the space environment, with an emphasis on their impact on the interaction of radiation with spacecraft and sensitive devices therein.
Figure 2 Radiation sources in space: primary particles, secondary particles from interactions in the spacecraft structures, natural and induced radioactivity.
C. Particle interactions: fundamental forces
Four fundamental forces drive radiation interactions with matter: the electromagnetic force (which involves charged particles and photons), the weak force (responsible for example for the β decay in radioactivity: n → p + e⁻ + ν̄e), the strong force (which acts on hadrons, e.g. between protons and neutrons, and holds the nucleus together) and the gravitational force. To each of them one can associate a non-dimensional coupling constant, which gives a measure of the "strength" of the interaction.

Interaction type     g0                           Range [m]
Strong               0.1 – 0.15                   10^-15
Weak                 10^-5                        10^-18
Electromagnetic      α = e²/(4πε₀ℏc) ≈ 1/137      Infinite
Gravitational        10^-39                       Infinite

Table 1 The four fundamental forces can be compared in terms of coupling constants and range of interaction.
In the following sections we describe the main mechanisms that intervene in the interaction of the space radiation environment with matter. These mechanisms all consist of single interactions, or ensembles of interactions, belonging to two of the four fundamental forces just introduced: the electromagnetic and the nuclear ones.
D. Particle interactions: cross section, probabilities, mean free path
The interaction of particles with matter is typically described in terms of single collisions of the incident particles with individual particles in matter. These collisions are described in terms of a cross section, which gives a measure of the probability for a reaction to occur. The cross section for a single process can be calculated when the basic mechanisms of the interaction are known. The cross section for the interaction of an incident particle with a target atom is defined as the ratio of the interaction probability P and the incident particle flux Φ:
σ = P / Φ  (1)
The distribution function P(x), known as the "survival probability", is the probability that a given particle will not interact after a distance x [Le94], and can be computed from the probability of having an interaction between x and x + dx:
Probability of interaction in (x, x + dx) = w dx;  w = nσ  (2)
where n is the number of target particles per unit volume and σ is the interaction cross section. It is easily found that the probability of a particle surviving a distance x is exponential in distance:
P(x) = exp(−wx)  [P(x = 0) = 1].  (3)
From this, the probability of a particle having its interaction in (x, x + dx) can be derived:
F(x)dx = exp(−wx) w dx,  (4)
whereas the probability of having an interaction anywhere between 0 and x is
Pint(x) = 1 − exp(−wx).  (5)
The "mean free path" λ is the mean distance traveled by a particle without collisions:
λ = ∫ x P(x) dx / ∫ P(x) dx = 1/w = 1/(nσ).  (6)
E. Electromagnetic Interactions
Charged particles and photons interact with matter mainly electromagnetically; these interactions lead to the degradation of the incoming particle energy and/or to its scattering, or to photon absorption. A brief overview of the many interaction types can serve as a useful guide through the next sections. Heavy particles (such as protons) lose their energy mainly through electromagnetic collisions with atomic electrons in the material, whereas electrons lose energy both through collisions with the electrons in the material and via radiative emission (bremsstrahlung) caused by accelerations induced by the electric field of the nuclei. Photons interact at low energy via the photoelectric effect with bound electrons in matter, at medium energies via the Compton effect with quasi-free electrons in the outer atomic shells, and finally at high energies (above several MeV) via electron-positron pair production in the nuclear electric field.
1. Photon processes
a) Coherent, elastic or Rayleigh scattering
The coherent or Rayleigh process occurs between photons and bound electrons, without energy being transferred to the atom. The resulting scattered photons therefore have the same energy as before the interaction. The angular differential cross section can be expressed as a function of the scattering angle ϑ and the atomic form factor F(q, Z):
dσ_Rayl/dΩ = (re²/2)(1 + cos²ϑ)[F(q, Z)]²  (7)
where q is the momentum transfer. At low energies (up to a few keV) the form factor is approximately independent of scattering angle, with a real part that represents the effective number of electrons that participate in the scattering, so that the total Rayleigh cross section reduces to:
σ_Rayl = (8/3)π re² Z².  (8)
At higher energies, the scattering factor falls off rapidly with scattering angle and can be found tabulated [Hu79].
Figure 3 Diagram of photon coherent scattering. The photon is scattered but its energy is unchanged.
b) Photoelectric
The photoelectric effect dominates at low energies (< 100 keV) and consists of an interaction between the photon and the atom as a whole (not with individual electrons). A free electron cannot absorb a photon and also conserve momentum, so the interaction always involves bound electrons (the majority of the interactions involve the K-shell). As a consequence of the interaction, an electron ("photoelectron") is ejected with a kinetic energy E_pe = E_γ − E_b, where E_b is the electron "binding energy", and the vacancy is filled by electrons from the outer shells, with related emission of fluorescence (mostly X-rays) and/or Auger electrons. An approximation of the cross section for the non-relativistic photoelectric effect, valid for energies far from the K-shell edges, is given by [Ha36]:
σ_Phot = 4√2 Z⁵ α⁴ (m_e c² / hν)^(7/2) σ_Thom  (9)
At lower energies, the cross section contains discontinuities ("edges") related to the atomic shell structure.
Figure 4 Diagram of photoelectric effect. The incident photon is absorbed and induces the emission of an electron.
c) Compton
The Compton effect dominates the interactions at medium energies, from 100 keV to several MeV (the actual energy range is material dependent). It is produced by the scattering of the incident photon off quasi-free electrons in the outer atomic shells. The process is "incoherent" because each atomic electron acts as an independent scattering center. By conservation of energy and momentum, and considering the electron free and at rest, the energy of the scattered photon is given by the following expression, symmetric about the incoming photon direction:
E(ϑ) = E₀ / [1 + (E₀/m_e c²)(1 − cos ϑ)]  (10)
An approximate angular differential cross section has been obtained by Klein and Nishina [Kn89]:
dσ_Compt/dΩ = (re²/2)(p/p₀)²(p₀/p + p/p₀ − sin²ϑ)  (11)
per target electron, where re = e²/(4πε₀ m_e c²) is the classical electron radius (~2.8 × 10⁻¹⁵ m). The approximation of free and at-rest target electrons fails at lower incident photon energies (below a few tens of keV), where important deviations can be observed [Ri75] and Doppler broadening and binding effects must be introduced.
Figure 5 Diagram of Compton effect. A photon is scattered transferring part of its energy to an electron, which is ejected from the atom.
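As a quick numerical check of Eq. (10), the sketch below computes the scattered photon energy for a 1 MeV photon at full backscatter (the "Compton edge" geometry); the incident energy is an illustrative choice, not a value from the text.

```python
import math

ME_C2 = 0.511  # electron rest energy [MeV]

def compton_scattered_energy(e0_mev: float, theta_rad: float) -> float:
    """Scattered photon energy from Eq. (10), electron assumed free and at rest."""
    return e0_mev / (1.0 + (e0_mev / ME_C2) * (1.0 - math.cos(theta_rad)))

e0 = 1.0  # incident photon energy [MeV]
e_back = compton_scattered_energy(e0, math.pi)   # full backscatter, theta = 180 deg
print(f"backscattered photon:      {e_back:.3f} MeV")
print(f"electron (Compton edge):   {e0 - e_back:.3f} MeV")
```

The backscattered photon retains about 0.20 MeV, while the electron carries the remaining ~0.80 MeV, the familiar Compton-edge energy seen in gamma spectroscopy.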
d) Pair production
At higher energies, photon interactions with matter are dominated by pair production, which is the absorption of a photon and the creation of an electron-positron pair ("external pair production"). The process generally occurs as an interaction with the electric field of an atomic nucleus. In case the photon interacts with an electron instead, an extra electron is added to the final state ("triplet production"). The process also occurs as "internal pair conversion", with electron-positron pairs emitted from nuclei, decays or collisions between charged particles. The energy of the photon is directly converted into the mass of the two particles and therefore must exceed approximately twice the rest mass of the electron: E_thr = 2m_e c². Above the threshold energy the pair production cross section rises slowly with energy, and in practice the probability of an interaction remains very low until a gamma energy of several MeV. The following formula gives the asymptotic cross section (for high gamma energies):
σ_Pair = 4Z²α re² [ (7/9) ln(183/Z^(1/3)) − 1/54 ]  (12)
whereas for intermediate energies the cross section depends on the incident photon energy:
σ_Pair = 4Z²α re² [ (7/9) ln(2E/m_e c²) − 109/54 ]  (13)
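An illustrative evaluation of the asymptotic cross section of Eq. (12) for a low-Z and a high-Z material (the constants are standard values, quoted here as assumptions):

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
RE_CM = 2.818e-13       # classical electron radius [cm]
BARN = 1.0e-24          # cm^2 per barn

def sigma_pair_asymptotic(z: int) -> float:
    """High-energy (complete screening) pair production cross section, Eq. (12), in cm^2."""
    return 4.0 * z**2 * ALPHA * RE_CM**2 * (
        (7.0 / 9.0) * math.log(183.0 / z**(1.0 / 3.0)) - 1.0 / 54.0)

for name, z in [("Si", 14), ("W", 74)]:
    print(f"{name} (Z={z}): sigma_pair ~ {sigma_pair_asymptotic(z) / BARN:.1f} b")
```

The roughly Z² scaling is evident: the asymptotic cross section in tungsten is more than an order of magnitude larger than in silicon, part of the reason high-Z materials convert gammas so effectively.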
Figure 6 Diagram of pair and triplet production effect. (Top) Pair production from a photon interacting with the nucleus field. (Bottom) Triplet production from a photon interacting with the electron field.
e) Photon interactions in different materials
To conclude the sections on photon interactions with matter, it is useful to compare the probabilities of the different effects in different materials. Figure 7 summarizes the cross sections for the photoelectric effect, Compton effect and pair production in Silicon (left) and Tungsten (right), as examples of low-Z and high-Z materials. One can notice the great difference in the shape of the photoelectric cross section, due to the atomic shell structures, and in the energy ranges at which each process dominates over the others.
Figure 7 Photon interaction cross-section data for Silicon (left) and Tungsten (right). Data obtained from the NIST XCOM database [Be90].
2. Charged particles: Electrons and Positrons
a) Elastic collisions
Elastic collisions are due to the Coulomb interaction of electrons with the field of the atomic nucleus, screened by the atomic electrons. They are responsible for changes in the electron and positron direction of motion, but not for energy loss.
Figure 8 Diagram of electron elastic collision. The electron is scattered by the atomic nucleus, keeping its energy approximately unchanged.
Small-angle electron scattering corresponds to distant collisions. It is dominated by the ionic Coulomb potential and can be described by the Mott relativistic extension [Mo29][Mo65] of the Rutherford cross section, which can be approximated as:
dσ_Mott/dΩ = (Ze²)² / [(4πε₀)² (4E_kin)² sin⁴(ϑ/2)] · (1 − β² sin²(ϑ/2)).  (14)
This reduces to the Rutherford formula as β → 0. The approximate McKinley-Feshbach [Mc48] formulae make the detailed results by Mott for relativistic electrons easier to use for computational purposes. Large-angle scattering corresponds to a deeper probing of the atomic structure near the distance of closest approach and is much more sensitive to correlation, exchange, bound-state resonances, and interference effects, especially at the largest scattering angles.
b) Inelastic collisions
Together with bremsstrahlung emission (introduced in the next sections), inelastic collisions are responsible for the energy loss of electrons (and positrons) in matter. They constitute the main energy loss mechanism at low and intermediate energies (up to several tens of MeV). Inelastic scattering is the result of Coulomb interactions between the incident electron and atomic electrons. Part of the energy and momentum of the incident electron is transferred to the target system, and the interaction final state may include not only single-electron excitation or atomic ionization (with electron-hole pair production), but can involve many atoms in the solid ("plasmon excitation"). Bethe [Be59] first obtained a quantum mechanical calculation of inelastic collisions, derived on the basis of the Born approximation, which is essentially an assumption of weak scattering. The original Bethe formulation was extended to treat electron interactions in condensed matter [Fa63][Ev55]. Calculations dedicated to the special case of incident electrons and positrons led
to the specialized Møller [Mø32] and Bhabha [Bh36] formulations of the energy loss theory, for electron-electron and electron-positron inelastic scattering respectively. The energy loss for electrons/positrons can be expressed as
−dE/dx = K (Z/A)(1/β²) [ ln( τ²(τ + 2) / (2(I/m_e c²)²) ) + F(τ) − δ − 2C/Z ]  (15)
where τ is the kinetic energy of the particle in units of mec 2 , δ is the density correction and C gives the shell correction. F (τ ) is different for electrons and positrons. For a detailed treatment of the energy loss of electrons and positrons, see also [Fe86][Le94]. The energy loss by ionization can also be compared to the radiative component of the electron energy loss, treated in the following section, so that the relative importance of the two energy loss mechanisms can be assessed as a function of the electron energy.
Figure 9 Diagram of electron inelastic collisions leading to atomic excitation or ionization.
c) Bremsstrahlung
Electrons and positrons passing through matter undergo acceleration (deceleration) in the interaction with the electrostatic field of the atoms and emit radiation. The process is called bremsstrahlung, or "braking radiation", and is depicted in Figure 10. The screening of the field of the atomic nucleus by the electron cloud is strong for high incident electron energies and can be neglected at low energies. As a consequence, at low energies the energy loss can be approximated as
−(dE/dx)_Brem = KZ²E [ ln(2E/m_e c²) − 1/3 − f(Z) ]  (16)
At high incident energies the energy loss can be expressed as
−(dE/dx)_Brem = KZ²E [ ln(183/Z^(1/3)) − 1/18 − f(Z) ]  (17)
If the logarithmic term at low energies is neglected, the energy loss rate can be approximated as proportional to the energy of the incident electron:
−(dE/dx)_Brem = E / X₀  (18)
where X₀ is called the "radiation length" and represents the distance over which radiative emission reduces the initial projectile energy by a factor 1/e. A complete differential cross section is given by the Bethe-Heitler formula [Be34], corrected and extended to account for: screening of the nuclear field by atomic electrons, bremsstrahlung from atomic electrons, Coulomb corrections to the Born approximation, dielectric suppression (matter polarization), and LPM suppression (multiple scattering of the electron while still in the formation zone).
Figure 10 Diagram of electron bremsstrahlung emission process. The electron emits a photon as a consequence of the acceleration induced by the atomic electrostatic field.
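As an illustration of Eq. (18), the sketch below integrates the radiative loss for an electron crossing aluminum. The radiation length X₀ ≈ 8.9 cm for Al is a standard tabulated value, used here as an assumption rather than taken from the text, and only the radiative component of the loss is considered.

```python
import math

X0_AL_CM = 8.9   # radiation length of aluminum [cm] (assumed tabulated value)

def residual_energy(e0_mev: float, thickness_cm: float, x0_cm: float) -> float:
    """Integrating -dE/dx = E/X0 gives E(x) = E0 * exp(-x/X0)."""
    return e0_mev * math.exp(-thickness_cm / x0_cm)

e0 = 100.0  # incident electron energy [MeV] (illustrative)
for t in (1.0, 8.9, 20.0):
    print(f"after {t:5.1f} cm Al: {residual_energy(e0, t, X0_AL_CM):6.1f} MeV")
```

After one radiation length (8.9 cm of Al in this sketch) the residual energy is E₀/e ≈ 36.8 MeV, as Eq. (18) requires.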
d) Positron annihilation A positron can annihilate in the interaction with an atomic electron, with the emission of two photons. The process cross section is higher for lower positron energies, so that the process happens from a system almost at rest with the emission of two photons “back-to-back”.
Figure 11 Diagram of positron annihilation process. A positron annihilates with an atomic electron with the emission of two photons.
e) Cherenkov effect
Charged particles passing through a dielectric at a speed greater than the speed of light in the material emit light, undergoing a phenomenon analogous to the emission of mechanical shock waves by objects moving faster than the speed of sound. The effect is named after P.A. Cerenkov, who first observed it in 1934. Light is emitted at a fixed angle ϑ to the direction of particle motion, such that
cos ϑ = c / (vn)  (19)
and consists of a continuous spectrum, with a cut-off wavelength determined by the frequency-dependent refraction index in the equation above. Cerenkov light is often used in particle detectors, giving identification of particle species from direct information on the particle velocity.
Figure 12 Diagram of Cerenkov effect in a thin (transparent) target, with the formation of the characteristic ring of photons.
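A quick numeric illustration of Eq. (19) (the medium and its refractive index are assumptions, not from the text): the emission threshold and the Cerenkov angle for a relativistic particle in water.

```python
import math

N_WATER = 1.33  # refractive index of water (assumed; dispersion ignored)

# Emission requires v > c/n, i.e. beta > 1/n
beta_threshold = 1.0 / N_WATER
print(f"threshold beta = {beta_threshold:.3f}")

# From Eq. (19), cos(theta) = 1/(beta*n); take the ultra-relativistic limit beta ~ 1
theta = math.degrees(math.acos(1.0 / N_WATER))
print(f"Cerenkov angle at beta = 1: {theta:.1f} deg")
```

The saturated angle in water is about 41 degrees, which is why water Cerenkov detectors see well-defined rings whose radius encodes the particle velocity.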
3. Charged particles: Protons and Ions The effects of the passage of protons and ions in matter can be summarized, at least for the electromagnetic component of the interactions, in the slowing down (energy loss) and deflection of the particles. The main interactions responsible for these effects are elastic collisions with nuclei and inelastic collisions with atomic electrons in the material.
a) Elastic collisions Elastic scattering of protons and ions in matter happens as a consequence of the interaction with the material nuclei, screened by the electron cloud. The process is similar to the electron elastic scattering previously described, the main difference being the non-negligible mass of the projectile. While the total kinetic energy is conserved, part of it is transferred from the projectile to the target nucleus.
Figure 13 Diagram of the elastic electromagnetic interaction of an incident ion with a target atom, screened by the electron cloud, with transfer of part of the energy to the target nucleus.
b) Inelastic collisions Ion electromagnetic inelastic collisions are interactions of the incident ions with the field of the atomic electrons in the material. The result of the interaction is the excitation or the ionization of the target atom.
Figure 14 Diagram of inelastic electromagnetic interaction of incident ions with electrons of the target atoms, inducing atomic excitation or ionization.
4. Energy loss
A major part of the energy loss of electrons and ions in a material is generally due to the inelastic interaction of the projectiles with the field of the electrons in the target atoms. The amount of energy transferred to the electrons in each collision is a very small fraction of the projectile energy, but the number of collisions in media of normal density is very high, giving as a result a significant loss of kinetic energy. Most of the collisions induce a limited transfer of energy, and are denoted as "soft". More seldom, "hard" collisions occur, which induce atomic excitation with the ejection of a fast electron (often referred to as a δ-ray). Bethe, Bloch and others first gave a correct quantum mechanical description of the energy loss phenomena. The resulting stopping power can be approximated by the following formula
−dE/dx = Kz² (Z/A)(1/β²) [ (1/2) ln( 2m_e γ²v²W_max / I² ) − β² − δ/2 − C/Z + zL₁ + z²L₂ ]  (20)
where W_max is the maximum energy transfer in a single collision and I (the main parameter in the formula) is the mean excitation potential of the target material. Two correction terms are usually considered: the density correction δ and the shell effect given by C/Z. The additional terms zL₁ + z²L₂ represent the Barkas and Bloch corrections. For a detailed description of the energy loss phenomena, the reader is referred to [Zi85].
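To make Eq. (20) concrete, the sketch below evaluates a simplified Bethe-Bloch expression (density, shell, Barkas and Bloch corrections all neglected) for protons in silicon. The constants and the mean excitation potential I ≈ 173 eV for Si are standard tabulated values, quoted here as assumptions.

```python
import math

K = 0.307075      # 4*pi*N_A*re^2*me*c^2 [MeV cm^2 / g]
ME_C2 = 0.511     # electron rest energy [MeV]
MP_C2 = 938.272   # proton rest energy [MeV]

def dedx_proton_si(t_mev: float, Z: int = 14, A: float = 28.09,
                   i_ev: float = 173.0) -> float:
    """Simplified Eq. (20): proton mass stopping power in Si [MeV cm^2/g].
    Corrections (delta/2, C/Z, Barkas, Bloch) are neglected."""
    gamma = 1.0 + t_mev / MP_C2
    beta2 = 1.0 - 1.0 / gamma**2
    wmax = 2.0 * ME_C2 * beta2 * gamma**2          # approximate Wmax for M >> me
    i_mev = i_ev * 1.0e-6
    arg = 2.0 * ME_C2 * beta2 * gamma**2 * wmax / i_mev**2
    return K * (Z / A) / beta2 * (0.5 * math.log(arg) - beta2)

for t in (10.0, 100.0, 1000.0):
    print(f"{t:7.1f} MeV proton in Si: {dedx_proton_si(t):6.2f} MeV cm^2/g")
```

The 1/β² factor is visible in the output: the stopping power drops steeply as the proton energy rises toward the minimum-ionizing region. Even without the corrections, the 100 MeV value lands within a few percent of tabulated stopping powers.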
5. Straggling The energy loss process is not continuous, but statistical in nature, as it derives from a series of collisions. As a consequence, the measurement of the range of a set of identical particles will result in a distribution of ranges, centered about some mean value, a phenomenon referred to as “range straggling”. When the number of collisions is high and the total energy loss along the path is given by the sum of a large number of small contributions, the global fluctuations follow a gaussian distribution. On the contrary, when the total number of collisions along the particle path is low (for example in thin absorbers, or in gases) the fluctuations are large and follow the Landau asymmetrical distribution.
The continuously slowing down approximation (CSDA) range neglects scattering and straggling, and therefore differs from the real projected range, i.e. the average depth of penetration measured along the initial particle direction. Figure 15 shows the measurement of the proton range from the intensity curve of a mono-energetic proton beam passing through an absorber material. The "mean range" R₀ of the beam is indicated for the case of Gaussian range fluctuations. The "extrapolated range" R_ext is obtained by extrapolation of the line of maximum gradient in the curve. The "straggling parameter" S, which is a measure of the range fluctuations, can be obtained from the difference between these two ranges: S = R_ext − R₀.
Figure 15 Typical range curve for a mono-energetic proton beam in an absorber. The fluctuations (straggling) in the energy loss process can be quantified with the extrapolated range.
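For Gaussian straggling the construction in Figure 15 can be reproduced numerically. The sketch below (R₀ and σ are illustrative assumptions, not values from the text) builds the transmission curve, extrapolates the tangent at its steepest point, and recovers S = R_ext − R₀, which for a Gaussian range distribution comes out equal to σ√(π/2).

```python
import math

# Assumed illustrative values: mean range R0 and Gaussian straggling width sigma
R0, SIGMA, I0 = 10.0, 0.3, 1.0   # [cm], [cm], normalized beam intensity

def transmission(x: float) -> float:
    """Fraction of a mono-energetic beam surviving past depth x for Gaussian straggling."""
    return 0.5 * I0 * math.erfc((x - R0) / (math.sqrt(2.0) * SIGMA))

# Slope of the curve at x = R0 (the point of maximum gradient), by finite difference
h = 1e-6
slope = (transmission(R0 + h) - transmission(R0 - h)) / (2.0 * h)

# Extrapolate the tangent from I0/2 down to zero intensity: R_ext = R0 + (I0/2)/|slope|
r_ext = R0 + (0.5 * I0) / abs(slope)
S = r_ext - R0
print(f"S = {S:.4f} cm;  sigma*sqrt(pi/2) = {SIGMA * math.sqrt(math.pi / 2.0):.4f} cm")
```

The agreement shows that the straggling parameter S read off a range curve is simply a rescaled Gaussian width, which is how σ is usually extracted from transmission measurements.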
F. Nuclear Interactions
Particles coupled to the strong force (such as protons, neutrons, π mesons, nuclei) are subject to nuclear interactions (also commonly referred to as "hadronic"). This force has a typical range that is much shorter than the electromagnetic one, so it becomes effective for charged particles only when the energy involved in the interaction is higher than the Coulomb barrier generated by the nucleus charge. An approximate expression for this barrier can be obtained from the charges of the incident and target hadrons:
U_Coul = (1/(4πε₀)) Z_i Z_T e² / r_h  (21)
where r_h ≈ 10⁻¹⁵ m is the typical range for hadronic interactions (see Table 1).
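As an illustration of Eq. (21), with r_h taken as 10⁻¹⁵ m as in the text, the sketch below evaluates the barrier for a proton incident on a silicon nucleus (the choice of projectile and target is illustrative):

```python
E_CHARGE = 1.602e-19   # elementary charge [C]
K_COULOMB = 8.988e9    # 1/(4*pi*eps0) [N m^2 / C^2]
R_H = 1.0e-15          # typical hadronic interaction range [m], as in Eq. (21)

def coulomb_barrier_mev(z_incident: int, z_target: int) -> float:
    """Eq. (21): Coulomb barrier in MeV."""
    u_joule = K_COULOMB * z_incident * z_target * E_CHARGE**2 / R_H
    return u_joule / (E_CHARGE * 1.0e6)   # J -> MeV

print(f"proton on Si (Z=14): ~{coulomb_barrier_mev(1, 14):.1f} MeV")
```

With this fixed r_h the barrier scales simply as Z_i·Z_T times about 1.4 MeV; a more refined estimate would use a radius that grows with the nuclear mass numbers, but the order of magnitude is the point here: below roughly tens of MeV, charged hadrons interact with nuclei mostly electromagnetically.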
1. Nucleon-nuclei processes a) Elastic Elastic nuclear collisions conserve the total kinetic energy of the nucleon-nucleus system, and thus do not modify the target nucleus species nor its excitation state. Due to the mass of the incident nucleon, the target nucleus recoil energy after the interaction is not negligible, and leads to local intense ionization along the recoil path.
Figure 16 Diagram of elastic nucleon-nucleus collision, with transfer of part of the projectile energy to the recoil nucleus.
b) Inelastic
In inelastic hadronic collisions, part of the total kinetic energy is transferred to the excitation or the break-up of the target nucleus. Excited states may later decay by gamma-ray or other forms of radiative emission, or by further break-ups. Due to the complexity of the interactions, no single theory of hadronic collisions exists that applies to all energy ranges. Instead, a collection of complementary models is used, each covering part of the particle species and energy range. As in the elastic case, recoil nuclei and fragments cause local intense indirect ionization.
Figure 17 Diagram of nucleon-nucleus inelastic interaction, with break-up of the target nucleus and emission of secondary particles.
G. Interplay of Processes
1. Electromagnetic showers
A typical case of interconnection between fundamental processes involving different particle species over a range of energies is given by the interaction of electrons, positrons and gammas in matter. As an example, an energetic electron can dissipate its energy with the emission of a bremsstrahlung gamma and continue its path through the material. This gamma, if its energy is high enough, will possibly undergo the pair production process, generating an electron-positron pair, which will in turn dissipate energy radiatively (with the emission of bremsstrahlung photons). This iterative particle production mechanism continues until the typical gamma energy
falls below the critical energy for pair production. The process is represented in Figure 18. The same process can of course be initiated in a similar way by a positron or a gamma. The macroscopic phenomenon of production of this large number of electrons, positrons and photons is known as “electromagnetic shower”.
Figure 18 Diagram representing the development of an electromagnetic shower. (Right) Simplified cascade description, based on the similarity of pair production and bremsstrahlung cross sections, from which some quantitative estimates can be drawn on particle multiplicity and shower profile. A realistic description can only be obtained through detailed simulations.
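The simplified cascade picture mentioned in the caption can be turned into a toy estimate, along the lines of the Heitler model (named here as an assumption; the text does not cite it): the particle count doubles every radiation length, and multiplication stops when the energy per particle falls to the critical energy E_c.

```python
import math

def heitler_shower(e0_mev: float, ec_mev: float):
    """Toy Heitler model: N doubles each radiation length until E per particle < Ec."""
    t_max = math.log(e0_mev / ec_mev) / math.log(2.0)  # depth of shower maximum [X0]
    n_max = e0_mev / ec_mev                            # particle count at maximum
    return t_max, n_max

# Illustrative case (assumed values): a 1 GeV electron, material with Ec ~ 10 MeV
t_max, n_max = heitler_shower(1000.0, 10.0)
print(f"shower maximum at ~{t_max:.1f} X0 with ~{n_max:.0f} particles")
```

The logarithmic growth of the shower depth with primary energy is the useful takeaway: increasing the energy tenfold adds only about 3.3 radiation lengths to the shower maximum. As the caption notes, quantitative profiles still require detailed simulation.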
2. Electrons and positrons in matter
The final effect of the interaction of electrons with matter in space applications can vary significantly, because it results from the interplay of the electron spectrum, shielding materials and thickness, device sensitivity, and the dominant physics processes in the radiation effect analysis. Electron interactions with a material produce mainly excitation and ionization, resulting in energy loss and scattering, which can lead to highly convoluted electron paths and bremsstrahlung emission. Figure 19 shows the continuous slowing down approximation (CSDA) range and stopping powers for electrons in several materials (hydrogen, aluminum, tungsten and polyethylene), available from the US National Institute of Standards and Technology (NIST) website [Ni98].
Figure 19: Electron CSDA range in different materials (from low-Z to high-Z, plus polyethylene as an example of a hydrogen-rich material) as a function of energy. Data from NIST ESTAR database [Ni98].
The CSDA range corresponds to the integral pathlength traveled by electrons assuming no stochastic variations between different electrons of the same energy. Because of multiple scattering, the effective range in matter is in general shorter than the CSDA range (depending upon the electron energy and material). Shielding effectiveness results from the combination of range along the path and scattering, which diverts the electron trajectories. In the case of thin shielding the latter dominates, making high-Z materials more effective. Radiative emission is more important for high electron energies and thick shielding, making low-Z materials more convenient. Multi-layered shielding (with low-Z materials first) usually provides a good solution, gradually slowing down and diffusing the projectile particles.
3. Bremsstrahlung and ionization from low penetrating radiation
The total energy loss of electrons and positrons is the sum of the losses from ionization and from bremsstrahlung emission. The bremsstrahlung contribution dominates above a certain energy value, indicated as the "critical energy" Ec. This energy can be obtained from an approximation of the single contributions [Ba96], which for liquids and solids is:

(dE/dx)Ioni ≈ (dE/dx)Brem ⇒ Ec ≈ 610 MeV / (Z + 1.24)    (22)

Above the critical energy, electron energy loss thus mainly proceeds through bremsstrahlung emission. The emitted photons interact (as described in the sections on photon interactions) through a number of processes (Rayleigh scattering, photoelectric effect, Compton effect and pair production), resulting in the loss or scattering of the incident photon and, for the three latter processes, in the production of electrons or positrons that may induce further ionization or bremsstrahlung [Ec06]. Through this mechanism, bremsstrahlung production allows energetic electrons to deposit energy significantly beyond the electron range in materials, due to the longer average range of the photons.
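As a quick illustration, Eq. (22) can be evaluated directly for a few common shielding and detector materials; the sketch below simply applies the solid/liquid approximation quoted above (the material list is illustrative).

```python
# Approximate critical energy E_c ~ 610 MeV / (Z + 1.24) for solids and liquids,
# i.e. the energy above which bremsstrahlung losses exceed ionization losses.
def critical_energy_mev(z):
    """Critical energy in MeV for atomic number Z (solid/liquid approximation)."""
    return 610.0 / (z + 1.24)

for name, z in [("Al", 13), ("Si", 14), ("W", 74)]:
    print(f"{name} (Z={z}): E_c ~ {critical_energy_mev(z):.1f} MeV")
```

The trend matches the shielding discussion: for high-Z tungsten, radiative losses take over at only a few MeV, whereas for low-Z materials ionization dominates up to several tens of MeV.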
Figure 20 Simulation of 1 MeV electrons impinging (from the left) on a 0.1 mm silicon detector protected by a 1 mm aluminum shield. While all electrons are stopped in the first layer or backscattered, a bremsstrahlung photon is emitted in the shielding and reabsorbed in the sensitive volume (simulation obtained with GRAS/Geant4 [Sa05]).
4. Proton range and straggling
Inelastic collisions with atomic electrons are the main mechanism responsible for the energy loss of protons and ions in matter. Due to their large mass, proton and ion projectiles undergo much less scattering than incident electrons. However, as introduced in the description of the basic processes, some deflection of the proton track does occur. In addition, the energy loss process is not in fact continuous, but statistical in nature. As a consequence, measuring the range of a set of identical particles results in a distribution of ranges centered about some mean value, a phenomenon referred to as "range straggling". The continuous slowing down approximation (CSDA) range neglects scattering and straggling, and therefore differs from the real projected range, the average depth of penetration measured along the initial particle direction. For the total energy loss of the incident proton, the higher Z/A entering the Bethe-Bloch formula for low-Z materials, combined with their limited scattering, gives more efficient shielding per unit mass than high-Z materials, as shown in Figure 21.
Figure 21 Mean projected range of protons (neglecting hadronic interactions) in hydrogen, aluminum and tungsten, scaled to an equal density of 1 g/cm³. The data in the plot were produced with SRIM [Sr03].

For applications such as the analysis of devices sensitive to single event effects, and for the biological effects of radiation in human missions, in addition to the inelastic interactions with electrons one must consider the small but significant probability of interaction of protons with the nuclei of the target material, treated in the next section.
5. Knock-on particles and fragments from nuclear interactions
The role of nuclear interactions in radiation effects on components depends on the type and energy spectrum of the radiation source, the shielding configuration and the device susceptibility. Effects of hadronic interactions include intense, localized indirect ionization through nuclear fragments, and an increased background and secondary interactions from neutrons and gammas emitted by excited nuclei. As a result, secondary particles from nuclear interactions contribute to total dose,
biological effects in human missions (the effect originating both from local interactions and from neutron production in heavy spacecraft structures or planetary shelters), and transient effects. While qualitative estimates of nuclear reactions can be produced with approximate models, a quantitative assessment of the role of nuclear interactions in transient effects requires a precise description of the secondary-particle production double differential cross sections. For examples of the importance of direct versus indirect ionization by recoil nuclei from nuclear interactions, see [Tr04a][Ko05a][Wa05a].
Figure 22 After [P.J. McNulty, Notes from 1990 IEEE NSREC Short Course].
6. Shower of lattice-atom displacements
A particle collision in matter can transfer to atoms sufficient energy to displace them [Zi85]. Displaced atoms can then move through the lattice, creating vacancies and coming to rest in interstitial positions. The concentration of the resulting effective recombination or trapping centers, responsible for performance degradation in semiconductor devices such as bipolar transistors, is proportional to the concentration of vacancy-interstitial pairs (also known as Frenkel defects). The basic physics processes involved in the displacement collisions are particle and energy dependent: Coulomb elastic scattering dominates for electrons and low-energy protons, elastic hadronic scattering for low-energy neutrons, whereas at higher energies (above 10-20 MeV) inelastic processes are most important for both protons and neutrons. The displacement mechanism is often a cascade event, and numerous models, such as that of Kinchin and Pease [Ki55], have been developed to estimate the number of induced Frenkel pairs.
Figure 23 (Left) Diagram of a Frenkel pair cascade (after [Space radiation effects on microelectronics, NASA JPL]). (Right) Calculation of proton-induced NIEL in GaAs [Ju03], compared to Summers et al. (after [Su93]).
II. Radiation Transport Techniques
The previous section introduced the interaction of particle radiation with matter from the point of view of the underlying physics. The level of detail and realism in the geometry description, and the precision required in the description of the basic physics processes, depend on the radiation effects under study, and are often very different for space missions in an early phase of conceptual design compared to advanced and precise verification of payloads and subsystems. Therefore, when modeling the transport of radiation from outer space to the spacecraft interior for the assessment of radiation effects on sensitive devices, the computation can rely on a range of techniques of different complexity and precision, based on drastically different approaches. The following sections introduce some of the issues related to radiation transport models.
A. Analytical / Monte Carlo
Transport methods using the "analytical" or "deterministic" approach solve the integro-partial-differential Boltzmann transport equations that describe how radiation fields are transformed when passing through a given mass thickness. This is usually done in a one-dimensional, straight-ahead approximation. As a consequence of the mathematical approach in the analytical solutions, deterministic methods are in general fast (as an example, the HZETRN tool allows field mapping within the International Space Station in tens of minutes using standard finite element method geometry) but approximate. Deterministic tools usually provide solutions to 1-D configurations only, but recent developments have extended the range of applicability to 3-D models [Wi04]. The Monte Carlo approach, which aims at simulating the particle transport process as it happens in nature, will be described in more detail later in the paper. While analytical methods provide solutions to the equations that describe the radiation transport, in MC methods the particle propagation is simulated directly, and there is no need to write down the transport equations [Co95]. Depending on the detail required in the physics description and in the geometrical models, and on the characteristics of the radiation sources, MC calculations may in particular cases be very demanding in terms of computational resources. This limitation of MC tends to become less important with the advent of modern computers, together with variance reduction techniques and mixed forward/reverse simulations.
B. Single Particle / Collective Effects
In environments such as cosmic rays, solar protons or ions, and trapped radiation, charged particles at relatively high energy are characterized by low charge density. As a result, their motion can, to a first approximation, be modeled with a single-particle approach. This means that collective effects, which are very important for example in plasma behavior, can be neglected. This approximation, which greatly simplifies the simulation techniques, starts to show its limitations in certain extreme conditions, at lower energies and higher densities. Examples of deviation from single-particle modeling include scintillating detectors, whose photon yield often presents a non-linearity at high energy-deposition rates, and charge collection in semiconductor devices, where the modeling of electron drift is greatly affected by the presence of high charge density. In all the areas where the assumption of independent particles starts to fail, a precise and complete description of the phenomena requires interfacing to dedicated tools providing
algorithms such as Particle In Cell (PIC) plasma simulation or charge transport in finite element models.
Figure 24 Example areas where simulations need to account for non-negligible collective effects in radiation transport. (Left) Electric potential map in the vicinity of a satellite as a consequence of the activation of electric propulsion thrusters, obtained with SPIS [Fo05] (image after [Ro05]). (Right) High-density ionization charge deposition from fragments emitted in nuclear interactions, used as input to detailed TCAD simulations (after [Ba06]).
C. Look-Up Tables / Sectoring analysis
Particle transport tools utilizing a look-up table approach are widespread in the space radiation domain, partly because of the serious limitations in computing power available in the early days of computers. Especially suited for shielding studies, these tools provide fast analysis, but on the other hand they are limited to simple geometries and to a given set of shielding and detector materials. Among look-up table tools, SHIELDOSE (described later in more detail) is probably the one with the widest usage in space shielding applications. Conversely, Monte Carlo simulations, which will be described in detail in the next sections, come much closer to describing an authentic space environment but require more time and computing power.
Figure 25 Ray tracing techniques can complement 1-D shielding studies to produce approximate shielding assessment in complex geometries.
Figure 26 Dose analyses can be obtained for mono-energetic-source, 1-D shielding configurations with accurate transport models, and later used to produce dose-depth curves for arbitrary spectra (plot after the SHIELDOSE-2 [Se94] implementation in SPENVIS [He00]).
To analyze doses in more complex geometries, it is common in the engineering process to perform "sectoring" of the actual shielding and establish the amount of shielding encountered by a large number of linear rays traced from the target point to space. The shielding along each ray is used to look up the dose in the "dose vs. depth" curve produced by external tools such as SHIELDOSE-2, and the contributions are summed with the appropriate weighting for the solid angle. This process, while giving an engineering approximation of the dose, has a number of shortcomings (which derive from the limitations of the look-up table tools), in particular the lack of treatment of shielding made of different materials (which are generally converted to a thickness of equivalent aluminum) and the lack of treatment of radiation scattering and secondary production. Ray-tracing techniques have been implemented in many engineering tools dedicated to the analysis of the space environment, such as ESABASE [Es94] and SYSTEMA. The Sector Shielding Analysis Tool (SSAT) [Sa03], which will also be described later, implements the above technique using Geant4 for ray tracing of non-interacting "geantino" particles.
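The sectoring procedure described above can be sketched in a few lines. In this sketch the dose-depth table, the log-log interpolation and the ray thicknesses are all illustrative placeholders, not SHIELDOSE-2 data; a real analysis would use equal-solid-angle rays traced through the actual CAD model.

```python
import math

# Illustrative dose-depth curve: (equivalent-aluminum depth [g/cm^2], dose) pairs,
# as would be produced externally by a tool such as SHIELDOSE-2.
dose_depth = [(0.1, 1e5), (0.5, 2e4), (1.0, 6e3), (2.0, 1e3), (5.0, 1e2)]

def dose_at_depth(t):
    """Log-log interpolation of the dose-depth curve (clamped at the ends)."""
    if t <= dose_depth[0][0]:
        return dose_depth[0][1]
    for (t0, d0), (t1, d1) in zip(dose_depth, dose_depth[1:]):
        if t <= t1:
            f = (math.log(t) - math.log(t0)) / (math.log(t1) - math.log(t0))
            return math.exp(math.log(d0) + f * (math.log(d1) - math.log(d0)))
    return dose_depth[-1][1]

def sector_dose(ray_thicknesses):
    """Sum the dose over equal-solid-angle rays traced from the target point:
    each ray sees some equivalent-aluminum thickness and contributes its
    dose-depth lookup weighted by its solid-angle fraction."""
    w = 1.0 / len(ray_thicknesses)  # equal solid angle per ray
    return sum(w * dose_at_depth(t) for t in ray_thicknesses)

# Example: mostly 1 g/cm^2 shielding with one thinner 0.2 g/cm^2 "window" sector.
print(sector_dose([1.0] * 9 + [0.2]))
```

The example also makes the main shortcoming visible: a single thinly shielded sector can dominate the summed dose, while scattering and secondary production along each ray are ignored entirely.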
D. Forward / reverse transport
Within the classical "forward" approach to particle transport models, particles in a given state of position and momentum are followed as they interact with matter (as described in the first part of these notes), dissipating their energy and possibly generating secondaries.
While providing an intuitive and realistic description of the particle processes, this approach shows, in some applications, clear disadvantages. As an example, radiation effects assessment in heavily shielded systems presents practical computational problems in the accumulation of statistically significant estimates, due to the low transmission probabilities. Similarly, microdosimetry in large spacecraft structures is potentially affected by a low geometrical efficiency in isotropic radiation environments. Biasing techniques (described later in more detail) can provide a more efficient calculation while keeping the forward Monte Carlo approach. The ray-tracing technique previously introduced partially overcomes these problems, but at the cost of approximations in the physics modeling. The "adjoint" technique, used by the reactor physics community already in the 1960's [Ka68], proposes instead a different approach. We can introduce the classical transport equation in integral form

ψ(P) = S(P) + ∫ K(P′ → P) ψ(P′) dP′    (23)

where S(P) is the source density, K(P′ → P) the transport kernel, and ψ(P) the density of collisions at P, and the functional

F = ∫ ψ(P) f(P) dP.    (24)

Kalos introduces a new transport equation, "adjoint" to the conventional "forward" one:

J(P) = f(P) + ∫ K(P → P′) J(P′) dP′    (25)

It follows that:

F = ∫ J(P) S(P) dP    (26)

This means that the solution of the new J equation permits the estimation of F in a way analogous to the method used in the forward problem, but with a transport such that successive points are higher in energy and earlier in time. In addition, the J equation is suitable for Monte Carlo calculations. Such computations, starting at the detector and scoring at the source, offer several advantages, including the possibility of computing doses at a point. In addition, the adjoint function can be used as the optimum importance function for biasing in forward Monte Carlo. Among others, the NOVICE tool [Jo76][No00] and AMC [Di96] include algorithms for adjoint calculations in 1-D and 3-D geometries.
Figure 27 Diagram of the reverse Monte Carlo technique: particles are tracked, starting at the detector, backward in time and with increasing energy, until they reach the external source, where the scoring takes place.
III. In depth: Monte Carlo techniques
A. General Concepts
Monte Carlo, also known as stochastic simulation, is a generic computation method that makes use of random numbers in its algorithm for the solution of a mathematical problem. Statistical computation methods have been used since the 18th century: Buffon (1707-1788) obtained an estimate of the constant π by observing the random positions of a needle dropped on a set of equally spaced parallel lines.
For a needle of length equal to the line spacing d,

π ≈ 2 · N_throw / N_cross
Figure 28 Drawing showing the stochastic algorithm used by Buffon in his experiment for the determination of the constant π.

With respect to computational discretization methods, which are typically applied to the ordinary or partial differential equations that describe the underlying physical or mathematical system, in MC methods the physical process is typically simulated directly, and there is no need to even write down the differential equations that describe the behavior of the system. The only requirement is that the physical (or mathematical) system be described by probability density functions (PDFs) [Co95]. The modern MC method was first developed by Enrico Fermi in the 1940's to study the moderation of neutrons, and further developed by Stanislaw Ulam at the Los Alamos Laboratory for the development of the hydrogen bomb after World War II. The name originates from Nick Metropolis, who suggested it for the similarity to the randomness of the results in the gambling casinos of Monaco. MC methods are often used to describe the behavior of stochastic systems, as in statistical physics, where they help overcome the complexity given by the large number of degrees of freedom. However, MC methods are also used for the calculation of deterministic processes, when the complexity of the mechanisms involved makes analytical solutions impossible or computationally unrealistic.
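Buffon's experiment is easy to reproduce numerically, and makes a compact first example of a stochastic estimator. The sketch below uses a needle of length equal to the line spacing, so that π ≈ 2·N_throw/N_cross; the seed and throw count are arbitrary choices.

```python
import math
import random

def buffon_pi_estimate(n_throws, seed=42):
    """Estimate pi with Buffon's needle (needle length = line spacing = 1).
    A needle crosses a line when the distance from its center to the nearest
    line is less than (1/2)*sin(theta)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_throws):
        center = rng.uniform(0.0, 0.5)         # distance to nearest line
        theta = rng.uniform(0.0, math.pi / 2)  # needle angle w.r.t. the lines
        if center <= 0.5 * math.sin(theta):
            crossings += 1
    return 2.0 * n_throws / crossings  # pi ~ 2 * N_throw / N_cross

print(buffon_pi_estimate(100_000))
```

As with any MC estimator, the statistical error shrinks only as the inverse square root of the number of throws, which is why variance reduction (discussed later) matters for expensive transport problems.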
1. Monte Carlo for elementary particle transport In radiation transport, real particle processes can be described by distribution functions that represent the probability for an interaction to occur, and the features of the physical state of its
outcome. The approach of MC algorithms for solving the transport problem is to draw random samples from the distribution functions, to describe single particle interactions with matter and to choose among the allowed states after each interaction. A key element in Monte Carlo particle simulations is the correct balancing of the occurrence of each interaction type by random sampling. What follows is a simplified introduction to the application of the Monte Carlo technique to particle transport in matter; the detailed implementation of the algorithms can differ significantly from one Monte Carlo code to another. Given the probability density function (PDF) F(x) for a given particle, giving the probability of interaction in the interval (x, x+dx) [Le94] as previously described, one can introduce the cumulative distribution function (CDF) Pint(x):

Pint(x) = ∫₀ˣ F(x′) dx′ = 1 − exp(−wx)    (27)

and generate an interaction using the inverse method:

η = 1 − exp(−wx) ; x = −ln(1 − η)/w    (28)

where η is uniformly distributed in the interval [0,1]. In heterogeneous geometry models, the interaction cross sections, and thus the final interaction probability, depend on the material. The mean free path λ = 1/w can then be used to obtain a material-independent sampling:

x/λ = xw = −ln(1 − η)    (29)

The quantity x/λ is the number of interaction lengths in the given material. Random sampling is also used for the selection of the final state after the interaction, when this is not predetermined by physics constraints (e.g. conservation laws).
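In code, the inverse-transform sampling of Eq. (28) is essentially a one-liner. The sketch below draws free paths from the exponential distribution; the value of w is an arbitrary illustrative choice, not a real cross section.

```python
import math
import random

def sample_free_path(w, rng):
    """Sample the distance to the next interaction from p(x) = w*exp(-w*x),
    where w = 1/lambda is the inverse mean free path, via Eq. (28):
    x = -ln(1 - eta)/w with eta uniform in [0, 1)."""
    eta = rng.random()
    return -math.log(1.0 - eta) / w

rng = random.Random(1)
w = 0.5  # illustrative: 1/lambda in cm^-1, i.e. lambda = 2 cm
samples = [sample_free_path(w, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # sample mean approaches lambda = 2 cm
```

In a heterogeneous geometry a transport code would instead sample the material-independent quantity −ln(1 − η) once, per Eq. (29), and convert it to a physical distance using the local mean free path of each region the track traverses.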
2. Random generators
MC algorithms rely on the availability of random number sequences: in deterministic computations randomness is essential to cover the parameter phase space evenly, whereas in the simulation of stochastic processes (such as particle transport or quantum physics calculations) random numbers mimic the stochastic behavior of the state functions. The generation of long random number series is a very active research field (see for example [Ja90] for a review of random engines). It is worth noticing that in practice, for many MC algorithms, absolute randomness of the series is not a strict requirement; on the contrary, the use of predictable pseudo-random numbers (which can be reproduced knowing the generation algorithm and its input parameters or seed state) helps the reproducibility and debugging of simulation results. The quality of a generator can be measured in a variety of ways, although its repeat interval alone is a useful index of the applicability of the engine. Quoting Robert R. Coveyou (Oak Ridge National Laboratory): "The generation of random numbers is too important to be left to chance."
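The reproducibility point can be demonstrated directly: two pseudo-random generators initialized with the same seed state replay exactly the same history, which is what allows a simulated event of interest to be re-run and debugged.

```python
import random

# Two generators seeded identically produce identical sequences: a Monte Carlo
# run is fully reproducible given the algorithm and the seed state.
a = random.Random(2006)
b = random.Random(2006)
seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
print(seq_a == seq_b)  # same seed, same "pseudo-random" history
```

Production transport codes typically go further and record the generator state at the start of each event, so that any single history can be reproduced in isolation.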
B. Variance Reduction
Depending on the detail required in the physics description and in the geometrical models, and on the characteristics of the radiation sources, MC calculations may in particular cases be very demanding in terms of computational resources. Variance reduction techniques (VRT) aim to reduce the computing time while keeping the mean value of an estimator constant and reducing its variance. In "analog" simulations, the possible outcomes contributing to the estimator of an observable occur with the same frequencies as they do in nature. On the contrary, in "biased" simulations the contributions that are important to the estimator are sampled more often than the less important ones, and weights are associated to tracks to compensate. Variance reduction techniques can be classified in four groups [Br00]:
a) Truncation methods (e.g. energy, time or geometry cutoff);
b) Population control methods (such as geometry splitting and Russian roulette, energy splitting/roulette, weight cutoff, weight windows): many samples of low weight are tracked in important regions, and few samples of high weight in unimportant regions;
c) Modified sampling methods (exponential transform, implicit capture, forced collisions, source biasing): sampling from an arbitrary distribution rather than the physical probability, as long as the particle weights are then adjusted to compensate;
d) Partially deterministic methods (next-event estimators, controlling the random number sequence): controlling the normal random walk process through deterministic-like sequences.
Figure 29 Variance reduction techniques: diagram showing the "splitting" and "Russian roulette" geometry biasing algorithms, applied to example particles traveling from a region m to a region n of greater importance, or from region n to region m of lesser importance.
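The population-control methods of group (b) can be sketched as weight-preserving operations on a tracked particle. The survival probability and split factor below are illustrative choices; what matters is that the expected weight, and hence the estimator mean, is unchanged.

```python
import random

def russian_roulette(weight, survival_prob, rng):
    """Kill a low-weight track with probability 1 - survival_prob; a survivor
    carries increased weight so the expected weight is unchanged."""
    if rng.random() < survival_prob:
        return weight / survival_prob  # survivor compensates for killed tracks
    return None  # track terminated

def split(weight, n):
    """Split one track into n copies of weight/n when entering a region of
    greater importance; the total weight is conserved."""
    return [weight / n] * n

# Unbiasedness check: the average surviving weight equals the original weight.
rng = random.Random(0)
trials = 100_000
total = sum(w for w in (russian_roulette(1.0, 0.25, rng) for _ in range(trials)) if w)
print(total / trials)  # close to 1.0: the estimator mean is preserved
```

Splitting reduces variance in important regions at the cost of more tracks, while roulette saves time in unimportant regions at the cost of added variance; weight windows combine both to keep track weights within a controlled band.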
C. Interfaces
As briefly described in the first sections, radiation transport tools must be able to model a number of particle species over a wide range of energies, and realistic geometry modeling is required for a correct description of effects on spacecraft components, which are due to the local radiation environment. In addition, usability requirements often include visualization capabilities, friendly command interfaces and easy-to-use computation results. The challenge of developing an MC tool meeting all requirements is normally alleviated by modular designs allowing easy interfacing to internal and external packages for physics and geometry modeling, pre- and post-processing tasks, or visualization. Several examples of such interfaces can be found in the literature for the accomplishment of several tasks, including physics modeling [Ko03][Fa01], geometry [Ch01], and pre- and post-processing (such as TCAD interfaces [Ho05]).
Depending on the software design and on the software technology used for the implementation of the MC tool, interfacing to external packages can require very different amounts of resources.
D. Output: tallies
Depending on the level of access to the source code, on the software design and on the transport algorithms used in the tools, different methods and options are available for extracting from the simulations information about the radiation transport itself or about the effects in modeled sensitive devices. In general, the accuracy of the MC results depends on the accurate description of the entire simulation chain, including the geometry model, radiation source, transport models and response modeling. In addition, as discussed in several sections of the paper, statistics can in specific cases limit the precision of the tallies. Some MC tools implement algorithms providing an estimate of the precision of the simulation results [Br00]. Common MC outputs for studies of performance degradation in scientific detectors, commercial payload components or service elements such as solar arrays include cumulative quantities such as total ionizing dose (TID) or non-ionizing energy loss (NIEL), and fluence spectra. NIEL analyses can make use of local microscopic energy deposition tallies or of fluence-based estimates relying on external conversion tables. Microscopic NIEL calculation is only possible where a complete implementation of all relevant particle physics interactions with atomic nuclei is available (such as single screened Coulomb scattering, or elastic and inelastic neutron nuclear collisions). For the macroscopic approach, which produces NIEL estimates based on the local radiation fluence, several NIEL coefficient tables are available in the literature. They have been obtained with different methods and for different semiconductor technologies, for example by the CERN RD48 (ROSE) collaboration for protons, neutrons, electrons and pions.
Recently, new curves for damage estimates based on calculations in several semiconductor materials (including Si, GaAs and InP) [Su93], and a new collection of NIEL curves computed for typical device materials for protons and neutrons [Ju03], have been made available.
Figure 30 Fluence-to-NIEL coefficients are available from the literature for different incident particles and different materials. (Left) Calculations for protons after [Ju03] (Right) On-line compilation of coefficients for neutrons [Va00] based on [Gr92][Ko92][Hu93].
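The macroscopic (fluence-based) NIEL estimate described above amounts to folding a local differential fluence spectrum with a fluence-to-NIEL coefficient curve on a common energy grid. In the sketch below the spectrum and coefficients are illustrative placeholder numbers, not values from [Ju03] or [Su93].

```python
# Macroscopic displacement-damage estimate: fold the local proton fluence
# spectrum with fluence-to-NIEL coefficients. All numbers are illustrative.
energies_mev = [1.0, 3.0, 10.0, 30.0, 100.0]
fluence = [1e9, 5e8, 2e8, 8e7, 2e7]    # differential fluence [p/cm^2/MeV]
niel = [5e-2, 2e-2, 8e-3, 4e-3, 2e-3]  # NIEL coefficients [MeV cm^2/g]

def total_niel_dose(e, phi, k):
    """Trapezoidal integral of phi(E)*k(E) dE -> displacement dose [MeV/g]."""
    damage = [p * c for p, c in zip(phi, k)]
    return sum(0.5 * (damage[i] + damage[i + 1]) * (e[i + 1] - e[i])
               for i in range(len(e) - 1))

print(f"{total_niel_dose(energies_mev, fluence, niel):.3e} MeV/g")
```

The same convolution pattern reappears below for effective-dose conversion coefficients: any fluence-to-response folding only needs the local spectrum and an externally tabulated coefficient curve.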
The same two options (microscopic and macroscopic) are available for the calculation of LET spectra from locally transported radiation fields: through evaluation of the local energy deposit in the sensitive devices, or via conversion from fluence based on particle-, material- and energy-dependent LET tables. Requirements from exploration-mission radiation analysis include the possibility of obtaining biology-related tallies such as dose equivalent, equivalent dose, effective dose or other quantities that can then be further elaborated in risk-assessment studies. Dose-equivalent [Ic91][Ic03] calculations take into account the Relative Biological Effectiveness (RBE) of radiation as a function of particle type and energy through the use of quality factors (QF) applied to the local ionizing energy deposition. The Q(L) relationship between the QF and the LET can be implemented based on the ICRP 60 recommendations [Ic91]. Equivalent-dose estimates require more complex tallying algorithms, as global weighting factors (w_R) are applied depending on the external incident field type and energy. The values adopted in [Ic91] are being re-appraised, and new factors have been proposed in [Ic03]. Effective dose requires dose tallying in several sensitive organs of human phantoms, which is often impractical because of limitations in the geometry capabilities of the MC tool, or simply because of the computing resources required to achieve sufficient statistical significance of the results. To overcome this problem, locally transported radiation fluence spectra can be convolved with pre-computed effective-dose conversion coefficients [Pe00], in a way similar to the macroscopic NIEL algorithm previously presented. Specific tallies may also be used in advanced, dedicated tools, for example for the direct evaluation of effects on components (e.g. SEE or damage coefficients), with interfaces to external models.
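A dose-equivalent tally built on the ICRP 60 Q(L) relationship can be sketched as below. The piecewise Q(L) follows the commonly quoted ICRP 60 parameterization (with L the unrestricted LET in water, in keV/µm); the energy-deposition steps in the example are illustrative numbers only.

```python
import math

def icrp60_quality_factor(let_kev_um):
    """Piecewise Q(L) relationship per the ICRP 60 recommendations."""
    if let_kev_um < 10.0:
        return 1.0
    if let_kev_um <= 100.0:
        return 0.32 * let_kev_um - 2.2
    return 300.0 / math.sqrt(let_kev_um)

def dose_equivalent(steps):
    """Dose-equivalent tally: weight each local absorbed-dose contribution
    by the quality factor at the LET of the depositing particle.
    steps: list of (absorbed_dose_contribution, LET [keV/um]) pairs."""
    return sum(d * icrp60_quality_factor(let) for d, let in steps)

# Illustrative event: a low-LET electron step, a proton step, and an ion step.
print(dose_equivalent([(1.0e-3, 0.3), (2.0e-4, 25.0), (5.0e-5, 400.0)]))
```

An equivalent-dose tally would instead apply a single radiation weighting factor w_R selected from the external incident field type and energy, which is why it requires the more complex bookkeeping noted above.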
IV. Radiation Transport Tools
The following sections describe some of the many Monte Carlo tools that have been implemented since Fermi's pioneering developments. The selection was based on the advancement of the description of the physics processes and of the geometry models, and in addition on existing applications in the space domain.
A. Historical tools
While later in this section we give a brief description of some of the MC tools in use nowadays, it is worth mentioning again E. Fermi, who in the 1930's used Monte Carlo in the calculation of neutron diffusion, and later designed the Fermiac, a Monte Carlo mechanical device used in the calculation of criticality in nuclear reactors. Also of great importance were the studies by von Neumann, who in the 1940's developed a formal foundation for the Monte Carlo method, establishing the mathematical basis for probability density functions (PDFs), inverse cumulative distribution functions (CDFs), and pseudo-random number generators. The work was done in collaboration with Stanislaw Ulam, who realized the importance of the digital computer in the implementation of the approach.
Figure 31 (Left) The Fermiac, a Monte Carlo mechanical device used in the calculation of criticality in nuclear reactors. (Right) Von Neumann standing in front of the Institute computer (courtesy of the Archives of the Institute for Advanced Study, Princeton).
1. ETRAN / ITS
The ETRAN code [Be73] implements the results of the earlier studies by Berger (see for example [Be68a][Be68b]) on the Monte Carlo simulation of multiple Coulomb scattering of fast charged particles, to solve electron and photon transport problems. The "condensed-history" algorithm was developed as an alternative to the direct simulation of single physical scattering processes, which occur in very large numbers even over short path lengths. Charged particles interacting with matter undergo an enormous number of collisions resulting in small energy losses and deflections, and a relatively small number of "catastrophic" collisions in which they may lose a major fraction of their energy or be turned through a large angle [Be63]. The ETRAN code provides the resulting algorithms for the description of this complex process of diffusion and energy degradation.
Details of the Monte Carlo model and of the cross sections used can be found in Berger and Seltzer [Be68a, Be68b, Be70, Be74, Be88], along with numerous comparisons to experimental results. The ETRAN bremsstrahlung results, based on the use of a set of empirically corrected Bethe-Heitler bremsstrahlung cross sections, were adjusted to reflect the exact calculations of the bremsstrahlung production cross section by Pratt et al. [Pr77]. The physics-modeling expertise of ETRAN was used in the integration of the Integrated TIGER Series (ITS) Monte Carlo tool [Ha92]. ITS can be applied to the solution of linear, time-independent, coupled electron/photon radiation transport problems, also in the presence of non-uniform electric and magnetic fields, with variance reduction techniques, a user-friendly interface and several predefined output options. As the development of ITS is continuing, the tool would also deserve a place in the section on "Present" transport tools.
Figure 32 (Left) Typical particle trajectories in a foil [Be63]. (Right) Energy-pathlength plot of a hypothetical electron case history. The solid curve corresponds to a Monte Carlo model of Class II with catastrophic collisions, resulting in the production of secondary knock-on electrons (delta rays); the dotted curve corresponds to the continuous slowing down approximation [Be63][Sc59].
2. SHIELDOSE
SHIELDOSE-2 [Se80][Se94] is probably the most widely used tool for the estimation of radiation dose behind various shieldings in spacecraft. The tool utilizes a look-up table approach. The data for electrons were calculated with the Monte Carlo code ETRAN, described earlier in this section. The treatment of protons was limited to Coulomb interactions and neglected nuclear interactions; the error incurred by this simplification is generally no greater than 10-20% for shields up to about 30 g/cm². The proton calculations were done in the straight-ahead, continuous slowing down approximation, using the stopping power and range data of Barkas and Berger [Ba64]. Alsmiller et al. [Al69] have shown that the effect of neglecting angular deflections and range straggling is negligible in space-shielding calculations. SHIELDOSE-2 [Se94] differs from SHIELDOSE mainly in that it contains new cross sections, supports several new detector materials, and has a better treatment of proton nuclear interactions. The electron calculations obtained with the Monte Carlo code ETRAN include:
1. electron energy loss, including energy-loss straggling (fluctuations) due both to multiple inelastic scattering by atomic electrons and to the emission of bremsstrahlung photons;
2. angular deflections of electrons due to multiple elastic scattering by atoms;
3. penetration and diffusion of the secondary bremsstrahlung photons;
4. penetration and diffusion of energetic secondary electrons produced in electron-electron knock-on collisions (delta rays) and in the interactions of bremsstrahlung photons with the medium (pair, Compton, and photoelectrons).
The two versions of SHIELDOSE are implemented in the SPENVIS [He00] web-based framework. In summary, SHIELDOSE is fast and accurate for relatively thin shields, but has some major limitations: it handles essentially one-dimensional (spherical, planar) geometries; electron transport is based on planar one-dimensional simulations; proton-induced secondary-particle effects introduced by the shield are not explicitly treated; and it is only applicable to aluminum shielding and certain types of detector material.
Figure 33 SHIELDOSE geometry configurations (finite-thickness slab, semi-infinite medium, solid sphere, hollow sphere)
B. Present
1. GEANT4
GEANT4 [Ag03] is an open-source, object-oriented simulation toolkit that offers a wide set of electromagnetic and hadronic physics models, good particle-transport performance in complex geometry models, and the possibility of interfacing to external packages such as simulation engines and visualization or analysis tools. Although developed in the context of High Energy Physics (HEP) experiments, equal attention has been paid to nuclear physics, space applications, medical physics, astrophysics and radio-protection. Geant4 is discussed in more detail in later sections.
2. MCNPX
MCNPX [Br00][Pe05], a general Monte Carlo N-Particle transport code, represents a major extension of the MCNP code, adding the ability to track all types of particles. The code enables a relatively easy specification of complex geometries and sources. The default cut-off energies are 1 keV for photons and electrons. For photons, coherent scattering is considered. Secondary electron transport is approximated with the "Thick-Target Bremsstrahlung" (TTB) model, in which secondary electrons are terminated immediately while the bremsstrahlung photons they would have produced are generated locally and tracked. For electron transport, MCNPX uses the "condensed history" Monte Carlo method from ITS 3.0.
3. FLUKA
FLUKA [Fa01] is a general-purpose tool for calculations of particle transport and interactions with matter, covering an extended range of applications spanning from proton and electron accelerator shielding to target design, calorimetry, activation, dosimetry, detector design, Accelerator Driven Systems, cosmic rays, neutrino physics, and radiotherapy. The physics description includes models for ions at high energy, with an interface to the DPMJET code (>5 GeV/n) and to the Relativistic Quantum Molecular Dynamics (RQMD) code at lower energies. Access to the source code is restricted.
4. PHITS
The Particle and Heavy Ion Transport code System (PHITS) [Iw02] is a relatively recent development. It is based on NMTC/JAM [Ni01a] and can simulate hadron-nucleon collisions up to 200 GeV, nucleus-nucleus collisions up to several GeV/nucleon, and the transport of heavy ions, all hadrons including low-energy neutrons, and leptons. Cross sections of high-energy hadron-nucleus reactions are calculated with the hadronic cascade model JAM (Jet AA Microscopic Transport Model) [Na01], which explicitly treats all established hadronic states and resonances. The JQMD (JAERI Quantum Molecular Dynamics) [Ni95] model is integrated in the code to simulate nucleus-nucleus collisions. In the particle transport simulation, the SPAR code [Ar73] is adopted for calculating the stopping powers of charged particles and heavy ions. PHITS can also handle low-energy neutron, photon and electron transport based on evaluated nuclear data libraries, in the same manner as the MCNP4C [Br00] code.
5. NOVICE
The NOVICE code system [No00][Jo98][Jo76] calculates radiation effects in three-dimensional models of space systems. NOVICE can also be used for other radiation transport and shielding analyses not related to space activities. The algorithms contained in NOVICE have been proven in more than three decades of applications; some were developed in their original form in the early 1960s. One of the main features of the NOVICE system is the possibility of running reverse ("adjoint") Monte Carlo transport of electrons, bremsstrahlung, protons, and other heavy ions. Outputs include dose, charging, current, and any user-supplied response functions. A major option provides for the calculation of pulse-height spectra, with coincidence/anti-coincidence logic. These data can be used for upset/latchup predictions in arbitrary sensitive-volume geometries. Geometry models from CAD tools can be imported and
used for radiation transport. The tool has a relatively wide user community in research institutes and the space industry, despite limited access to the source code.
6. PENELOPE
PENELOPE [Sa01][Se97][Se03] is a general-purpose Monte Carlo code system for the simulation of coupled electron-photon transport in arbitrary materials, over an energy range from a few hundred eV to about 1 GeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated with a mixed procedure that combines detailed simulation of hard events with condensed simulation of soft interactions. In addition, a geometry package called PENGEOM permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc.
7. EGS
The EGSnrc system [Ka00] is a package for the Monte Carlo (MC) simulation of coupled electron-photon transport. Its current energy range of applicability is considered to be 1 keV to 10 GeV. EGSnrc is an extended and improved version of the EGS4 package [Ne85] originally developed at SLAC. It incorporates many improvements in the implementation of the condensed-history technique for the simulation of charged-particle transport, and better low-energy cross sections.
8. BRYNTRN / HZETRN
Commonly used particle transport programs based on the analytical or deterministic approach include the NASA BRYNTRN and HZETRN codes [Wi89][Wi95], which have been used extensively for manned space applications. The present version of the HZETRN code (which incorporates the galactic cosmic ray transport code GCRTRN and the nucleon transport code BRYNTRN) is capable of HZE ion simulations in either the laboratory or the space environment. The computational model consists of the lowest-order asymptotic approximation followed by a Neumann series expansion with non-perturbative corrections. The physical description includes energy loss with straggling, nuclear attenuation, and nuclear fragmentation with energy dispersion and downshift [Tw05]. Recent papers present the validation of the ion transport against measurements with iron ions [Wa05b]. An extension to three dimensions has recently been presented [Wi04].
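The Neumann-series idea behind the analytic codes can be illustrated with a minimal one-dimensional, straight-ahead model in which the fluence at depth is expanded in the number of nuclear collisions suffered. The interaction length and secondary multiplicity below are made-up illustrative values, not HZETRN parameters.

```python
import math

# Toy Neumann (collision-number) expansion for one-dimensional, straight-ahead
# transport: the fluence at depth x is a sum over the number of nuclear
# collisions n, each collision producing MULT secondaries on average.
# LAMBDA and MULT are illustrative values, not real HZETRN inputs.

LAMBDA = 20.0   # nuclear interaction length, g/cm^2
MULT = 1.5      # mean number of secondaries per collision

def collision_term(x, n):
    """n-collision component: (x/lambda)^n exp(-x/lambda) / n!"""
    t = x / LAMBDA
    return t ** n * math.exp(-t) / math.factorial(n)

def fluence(x, n_terms=20):
    """Total fluence relative to the incident beam, summing the series."""
    return sum(MULT ** n * collision_term(x, n) for n in range(n_terms))
```

With MULT > 1 this toy series sums to exp((MULT - 1) x / LAMBDA), showing how secondary buildup can offset attenuation; the real codes expand a full Boltzmann operator, not a scalar multiplicity.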
9. SRIM / TRIM
SRIM [Zi85][Sr03] is one of the reference tools for the calculation of atom displacement induced by ions. It is in fact a group of programs that calculate the stopping and range of ions (up to 2 GeV/amu) in matter using a quantum mechanical treatment of ion-atom collisions. Physics models include screened Coulomb collisions, with exchange and correlation interactions between the overlapping electron shells. The ion also has long-range interactions creating electron excitations and plasmons within the target; these are described by including a description of the target's collective electronic structure and inter-atomic bond structure when the calculation is set up. The charge state of the ion within the target is described using the concept of effective charge, which includes a velocity-dependent charge state and long-range screening due to the collective electron
sea of the target. TRIM (the Transport of Ions in Matter), included in SRIM, can be applied to complex targets made of compound materials with up to eight layers of different materials, to calculate the final 3D distribution of the ions and all kinetic phenomena associated with the ion's energy loss: target damage, sputtering, ionization, and phonon production. All target-atom cascades are followed in detail.
C. Future
Predicting the evolution of the field of radiation transport techniques is a difficult task, as it is linked to many factors. These include both scientific progress on transport processes, as there are several areas in which present knowledge is certainly incomplete, and technological aspects, mainly related to computational resources and methods. The increased computing power available on single machines, combined with distributed computing (GRID), will make MC calculations affordable in locations, fields and subjects where previously only analytical, often approximate, approaches could give an answer in a reasonable time. The application of MC techniques to the medical field is opening the field to entirely new, large user communities. This will certainly inject new requirements and fresh resources into the development of next-generation MC-based transport tools. Among the needs of the medical community, accurate near-real-time dosimetry has a high priority for the optimization of radiotherapy techniques and protocols. Interfaces between radiation transport modules based on advanced MC tools and CAD/TCAD geometry and analysis models, together with the integration of pre- and post-processing modules within friendly (graphical) user interfaces, will make MC techniques, which are based on highly advanced fundamental science, far more usable in the engineering community. Early examples are the successful integration of MULASSIS in the SPENVIS framework [He00] and the recent development of the RADSAFE framework [Ho05][Wa05a]. From this point of view too, MC tools based on advanced software technologies have a clear advantage.
Transport methods using non-Monte Carlo techniques, such as the deterministic solution of the Boltzmann transport equations in finite element models [Bo05], will probably continue to coexist with stochastic simulations, despite the continuous dramatic increase in available computing resources. Finally, the next years will certainly see the extension of the applicability of particle MC tools to areas not traditionally covered by single-particle transport applications, bordering or partially overlapping with fields such as plasma simulations or bio-molecular dynamics. In this respect, the challenging developments in the physics models will need to strike the right balance between the description of processes from basic principles and data-driven models, and will need to interface to external dedicated resources. Examples are extremely low-energy transport in condensed matter and its impact on DNA molecular dynamics [Ni01b], or intra-nuclear processes where knowledge is incomplete and ad-hoc solutions are proposed on a case-by-case basis.
V. GEANT4 applications and physics validation for space environment analyses
A. Introduction and Kernel
Geant4 [Ag03][Al06] is a toolkit for the simulation of elementary particles passing through and interacting with matter. Geant4 is the successor of Geant3 [Br87], the world-standard toolkit for high energy physics (HEP) detector simulation, and it is one of the first successful attempts to redesign a major package of HEP software for the next generation of experiments using Object-Oriented technologies. It builds on a rich experience of detector simulation accumulated over past decades. Since the beginning of its development, requirements have also been taken into account from heavy ion physics, CP-violation physics, cosmic ray physics, astro- and astroparticle physics, space science and engineering, and medical applications. The Geant4 simulation toolkit provides comprehensive geometry and physics modeling capabilities embedded in a robust but flexible kernel structure. The Geant4 kernel offers: a) particle tracking; b) geometry description and navigation in any kind of field; c) abstract interfaces to physics models; d) event management with a low-overhead stacking mechanism for track prioritization; e) a variety of scoring options and flexible detector-sensitivity description; f) several event-biasing (variance reduction) options; g) command definition tools with powerful range-checking capabilities; and h) interfaces to visualization and GUI systems. Geant4 offers physics models that cover a diverse set of interactions over a wide energy range, from optical photons and thermal neutrons to high-energy reactions at the Large Hadron Collider (LHC) and in cosmic ray experiments. In many cases, alternative models covering the same physics interaction over the same energy range are offered, so that users may choose according to their requirements of physics accuracy and CPU performance.
Thanks to the polymorphism mechanism of Object-Orientation, the user can easily add or replace individual physics models without affecting the others. The Geant4 simulation toolkit is developed and maintained by the international Geant4 collaboration. All of the Geant4 source code, documents, examples for users from novice to advanced, and associated data files may be freely downloaded from the collaboration's web page. The Geant4 collaboration offers an extensive user-support process, including users' workshops and tutorials, the "HyperNews" forum, e-mail services, requirement tracking, problem reporting, and public users' meetings named the Technical Forum for the formal collection of user requirements.
B. Geometry
The Geant4 kernel has a wide variety of built-in solid shapes, from the simplest Constructed Solid Geometry (CSG) shapes such as boxes and tube segments to more complicated CSG shapes such as twisted trapezoids and tori. It also has Boundary-Represented (BREP) solids, so that the user can define shapes with arbitrary surfaces including planar, second- or higher-order, Spline, B-Spline and NURBS (Non-Uniform Rational B-Spline) surfaces. Furthermore, Boolean operations are provided to combine CSG solids into more complex shapes. To place solids, in addition to simple placement, Geant4 offers various options to reduce the memory required by the most complicated and realistic geometries. These options include the so-called parameterized volume, in which a single object may represent many volumes of different positions, rotations, sizes, shapes and materials. The Geant4 geometry navigator automatically optimizes the user's geometry to ensure the best navigation performance [Co03]. In addition, through abstract interfaces, the user can easily customize the navigator to fit a particular geometry. The following figures show sample geometries implemented with Geant4 for space applications, ranging from planetary scale, to detailed satellite structures, down to micro-geometries of semiconductor devices.
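The CSG and Boolean-solid idea can be sketched with "inside" predicates combined by union and subtraction. The class and method names below are illustrative; they are not the Geant4 API.

```python
# Minimal sketch of CSG Boolean solids: each solid answers whether a point is
# inside it, and Boolean nodes combine those answers. Illustrative only; the
# Geant4 classes (G4Box, G4UnionSolid, ...) have a much richer interface.

class Box:
    def __init__(self, hx, hy, hz):
        self.h = (hx, hy, hz)          # half-lengths along x, y, z
    def inside(self, p):
        return all(abs(c) <= h for c, h in zip(p, self.h))

class Sphere:
    def __init__(self, r):
        self.r = r
    def inside(self, p):
        return sum(c * c for c in p) <= self.r ** 2

class Union:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def inside(self, p):
        return self.a.inside(p) or self.b.inside(p)

class Subtraction:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def inside(self, p):
        return self.a.inside(p) and not self.b.inside(p)

# A box with a spherical cavity carved out, e.g. a crude shielded enclosure:
shell = Subtraction(Box(2, 2, 2), Sphere(1))
```

A navigator built on such predicates would locate the volume containing any point; the real toolkit additionally computes distances to boundaries for stepping.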
Figure 34 Geant4 geometry model of interplanetary space: particle trajectories in the Earth's magnetosphere (IGRF & Tsyganenko89 models, January 1st 1982) simulated by Geant4 [De03] [Ma05]. Courtesy Laurent Desorgher, University of Bern
Figure 35 Geant4 geometry model of satellite structures and payloads: International Space Station and detailed ESA Columbus models [Er04].
Figure 36 Geant4 geometry model of micro-electronic components: 3-D schematic diagram illustrating the candidate SRAM memory technology used in simulations, complete with overlayers (Al interconnects, polysilicon, tungsten plugs, and bulk silicon) [Ba06].
Geant4 calculates the curved path of a particle trajectory in a field by integrating the equation of motion. Through abstract interfaces, Geant4 offers magnetic, electric and electromagnetic fields and equations of motion for these fields. Through the same abstract interfaces the user can also implement other types of field (such as gravitation). Saving the description of a geometrical setup is a typical requirement of many experiments, as it makes it possible to share the same geometry model across various software packages. The geometry description markup language (GDML) [Ch01] and its module for interfacing with Geant4 have been extended to facilitate a geometry description based on common tools and standards. A new module enables the user to save a Geant4 geometry description that is in memory by writing it to a text file in extensible markup language (XML) [Po06].
C. Physics Processes
In the same way that users may build complex geometries from basic shapes, Geant4 allows detailed physics suites to be assembled from basic, independent physics processes. These processes are classified into three broad areas: electromagnetic, decay, and hadronic.
Figure 37 Simplified diagram of the coverage of the Geant4 physics models, compared to the energy range of the main radiation sources and species in the space environment.
1. Electromagnetic
The Geant4 electromagnetic processes cover a range of interaction energies from 100 eV up to about 1 PeV. Most important for space radiation transport are the processes involved in shower production and propagation. These include multiple scattering, ionization, bremsstrahlung, pair production, the photoelectric effect and Compton scattering. Other processes offered by Geant4 include Rayleigh scattering, Cherenkov radiation, scintillation, transition radiation, and synchrotron radiation. The multiple Coulomb scattering model used in Geant4 belongs to the class of condensed simulations: rather than simulating each individual scattering and displacement in detail, the net effect of several scatterings is modeled for a given step length. The modeling is based on the Lewis charged-particle transport theory [Le50], and the final displacement and angle are calculated for each tracking step. Multiple scattering may be applied to all charged particles. Ionization and bremsstrahlung processes have been developed especially for electrons and positrons. Both processes contribute to energy loss, which is divided into "discrete" and "continuous" regimes. Above a given energy (corresponding to the secondary production threshold discussed below), bremsstrahlung proceeds by the emission of hard gammas. Below this energy, continuous energy loss simulates the exchange of low-energy virtual photons. Similarly for ionization, energy loss for particles above a given energy proceeds by Møller [Mø32] or Bhabha [Bh36] scattering from atomic electrons. Below this energy, continuous
energy loss is calculated. Similar ionization and bremsstrahlung processes have been developed for muons; they are specialized for the effects of the greater muon mass and use different parameterizations for energy loss. For space applications, the ionization of hadrons and ions is especially important. Geant4 has a specialized ionization process for hadrons such as protons and pions, and one for ions. Both processes use the Bethe-Bloch model for energy loss until the energy of the Bragg peak is approached, at which point they employ a specific Bragg model. The Bragg model is tuned differently for the hadron and ion ionization processes in order to take into account the large differences in projectile masses and charges.
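The Bethe-Bloch mean energy loss mentioned above can be evaluated directly. The sketch below omits density and shell corrections, so it is only a rough approximation of what a full transport code computes.

```python
import math

# Hedged sketch of the Bethe-Bloch mean energy loss used for hadron and ion
# ionization above the Bragg-peak region. No density or shell corrections,
# so values are approximate and for illustration only.

ME_C2 = 0.511    # electron rest energy, MeV
K = 0.307075     # standard Bethe-Bloch constant, MeV cm^2/mol

def bethe_bloch(E_kin, M, z, Z, A, I_eV):
    """Mean -dE/dx in MeV cm^2/g for a particle of kinetic energy E_kin (MeV),
    mass M (MeV), charge z, in a medium of atomic number Z, mass number A,
    and mean excitation energy I_eV (eV)."""
    gamma = 1.0 + E_kin / M
    beta2 = 1.0 - 1.0 / gamma ** 2
    I = I_eV * 1e-6  # eV -> MeV
    # Maximum energy transfer to an atomic electron in one collision:
    tmax = 2 * ME_C2 * beta2 * gamma ** 2 / (
        1 + 2 * gamma * ME_C2 / M + (ME_C2 / M) ** 2)
    return (K * z ** 2 * Z / (A * beta2)
            * (0.5 * math.log(2 * ME_C2 * beta2 * gamma ** 2 * tmax / I ** 2)
               - beta2))

# Example: 10 MeV proton in silicon (Z=14, A=28.09, I ~ 173 eV):
dedx = bethe_bloch(10.0, 938.27, 1, 14, 28.09, 173.0)
```

This evaluates to roughly 35 MeV cm2/g, within several percent of tabulated values; the missing corrections account for most of the difference.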
a) Delta ray production and tracking thresholds
When simulating electromagnetic showers, or any process that produces large numbers of secondary particles, the user must always decide at which energy to stop tracking particles. In many Monte Carlo codes, this energy is used as a cutoff at which all particles, including the primary, are stopped, and the remaining energy is deposited at the stopping point. This can lead to unphysical peaks in energy deposition for highly segmented or small-dimension applications. The optimum cutoff energy will also vary from material to material, which can be inconvenient if there are many materials in the simulation. Geant4 addresses both these problems by introducing a secondary production threshold instead of a cutoff. The threshold is a distance, or range, which is the same throughout the simulation geometry, regardless of the material. In each material this distance is converted to an equivalent energy, which is then used to decide whether secondary particles are produced. If a primary particle can produce a secondary that would travel more than this distance in the medium, the secondary is produced and the primary continues with reduced energy. If the secondary would not be able to travel this far, it is not produced; the primary is still tracked, but mean energy loss is used to determine how it loses energy. In this way the primary is tracked down to zero energy, when it stops.
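The range-to-energy conversion described above can be sketched as a table inversion: the same range cut is mapped to a different energy threshold in each material. The toy electron range model below is an assumed power law, not a real range table.

```python
# Sketch of the production-threshold idea: one range cut (a distance),
# converted per material into an energy threshold. The power-law range model
# below stands in for real electron range tables and is illustrative only.

def electron_range_cm(E_mev, density_gcm3):
    """Toy electron range: ~0.5 * E^1.7 g/cm^2, divided by the density."""
    return 0.5 * E_mev ** 1.7 / density_gcm3

def threshold_energy(cut_cm, density_gcm3):
    """Invert range(E) = cut by bisection to get the per-material threshold."""
    lo, hi = 1e-6, 100.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if electron_range_cm(mid, density_gcm3) < cut_cm:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The same 1 mm cut yields different energy thresholds in light and dense media:
e_si = threshold_energy(0.1, 2.33)    # silicon
e_pb = threshold_energy(0.1, 11.35)   # lead
```

As expected, the same geometric cut corresponds to a higher energy threshold in lead than in silicon, which is precisely the material-independence benefit described in the text.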
b) Low energy processes
In many cases, details like atomic shell structure, screening and ionization potentials are of critical importance. This is especially true for incident particles of 1 MeV and below. For this reason Geant4 provides two sets of electromagnetic processes. One is meant to cover energies from 1 keV up to about 1 PeV. The other, called the low energy processes, covers the range from 250 eV up to 100 GeV for electrons. Both sets take into account atomic shell structure and ionization potentials, but the low energy processes include more detail, which in large part is taken from the evaluated data libraries EPDL97 [Cu97], EEDL [Pe97a], and EADL [Pe97b]. Low energy versions of the following processes are available: electron ionization, electron bremsstrahlung, hadron ionization, Compton scattering, the photoelectric effect, pair production, and Rayleigh scattering. Another option for electromagnetic physics between 100 eV and 1 GeV is the Penelope [Sa01] processes, valid for positrons, electrons and gammas. Ion energy loss, which is of relevance for single event effects in space applications, is described by specialized models in the low energy package, based on ICRU-49, Ziegler 1977 [Zi77], or Ziegler 1985 [Zi85] (scaled by the effective ion charge using the Brandt-Kitagawa model [Br82]).
c) Optical Photons
Although technically part of the electromagnetic sector, optical photons are treated uniquely in Geant4 because of their long wavelengths: wavelike behavior must be simulated in terms of
particles. As a result, quantities like polarization can be treated, but not the overall phase of the wave. Optical photons can be used as incident particles or be generated as secondaries from the scintillation and Cherenkov processes listed above. Four processes may be assigned to an optical photon: reflection/refraction, absorption in bulk material, Rayleigh scattering and wavelength shifting. Each of these processes requires some information to be specified about the material in which the photons travel, such as the index of refraction, the absorption length, the spectrum and timing of scintillation light, and various surface properties.
2. Decay
Geant4 provides for weak and electromagnetic decays of long-lived, unstable particles either at rest or in flight. Decay modes are chosen according to the branching ratios in the decay table for each particle. The final states of the decay modes are calculated according to several models, including V-A theory, Dalitz theory or simple phase space. Users may assign specialized decay channels and lifetimes to these particles. Of more importance to radiation transport and effects is the radioactive decay process. It handles β-, β+ and α decay of nuclei as well as electron capture and isomeric transitions. It may be assigned to ions and may occur in flight or at rest. This process is closely connected to the hadronic processes discussed below and employs some of the nuclear evaporation models.
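The sampling at the heart of a decay process, an exponential decay time plus a channel chosen by branching ratio, can be sketched as follows; the lifetime and branching values are illustrative, not taken from any nuclide.

```python
import math
import random

# Sampling sketch for a radioactive decay process: when the nucleus decays
# (exponential in the mean lifetime) and how (channel picked by branching
# ratio). Lifetime and branching values below are illustrative only.

def sample_decay_time(tau, rng):
    """Inverse-transform sampling of t ~ Exp(mean tau)."""
    return -tau * math.log(1.0 - rng.random())

def sample_channel(branching, rng):
    """branching: list of (mode, probability) pairs summing to 1."""
    u, acc = rng.random(), 0.0
    for mode, p in branching:
        acc += p
        if u <= acc:
            return mode
    return branching[-1][0]  # guard against floating-point rounding

rng = random.Random(42)
channels = [("beta-", 0.70), ("isomeric transition", 0.25), ("alpha", 0.05)]
times = [sample_decay_time(10.0, rng) for _ in range(20000)]
mean_t = sum(times) / len(times)  # close to the mean lifetime tau = 10
```

A transport code performs exactly this kind of draw per unstable nucleus, then hands the daughter and emitted particles back to the tracking loop.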
3. Hadronic processes and models
Geant4 provides four basic processes for hadronic interactions: elastic and inelastic scattering, capture and fission. For each of these processes the user may choose among several available models. In space applications, where the energy range of interest covers many orders of magnitude, more than one model will be required for inelastic scattering, and perhaps for elastic scattering as well. This is because there is no single theory of hadronic interactions that applies to all energy ranges. Also, in some energy ranges there is more than one model available, so that users may choose the one that works best for their application.
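The assembly of a physics list from energy-windowed models can be sketched as a lookup over registered validity ranges. The model names and limits below are loosely based on the text, but the data structure is illustrative, not the Geant4 API.

```python
# Sketch of registering several hadronic inelastic models, each valid in an
# energy window, and looking up which apply at a given energy. Names and
# limits are illustrative; Geant4 registers models per process in C++.

MODELS = [
    # (name, E_min in MeV, E_max in MeV)
    ("Bertini cascade",     0.0,     10.0e3),   # up to ~10 GeV
    ("LEP (parameterized)", 5.0e3,   25.0e3),   # bridging region
    ("Quark-gluon string",  15.0e3,  1.0e8),    # high energy
]

def pick_model(E_mev):
    """Return the names of all registered models valid at this energy; a real
    toolkit selects one of them per interaction (e.g. by a weighted choice in
    the overlap region)."""
    return [name for name, lo, hi in MODELS if lo <= E_mev < hi]
```

The deliberate overlaps mirror the text: in a transition region two models are valid at once, and the toolkit blends between them rather than switching abruptly.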
Figure 38 Summary of Geant4 models for hadronic collisions and nuclear de-excitation (after [Tr04b])
a) Models for elementary particle projectiles
These models can be classified roughly according to the energy range to which they apply. At the highest energies, from about 25 GeV up to a few TeV, three models are available: high energy parameterized (HEP), quark-gluon string (QGS) and Fritiof fragmentation. At high energies, projectiles are sensitive to small-scale details within a nucleus, and the projectile hadron is therefore likely to interact with a single proton or neutron. During this interaction, quark-antiquark pairs (quark strings) may be excited and then decay to produce more hadrons. Some of these hadrons will be produced outside the nucleus; others may initiate a cascade with other particles within the nucleus. The QGS and Fritiof models are both theory-based and differ in the way the quark strings fragment to produce hadrons. The HEP model is based on the GHEISHA model of Geant3 and, for the most part, uses parameters taken from fits to data rather than from theory. There is a transitional region, between 5-10 GeV and 25 GeV, which is too low in energy for the high energy and string fragmentation models to apply, yet too high for the cascade models described below. Geant4 fills the gap with the low energy parameterized (LEP) model. Like the HEP model, this is derived from the GHEISHA model of Geant3 and gets most of its parameters from fits to data rather than from theory. The energy range from 100 MeV to 10 GeV is covered by the binary cascade [Ge06][Be00][We04b], Bertini cascade [Gu68][Be71] and low energy parameterized models [Ge06][Tr04b]. These models describe the propagation of a hadron through the nuclear medium, producing secondary hadrons, which in turn produce tertiary hadrons, and so on, until the primary particle energy is dissipated. At the end of this cascade, the nucleus is left in a highly excited state, which must be de-excited by another model.
For incident hadrons of energy 200 MeV and below, there is not enough energy to generate a cascade, but the nucleus can nevertheless reach a highly excited state through the formation of particle-hole states. The pre-compound model was designed for this purpose. It causes the particle-hole states to decay, leaving the nucleus in a cooler, equilibrium state. Several equilibrium models are then automatically invoked, such as gamma, neutron, proton and fragment emission, which take the nucleus to its ground state. All of the above models may be used for incident protons and neutrons, and most may be used for pions. The HEP, LEP, QGS, Fritiof and Bertini models are also valid for kaons. The LEP, HEP and Bertini models may be used for long-lived hyperons, and the LEP and HEP models may be used for anti-nucleons and anti-hyperons.
Figure 39 Schematic presentation of the Bertini model for intra-nuclear cascades. A hadron with 400 MeV energy forms an intra-nuclear cascade (INC) history. Crosses indicate the Pauli exclusion principle in action.
b) Models for ion projectiles
Other models of special importance to radiation effects deal with the hadronic interactions of ions. The LEP model, discussed above, may be applied to deuterons, tritons and alphas at incident energies of 100 MeV and below. For higher incident energies (below 10 GeV/n), a version of the binary cascade model has been developed which may be applied to incident ions with A ≤ 12. It may also be used for incident ions with A > 12 if the target consists of nuclei with A ≤ 12. This model produces final-state fragments based on a statistical model and does not currently include projectile-target correlations. Another model in the same energy range is Wilson abrasion (0-10 GeV), a simplified macroscopic model for nucleus-nucleus interactions based largely on geometric arguments rather than on a detailed consideration of nucleon-nucleon collisions; as a result it is faster than models such as the binary cascade. Geant4 also provides interfaces to the external models JAM and JQMD [Ko03], which simulate nucleus-nucleus interactions at higher energies. The EM dissociation model (0-100 TeV) can be used to simulate nucleus-nucleus collisions in which the exchanged virtual photons become hard enough that nucleons and nuclear fragments are ejected.
c) Models for low energy neutrons
Neutron-induced reactions, including capture, fission, inelastic and elastic scattering, can produce gammas, charged particles and heavy nuclear fragments, which may affect electronic devices. In some cases a detailed treatment of neutron interactions and cross sections down to thermal energies is required. For this, Geant4 provides the high precision neutron package. The cross sections, channels and final-state distributions are taken almost entirely from tabulations of neutron data from several different databases [En91][Fe98][Je95]. As a result, the simulated reactions agree very well with experiment in cases where the final states have been well measured.
D. Physics Validation
The above processes are compared with data whenever possible in order to validate the physical models and their implementations. A sampling of these comparisons is included below, for both electromagnetic and hadronic processes.
1. Electromagnetic
Figure 40 shows the multiple scattering of 6.56 MeV protons from 92.6 µm of Si [Ag03]. Data points are shown in gray, and the solid-line histogram is the result of Geant4. Agreement with data is uniformly good at all measured angles. Figure 41 compares Geant4 bremsstrahlung with data from 50 MeV electrons passing through a thin Be target (top) as well as a thick Be/W target (bottom) [Iv04]. Black dots represent the data and the histogram is the result of Geant4. The top plot shows relative dose versus radius from the beam and the bottom plot shows energy deposition versus radius from the beam. For the thin target the agreement is excellent, while for the thick target Geant4 produces excess energy deposition at larger radii.
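As a hedged back-of-envelope cross-check of the multiple-scattering benchmark above, the Highland formula gives the characteristic scattering angle for 6.56 MeV protons in 92.6 µm of silicon. This is only an estimate; Geant4's model is based on Lewis theory, not on the Highland parameterization.

```python
import math

# Back-of-envelope estimate of the characteristic multiple-scattering angle
# via the Highland formula, applied to the 6.56 MeV proton / 92.6 um Si
# benchmark. An estimate only, not the Lewis-theory model Geant4 uses.

def highland_theta0(p_mev, beta, z, x_cm, X0_cm):
    """Characteristic multiple-scattering angle theta_0 in radians, for a
    particle of momentum p (MeV/c), velocity beta, charge z, traversing
    thickness x of a material with radiation length X0."""
    t = x_cm / X0_cm
    return 13.6 / (beta * p_mev) * abs(z) * math.sqrt(t) * (
        1.0 + 0.038 * math.log(t))

M = 938.27                        # proton mass, MeV
E_kin = 6.56
E = M + E_kin
p = math.sqrt(E * E - M * M)      # ~111 MeV/c
beta = p / E
theta0 = highland_theta0(p, beta, 1, 92.6e-4, 9.37)  # X0(Si) ~ 9.37 cm
theta0_deg = math.degrees(theta0)
```

The result is a degree-scale deflection, consistent with the angular range over which the benchmark distribution is plotted.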
Figure 40 Multiple scattering of 6.56 MeV protons on 92.6 µm of Si. (Plot after [Ag03].)
Figure 41 Dose in thin and thick targets due to Bremsstrahlung of 50 MeV electrons. (Plot after [Iv04].)
Figure 42 shows proton ionization loss in Fe, plotted versus the log of the proton energy. The dashed line represents the current, improved ionization model and agrees very well with the black data points [Bu05].
Figure 42 Proton ionization loss in iron. (Plot after [Bu05].)
2. Hadronic
Figure 43 compares the result of the Geant4 binary cascade model with the neutron yield from 256 MeV protons incident upon Be, Al, Fe and Pb (Be top, Pb bottom) [Iv03]. Agreement with data is generally good above a neutron energy of about 50 MeV. At lower energies Geant4 produces an excess of neutrons; this excess, however, is small for heavy nuclei like Pb. Figure 44 shows the neutron fluence resulting from 400 MeV/nucleon Fe incident on a thick Cu target. The data (black dots) are compared with the prediction of the binary cascade light-ion model at 0, 7.5, 15, 30, 60 and 90 degrees [Ko05b][We04a]. The horizontal axis is the detected neutron energy and the vertical axis is the neutron fluence (n/MeV/sr). Near the endpoint of the spectra, the agreement with data is good in most cases. At lower neutron energies and angles Geant4 under-estimates the data; at 60 and 90 degrees, however, agreement is good at all energies. Validation of models and processes in Geant4 continues as new data appear and as new models are developed. The small sample shown above indicates that agreement with data is generally better for the electromagnetic processes than for the hadronic ones. This is at least in part due to the existence of a comprehensive and well-tested electromagnetic theory. The agreement of hadronic processes with data can be expected to improve as the models improve and as more data are taken in previously untested energy ranges.
Figure 43 Neutron yield from 256 MeV protons on Be, Al, Fe and Pb at various angles. (Plot after [Iv03].)
Figure 44 Neutron fluence at various angles as a result of 400 MeV/nucleon Fe bombarding a thick Cu target. (Plot after [Ko05b].)
E. Geant4-based radiation analysis and tools
Being a toolkit, Geant4 does not offer ready-to-use executables for simulating radiation transport in matter, nor does it provide tools for the analysis of radiation effects; it is the responsibility of the user to develop an appropriate application on top of the Geant4 libraries. The majority of published work performed with Geant4 to date is based on private, customised applications that use the Geant4 libraries for the particle transport. The European Space Agency (ESA) has recently focused on making Geant4 readily accessible to a variety of engineering applications and WWW-based radiation effects studies through the development of easy-to-use interfaces, advanced models and auxiliary software. The tools presented in this section therefore represent only a part of the analyses performed with Geant4; they are given as examples of the capabilities of the toolkit, together with some significant results obtained in recent years. For several of the tools there is open access to the source code, and in some cases a public web interface is provided for simple applications.
1. Sector Shielding Analysis Tool (SSAT)
The Sector Shielding Analysis Tool (SSAT) is among the first ready-to-use tools based on Geant4 supported by ESA programs. SSAT [Sa03] performs ray tracing from a user-defined point within a Geant4 geometry to determine shielding levels (i.e. the fraction of solid angle for which the shielding is within a defined interval) and the shielding distribution (the mean shielding level as a function of look direction). To achieve this the tool uses the fictitious geantino particle, which undergoes no physical interactions but flags boundary crossings along its straight trajectory. The positions of these boundary crossings, together with the density of the material through which the particle has passed, can be used to profile the shielding (in g/cm2) for a given point within the geometry. The tool produces distributions of shielding material and thickness, as viewed from that point, as a function of direction. This approach is highly useful for calculating the absorbed radiation dose and for finding optimal shielding geometries. The tool has recently been upgraded with the direct calculation of doses based on externally provided dose-depth curves.
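The sector-analysis idea can be sketched in a few lines. The code below is a hypothetical illustration (not SSAT itself): it fires geantino-like rays isotropically from a point and sums path length times density along each ray to obtain the shielding depth per direction; the `trace_ray` geometry callback and all numbers are made up for the example.

```python
import math
import random

def sector_shielding(origin, trace_ray, n_rays=1000, rng=random.Random(7)):
    """Illustrative sector-analysis loop in the spirit of SSAT: fire rays
    isotropically from `origin` and record the areal density (g/cm^2) of
    material traversed in each sampled direction."""
    depths = []
    for _ in range(n_rays):
        # Sample an isotropic direction: uniform cos(theta) and phi.
        cos_t = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        direction = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
        # trace_ray returns (path_length_cm, density_g_cm3) for each region
        # crossed; the sum of length*density is the shielding depth.
        areal = sum(length * rho for length, rho in trace_ray(origin, direction))
        depths.append(areal)
    return depths

# Toy geometry: a uniform 0.5 cm aluminium shell around the point, so every
# ray sees 0.5 cm x 2.70 g/cm^3 = 1.35 g/cm^2 of shielding.
depths = sector_shielding((0.0, 0.0, 0.0), lambda o, d: [(0.5, 2.70)], n_rays=100)
mean_depth = sum(depths) / len(depths)
```

In a real geometry the histogram of `depths` is the shielding distribution that can then be folded with a dose-depth curve.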
Figure 45 The SSAT ray-tracing tool [Sa03] provides shielding profiles by material, and dose estimates based on external dose-depth curves. The application shown is the radiation analysis for the ESA ConeXpress mission.
2. PLANETO-COSMICS
The PLANETO-COSMICS Geant4-based tool, developed at the University of Bern, enhances and generalizes the earlier MAGNETO-COSMICS and ATMO-COSMICS tools [De03][Ma05][De05], which were developed for the transport of particles in the Earth's magnetosphere and atmosphere. The new tool has been extended [Pl05] with a description of the local magnetic field of Mars [Gu05a] and of the dipole field of Mercury [Gu05b]. Models for the planetary atmospheres are also included. Among other features, MAGNETO-COSMICS can produce cosmic-ray cutoff rigidities at single points, along trajectories, or on maps.
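MAGNETO-COSMICS obtains cutoff rigidities by numerically tracing particle trajectories in realistic field models. For a pure dipole field there is a textbook closed-form stand-in, the Störmer estimate, sketched below as a back-of-the-envelope check (the commonly quoted constant of about 14.9 GV for the Earth's dipole moment is an assumption of this sketch, not taken from the tool):

```python
import math

def stormer_vertical_cutoff_gv(lat_deg, r_earth_radii=1.0):
    """Störmer estimate of the vertical geomagnetic cutoff rigidity (GV) in a
    pure dipole field: R_c ~ 14.9 * cos^4(lambda) / r^2, with lambda the
    magnetic latitude and r the distance in Earth radii. Illustrative only;
    the real calculation traces trajectories in a full field model."""
    lam = math.radians(lat_deg)
    return 14.9 * math.cos(lam) ** 4 / r_earth_radii ** 2

# Low-rigidity cosmic rays are excluded at the equator but reach high
# latitudes, which is why polar routes see a harsher particle environment.
equatorial = stormer_vertical_cutoff_gv(0.0)   # ~14.9 GV
polar = stormer_vertical_cutoff_gv(70.0)       # well below 1 GV
```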
Figure 46 (Left) ATMOCOSMICS: atmospheric ionization rate induced by galactic cosmic rays during the minimum and maximum of solar activity, respectively [De05]. The dotted lines represent the measurements obtained by Neher from 1959 to 1965 [Ne54][Ne58][Ne67]. (Right) PLANETOCOSMICS: magnetic field at the surface of Mars [Gu05a].
3. Multi-Layered Shielding Simulation Software (MULASSIS)
The Multi-Layered Shielding Simulation Software (MULASSIS) [Le02] is a successful example of a ready-to-use analysis tool with a user-friendly interface. It performs fluence, total ionizing dose (TID), pulse height spectrum (PHS), dose equivalent and NIEL dose analysis in 1-D (more precisely, 1.5-D) geometries made of several layers of arbitrary materials. A web interface to MULASSIS has been developed in SPENVIS [He00].
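MULASSIS performs a full Monte Carlo transport through the layer stack. In the crudest straight-ahead approximation, a layered shield reduces to a single areal density that can be looked up on an externally provided dose-depth curve, as the hypothetical sketch below shows (the table values and layer stack are made-up numbers, not MULASSIS output):

```python
import bisect
import math

def dose_from_depth_curve(depth, table_depths, table_doses):
    """Hypothetical helper: log-log interpolation of a tabulated dose-depth
    curve at areal density `depth` (g/cm^2)."""
    i = min(max(bisect.bisect_left(table_depths, depth), 1), len(table_depths) - 1)
    x0, x1 = math.log(table_depths[i - 1]), math.log(table_depths[i])
    y0, y1 = math.log(table_doses[i - 1]), math.log(table_doses[i])
    return math.exp(y0 + (y1 - y0) * (math.log(depth) - x0) / (x1 - x0))

# Made-up dose-depth table: dose in rad(Si) behind shielding of given depth.
table_depth = [0.1, 0.5, 1.0, 2.0, 5.0]   # g/cm^2
table_dose = [1e5, 2e4, 8e3, 3e3, 6e2]    # rad(Si)

# A Ti + Al stack collapses to one areal density in this approximation.
layers = [(0.2, 4.51), (0.3, 2.70)]        # (thickness cm, density g/cm^3)
areal = sum(t * rho for t, rho in layers)  # 1.712 g/cm^2
dose = dose_from_depth_curve(areal, table_depth, table_dose)
```

The Monte Carlo treatment is needed precisely where this shortcut fails: secondary particle production, material-dependent nuclear interactions and oblique incidence.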
Figure 47 MULASSIS: The shield shown on the left is made of three layers (titanium, aluminum and carbon fiber, from left to right), and a thin silicon detector layer is placed behind the carbon fiber layer. Superimposed on top of the geometry are the interaction tracks of 1 GeV protons incident from the left. The image on the right is a screenshot of the MULASSIS web interface inside the SPENVIS framework.
Figure 48 (Left) A comparison of the secondary proton and neutron spectra derived from MULASSIS/Geant4 4.1 (2002) and MCNPX simulations, with trapped protons as the incident particles. Overall agreement is achieved, for the protons in particular. The differences in the neutron spectra reflect the different physics models used in the two codes: the Pre-Compound model in MULASSIS and the HETC intra-nuclear cascade model in MCNPX. (Right) Total ionizing doses for the Si detector behind Al shields of various thicknesses. For comparison, doses predicted by SHIELDOSE-2 for the same trapped proton source are also plotted.
4. Geant4 Radiation Analysis for Space (GRAS)
Geant4 Radiation Analysis for Space (GRAS) [Sa05] is a tool for simulating the effects of the space radiation environment. To allow for flexibility, a modular approach has been followed for the geometry model, the physics description and the extraction of the radiation effect data. The main inputs to the Geant4 simulation are the geometry model (for which the default format is GDML) and the physics models to be used. GRAS provides a friendly interface for the input of the geometry model and for the choice of physics models: the user constructs the list of physics models for a given application via a scripting interface, again with a modular approach. In addition, GRAS offers ready-to-use modules for the analysis of the effects of radiation in user-defined sensitive devices. Numerous kinds of analysis are offered, such as cumulative ionizing and NIEL dose, equivalent dose and dose equivalent, LET, charge deposit and fluence in 3-D geometry models. The software design allows easy integration of new geometry interfaces, physics models and analysis capabilities. The scripting interface avoids the need for C++ programming and eases the integration of the tool into external frameworks.
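The modular-analysis idea can be illustrated with a minimal sketch. The class below is hypothetical (it is not GRAS code and its interface is invented for the example): an analysis module accumulates the energy deposited per user-defined sensitive volume during stepping, then converts it to ionizing dose on demand.

```python
class DoseModule:
    """Hypothetical analysis module in the spirit of GRAS's modular design:
    accumulate deposited energy per sensitive volume, report dose on demand."""
    def __init__(self):
        self.edep_kev = {}

    def process_step(self, volume, edep_kev):
        # Called once per simulation step inside a registered volume.
        self.edep_kev[volume] = self.edep_kev.get(volume, 0.0) + edep_kev

    def dose_rad(self, volume, mass_g):
        # Convert accumulated keV to rad: 1 keV = 1.602e-9 erg, 1 rad = 100 erg/g.
        return self.edep_kev.get(volume, 0.0) * 1.602e-9 / (100.0 * mass_g)

mod = DoseModule()
for edep in (1.0, 2.5, 0.5):                  # keV deposited in three steps
    mod.process_step("si_detector", edep)
dose = mod.dose_rad("si_detector", mass_g=1e-3)
```

Decoupling the tally (the module) from the transport in this way is what lets new analysis types be added without touching the physics or geometry code.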
Figure 49 (Left) The GRAS framework for the radiation effect analysis modules. (Right) Simulation of the HERSCHEL PACS photoconductor ground proton beam test. (After [Sa05])
5. Monte Carlo Radiative Energy Deposition (MRED) – Vanderbilt
Monte Carlo Radiative Energy Deposition (MRED) [Ho05][Wa05a] is a radiation effects research tool developed at Vanderbilt University, based on the Geant4 libraries [Ag03] and the Synopsys (formerly ISE) TCAD tools [Sy04], for on-orbit predictions and for technology evaluation. MRED adds to the Geant4 physics a model for screened Coulomb scattering of ions [Me05], and includes tetrahedral geometric objects [Ko05a], a cross-section biasing and track weighting technique for variance reduction, and a number of additional features relevant to semiconductor device applications. The Geant4 libraries frequently contain alternative models for the same physical processes, and these may differ in level of detail and accuracy. MRED is structured so that all physics relevant to radiation effects applications is available and selectable at run time. This includes electromagnetic and hadronic processes for all relevant particles, including elementary particles that live long enough to be tracked.
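The cross-section biasing with track weighting mentioned above can be demonstrated in isolation. The sketch below is illustrative (not MRED code): rare nuclear reactions are sampled with an artificially inflated probability, and each sampled event carries a compensating weight so that the weighted mean remains an unbiased estimate while the statistical error shrinks.

```python
import random

def biased_reaction_estimate(n_ions, p_react, bias, rng=random.Random(1)):
    """Minimal sketch of cross-section biasing with track weighting: sample
    reactions with probability bias*p_react, tally each with weight 1/bias,
    so the expectation of the estimate is still p_react."""
    p_biased = min(bias * p_react, 1.0)
    weight = p_react / p_biased          # 1/bias when not clipped
    total_weight = 0.0
    for _ in range(n_ions):
        if rng.random() < p_biased:      # biased (frequent) reaction sampling
            total_weight += weight       # compensating reduced weight
    return total_weight / n_ions         # unbiased estimate of p_react

# With bias=100, a 1e-4 reaction probability is sampled ~100x more often,
# giving far more nonzero events per ion for the same unbiased mean.
est = biased_reaction_estimate(200_000, p_react=1e-4, bias=100.0)
```

This matters for single event rate prediction because the nuclear reactions that dominate some upset cross sections are individually very rare per incident ion.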
Figure 50 The RADSAFE concept, based on the integration of the Geant4 libraries into a wider radiation effects analysis framework for on-orbit predictions and for technology evaluation. The framework includes the Synopsys (formerly ISE) TCAD tools.
Figure 51 Effects on electronic components from ion nuclear interactions, as obtained with the Vanderbilt RADSAFE/MRED tool. (Left) The raw counts spectrum of deposited charge for 523 MeV Ne in silicon. The solid curve represents both direct and indirect ionization processes, whereas the diamond-dotted line represents direct ionization from the primary ion [Wa05a]. (Right) Comparison of the MRED-based cross-section predictions for a circuit Q of 1.21 pC. Q was derived by a visual best fit of the integral cross-section curves for all ions [Wa05a].
Conclusion
Particle transport plays an important role in the complex process of analyzing the effects of radiation on devices in space, as it helps in two crucial tasks: the propagation of the radiation from outer space to the environment local to the devices inside the spacecraft, and the detailed interaction of that local environment with the devices. Because simulations directly tackle the fundamental science behind the interaction of particle radiation with spacecraft devices, they can play a major role both in understanding system performance in space and in guiding the design of flight components. This section introduced the basic concepts behind radiation transport algorithms, focusing in particular on the features of the Monte Carlo method. Particular attention has been given to the physics modeling of Geant4 and related validation results, with the aim of giving an overview of the capabilities of modern Monte Carlo tools, and thus increasing both the confidence in their results and the awareness of the critical areas in which the understanding of the underlying phenomena is still incomplete.
References
[Ag03] S. Agostinelli et al., "Geant4 – A Simulation Toolkit", Nucl. Instrum. Meth. A 506, 2003, p. 250. URL: http://cern.ch/geant4
[Al06] J. Allison et al., "Geant4 developments and applications", IEEE Trans. Nucl. Sci. 53, 1 (2006) 270-8
[Al69] Alsmiller, R. G., J. Barish, and W. W. Scott, Nucl. Sci. and Eng., 35, 1969.
[Ar73] Armstrong T.W. and Chandler K.C., "Stopping Powers and Ranges for Muons, Charged Pions, Protons, and Heavy Ions", Nucl. Instrum. Methods 113, 1973, 313.
[Ba04] L.P. Barbieri and R.E. Mahmot, "October-November 2003's space weather and operations lessons learned", Space Weather, Vol. 2, 15-29, 2004.
[Ba06] D.R. Ball, K.M. Warren, R.A. Weller, R.A. Reed, A. Kobayashi, J.A. Pellish, M.H. Mendenhall, C.L. Howe, L.W. Massengill, R.D. Schrimpf, and N.F. Haddad, "Simulating Nuclear Events in a TCAD Model of a High-Density SEU Hardened SRAM Technology", RADECS 2005; accepted for publication in IEEE Trans. Nucl. Sci., 2006.
[Ba64] Barkas, W. H., and M. J. Berger, NASA Publ. SP-3013, 1964.
[Ba96] R. M. Barnett et al., "Review of Particle Physics 1996", Phys. Rev. D 54 (1996) 1708
[Be00] L. Bellagamba, A. Brunengo, E. Di Salvo, and M. G. Pia, "Object-oriented design and implementation of an intra-nuclear transport model," INFN, Rep. INFN/AE-00/13, Nov. 2000.
[Be34] H.A. Bethe and W. Heitler, Proc. Roy. Soc. (London), 1934
[Be59] H. A. Bethe and J. Ashkin, "Passage of radiation through matter," in Experimental Nuclear Physics, Vol. 1, E. Segrè, Ed., Wiley, New York, 1959.
[Be63] M. J. Berger, "Monte Carlo Calculation of the penetration and diffusion of fast charged particles", in B. Alder, S. Fernbach and M. Rotenberg (Eds.), Methods in Comput. Phys., Vol. 1 (Academic, New York, 1963) pp. 135-215.
[Be68a] Berger, M. J., and S. M. Seltzer, NASA Publ. SP-169, 1968.
[Be68b] Berger, M. J., and S. M. Seltzer, Computer Code Collection 107, Oak Ridge Shielding Information Center, 1968.
[Be70] Berger, M. J., and S. M. Seltzer, Phys. Rev. C2, 621, 1970.
[Be71] H. W. Bertini and P. Guthrie, "Results from Medium-Energy Intranuclear-Cascade Calculation", Nucl. Phys. A169, 1971.
[Be73] M. J. Berger, "Improved point kernels for electron and beta ray dosimetry", NBS Report NBSIR 73-107, 1973.
[Be74] Berger, M. J., and S. M. Seltzer, Nucl. Instr. and Meth. 119, 157, 1974; Seltzer, S. M., National Bureau of Standards Publ. NBS-IR 74457, 1974.
[Be88] S. M. Seltzer, "An overview of ETRAN Monte Carlo methods", in Monte Carlo Transport of Electrons and Photons, T. M. Jenkins, W. R. Nelson, A. Rindi, A. E. Nahum, and D. W. O. Rogers, Eds., Plenum Press, New York, 1988, pp. 153-182.
[Be90] M.J. Berger, J.H. Hubbell, S.M. Seltzer, J. Chang, J.S. Coursey, R. Sukumar, and D.S. Zucker, "XCOM: Photon Cross Sections Database". URL: http://physics.nist.gov/PhysRefData/Xcom/Text/XCOM.html
[Bh36] H.J. Bhabha, "The scattering of positrons by electrons with exchange on Dirac's theory of the positron", Proc. R. Soc. A 154, 1936, 195-206
[Bo01] J. Bogart, D. Favretto, R. Giannitrapani, "XML for Detector Description at GLAST", CHEP01 Conf. Proceedings.
[Bo05] E. Boman, J. Tervo, M. Vauhkonen, "Modelling the transport of ionizing radiation using the finite element method", Phys. Med. Biol. 50 (2005) 265-280
[Br00] J. F. Briesmeister, Ed., "MCNP – A General Monte Carlo N-Particle Transport Code, Version 4C," LA-13709-M, 2000.
[Br82] W. Brandt and M. Kitagawa, Phys. Rev. B25 (1982) 5631
[Br87] R. Brun, F. Bruyant, M. Maire, A. C. McPherson, and P. Zanarini, "GEANT3," CERN DD/EE/84-1, revised 1987 and subsequently.
[Bu05] H. Burkhardt et al., "GEANT4 Standard Electromagnetic physics package", Proceedings of MC2005, Chattanooga, Tennessee, April 17-21, 2005, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2005).
[Ch01] R. Chytracek, "The Geometry Description Markup Language", CHEP01 Conf. Proceedings. URL: http://cern.ch/gdml
[Co03] G. Cosmo, "Modeling Detector Geometries in Geant4", Proceedings of the 2003 IEEE NSS/MIC/RTSD Conference, Portland (Oregon, USA), October 2003
[Co95] Computational Science Education Project, "Introduction to Monte Carlo Methods", 1995
[Cu97] D. Cullen, et al., EPDL97: the Evaluated Photon Data Library, '97 version, UCRL-50400, Vol. 6, Rev. 5, 1997.
[Da04] E.J. Daly, "Outlook on space weather effects on spacecraft", in Effects of Space Weather on Technology Infrastructure, pages 91-108, Kluwer Academic Publishers, 2004.
[Da88] Daly E.J., "The Evaluation of Space Radiation Environments for ESA Projects", ESA Journal 12, 229 (1988).
[De03] L. Desorgher, E.O. Flueckiger, M.R. Moser, R. Buetikofer, "GEANT4 applications for simulating the propagation of Cosmic Rays through the Earth's magnetosphere and atmosphere", Geophysical Research Abstracts, Vol. 5, 11356, 2003
[De05] L. Desorgher, E.O. Flueckiger, M. Gurtner, M.R. Moser, R. Buetikofer, "ATMOCOSMICS: A GEANT4 code for computing the interaction of Cosmic Rays with the Earth's atmosphere", International Journal of Modern Physics A, Vol. 20, No. 29 (2005) 6802-6804
[De06] DESIRE web page: http://gluon.particle.kth.se/desire
[Di96] F.C. Difilippo, M. Goldstein, B.A. Worley and J.C. Ryman, "Adjoint Monte Carlo methods for radiotherapy treatment planning", Trans. Am. Nucl. Soc. 74, 1996, 14-6
[Ec06] European Cooperation for Space Standardization, ECSS-E-10-12 Working Group, "Space engineering: Methods for calculation of radiation received and its effects, and a policy for design margins", 2006
[En91] ENDF/B-VI, Cross Section Evaluation Working Group, ENDF/B-VI Summary Document, BNL-NCS-17541 (ENDF-201), National Nuclear Data Center, Brookhaven National Laboratory, Upton, NY, USA, 1991.
[Er04] T. Ersmark, P. Carlson, E. Daly, C. Fuglesang, I. Gudowska, B. Lund-Jensen, R. Nartallo, P. Nieminen, M. Pearce, G. Santin, N. Sobolevsky, "Status of the DESIRE project: GEANT4 physics validation studies and first results from Columbus/ISS radiation simulations," IEEE Trans. Nucl. Sci. 51, Issue 4, 1378-1384 (2004).
[Es94] ESABASE Reference Manual, ESABASE/GEN-UM-061, Issue 2, March 1994
[Ev55] R.D. Evans, "The Atomic Nucleus", McGraw-Hill, New York, 1955.
[Fa01] A. Fassò, A. Ferrari, J. Ranft, P.R. Sala, "FLUKA: Status and prospective for hadronic applications", invited talk in the Proceedings of the MonteCarlo 2000 Conference, Lisbon, October 23-26, 2000, A. Kling, F. Barao, M. Nakagawa, L. Tavora, P. Vaz, Eds., Springer-Verlag, Berlin, pp. 955-960 (2001).
[Fa63] U. Fano, Ann. Rev. Nucl. Sci. 13, 1, 1963.
[Fe48] E. Fermi and R.D. Richtmyer, "Note on census-taking in Monte Carlo calculations", 1948. A declassified report by Enrico Fermi, from the Los Alamos Archive.
[Fe86] R.C. Fernow, "Introduction to experimental particle physics", Cambridge University Press, 1986
[Fe98] FENDL/E2.0, The processed cross-section libraries for neutron-photon transport calculations, version 1 of February 1998. Summary documentation by H. Wienke and M. Herman, Report IAEA-NDS-176 Rev. 0 (International Atomic Energy Agency, April 1998). Data received on tape (or retrieved on-line) from the IAEA Nuclear Data Section.
[Fo05] J. Forest, J.-F. Roussel, A. Hilgers, B. Thiebault, and S. Jourdain, "SPIS-UI, a new integrated modeling environment for space applications", Proceedings of the 9th Spacecraft Charging Technology Conference, Tsukuba, Japan, 2-9 April 2005. JAXA.
[Ge06] Geant4 Physics Reference Manual (2006, June). [Online]. Available: http://pcitapiww.cern.ch/geant4/G4UsersDocuments/UsersGuides/PhysicsReferenceManual/print/PhysicsReferenceManual1.pdf
[Gi99] S. Giani, V. N. Ivanchenko, G. Mancinelli, P. Nieminen, M. G. Pia, and L. Urban, "GEANT4 simulation of energy losses of ions," INFN, Rep. INFN/AE-99/21, Nov. 1999.
[Gr92] P.J. Griffin et al., SAND92-0094, Sandia National Laboratories, 1993.
[Gu05a] M. Gurtner, L. Desorgher, E.O. Flueckiger, M.R. Moser, "Simulation of the interaction of space radiation with the Martian atmosphere and surface", Advances in Space Research 36 (2005) 2176-2181.
[Gu05b] M. Gurtner, L. Desorgher, E.O. Flueckiger, M.R. Moser, "A Geant4 application to simulate the interaction of space radiation with the Mercurian environment", Advances in Space Research, 2005.
[Gu68] M. P. Guthrie, R. G. Alsmiller and H. W. Bertini, Nucl. Instr. Meth. 66, 1968, 29.
[Ha36] H. Hall, Rev. Mod. Phys. 8, 358 (1936)
[Ha64] Hammersley, J.M., and D.C. Handscomb, Monte Carlo Methods, Methuen, London, 1964.
[Ha92] J. A. Halblieb, R. P. Kensek, T. A. Mehlhorn, G. D. Valdez, S. M. Seltzer, M. J. Berger, "ITS Version 3.0: The Integrated Tiger Series of coupled electron/photon Monte Carlo transport codes," SAND91-1634, Sandia National Laboratories, 1992.
[Ha94] Hammond, B.L., W.A. Lester, Jr., and P.J. Reynolds, Monte Carlo Methods in Ab Initio Quantum Chemistry, World Scientific, Singapore, 1994.
[He00] D. Heynderickx, B. Quaghebeur, E. Speelman, E. Daly, "Space Environment Information System (SPENVIS): a WWW interface to models of the space environment and its effects", AIAA-2000-0371, 2000. SPENVIS web site: http://www.spenvis.oma.be/spenvis/
[He68] Hess W.N., "The Radiation Belt and the Magnetosphere", Blaisdell Publ. Co. (1968).
[He86] Heermann, D.C., Computer Simulation Methods, Springer-Verlag, Berlin, 1986.
[Hi87] W. Daniel Hillis, "The connection machine", Scientific American, June 1987, pp. 108-115.
[Ho05] C. L. Howe, R. A. Weller, R. A. Reed, R. D. Schrimpf, L. W. Massengill, K. M. Warren, D. R. Ball, M. H. Mendenhall, K. A. LaBel, and J. W. Howard, "Role of heavy-ion nuclear reactions in determining on-orbit single event rates," IEEE Trans. Nucl. Sci., Vol. 52, No. 6, Dec. 2005.
[Hu79] Hubbell J. H. and Øverbø I., "Relativistic atomic form factors and photon coherent scattering cross sections", J. Phys. Chem. Ref. Data 8, 69-105 (1979).
[Hu85] C. C. Hurd, "A note on early Monte Carlo computations and scientific meetings", Annals of the History of Computing 7:141-155, 1985.
[Hu93] M. Huhtinen and P.A. Aarnio, NIM A 335 (1993) 580
[Ic03] "Relative biological effectiveness (RBE), quality factor (Q), and radiation weighting factor (wR)", Annals of the ICRP, Vol. 33, No. 4, 2003.
[Ic91] "1990 Recommendations of the International Commission on Radiological Protection", Annals of the ICRP, Vol. 21, No. 1-3, 1991.
[Ic93] "Stopping powers and ranges for protons and alpha particles," Int. Commission on Radiation Units and Measurements (ICRU), Rep. 49, 1993.
[Iv03] V. Ivanchenko et al., "The Geant4 Hadronic Validation Suite for the Cascade Energy Range", 2003 Conference for Computing in High Energy and Nuclear Physics, La Jolla, California, March 2003.
[Iv04] V. N. Ivanchenko, "Geant4: physics potential for instrumentation in space and medicine", Nucl. Instr. Meth. A 525, pp. 402-405, 2004.
[Iw02] Iwase H., Niita K. and Nakamura T., J. Nucl. Sci. Technol. 39, 1142 (2002).
[Ja90] F. James, "A review of pseudorandom number generators", Computer Physics Communications 60 (1990), 329-344.
[Je95] T. Nakagawa et al., JENDL-3 Japanese Evaluated Nuclear Data Library, Version 3, Revision 2, J. Nucl. Sci. Technol. 32 (1995), p. 1259.
[Jo76] T. M. Jordan, "An adjoint charged particle transport method," IEEE Trans. Nucl. Sci. 23, p. 1857, 1976.
[Jo98] Thomas M. Jordan, "NOVICE, Introduction and Summary", Experimental and Mathematical Physics Consultants, 1998.
[Ju03] Insoo Jun, Michael A. Xapsos, Scott R. Messenger, Edward A. Burke, Robert J. Walters, Geoff P. Summers, and Thomas Jordan, "Proton nonionizing energy loss (NIEL) for device applications", IEEE Trans. Nucl. Sci., Vol. 50, No. 6, Dec. 2003
[Ka00] I. Kawrakow, "Accurate condensed history Monte Carlo simulation of electron transport. I. EGSnrc, the new EGS4 version", Med. Phys. 27, 485-498 (2000)
[Ka68] M. H. Kalos, "Monte Carlo integration of the adjoint gamma-ray transport equation", Nuclear Science and Engineering 33, 1968, 284-290
[Ka86] Kalos, M.H., and P.A. Whitlock, Monte Carlo Methods, Volume 1: Basics, John Wiley & Sons, New York, 1986.
[Ki55] G.H. Kinchin and R.S. Pease, "The Displacement of Atoms in Solids by Radiation", Reports on Progress in Physics 18:1-51, 1955.
[Kn89] G.F. Knoll, "Radiation detection and measurement (2nd edition)", John Wiley and Sons, 1989
[Ko03] T. Koi, M. Asai, D. H. Wright, K. Niita, Y. Nara, K. Amako, T. Sasaki, "Interfacing the JQMD and JAM Nuclear Reaction Codes to Geant4", CHEP03 Conf. Proceedings
[Ko05a] A. S. Kobayashi, D. R. Ball, K. M. Warren, M. H. Mendenhall, R. D. Schrimpf, and R. A. Weller, "The effect of metallization layers on single event susceptibility," IEEE Trans. Nucl. Sci., Vol. 52, No. 6, Dec. 2005.
[Ko05b] T. Koi et al., "Ion Transport Simulation using Geant4 Hadronic Physics", Proceedings of MC2005, Chattanooga, Tennessee, April 17-21, 2005, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2005).
[Ko92] A. Konobeyev, J. Nucl. Mater. 186 (1992) 117
[Le02] F. Lei et al., "MULASSIS: a Geant4-based multilayered shielding simulation tool", IEEE Trans. Nucl. Sci. 49 (2002) 2788-93
[Le50] H. W. Lewis, "Multiple scattering in an infinite medium," Phys. Rev., vol. 78, no. 5, pp. 526-529, 1950.
[Le94] W.R. Leo, "Techniques for Nuclear and Particle Physics Experiments", Springer-Verlag, 1994
[Lo66] Los Alamos Scientific Laboratory, "Fermi invention rediscovered at LASL", The Atom, October 1966, pp. 7-11.
[Ma05] Magnetocosmics web site: http://cosray.unibe.ch/~laurent/magnetocosmics
[Mc48] W.A. McKinley and H. Feshbach, Phys. Rev. 74, 1759 (1948).
[Me05] M. H. Mendenhall and R. A. Weller, "An algorithm for computing screened Coulomb scattering in Geant4," Nucl. Instrum. Meth. B 227, pp. 420-430, 2005.
[Me49] N. Metropolis and S. Ulam, "The Monte Carlo method", Journal of the American Statistical Association 44:335-341, 1949.
[Me87] Nick Metropolis, "The Beginning of the Monte Carlo Method", Los Alamos Science 15, 1987, p. 125.
[Mo29] N.F. Mott, Proc. Roy. Soc. A, v. 124, 1929, 425
[Mø32] C. Møller, "Zur Theorie des Durchgang schneller Elektronen durch Materie", Ann. Phys. (Leipzig) 14, 1932, 531-585.
[Mo65] N.F. Mott and H.S.W. Massey, "The Theory of Atomic Collisions", Oxford University Press, London, 1965, 3rd ed., pp. 53-68.
[Na01] Nara Y., Otuka N., Ohnishi A., Niita K. and Chiba S., Phys. Rev. C61, 024901 (2001).
[Na99] Nara et al., Phys. Rev. C56 (1999) 4901.
[Ne54] Neher H.V. and Anderson H.R., 1954, J. Geophys. Res., 69, 807
[Ne58] Neher H.V., Peterson V.Z., and Stern E.A., 1958, Phys. Rev., 90, 655
[Ne67] Neher H.V., 1967, J. Geophys. Res., 72, 1527
[Ne85] W. R. Nelson, H. Hirayama and D. W. O. Rogers, "The EGS4 code system", SLAC-Report-265, 1985.
[Ni01a] Niita K., Takada H., Meigo S. and Ikeda Y., Nucl. Instr. and Meth. B184, 406 (2001).
[Ni01b] P. Nieminen and M.G. Pia, "Il progetto Geant4-DNA", AIRO Journal, March 2001
[Ni95] K. Niita et al., Phys. Rev. C52 (1995) 2620.
[Ni98] NIST stopping power and range tables for electrons, protons, and helium ions: http://physics.nist.gov/PhysRefData/Star/Text/contents.html
[No00] "NOVICE: A radiation transport/shielding code, user's guide", Experimental and Mathematical Physics Consultants, 2000.
[Pe00] M. Pelliccioni, "Overview of Fluence-to-Effective Dose and Fluence-to-Ambient Dose Equivalent Conversion Coefficients for High Energy Radiation Calculated Using the FLUKA Code", Radiat. Prot. Dosim. 88(4), 279-297 (2000)
[Pe05] D. B. Pelowitz, Ed., "MCNPX user's manual version 2.5.0," Los Alamos National Laboratory report, in press (February 2005).
[Pe97a] S.T. Perkins, et al., Tables and Graphs of Electron Interaction Cross Sections from 10 eV to 100 GeV Derived from the LLNL Evaluated Electron Data Library (EEDL), UCRL-50400, Vol. 31, 1997.
[Pe97b] S.T. Perkins, et al., Tables and Graphs of Atomic Subshell and Relaxation Data Derived from the LLNL Evaluated Atomic Data Library (EADL), Z=1-100, UCRL-50400, Vol. 30, 1997.
[Pl05] Planetocosmics web site: http://cosray.unibe.ch/~laurent/planetocosmics
[Po06] W. Pokorski, R. Chytracek, J. McCormick, G. Santin, "Geometry Description Markup Language and its application-specific bindings", CHEP06 Conf. Proceedings
[Pr77] Pratt, R. H., H. K. Tseng, C. M. Lee, and L. Kissel, At. Data and Nucl. Data Tables, 20, 175, 1977.
[Ra55] A Million Random Digits with 100,000 Normal Deviates, The RAND Corporation, Glencoe, IL: The Free Press, 1955.
[Ri75] R. Ribberfors, Phys. Rev. B 12 (1975) 2067.
[Ro05] J.-F. Roussel, F. Rogier, D. Volpert, J. Forest, G. Rousseau, and A. Hilgers, "Spacecraft plasma interaction software (SPIS): Numerical solvers – methods and architecture", Proceedings of the 9th Spacecraft Charging Technology Conference, Tsukuba, Japan, 2-9 April 2005. JAXA.
[Sa01] F. Salvat, J.M. Fernandez-Varea, E. Acosta and J. Sempau, "PENELOPE, A Code System for Monte Carlo Simulation of Electron and Photon Transport", Proceedings of a Workshop/Training Course, OECD/NEA, 5-7 November 2001, NEA/NSC/DOC(2001)19. ISBN: 92-64-18475-9
[Sa03] G. Santin et al., "New Geant4 based simulation tools for space radiation shielding and effects analysis", Nuclear Physics B (Proc. Suppl.) 125, pp. 69-74, 2003
[Sa05] G. Santin, V. Ivanchenko, H. Evans, P. Nieminen, E. Daly, "GRAS: A general-purpose 3-D modular simulation tool for space environment effects analysis", IEEE Trans. Nucl. Sci. 52, Issue 6, 2005, pp. 2294-2299.
[Sc59] D.O. Schneider and D.V. Cormack, Radiation Res. 11, 418 (1959)
[Se03] J. Sempau, J.M. Fernandez-Varea, E. Acosta and F. Salvat, "Experimental benchmarks of the Monte Carlo code PENELOPE", Nuclear Instruments and Methods B 207 (2003) 107-123.
[Se79] Seltzer, S. M., "Electron, Electron-Bremsstrahlung, and Proton Depth-Dose Data for Space-Shielding Applications", IEEE Trans. Nucl. Sci. 26, 4896, 1979.
[Se80] Seltzer, S. M., SHIELDOSE, A Computer Code for Space-Shielding Radiation Dose Calculations, National Bureau of Standards, NBS Technical Note 1116, U.S. Government Printing Office, Washington, D.C., 1980.
[Se94] S.M. Seltzer, "Updated calculations for routine space-shielding radiation dose estimates: SHIELDOSE-2", NIST Publication NISTIR 5477, Gaithersburg, MD, 1994.
[Se97] J. Sempau, E. Acosta, J. Baro, J.M. Fernandez-Varea and F. Salvat, "An algorithm for Monte Carlo simulation of the coupled electron-photon transport", Nuclear Instruments and Methods B 132 (1997) 377-390.
[Sr03] SRIM computer code, available from: http://www.srim.org
[Su93] G.P. Summers, E.A. Burke, P. Shapiro, S.R. Messenger, R.J. Walters, "Damage correlations in semiconductors exposed to gamma, electron and proton radiations", IEEE Trans. Nucl. Sci., Vol. 40, No. 6, Dec. 1993
[Sy04] TCAD Tools, Synopsys, Fremont, CA, 2004.
[To93] Lawrence W. Townsend, John W. Wilson, Ram K. Tripathi, John W. Norbury, Francis F. Badavi, and Ferdou Khan, "HZEFRG1, An energy-dependent semiempirical nuclear fragmentation model," NASA Technical Paper 3310, 1993.
[Tr04a] P. Truscott, F. Lei, C.S. Dyer, A. Frydland, S. Clucas, B. Trousse, K. Hunter, C. Comber, A. Chugg and M. Moutrie, "Assessment of Neutron- and Proton-Induced Nuclear Interaction and Ionization Models in Geant4 for Simulating Single Event Effects", IEEE Trans. Nucl. Sci. 51 (2004), 3369-74
[Tr04b] P.R. Truscott, "Nuclear-nuclear interaction models in Geant4", QINETIQ/KI/SPACE/SUM040821/1.1, 2004
[Tw05] J. Tweed, S.A. Walker, J.W. Wilson, F.A. Cucinotta, R.K. Tripathi, S. Blattnig, C.J. Mertens, "Computational methods for the HZETRN code", Adv. Space Res. 2005;35(2):194-201
[Ul47] S. Ulam, R. D. Richtmyer, and J. von Neumann, "Statistical methods in neutron diffusion", Los Alamos Scientific Laboratory report LAMS-551, 1947. This reference contains a letter from von Neumann.
[Ul50] S. Ulam, "Random processes and transformations", Proceedings of the International Congress of Mathematicians 2:264-275, 1950.
[Va00] A. Vasilescu (INPE Bucharest) and G. Lindstroem (University of Hamburg), "Displacement damage in silicon, on-line compilation". URL: http://sesam.desy.de/members/gunnar/Si-dfuncs.html
[Wa05a] K. M. Warren, R. A. Weller, M. H. Mendenhall, R. A. Reed, D. R. Ball, C. L. Howe, B. D. Olson, M. L. Alles, L. W. Massengill, R. D. Schrimpf, N. F. Haddad, S. E. Doyle, D. McMorrow, J. S. Melinger, and W. T. Lotshaw, "The contribution of nuclear reactions to single event upset cross-section measurements in a high-density SEU hardened SRAM technology," IEEE Trans. Nucl. Sci., Vol. 52, No. 6, Dec. 2005.
[Wa05b] S.A. Walker, J. Tweed, J.W. Wilson, F.A. Cucinotta, R.K. Tripathi, S. Blattnig, C. Zeitlin, L. Heilbronn, J. Miller, "Validation of the HZETRN code for laboratory exposures with 1A GeV iron ions in several targets", Adv. Space Res. 2005;35(2):202-7
[We04a] H.P. Wellisch et al., "Ion transport simulation using Geant4 hadronic physics", Computing in High Energy and Nuclear Physics (CHEP04) Conference Proceedings, Interlaken, Switzerland, 2004.
[We04b] J.P. Wellisch et al., "The binary cascade," Eur. Phys. J. A, vol. 21, p. 407, 2004.
[Wi04] J.W. Wilson, R.K. Tripathi, G.D. Qualls, F.A. Cucinotta, R.E. Prael, J.W. Norbury, J.H. Heinbockel, J. Tweed, "A space radiation transport method development", Advances in Space Research 34 (2004) 1319-1327
[Wi89] J. W. Wilson, L. W. Townsend, J. E. Nealy, S. Y. Chun, B. S. Hong, W. W. Buck, et al., "BRYNTRN: A Baryon Transport Model," NASA TP-2887, 1989.
[Wi95] J. W. Wilson, F. F. Badavi, F. A. Cucinotta, J. L. Shinn, G. D. Badhwar, R. Silberberg, et al., "HZETRN: Description of a free-space ion and nucleon transport and shielding computer program," NASA TP-3495, 1995.
[Zi77] J. F. Ziegler, "The Stopping and Ranges of Ions in Matter", New York: Pergamon, 1977, vol. 4.
[Zi85] J. F. Ziegler, J. P. Biersack, and U. Littmark, "The Stopping and Range of Ions in Solids", New York: Pergamon, 1985, vol. 1.
2006 IEEE NSREC Short Course
Section IV: Device Modeling of Single Event Effects
Prof. Mark Law University of Florida
Approved for public release; distribution is unlimited
Single Event Upset in Technology Computer Aided Design
Prof. Mark E. Law, University of Florida
NSREC 2006 Short Course

I Introduction to TCAD ...................................................................................................... 2
A Overall Issues Driving TCAD ..................................................................................... 3
II Numerical Approximations ............................................................................................ 4
A Time Discretization ..................................................................................................... 4
B Spatial Discretization .................................................................................................. 8
III Physical Approximations – Process Modeling ............................................................ 10
A Process Flow ............................................................................................................. 10
III Physical Approximations – Device Modeling ............................................................. 15
A Tool Types ................................................................................................................ 15
B Basic Approximations ............................................................................................... 16
IV Models for Device Simulation .................................................................................... 18
A Low Field Mobility ................................................................................................... 19
B Surface Scattering ..................................................................................................... 21
C Velocity Saturation .................................................................................................... 22
D Quantum Corrections to Inversion Layers ................................................................ 23
V Simulation Studies ........................................................................................................ 23
VI Conclusions ................................................................................................................. 29
VII Acknowledgements .................................................................................................... 29
VIII References ................................................................................................................. 29
I Introduction to TCAD

Technology Computer Aided Design (TCAD) is the simulation of manufacturing processes and device performance. TCAD has been in use for more than 25 years and is in widespread use to help design, verify, and debug processes and devices. This short course reviews the state of the art in TCAD and how it can be applied to simulating radiation events.

TCAD tools fall into several broad categories. Process simulators allow the user to input process recipes and predict the resulting device structure and doping. They have many complicated components, since there are many process steps. Some tools simulate the etch, deposition, and lithography steps [Ul88], [Ad95]. Others focus on simulating the implant profile as a function of dose and energy (UT-Marlowe [Kl92]), or on the thermal steps – anneals and oxide growth (SUPREM-IV [La88]). The recent trend has been to combine all of these capabilities in one tool, e.g., Sentaurus-Process.

Device simulators use the output of a process simulator as input: they are given the structure and doping profile. By solving the transport equations for electrons and holes together with the electrostatics, they predict the operating behavior of the device. Classic examples of these tools are PISCES [Pi83] and MINIMOS [Se80]. Device simulators can usually solve for the DC operating point, AC small-signal response, RF harmonic balance for large signals, and switching transients. Modern simulators include special modules for power devices and for effects such as single event upset (Sentaurus-Device). Mixed-mode simulators [Ro88] also allow the addition of small circuits around the main simulated device structure. In some cases, codes have been written to handle multiple device structures linked by circuit elements. These are useful for examining how a device performs in a larger environment (for example, [Do95]).
This can be a critical component of single-event simulation, since the device transient currents can be computed and used as part of the overall SRAM cell to see if an upset occurs.

Circuit simulators (SPICE [Mc71]) take abstracted device models and solve for the behavior of larger circuits. Although they sacrifice accuracy in the device modeling, they make up for it in vastly increased computational throughput. It is possible to run very large circuits in circuit simulators that are infeasible with today's mixed-mode simulators.

Radiation events can be simulated across all of these tools. In the simplest case, a radiation strike generates mobile charges that can flow in the device, and the resulting terminal currents can be analyzed with existing device simulators. Small circuits can be examined with a mixed-mode tool to see if the logic state changes after a radiation strike. A device simulator can also be used to calibrate the response of a device so that it can be inserted into a large circuit simulation.

This portion of the short course will focus on device simulation. Since device simulation requires inputs from process simulation, that topic will also be covered. No device simulation can be considered valid if the structure being simulated is not an accurate representation of the actual device structure. "Garbage in, garbage out" applies strongly
here. The author has a lot of experience with “bad device simulation” that was really just a poor approximation of the actual device structure.
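The upset test described above can be sketched at the circuit-abstraction level. The double-exponential current pulse below is a commonly used empirical model of single-event charge collection; the collected charge, time constants, and node capacitance are purely illustrative assumptions, not values from this course.

```python
import math

def seu_pulse(t, q_coll=50e-15, tau_fall=2e-10, tau_rise=5e-11):
    """Double-exponential single-event current pulse (empirical model).

    Integrates (analytically) to q_coll; all parameters are illustrative.
    """
    i0 = q_coll / (tau_fall - tau_rise)
    return i0 * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def struck_node_voltage_drop(c_node=5e-15, dt=1e-12, t_end=2e-9):
    """Integrate the pulse onto a lumped node capacitance, ignoring any
    restoring drive current -- the worst case for the struck node."""
    drop, t = 0.0, 0.0
    while t < t_end:
        drop += seu_pulse(t) * dt / c_node
        t += dt
    return drop

delta_v = struck_node_voltage_drop()
# A mixed-mode or circuit simulator would now compare this transient
# against the cell's restoring current and its flip threshold.
```

For these assumed numbers the drop is roughly q_coll / c_node = 10 V, far beyond any supply rail, so this hypothetical node would certainly flip; a real analysis must include the restoring transistor current, which is exactly what mixed-mode simulation adds.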
A Overall Issues Driving TCAD

The widely quoted International Technology Roadmap for Semiconductors (ITRS) [IT06] enumerates the critical device dimensions (physical gate lengths, oxide thickness, junction depths, etc.) needed to meet future performance goals. The device of the future is almost certain to have an alternative gate dielectric beyond today's nitrided oxides. It is possible the device could have a mid-bandgap workfunction metal. It is also likely that planar bulk CMOS will be replaced with novel dual-gate structures to better control short-channel behavior. SiGe strained layers and capping layers will also be engineered to produce strain that boosts mobility. The likely gate stack will contain vastly different materials than today's poly / oxide / doped silicon device, and it will have meta-stable dopant concentrations in the source and drain to attempt to control parasitics. Mechanical stress and how it influences processing and device performance will be critical factors: at the nanometer scale, all materials are in close proximity and strain sources will have little room to relax. Manufacturing variation could well become a limiting factor as the number of atoms in the device becomes small. Longer-term device architectures include quantum wires, carbon nanotubes, and perhaps eventually molecular devices.

Each of these new architectures, along with extremely scaled Si MOSFETs, represents new challenges for both TCAD process and device modeling – challenges which no tool existing today comprehensively addresses. The challenge for modeling will be greatly compounded when both channel and junction dimensions are less than 10 nm. In this regime, the fundamental concepts of continuum modeling, like diffusion, average concentration, and mobility, lose their meaning because of the lack of a statistically significant number of ions, carrier scattering events, and even electrons [Ve93].
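The loss of statistical significance can be illustrated with a toy Monte Carlo experiment. The volume and doping below are hypothetical round numbers chosen only to show the scale of the effect; placing each dopant independently gives Poisson-like counting statistics, so the relative spread goes as 1/√N.

```python
import random
import statistics

random.seed(2)

# Hypothetical depletion volume: 25 nm x 25 nm x 5 nm, doped 5e18 cm^-3.
volume_cm3 = 25e-7 * 25e-7 * 5e-7
expected_n = 5e18 * volume_cm3          # only ~16 dopant atoms on average

# Place dopants site by site with a small occupation probability
# (binomial -> Poisson limit), once per simulated "device".
sites = 3000
p = expected_n / sites
counts = [sum(random.random() < p for _ in range(sites))
          for _ in range(1000)]

mean_n = statistics.mean(counts)
rel_spread = statistics.pstdev(counts) / mean_n   # roughly 1/sqrt(16) = 25%
```

A device-to-device spread of roughly 25% in the number of channel dopants translates directly into threshold-voltage variation of the kind reported in [St98], and no continuum doping profile can capture it.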
At these dimensions, individual ions and extended defects can profoundly affect device structure, quantum transport effects are amplified, and contact regions come to dominate performance because the channel resistance is so small. We are seeing the effects of approaching these limits even today. Statistical fluctuation effects modulate threshold voltages due to the finite number of dopants in the channel [St98]. Carrier mobility modeling must be treated empirically for each technology, with ballistic, quantum, and other non-equilibrium effects all lumped in with bulk parameters [Wa04].

For the projected device, TCAD is likely to be more necessary than it was for its predecessor planar bulk devices. For close to twenty years, we have been able to scale bulk CMOS in fairly straightforward ways; today's device looks much like those from the mid-'80s. Since the future device will feature different dominant transport, new materials, and meta-stable processing, the need for TCAD is greater than ever before. Experimental structures cost more with each generation, and evaluating device options is more difficult and complicated than ever before. This is certainly true when attempting to understand radiation hardness.

Process and device TCAD in industry today has two primary functions. The first is to assist with direct process design, which requires the simulation tools to be calibrated to a
reasonably well-characterized process flow. Once calibrated, the tools can be used to target specific process parameters, explain data and debug process issues, and predict the performance of new process options. The objective is accurate quantitative prediction. The characterization effort required to calibrate process and device tools is lengthy, requiring substantial use of SIMS profiles of implant and anneal conditions, TEM device cross-sections (to verify dimensions), and comprehensive I-V and C-V data. There needs to be a tight partnership between the process engineers and the TCAD engineers, the latter of whom must have expertise with both process and device models. Of course, the tools themselves must be able to model, at least empirically, all important phenomena that determine performance.

The second important way tools are used today is to gain conceptual understanding. This understanding can take the form of examining idealized design trade-offs or investigating entirely new device or process concepts. In this role, detailed calibration of the tools is not needed, but the underlying physics must be correct in order to capture the right trends; in these new design spaces, empirical models cannot be relied upon. This is the primary mode in which TCAD has been used in the radiation effects community, since it has not been feasible to run enough events to be statistically significant.

Finally, as scaling challenges in traditional Si technology open the door for more serious consideration of new device concepts and materials, accurately assessing these new options with simulation will become critically important. Each of these new device architectures will require incredibly expensive changes in process tools, and simulation work that can reliably sift through the myriad of options can potentially save a huge amount of resources. The radiation tolerance of these process options will need to be evaluated.
Clearly the TCAD community needs to prepare for this by working on an infrastructure flexible and physically detailed enough to evaluate these options and the inherent statistical variations involved with them.
II Numerical Approximations

Underlying all TCAD tools are numerical approximations. These approximations control the convergence, the CPU time consumed, and the error in the simulation results. Some approximations aid overall convergence and some hinder it; most users of device simulation have had problems with convergence. We all want simulations to run fast, and different numerical techniques offer different CPU trade-offs. Most importantly, numerical approximations control the calculation error. All simulations (process, device, or circuit) contain errors from their numerical approximations, and understanding and controlling the sources of error is critical to getting the desired results. In this section we'll discuss error from time and spatial approximations.
A Time Discretization

Radiation events at the device level are by their nature transient, so it is absolutely necessary to simulate accurately the evolution of the charge and terminal currents in time. The simplest way to approach transient simulation in a device simulator is to recast the differential equation into integral form:
\frac{\partial C}{\partial t} = F(C,t)

C(t_1) - C(t_0) = \int_{t_0}^{t_1} F(C,t)\,dt
In these equations, C is a function of time and F is a function of C and t. No statement is being made about F as of yet – it could also be a differential operator. Most methods of solving the time-dependent equation amount to forming a polynomial approximation to F(C,t) and integrating it in time.

In choosing an appropriate method, there are three main issues. First is the accuracy of the method. Second is the computation time required to solve the equation. Third is the stability of the method. Accuracy refers to the overall error in the approximations; generally, a user wishes to set a tolerance for the computation. Stability refers to the ability to damp errors: does the error accumulate or decrease as the timesteps go forward? If there is an error in C(t0), does it get larger or smaller in C(t1)?

As an illustration, let's use the simple explicit (forward) Euler method:
C(t_1) - C(t_0) = (t_1 - t_0)\,F(C(t_0), t_0)

The value of the integral is approximated as the value of F at the beginning of the interval times the width of the time interval (Figure 1). This is a closed-form, or explicit, expression, since the value of the concentration at the end of the interval shows up only once. We can compute the value point by point throughout time, which can provide considerable computational benefit. However, the error is not as well contained. If we use a Taylor series to approximate F(C,t), we can write the error in the integral as:
C(t_1) - C(t_0) = (t_1 - t_0)\,F(C(t_0), t_0) + \frac{1}{2}(t_1 - t_0)^2 \frac{\partial F}{\partial t}

The additional term represents the largest neglected component of the Taylor series and is proportional to the square of the time interval and to the first time derivative of F. This makes good qualitative sense: if the function is changing rapidly in time, assuming F stays constant at its value at the beginning of the interval is a poor approximation. Since the error is proportional to the square of the size of the time interval, this is known as a first-order accurate method (the first term of the Taylor series is included in the calculation).
Figure 1 – Schematic of the explicit (forward) Euler and trapezoidal rule integrations of F(C,t) over the interval t0 to t1. The trapezoidal rule is shown on the left, Euler on the right.
Stability is usually analyzed with a test problem.
\frac{dy}{dt} = \lambda y

with λ
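For this test equation the behavior of the explicit method can be seen directly: each Euler step multiplies y by (1 + λh), so the numerical solution decays only when |1 + λh| < 1, i.e. for h < 2/|λ| when λ is real and negative. (That stability bound is the standard textbook result, not stated in the excerpt above; the specific λ and step sizes below are illustrative.)

```python
def euler_magnitude(lam, h, n_steps=50, y0=1.0):
    """March dy/dt = lam*y with explicit Euler and return |y| after n_steps."""
    y = y0
    for _ in range(n_steps):
        y *= (1.0 + lam * h)    # per-step amplification factor
    return abs(y)

lam = -10.0                            # exact solution decays as exp(-10 t)
stable = euler_magnitude(lam, 0.15)    # |1 + lam*h| = 0.5: errors are damped
unstable = euler_magnitude(lam, 0.30)  # |1 + lam*h| = 2.0: errors explode
```

Even though the exact solution decays for both step sizes, the larger step produces a numerical solution that grows without bound, which is exactly the stability failure this test problem is designed to expose.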